There has been a religious war going on for some time over which is the best cache for web servers. On one side are the everlasting purists who recommend a dedicated cache like Varnish Cache; on the other, those who recommend the NGINX FastCGI cache we discuss in this article as a faster, simpler, and less complex way of providing HTTP (and of course HTTPS) caching functionality.
There are those, like us, who argue that Varnish has more features and, thanks to its Varnish Configuration Language (VCL), allows you to build really complex and advanced caching setups and configurations; and there are those who argue that the same things can be done with a few more lines of configuration directly in NGINX, without adding cache layers like Varnish.
Normally in these comparisons, everyone brings grist to their own mill: whatever they usually use, know best, or find most comfortable. For example, how can you not appreciate the flexibility of Varnish Cache and its compatibility with WordPress through market-standard plugins such as W3 Total Cache, Proxy Cache Purge, WP Fastest Cache, or the heavily marketed WP Rocket?
Objectively, when we talk about WordPress and performance, we are the first to point out how important it is for us to have a Varnish-based stack as a full-page cache. However, are we really right when we dismiss the NGINX cache as a "toy" for inexperienced sysadmins? And when some of our colleagues criticize us for using a cache like Varnish when the NGINX cache could in fact do the job, accusing us of weighing down a software stack that could be leaner, are they really right?
In short, between Varnish and NGINX, which cache is better?
To answer this question, one must necessarily take a step back and distance oneself from one's own convictions, approaching with an open mind what we have not used, or have used badly, until now. We must begin to understand the business models of Varnish and NGINX and how, starting from their respective limitations, the two can include rather than exclude each other.
Business models of Varnish and NGINX
Varnish and NGINX are two popular proxy servers used to improve web application performance and ensure better user experience. What makes these software even more interesting is that both are developed under the open source philosophy, which means that they are freely available for anyone to use and modify.
However, there is also a commercial version of both software: NGINX Plus and Varnish Enterprise. These versions come with advanced features and additional functionality, which are not available in the free versions. Such features include official support from the company, software quality assurances, and even performance analysis tools.
What differentiates these commercial versions the most from the free versions is their price. Both commercial versions have licensing costs in the tens of thousands of dollars annually. This means that these licenses are aimed primarily at companies that need advanced features and are willing to pay for additional support and warranties.
Overcoming the respective limitations by using both NGINX and Varnish
Aware that both NGINX and Varnish have limitations in their free versions, we can still overcome them by using both proxy servers in combination.
For example, NGINX can be used as a microcaching service. In this approach, NGINX acts as the first level of caching, storing responses that are frequently requested by users. When a request arrives at the server, NGINX checks its cache to see if the response is already there. If it is, NGINX immediately sends the response back to the client without having to go to the next stage of processing.
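A microcaching setup of this kind can be sketched with the FastCGI cache directives. This is a minimal, illustrative example assuming a PHP-FPM backend; the cache zone name, paths, and socket location are assumptions, not values from the article:

```nginx
# Illustrative microcache: cache zone name, paths, and PHP-FPM socket are assumed.
fastcgi_cache_path /var/cache/nginx/microcache levels=1:2
                   keys_zone=microcache:10m max_size=100m inactive=10m;

server {
    listen 80;
    server_name example.com;
    root /var/www/html;

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/run/php/php-fpm.sock;

        fastcgi_cache microcache;
        fastcgi_cache_key "$scheme$request_method$host$request_uri";
        # Cache successful responses for just one second: enough to absorb
        # traffic spikes while keeping content effectively fresh.
        fastcgi_cache_valid 200 301 302 1s;
        # Expose HIT/MISS status for debugging.
        add_header X-Cache-Status $upstream_cache_status;
    }
}
```

The very short `fastcgi_cache_valid` lifetime is what makes this "microcaching": under heavy concurrency, even a one-second cache collapses thousands of identical requests into a single backend call.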
In combination with Varnish, NGINX can be used as a front end for the Varnish caching server. In this scenario, client requests are initially handled by NGINX. If the requested data is not in NGINX's cache, the request is passed to the Varnish server, which maintains a larger and more robust cache. Varnish stores cached objects entirely in RAM and is purpose-built for HTTP caching, which makes it faster for full-page delivery than repurposing generic in-memory stores such as Memcached or Redis.
When a client requests information, NGINX looks it up in its cache. If the response is there, NGINX sends it directly to the client. Otherwise, the request is forwarded to Varnish, which in turn checks its cache to see if the response is there. If the response is present in the Varnish cache, it is returned to the client through NGINX.
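The two-tier flow described above can be sketched as an NGINX server that terminates TLS, keeps a small first-level cache, and forwards misses to Varnish. The port, certificate paths, and zone name below are illustrative assumptions (Varnish listens on 6081 in its default configuration, but your deployment may differ):

```nginx
# Illustrative two-tier setup: NGINX terminates TLS and holds a small
# first-level cache; cache misses are proxied to Varnish on port 6081.
proxy_cache_path /var/cache/nginx/edge keys_zone=edge:10m
                 max_size=256m inactive=10m;

server {
    listen 443 ssl;
    server_name example.com;
    ssl_certificate     /etc/ssl/example.com.crt;   # assumed paths
    ssl_certificate_key /etc/ssl/example.com.key;

    location / {
        proxy_cache edge;
        proxy_cache_key "$scheme$request_method$host$request_uri";
        proxy_cache_valid 200 301 302 1m;
        add_header X-Edge-Cache $upstream_cache_status;

        # On a MISS here, the request continues to Varnish, which
        # checks its own (larger) cache before hitting the backend.
        proxy_pass http://127.0.0.1:6081;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```

Terminating TLS in NGINX also works around a well-known limitation of the open-source Varnish, which does not speak HTTPS natively.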
This combination of caching can significantly improve web application performance. Furthermore, the behavior of the cache can be customized using the configuration options offered by both proxy servers, and with a few precautions you can ensure that the web server virtually never returns a MISS to the client, guaranteeing excellent response times of under 30 ms.
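One common way to implement the "never serve a cold MISS" precaution mentioned above is stale-while-revalidate behavior on the NGINX side: serve the expired copy immediately while a single background request refreshes the entry. A minimal sketch (these directives would sit inside the caching `location` block; `proxy_cache_background_update` requires NGINX 1.11.10 or later):

```nginx
# Serve a stale cached copy instead of making the client wait whenever the
# entry is being refreshed or the backend is erroring out.
proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;
# Refresh expired entries in the background while the stale copy is served.
proxy_cache_background_update on;
# Collapse concurrent requests for the same uncached resource into one
# upstream fetch, so a cache-miss stampede never reaches the backend.
proxy_cache_lock on;
```

Varnish offers the equivalent behavior through its grace mode, so the same guarantee can be enforced at either tier.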