In the world of modern hosting, the word "cache" has become something of a mantra. Server-side cache, CDN-side cache, browser cache, application cache, page cache, object cache. Everything must be cached, because caching "makes the site run faster." And it's true: when used properly, caching is one of the most powerful tools for improving perceived performance, reducing server load, and increasing scalability.
The problem arises when the concept of caching is oversimplified and translated into a crude rule: "better to disable everything, so we avoid problems." This is where the use – and often abuse – of the Cache-Control HTTP header comes into play, in particular the no-store directive, applied indiscriminately to entire websites, even on pages that have no technical reason to be excluded from modern caching mechanisms.
This practice, very widespread especially in the WordPress world and more generally in traditional Apache/PHP stacks, has direct consequences for performance, for Core Web Vitals and for an often ignored mechanism: the browser's Back/Forward Cache, the so-called BFCache.
What Cache-Control Really Is and Why It's So Important
Cache-Control is one of the fundamental HTTP headers for governing the behavior of caches along the path of a request: browsers, intermediate proxies, CDNs, reverse proxies, and so on. Through directives such as max-age, s-maxage, public, private, no-cache and no-store, the server can indicate how and for how long a resource can be stored and reused.
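To make the directives concrete, here are a few illustrative header values (each line is an independent example, not a single response):

```http
Cache-Control: public, max-age=3600
Cache-Control: private, max-age=600
Cache-Control: public, max-age=60, s-maxage=86400
Cache-Control: no-cache
Cache-Control: no-store
```

Reading top to bottom: any cache may store the response for an hour; only the user's browser may store it, for ten minutes; the browser keeps it for a minute while a shared cache such as a CDN may keep it for a day; the response may be stored but must be revalidated before reuse; the response must not be stored anywhere at all.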
In theory, this system allows for very fine-grained control: you can decide that a static page is cacheable for hours or days, that a user-customized resource is private, that a sensitive endpoint is never saved to disk or memory. In practice, however, many hosting environments and CMS configurations end up adopting brutal shortcuts, such as the global setting of Cache-Control: no-store on all HTML responses.
The reason is almost always the same: fear. Fear of serving the wrong content, fear of showing one user's data to another, fear of handling exceptions correctly. And so, instead of designing a sensible caching strategy, the simplest option is chosen: disabling everything.
No-store: What it really means and why it's so invasive
The no-store directive is one of the most drastic. It explicitly tells any cache: "This response must not be stored in any way." Not in RAM, not on disk, not for a second. The browser must forget it immediately after using it to render the page.
This has important implications. First of all, every navigation to that page involves a new, complete request to the server, even if the user visited the same URL just a moment before. But there's more: no-store also disables some advanced mechanisms of modern browsers, including the Back/Forward Cache.
Many developers confuse no-store with no-cache, but they are two very different things. no-cache doesn't mean "don't cache"; it means "you must validate the resource before reusing it." In practice, it still allows caching with revalidation, using ETag or Last-Modified. no-store, instead, rules out any form of storage entirely.
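The difference is easy to see in a hypothetical exchange (the URL and ETag value are illustrative):

```http
HTTP/1.1 200 OK
Cache-Control: no-cache
ETag: "v1-abc123"

GET /blog/post HTTP/1.1
If-None-Match: "v1-abc123"

HTTP/1.1 304 Not Modified
```

With no-cache, the browser keeps a local copy and later sends a conditional request; if the content is unchanged, the server answers 304 with no body, costing one round trip instead of a full transfer. With no-store, every visit would require a full 200 response with the complete body.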
Using it on login pages, checkouts, and personal areas makes perfect sense. Using it on the home page, a blog post, or a category page is, in most cases, a resounding own goal.
The overuse phenomenon: why so many sites do it
If we look at the real landscape of the web, we discover that hundreds of thousands of WordPress sites (and not only) still send Cache-Control: no-store today, even on public, non-sensitive pages. This doesn't happen for a single reason, but for a combination of structural factors linked to the ecosystem itself.
One problem is the average code quality of many plugins. A significant portion of the WordPress ecosystem is made up of actual "spaghetti code", written by developers who often have only rudiments of PHP and MySQL, without any real knowledge of the object-oriented programming paradigm, software design principles or the architectural implications of their choices. It is not uncommon to come across code that doesn't even distinguish between MyISAM and InnoDB tables, that ignores basic concepts like transactions, locking, or isolation, and that treats the database as a makeshift key/value store.
In this context, it only takes one poorly written plugin, installed alongside dozens carefully developed by professional teams, to break every best practice on the correct use of HTTP headers, including Cache-Control. A single component can start sending restrictive headers, forcing no-store everywhere "just in case" or disabling essential caching mechanisms, dragging the entire site into a performance-detrimental configuration.
Add to this the fact that many caching or security plugins choose ultra-conservative settings to reduce the risk of bugs: better to disable everything than to risk serving the wrong page. The same goes for many shared or semi-managed hosting environments, which apply standardized configurations indiscriminately to all sites, with no real distinction between static, semi-dynamic and truly sensitive content.
The result is amplified by a now-obsolete technical culture, which can be summed up in the slogan: "PHP = dynamic = non-cacheable." An idea that was already questionable ten years ago and is simply anachronistic today. Modern frameworks, advanced CMSs and reverse proxies such as Varnish or Nginx demonstrate daily that it is possible – and normal – to intelligently cache even dynamically generated content, as long as it is done with judgment, competence and a real understanding of what is being put into production.
Cache-Control and performance: the impact on Core Web Vitals
When we talk about performance today, we're no longer just talking about "how long it takes a page to load." We're talking about specific metrics: LCP, CLS, INP, TTFB. All these metrics are influenced, directly or indirectly, by the ability to reuse resources already obtained or even pages already rendered.
If every navigation forces the browser to repeat a complete request to the server, including the TLS handshake, backend wait, HTML generation, and resource download, the TTFB increases. If the browser can't take advantage of local caches or instant restoration mechanisms, the LCP worsens. If every "back" in the browser results in a complete reload, the user experience suffers significantly.
In this sense, the abuse of no-store is a hidden performance tax. It's not always visible in synthetic tests, but it's clearly evident in real-world use, especially on mobile devices and less-than-perfect connections.
What is BFCache and why is it so important for user experience?
The Back/Forward Cache (BFCache) is a mechanism implemented by all modern browsers that keeps a complete snapshot of a page in memory so it can be restored when the user navigates backward or forward in history. Not only the HTML is saved, but the entire state of the page: the already constructed DOM, the JavaScript state, the scroll position, the form values, and in general everything needed to restore the view exactly as it was.
When the user presses "back" or "forward", instead of making a new request to the server and rebuilding the page from scratch, the browser instantly restores the snapshot previously stored in memory. The result is navigation that feels immediate, with no loading times, no interface flickering, and no needless layout recalculations or script re-executions.
For the user this difference is enormous in terms of responsiveness and fluidity of experience. For the site, it means fewer requests to the server and less work on the backend and frontend side. Finally, for performance metrics, the BFCache represents a concrete and measurable advantage, especially in internal navigation interactions, where it can effectively eliminate perceived loading times.
The Link Between Cache-Control: No-Store and BFCache
Here comes the critical point: one of the conditions that disable the BFCache is the presence of Cache-Control: no-store. If a page is marked as "not to be stored in any way", the browser cannot keep a complete snapshot of it for rapid restoration, because that directive explicitly prohibits any form of storage, even temporary and in memory only.
The result is immediate and measurable: every time the user goes back to that page, the browser is forced to reload it from scratch, with a new HTTP request, a new TTFB, new rendering, new loading of static resources, and renewed script execution. From the user's perspective, the site appears slower and less responsive. From the server's perspective, more useless requests arrive that could have been avoided thanks to the instant restoration of the page.
It is important to understand that this has nothing to do with traditional HTTP resource caching. The BFCache is a mechanism internal to the browser, designed exclusively to improve the browsing experience when using the "back" and "forward" buttons. Disabling it without any real functional or security reason is like removing the automatic transmission from a modern car "for peace of mind": technically possible, but at the expense of comfort, efficiency, and overall performance.
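One way to observe this in practice: modern browsers fire a pageshow event on every navigation, whose persisted flag is true only when the page was restored from the BFCache. A minimal sketch (the helper function name is our own, not a browser API):

```javascript
// Returns a human-readable label for the pageshow `persisted` flag.
function classifyNavigation(persisted) {
  return persisted ? "restored from BFCache" : "loaded from scratch";
}

// In a browser, log the outcome of each navigation to this page.
// Guarded so the helper can also be exercised outside a browser.
if (typeof window !== "undefined") {
  window.addEventListener("pageshow", (event) => {
    console.log("Navigation:", classifyNavigation(event.persisted));
  });
}
```

On a page served with Cache-Control: no-store, pressing "back" to return to it will always report a load from scratch.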
When no-store is really justified
Let me be clear: no-store is not an absolute evil. It's a necessary tool in some contexts. All pages displaying sensitive, personal, or session-related data should use it: login pages, restricted areas, shopping carts, checkouts, admin panels, account statements, health data – anything that absolutely must not be stored in any form.
In these cases, losing the BFCache is an acceptable tradeoff, because the priority is data security and accuracy. No one wants the browser to revert to an outdated state or display private information improperly.
The problem arises when this directive is applied indiscriminately even to public, static, or semi-static pages that do not contain any sensitive information and could greatly benefit from modern caching mechanisms.
The difference between SaaS sites and traditional CMS
Interestingly, many modern SaaS platforms do not suffer from these types of problems. Hosted e-commerce services, publishing platforms, and advanced web applications adopt architecturally designed caching strategies, with a clear separation between public and private content, between static and dynamic resources, and between user state and shared content. In these contexts, the cache is not an afterthought, but an integral part of the system design.
In the world of traditional self-hosted CMSs—and WordPress is the most emblematic example—we still often see a "one size fits all" approach: the same caching policy for the entire site, the same distrust of advanced caching mechanisms, and the same performance-impairing results. This isn't so much a result of an intrinsic limitation of PHP as a language, but rather the fact that WordPress is a project born and raised largely in an amateur context, where for years the priority has been ease of extension rather than architectural solidity.
The ecosystem that formed around it reflects this origin: a huge amount of plugins developed by non-professional programmers, with skills often limited to the rudiments of PHP and MySQL, and with a superficial understanding of concepts such as separation of responsibilities, state management, or caching policy design. In this scenario, it's natural that many choose the simplest and "safest" route – disabling everything – rather than implementing differentiated and truly correct cache strategies. The result is not an inevitable structural problem, but the direct consequence of lazy technical choices and an ecosystem that has privileged quantity over quality.
Smart Cache: Separating the Public from the Sensitive
The key to getting out of this situation is conceptually simple: distinguish. Distinguish between public and private pages, between content that can be cached and content that shouldn't be, between user status and shared content.
A home page, a blog post, a category page, a landing page should almost never carry no-store. They may have conservative cache policies, they may require revalidation, they may have short TTLs, but they should not a priori exclude every form of caching.
On the contrary, account pages, shopping carts, payment flows, and administrative areas must be treated more strictly. This is where no-store finds its natural place.
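As a sketch of what this separation can look like at the web-server level, here is a hypothetical nginx fragment (the paths are illustrative, and it assumes the backend does not itself emit a conflicting Cache-Control header):

```nginx
# Public, semi-static pages: cacheable with a short TTL
location / {
    add_header Cache-Control "public, max-age=300";
    # ... pass the request to the CMS backend here ...
}

# Sensitive areas: never stored, BFCache deliberately sacrificed
location ~ ^/(cart|checkout|my-account|wp-admin) {
    add_header Cache-Control "no-store";
    # ... pass the request to the CMS backend here ...
}
```

The exact paths and TTLs depend on the site; the point is that the policy is differentiated by content type rather than a single global directive.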
The real impact on hosting and infrastructure costs
There's also an often overlooked aspect: cost. Every request that could be avoided thanks to browser-side caching or BFCache is a request that now reaches the server. On sites with significant traffic, this means more CPU, more RAM, more I/O, and higher infrastructure costs.
Many hosting providers spend time and resources optimizing their stack, PHP, databases, and server-side caching, only to undermine some of these efforts with overly restrictive HTTP policies that prevent the browser from doing its part. It's a fairly common paradox: you optimize the backend and sabotage client-side performance.
How to determine if your site is being penalized by BFCache
Modern browsers provide increasingly advanced developer tools that allow for precise analysis of page behavior, including eligibility for the BFCache. In Google Chrome, for example, the developer tools offer specific panels and sections that let you check whether a "back/forward" navigation actually used the BFCache or whether the page was reloaded from scratch.
Through the Application panel and the sections dedicated to performance and navigation, you can see whether the page was restored from the BFCache and, especially, which conditions prevented its use. Chrome provides quite clear indications of the causes of exclusion: the presence of certain HTTP headers, the use of incompatible APIs, particular handling of page state or, very often, the Cache-Control: no-store setting itself.
And it is in these cases that the paradox emerges: Cache-Control: no-store frequently appears among the reasons for exclusion even on perfectly public and harmless pages, devoid of any sensitive state or personalized content. In other words, pages that could fully benefit from instant restoration instead force the browser to reload everything from scratch, giving up a free and very powerful optimization offered natively by the browser itself.
These tools make the problem not only theoretical, but easily observable and measurable: just analyze a simple internal navigation to see how a single incorrect HTTP directive can completely negate the benefits of BFCache, with a direct impact on both the user experience and the unnecessary load imposed on the server.
Conclusion: less fear, more strategy
Cache-Control isn't an on/off switch. It's a rich language, designed to describe complex policies precisely. Using it well means faster, more responsive, more efficient, and more enjoyable sites. Using it poorly, or abusing it with no-store everywhere, means throwing away a significant part of the optimizations that modern browsers offer for free.
BFCache is one of these technological gifts: invisible to the user, incredibly powerful in its effects, yet easy to disable with a single wrong directive. Rethinking your cache policies, distinguishing between what's truly sensitive and what isn't, is now one of the simplest and most effective steps to improve a website's real-world performance.
In other words: less paranoia, more planning. Cache isn't the enemy. Lazy use of cache is.