Remember the Volkswagen scandal? That famous case in which the German giant's cars proved perfectly "eco-friendly" during laboratory tests, but then, on the road, emitted pollutants at levels far beyond acceptable limits? A true optimization tailored to pass the tests, but disconnected from the reality of everyday use.
Well, something similar happens in the world of websites. When it comes to performance, we too often rely on speed tests that analyze a single page (most often the homepage) and we delude ourselves into thinking we've achieved excellence. Perfect scores, a 50 ms TTFB, everything optimized down to the millisecond. But the truth is that a website doesn't end with the homepage. An editorial site, in particular, is made up of hundreds or thousands of pages, each with its own characteristics, history, and behavior.
The deception of single tests
By using tools like LSCACHE on LiteSpeed, it's easy to show excellent response times, perhaps by running the same test twice, or after warming up the cache first. But the point is: How representative are those values?
A test performed on the homepage, recently refreshed, perhaps frequently visited and served from cache, can return a textbook TTFB. But try testing 50 randomly selected pages on the site: old articles, categories with dynamic content, pages with obsolete plugins, monthly archives... The results will change dramatically.
The cache hit ratio collapses, and with it the illusion of efficiency.
In reality, it's quite common to observe excellent performance on the surface while, underneath, the site hides slow response times, loading errors, and inefficiently served content. On top of that, old pages often account for a significant share of organic traffic, since they are well indexed and carry a consolidated SEO history. Omitting them from testing means excluding a vital component of user experience and overall performance.
The actual site is not a laboratory
A website doesn't exist under controlled conditions. It's visited by users at unpredictable times, with different devices and connections. It's crawled by search engine bots that certainly don't limit themselves to the homepage.
Yet, many performance tests are interpreted as absolute truths, without taking into account the real variety and complexity. It's not uncommon to see excellent TTFB results on some pages and embarrassingly poor performance on others. And often, the most neglected pages are precisely those most visited by Google, perhaps because they're old and well-ranked.
A crawler, unlike a user, behaves systematically: it follows every link, visits every subpage, and indexes even the deepest and least-traveled paths. If the cache is inactive or inefficient in those areas of the site, the result is increased server load, an overall slowdown, and, in the worst cases, wasted crawl budget.
A more realistic approach to performance measurement
The solution? Test in breadth and depth.
A PageSpeed Insights screenshot isn't enough. A comprehensive analysis is needed, covering a large and representative sample of the site. At least fifty pages, randomly selected from those actually available, with various characteristics:
- Recent articles and old articles
- Static pages, archives, tags and categories
- Content with embeds, galleries, forms, and scripts
Each page has its own behavior, its own load, its own cache compatibility. Only in this way can a real average be calculated, which takes into account the actual distribution of traffic and site behavior under normal conditions.
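To make the idea concrete, here is a minimal sketch of such a broad test: it pulls URLs from a standard XML sitemap, picks a random sample, and records the TTFB of each request as measured from the client. The sitemap URL, the sample size, and the flat (non-index) sitemap structure are assumptions to adapt to your own site; this is an illustration, not a complete testing suite.

```python
import random
import time
import urllib.request
import xml.etree.ElementTree as ET

SITEMAP_URL = "https://example.com/sitemap.xml"  # assumption: replace with your site's sitemap
SAMPLE_SIZE = 50  # the "at least fifty pages" suggested above

def fetch_sitemap_urls(sitemap_url):
    """Return all <loc> entries from a flat XML sitemap (sitemap index files are not handled here)."""
    with urllib.request.urlopen(sitemap_url) as resp:
        tree = ET.parse(resp)
    ns = "{http://www.sitemaps.org/schemas/sitemap/0.9}"
    return [loc.text.strip() for loc in tree.getroot().iter(ns + "loc") if loc.text]

def measure_ttfb_ms(url):
    """Approximate TTFB from the client: time until the first response byte, including connection setup."""
    start = time.perf_counter()
    with urllib.request.urlopen(url) as resp:
        resp.read(1)  # read only the first byte of the body
    return (time.perf_counter() - start) * 1000

urls = fetch_sitemap_urls(SITEMAP_URL)
sample = random.sample(urls, min(SAMPLE_SIZE, len(urls)))

for url in sample:
    print(f"{measure_ttfb_ms(url):8.1f} ms  {url}")
```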
A good test should also be run at multiple times of the day, under different load conditions, and account for variations that may occur based on planned events (like a nightly backup) or unexpected events (like a viral traffic spike). This type of evaluation requires more effort, but provides a true snapshot of the site's behavior.
Cache: Enabling it isn't enough to sleep soundly.
Let's be clear: caching is a powerful tool. LSCACHE, when configured correctly, can reduce response times. But not all pages are cached equally. Some are dynamic, others are frequently invalidated, and still others escape the cache entirely due to incorrect settings or overly variable content.
The result? A hit ratio that, on paper, should be above 80%, but which, on complex editorial sites, drops dangerously in reality.
To make matters worse, it's common to find poorly developed plugins, modules that generate personalized content for each user, or content built through complex dynamic queries that prevent a cacheable version from being created. All of these factors, if ignored, make the cache less effective than intended.
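One practical way to check this on a LiteSpeed setup is to look at the X-LiteSpeed-Cache response header, which on many configurations reports whether a page was served as a hit or a miss (the exact values exposed depend on your server and plugin settings). A minimal, hedged sketch; the URL list is hypothetical and could simply be the random sample produced by the earlier script:

```python
import urllib.request
from collections import Counter

# Hypothetical list of pages to audit -- e.g. the random sample from the sitemap sketch above.
URLS = [
    "https://example.com/",
    "https://example.com/2019/05/some-old-article/",
    "https://example.com/category/news/",
]

def cache_status(url):
    """Return the X-LiteSpeed-Cache header value, or 'no-cache' if the response carries no such header."""
    req = urllib.request.Request(url, headers={"User-Agent": "cache-audit-sketch/0.1"})
    with urllib.request.urlopen(req) as resp:
        return resp.headers.get("X-LiteSpeed-Cache", "no-cache").lower()

statuses = Counter(cache_status(url) for url in URLS)
total = sum(statuses.values())
print(dict(statuses))
if total:
    print(f"Estimated cache hit ratio: {statuses.get('hit', 0) / total:.0%}")
```

Run it once against a cold cache and again a few minutes later: pages that never turn into hits are the ones worth investigating.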
And here another topic opens up: the difference between what a user sees and what a crawler sees. Google has no patience: if it finds high TTFBs on many pages, it penalizes them. If the cache only works on the homepage, but not on the most searched articles, the damage is done.
TTFB: Watch out for variability
Time to First Byte (TTFB) is an excellent indicator, but it needs to be contextualized. It is not an absolute value, but an average subject to strong fluctuations based on the type of page, traffic, server load, and the presence of dynamic content.
That's why it makes little sense to be dazzled by a 50 ms TTFB on the homepage. It's better to analyze:
- The distribution of TTFBs across a large number of pages
- The differences between cache hits and cache misses
- The impact of plugins or widgets on generation time
- Peaks during high traffic time slots
This type of analysis requires tools, time, and expertise. But it returns a real picture, not an idealized portrait.
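As an illustration of the first point in the list above, a distribution tells far more than an average: the median and the high percentiles expose the slow tail that a homepage-only test hides. A minimal sketch, assuming you already have a list of per-page TTFB measurements in milliseconds (the values below are purely illustrative):

```python
import statistics

def summarize(ttfbs_ms):
    """Summarize TTFB measurements (in milliseconds) as a distribution rather than a single number."""
    ttfbs = sorted(ttfbs_ms)
    pct = statistics.quantiles(ttfbs, n=100)  # cut points for percentiles 1..99
    return {
        "min": ttfbs[0],
        "p50": pct[49],
        "p90": pct[89],
        "p95": pct[94],
        "max": ttfbs[-1],
        "mean": statistics.fmean(ttfbs),
    }

# Illustrative measurements: a fast cached homepage plus a long tail of slower, uncached pages
measurements = [48, 52, 61, 95, 110, 180, 240, 310, 420, 890, 1250]
for name, value in summarize(measurements).items():
    print(f"{name:>5}: {value:7.1f} ms")
```

Grouping the same summary by cache status (hit vs. miss) covers the second point in the list as well.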
Furthermore, continuous monitoring is equally important: perfect performance in one instance doesn't guarantee anything in the long run. A plugin update, a change in widget behavior, or the addition of external scripts can radically alter loading times. Only constant monitoring allows you to detect these changes and intervene promptly.
Cache invalidation: nothing stays warm for long
In dynamic environments, the cache is constantly invalidated. Update an article? The cache is purged. A new comment? Goodbye cache. A change to a category? Same story.
This is why it is essential to monitor:
- Purging frequency
- Pages excluded from the cache
- Interaction between plugins and caching system
All of these are elements that can lead to an ineffective cache: one that works well only under ideal conditions but fails in everyday reality.
A common scenario involves sites that receive numerous daily updates: each update triggers a chain of invalidations involving categories, tags, homepages, and often even related content. If the caching system isn't configured to manage these relationships intelligently, you risk serving fresh pages from the database most of the time, rendering the entire caching infrastructure useless.
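How such monitoring might look in practice, as a rough sketch: assuming you log the cache status of each response in your access log (here, hypothetically, as the last field of every line, with the request path just before it; this is not a default format), a few lines of scripting are enough to spot the paths that are almost never served from cache, whether because of exclusions or because they are constantly purged.

```python
from collections import Counter

LOG_PATH = "access.log"  # assumption: custom log format ending with "<request-path> <cache-status>"

miss_counter = Counter()    # paths most often served without a cache hit
status_counter = Counter()  # overall hit / miss / no-cache distribution

with open(LOG_PATH, encoding="utf-8") as log:
    for line in log:
        parts = line.split()
        if len(parts) < 2:
            continue
        path, status = parts[-2], parts[-1].lower()
        status_counter[status] += 1
        if status != "hit":
            miss_counter[path] += 1

print("Overall cache status:", dict(status_counter))
print("Most frequently uncached paths:")
for path, count in miss_counter.most_common(10):
    print(f"  {count:6d}  {path}")
```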
Conclusion: Cache is nice, but you have to know how to use it.
At Managed Server Srl, when we evaluate a website's performance, we don't settle for a spot test. We look at the entire project, analyze dozens of pages, observe performance consistency over time, and evaluate real-world behavior, not lab results.
Because a website is not just a homepage.
Because a great TTFB on one page doesn't guarantee anything across the site.
Because a high hit ratio today is no guarantee for tomorrow.
In essence: caching is nice, but we shouldn't get hung up on it! A critical, comprehensive, and professional approach is required. Only in this way can true, stable, and sustainable performance be achieved.
Our job is to help clients see beyond perfect numbers, evaluate each site as a whole, and configure caches with awareness and attention to actual behavior. This means addressing performance not just once, but every day. Because sites change, grow, and update, and the strategy behind perceived speed must evolve with them.
If you want realistic advice, if you need large-scale testing, if you want to find out how your site really performs, not how it should, you know where to find us. And remember: caching is nice, but only if it actually works, for everyone, all the time.