November 29, 2025

The cache is nice, but let's not get hung up on it.

Have you taken a speed test and come away satisfied with a score of 100 out of 100? You may not have realized that, on its own, that test means very little and can even mislead you.


Remember the Volkswagen scandal? The famous case in which the German giant's cars proved perfectly "eco-friendly" in laboratory tests but, on the road, emitted pollutants at levels far from acceptable. An optimization tailored to pass the tests, disconnected from the reality of everyday use.

Well, something similar happens in the world of websites. When it comes to performance, we rely all too often on speed tests that analyze a single page (most often the homepage) and delude ourselves into thinking we've achieved excellence. Perfect scores, 50 ms TTFB, everything optimized down to the millisecond. But the truth is that a website doesn't end with its homepage. An editorial site especially is made up of hundreds or thousands of pages, each with its own characteristics, history, and behavior.

The deception of single tests

With tools like LSCache on LiteSpeed, it's easy to show excellent response times, perhaps by running the same test twice, or after warming up the cache first. But the question is: how representative are those values?

A test performed on the homepage, recently refreshed, perhaps frequently visited and served from cache, can return a textbook TTFB. But try testing 50 randomly selected pages on the site: old articles, categories with dynamic content, pages with obsolete plugins, monthly archives... The results will change dramatically.
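To make the idea concrete, here is a minimal Python sketch of how such a broad test could be run: it times how long the server takes to deliver the first byte of each page. This is an illustration, not a full testing tool, and the URLs shown are placeholders to be replaced with pages sampled from your own site.

```python
import time
import urllib.request

def measure_ttfb(url: str, timeout: float = 10.0) -> float:
    """Seconds between sending the request and receiving the first
    byte of the response body."""
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        resp.read(1)  # blocks until the first byte arrives
    return time.perf_counter() - start

# Example usage (hypothetical URLs -- substitute pages from your own site):
#   for url in ["https://example.com/", "https://example.com/2018/old-post/"]:
#       print(f"{url}  TTFB = {measure_ttfb(url) * 1000:.0f} ms")
```

Run against fifty random pages instead of the homepage alone, a loop like this tells a very different story from a single PageSpeed screenshot.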

The cache hit ratio collapses, and with it the illusion of efficiency.

In reality, it's quite common to observe excellent performance on the surface while, deeper down, the site hides slow response times, loading errors, and inefficiently served content. Add to this the fact that old pages often account for a significant share of organic traffic, being well indexed and carrying a consolidated SEO history. Omitting them from testing means excluding a vital component of user experience and overall performance.

A real site is not a laboratory

A website doesn't exist under controlled conditions. It's visited by users at unpredictable times, with different devices and connections. It's scanned by search engine crawlers that certainly don't limit themselves to the homepage.

Yet many performance tests are interpreted as absolute truths, without accounting for this real-world variety and complexity. It's not uncommon to see excellent TTFB results on some pages and embarrassingly poor performance on others. And often, the most neglected pages are precisely the ones Google visits most, perhaps because they're old and well ranked.

A crawler, unlike a user, behaves systematically: it follows every link, visits every subpage, and archives the deepest and least-traveled paths. If the cache is inactive or inefficient in those areas of the site, the result is increased server load, an overall slowdown, and, in the worst cases, wasted crawl budget.

A more realistic approach to performance measurement

The solution? Test in breadth and depth.

A PageSpeed Insights screenshot isn't enough. What's needed is a comprehensive analysis covering a large, representative sample of the site: at least fifty pages, randomly selected from those actually published, with varied characteristics:

  • Recent articles and old articles
  • Static pages, archives, tags and categories
  • Content with embeds, galleries, forms, and scripts

Each page has its own behavior, its own load, its own cache compatibility. Only in this way can a real average be calculated, which takes into account the actual distribution of traffic and site behavior under normal conditions.

A good test should also be run at multiple times of the day, under different load conditions, and account for variations that may occur based on planned events (like a nightly backup) or unexpected events (like a viral traffic spike). This type of evaluation requires more effort, but provides a true snapshot of the site's behavior.

Cache: Enabling it isn't enough to sleep soundly.

Let's be clear: caching is a powerful tool. LSCACHE, when configured correctly, can reduce response times. But not all pages are cached equally. Some are dynamic, others are frequently invalidated, and still others escape the cache entirely due to incorrect settings or overly variable content.

The result? A hit ratio that, on paper, should be above 80%, but which, in reality, on complex editorial sites drops dangerously low.
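Instead of trusting the theoretical figure, the real hit ratio can be estimated from the response headers of each sampled page. The sketch below assumes a LiteSpeed setup, where LSCache marks responses served from cache with an `X-LiteSpeed-Cache: hit` header; pages with no such header never entered the cache at all.

```python
def classify_response(headers: dict[str, str]) -> str:
    """Return 'hit', 'miss', or 'uncached' based on the
    X-LiteSpeed-Cache response header (case-insensitive)."""
    value = {k.lower(): v for k, v in headers.items()}.get("x-litespeed-cache")
    if value is None:
        return "uncached"  # the page never entered the cache at all
    return "hit" if value.lower().startswith("hit") else "miss"

def hit_ratio(samples: list[dict[str, str]]) -> float:
    """Fraction of sampled responses actually served from cache."""
    if not samples:
        return 0.0
    hits = sum(1 for h in samples if classify_response(h) == "hit")
    return hits / len(samples)
```

Computed over fifty random pages rather than a freshly warmed homepage, this number is the one that tells you whether the cache actually works.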


To make matters worse, poorly written plugins, modules that generate personalized content for each user, and pages built from complex dynamic queries that prevent a cacheable version from being created are all common. All of these factors, if ignored, make the cache less effective than intended.

And here another topic opens up: the difference between what a user sees and what a crawler sees. Google has no patience: if it finds high TTFBs on many pages, it penalizes them. If the cache only works on the homepage, but not on the most-searched articles, the damage is done.

TTFB: Watch out for variability

Time to First Byte (TTFB) is an excellent indicator, but it needs to be contextualized. It is not an absolute value, but an average subject to strong fluctuations based on the type of page, traffic, server load, and the presence of dynamic content.

That's why it makes little sense to be dazzled by a 50 ms TTFB on the homepage. It's better to analyze:

  • The distribution of TTFBs across a large number of pages
  • The differences between cache hits and cache misses
  • The impact of plugins or widgets on generation time
  • Peaks during high traffic time slots
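The points above can be sketched as a small analysis step: given TTFB measurements (in milliseconds) grouped by cache outcome, the snippet below reports the median and 95th percentile for each group instead of a single flattering number. This is a minimal illustration, not a full monitoring tool.

```python
import statistics

def percentile(values: list[float], p: float) -> float:
    """Nearest-rank percentile; p in (0, 100]."""
    ordered = sorted(values)
    idx = max(0, round(p / 100 * len(ordered)) - 1)
    return ordered[idx]

def summarize(ttfb_by_outcome: dict[str, list[float]]) -> dict[str, dict[str, float]]:
    """Median and p95 TTFB for each cache outcome ('hit', 'miss', ...)."""
    return {
        outcome: {
            "median": statistics.median(values),
            "p95": percentile(values, 95),
        }
        for outcome, values in ttfb_by_outcome.items() if values
    }
```

Seeing, say, a 50 ms median on hits next to an 800 ms p95 on misses makes the gap between the lab and the real site impossible to ignore.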

This type of analysis requires tools, time, and expertise. But it returns a real picture, not an idealized portrait.

Furthermore, continuous monitoring is equally important: perfect performance in one instance doesn't guarantee anything in the long run. A plugin update, a change in widget behavior, or the addition of external scripts can radically alter loading times. Only constant monitoring allows you to detect these changes and intervene promptly.

The cache doesn't stay warm forever

In dynamic environments, the cache is constantly invalidated. Update an article? The cache is purged. A new comment? Goodbye cache. A change to a category? Same story.

This is why it is essential to monitor:

  • Purging frequency
  • Pages excluded from the cache
  • Interaction between plugins and caching system

All of these are elements that can lead to an ineffective cache: one that works well only under ideal conditions but fails in everyday reality.

A common scenario involves sites that receive numerous daily updates: each update triggers a chain of invalidations involving categories, tags, homepages, and often even related content. If the caching system isn't configured to manage these relationships intelligently, you risk serving fresh pages from the database most of the time, rendering the entire caching infrastructure useless.

Conclusion: Cache is nice, but you have to know how to use it.

At Managed Server Srl, when we evaluate a website's performance, we don't settle for a spot test. We look at the entire project, analyze dozens of pages, observe performance consistency over time, and evaluate real-world behavior, not lab results.

Because a website is not just a homepage.
Because a great TTFB on one page guarantees nothing across the rest of the site.
Because a high hit ratio today is no guarantee for tomorrow.

In essence: the cache is nice, but let's not get hung up on it! A critical, comprehensive, and professional approach is needed. Only then can true, stable, and sustainable performance be achieved.

Our job is to help clients see beyond perfect numbers, evaluate each site as a whole, and configure caches with awareness and attention to actual behavior. This means addressing performance not just once, but every day. Because sites change, grow, and update, and the strategy behind perceived speed must evolve with them.

If you want realistic advice, if you need large-scale testing, if you want to find out how your site really performs, not how it should, you know where to find us. And remember: caching is nice, but only if it actually works, for everyone, all the time.

