In the modern e-commerce landscape, every millisecond counts. When it comes to web performance and search engine positioning, site speed is no longer an option but a fundamental requirement. This is especially true for high-traffic sites like inventivashop.com, which dominates the top spot on Google for competitive keywords like “Signs.”
The site in question represents an excellent case study of how an already extensively optimized infrastructure can still hide critical bottlenecks that, once identified and resolved, lead to dramatic performance improvements. In this technical article, we'll share our methodical approach to optimizing a particularly problematic SQL query, demonstrating how we achieved a 99.83% performance improvement: from 1.8 seconds to just 3 milliseconds.
The context: an already top-notch infrastructure
Before delving into the specific case, it's important to note that inventivashop.com certainly didn't start from scratch in terms of optimization. The site already benefited from a comprehensive arsenal of technologies and best practices:
Cutting-edge technology stack
- HTTP/3: Next-generation protocol for faster and more resilient connections
- ZSTD Compression: Advanced compression to reduce response payload
- NGINX optimized: Webserver-level tuning with custom configurations for maximum performance
- Compressed TLS certificates: Reducing SSL/TLS overhead
- WebP support: Next-generation images with up to 30% smaller file size
- Core Web Vitals optimized: All best practices implemented for LCP, FID, CLS
- Percona Server 5.7: Optimized database server instead of vanilla MySQL
This last point deserves particular attention. The choice of Percona Server 5.7, despite its End of Life in November 2024, was not a coincidence but the result of in-depth benchmarks that highlighted superior performance compared to MariaDB for the site's specific queries.
The Controversial Choice: Percona Server 5.7 at EOL
During preliminary testing, we identified a critical query that exhibited drastically different behavior across database engines:
- MariaDB 10.x: 1,700–1,800 ms
- MySQL 8.0: 1,500–1,600 ms
- Percona Server 5.7: 200 ms
The difference was so marked (about 9x faster) that we had to make a difficult decision: use an EOL version with significantly higher performance, naturally implementing all the necessary security measures (network isolation, hardening, continuous monitoring).
The problem: when a single module brings the site to its knees
Despite all these optimizations, performance monitoring continued to highlight a recurring and significant issue. Metrics showed inexplicable latency spikes, and Time To First Byte (TTFB) was well above our standards on certain pages.
The BlockLayered module: essential but heavy
The culprit turned out to be PrestaShop's blocklayered module (renamed "Faceted Search" or "Layered Search" in more recent versions). This module is essential for any modern e-commerce site because it manages:
- Category filters (price, color, size, brand, etc.)
- Faceted product navigation
- Dynamic product counts for each filter
- Complex aggregations and sorting
The module is invoked on virtually every category, search, or product listing page—that is, on the most visited pages of an e-commerce site. Its inefficiency therefore impacts the entire user experience and, consequently, conversion rates.
The discovery: 1.8 seconds for a single query
Through systematic analysis of slow query logs and the use of Percona Toolkit (pt-query-digest), we identified the problematic query. The numbers were alarming:
- Average execution time: 1,700–1,800 ms
- Frequency: Thousands of times a day
- Total impact: Tens of minutes of wasted CPU time every hour
- Pattern: Present in every category navigation with filters
For context: in a well-optimized architecture, we expect product catalog read queries to take 10–50 ms. A nearly 2-second query is approximately 40–180 times slower than it should be.
The diagnostic approach
We could have used Application Performance Monitoring (APM) solutions like New Relic, which would have provided a holistic view of application performance. However, given the concentrated nature of the problem—a single, very frequent and very slow query—we opted for a more surgical approach:
- Enabling slow query log on Percona Server
- Data collection for 24-48 hours
- Analysis with pt-query-digest to identify patterns and normalize queries
- EXPLAIN ANALYZE to understand the execution plan
- Profiling of the specific query to identify bottlenecks
This approach allowed us to focus all resources on solving the real problem without waste.
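The last step, profiling the specific query, can be sketched as a small timing harness. What follows is an illustrative Python/SQLite sketch, not the tooling used in the actual intervention (that was Percona's slow query log plus pt-query-digest); the table and query here are placeholders.

```python
import sqlite3
import statistics
import time

def profile_query(conn, sql, params=(), runs=20):
    """Run a query several times and report avg/min/max wall time in ms."""
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        conn.execute(sql, params).fetchall()  # fetchall() forces full materialization
        timings.append((time.perf_counter() - start) * 1000.0)
    return {
        "avg_ms": statistics.mean(timings),
        "min_ms": min(timings),
        "max_ms": max(timings),
    }

# Demo on a throwaway in-memory table (a stand-in for the real catalog).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE product (id INTEGER PRIMARY KEY, price REAL)")
conn.executemany("INSERT INTO product VALUES (?, ?)",
                 [(i, i * 0.5) for i in range(1000)])

stats = profile_query(conn, "SELECT * FROM product WHERE price > ?", (100,))
print(f"avg={stats['avg_ms']:.3f} ms over 20 runs")
```

Averaging over many runs matters: a single execution is dominated by cache warm-up and noise, exactly the kind of measurement error that leads to optimizing the wrong thing.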
The Problematic Query: Anatomy of a Performance Disaster
Here is the original query generated by the blocklayered module. PrestaShop's blocklayered module (now known in more recent versions as "Faceted Search" or "Layered Search") is one of the fundamental modules for filtered product navigation within a category, a search page, or a product selection (e.g., by manufacturer, attribute, or tag).
SELECT
p.*,
product_shop.*,
stock.out_of_stock,
IFNULL(stock.quantity, 0) AS quantity,
MAX(product_attribute_shop.id_product_attribute) AS id_product_attribute,
product_attribute_shop.minimal_quantity AS product_attribute_minimal_quantity,
pl.`description`,
pl.`description_short`,
pl.`available_now`,
pl.`available_later`,
pl.`link_rewrite`,
pl.`meta_description`,
pl.`meta_keywords`,
pl.`meta_title`,
pl.`name`,
MAX(image_shop.`id_image`) AS id_image,
isecond.`id_image` AS id_image_second,
il.`legend`,
m.`name` AS manufacturer_name,
cl.`name` AS category_default,
DATEDIFF(product_shop.`date_add`, DATE_SUB(NOW(), INTERVAL 20 DAY)) > 0 AS new,
product_shop.price AS orderprice,
cp.position
FROM
`ps_category_product` cp -- Category-Product join table
LEFT JOIN
`ps_product` p ON p.`id_product` = cp.`id_product` -- Base product information
INNER JOIN
ps_product_shop product_shop
ON (
product_shop.id_product = p.id_product
AND product_shop.id_shop = 1
) -- Shop-specific product information (required)
LEFT JOIN
`ps_product_attribute` pa ON (p.`id_product` = pa.`id_product`) -- Product attributes (combinations)
LEFT JOIN
ps_product_attribute_shop product_attribute_shop
ON (
product_attribute_shop.id_product_attribute = pa.id_product_attribute
AND product_attribute_shop.id_shop = 1
AND product_attribute_shop.`default_on` = 1
) -- Shop attributes (default only)
LEFT JOIN
ps_stock_available stock
ON (
stock.id_product = p.id_product
AND stock.id_product_attribute = IFNULL(`product_attribute_shop`.id_product_attribute, 0)
AND stock.id_shop = 1
) -- Available stock
LEFT JOIN
`ps_category_lang` cl
ON (
product_shop.`id_category_default` = cl.`id_category`
AND cl.`id_lang` = 6
AND cl.id_shop = 1
) -- Default category name (language 6)
LEFT JOIN
`ps_product_lang` pl
ON (
p.`id_product` = pl.`id_product`
AND pl.`id_lang` = 6
AND pl.id_shop = 1
) -- Product descriptive data (language 6)
LEFT JOIN
`ps_image` i ON (i.`id_product` = p.`id_product`) -- All images
LEFT JOIN
ps_image_shop image_shop
ON (
image_shop.id_image = i.id_image
AND image_shop.id_shop = 1
AND image_shop.cover = 1
) -- Cover image for the shop
LEFT JOIN
`ps_image` isecond
ON (
isecond.`id_product` = p.`id_product`
AND isecond.position = 2
) -- Image in second position
LEFT JOIN
`ps_image_lang` il
ON (
image_shop.`id_image` = il.`id_image`
AND il.`id_lang` = 6
) -- Cover image caption (language 6)
LEFT JOIN
`ps_manufacturer` m ON m.`id_manufacturer` = p.`id_manufacturer` -- Manufacturer name
WHERE
product_shop.`id_shop` = 1 -- Shop filter
AND cp.`id_category` = 66 -- Specific category filter
AND product_shop.`active` = 1 -- Active products only
AND product_shop.`visibility` IN ("both", "catalog") -- Only products visible in catalog or everywhere
GROUP BY
product_shop.id_product
ORDER BY
cp.`position` ASC
LIMIT 0, 50; -- Result limit (offset 0, 50 rows)
What makes this query so slow?
Analyzing the query's `EXPLAIN`, several critical issues emerged:
1. GROUP BY on a table with many rows: The grouping operation on `product_shop.id_product` requires sorting all the intermediate rows generated by the JOINs, an O(n log n) operation on potentially large datasets.
2. Aggregate functions MAX() on JOINs: The two `MAX()` calls (on `product_attribute_shop.id_product_attribute` and `image_shop.id_image`) force the database to:
- Perform all JOINs
- Group by product
- Calculate the maximum for each group
This pattern is particularly inefficient when a product has many variations or many images.
3. Cascade of LEFT JOINs: The chain of 11 LEFT JOINs creates a very large intermediate result set. For a product with:
- 10 variants (product_attribute)
- 5 images
- Multilingual data
Potentially 50+ intermediate rows are generated which then need to be grouped.
4. Lack of filters in subqueries: There is no pre-filtering. The database must process ALL variants of ALL products before applying filters.
5. Temporary table and filesort: The `EXPLAIN` showed: `Using temporary; Using filesort`
Two signs that the database needs to create temporary tables and sort them on disk — very expensive operations.
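MySQL's `EXPLAIN` output is engine-specific, but the same red flags can be reproduced in any engine. Here is a hedged SQLite analogue (not the actual Percona output): a GROUP BY on an un-indexed column forces a temporary B-tree, SQLite's counterpart to MySQL's "Using temporary; Using filesort".

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sale (product_id INTEGER, amount REAL)")
conn.executemany("INSERT INTO sale VALUES (?, ?)",
                 [(i % 50, i * 1.0) for i in range(500)])

# EXPLAIN QUERY PLAN is SQLite's counterpart to MySQL's EXPLAIN.
plan = conn.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT product_id, SUM(amount) FROM sale "
    "GROUP BY product_id ORDER BY SUM(amount)"
).fetchall()

for row in plan:
    print(row[3])  # the human-readable detail column
# Typically reports a full scan plus "USE TEMP B-TREE" steps,
# the same pattern the article's EXPLAIN flagged.
```

Spotting these plan steps early is cheap; waiting until the query shows up in production slow logs is not.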
The hidden cost of GROUP BY
The GROUP BY in this query is particularly tricky because:
- Cannot use indexes efficiently (sorting is different from grouping)
- Requires significant memory buffers (or disk spills)
- It must process all rows before it can return even the first result.
- Prevents optimizations such as “loose index scan”
With a catalog of 10,000 products and 50,000 variants, this query could temporarily generate millions of intermediate rows.
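The combinatorial effect is easy to quantify. Using the per-product example given earlier (10 variants and 5 images, roughly 50 intermediate rows per product), a back-of-the-envelope sketch:

```python
products = 10_000
variants_per_product = 10   # per the example above
images_per_product = 5      # per the example above

# Each product row is multiplied by its variant rows and image rows
# before GROUP BY collapses them back to one row per product.
rows_per_product = variants_per_product * images_per_product
intermediate_rows = products * rows_per_product

print(f"{rows_per_product} intermediate rows per product")
print(f"{intermediate_rows:,} rows to materialize and sort before GROUP BY")
```

Half a million rows just for this pair of joins; add the remaining JOINs and language tables and the "millions of intermediate rows" figure is well within reach.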
The solution: Rewrite the query to eliminate the antipatterns
After days of analysis, benchmarking, and iterations, we arrived at this optimized (and still very, very long) query:
SELECT
p.*,
product_shop.*,
stock.out_of_stock,
IFNULL(stock.quantity, 0) AS quantity,
pa_default.id_product_attribute,
pa_default.minimal_quantity AS product_attribute_minimal_quantity,
pl.`description`,
pl.`description_short`,
pl.`available_now`,
pl.`available_later`,
pl.`link_rewrite`,
pl.`meta_description`,
pl.`meta_keywords`,
pl.`meta_title`,
pl.`name`,
img_cover.id_image AS id_image,
isecond.`id_image` AS id_image_second,
il.`legend`,
m.`name` AS manufacturer_name,
cl.`name` AS category_default,
DATEDIFF(product_shop.`date_add`, DATE_SUB(NOW(), INTERVAL 20 DAY)) > 0 AS new,
product_shop.price AS orderprice,
cp.position
FROM
`ps_category_product` cp -- Category-Product join table
INNER JOIN
`ps_product_shop` product_shop
ON (
product_shop.id_product = cp.id_product
AND product_shop.id_shop = 1
) -- Shop-specific product information
INNER JOIN
`ps_product` p
ON p.`id_product` = cp.`id_product` -- Base product information
LEFT JOIN
(
-- Subquery for the default product attribute
SELECT
pa.id_product,
pa.id_product_attribute,
pas.minimal_quantity
FROM
ps_product_attribute pa
INNER JOIN
ps_product_attribute_shop pas
ON (
pas.id_product_attribute = pa.id_product_attribute
AND pas.id_shop = 1
AND pas.`default_on` = 1
)
) pa_default ON pa_default.id_product = p.id_product
LEFT JOIN
ps_stock_available stock
ON (
stock.id_product = p.id_product
AND stock.id_product_attribute = IFNULL(pa_default.id_product_attribute, 0)
AND stock.id_shop = 1
) -- Available stock (linked to the default attribute)
LEFT JOIN
`ps_category_lang` cl
ON (
product_shop.`id_category_default` = cl.`id_category`
AND cl.`id_lang` = 6
AND cl.id_shop = 1
) -- Default category name (language 6)
LEFT JOIN
`ps_product_lang` pl
ON (
p.`id_product` = pl.`id_product`
AND pl.`id_lang` = 6
AND pl.id_shop = 1
) -- Product descriptive data (language 6)
LEFT JOIN
(
-- Subquery for the cover image
SELECT
i.id_product,
i.id_image
FROM
ps_image i
INNER JOIN
ps_image_shop image_shop
ON (
image_shop.id_image = i.id_image
AND image_shop.id_shop = 1
AND image_shop.cover = 1
)
WHERE
i.id_product IN (
SELECT
cp2.id_product
FROM
ps_category_product cp2
WHERE
cp2.id_category = 66
) -- Filter images to products in category 66 only
) img_cover ON img_cover.id_product = p.id_product
LEFT JOIN
`ps_image` isecond
ON (
isecond.`id_product` = p.`id_product`
AND isecond.position = 2
) -- Image in second position
LEFT JOIN
`ps_image_lang` il
ON (
img_cover.`id_image` = il.`id_image`
AND il.`id_lang` = 6
) -- Cover image caption (language 6)
LEFT JOIN
`ps_manufacturer` m
ON m.`id_manufacturer` = p.`id_manufacturer` -- Manufacturer name
WHERE
cp.`id_category` = 66 -- Specific category filter
AND product_shop.`active` = 1 -- Active products only
AND product_shop.`visibility` IN ("both", "catalog") -- Only products visible in catalog or everywhere
ORDER BY
cp.`position` ASC
LIMIT 50; -- Result limit (offset defaults to 0)
Anatomy of Optimizations
Let's take a closer look at what makes this query so efficient:
1. Completely remove GROUP BY
Before: GROUP BY product_shop.id_product
After: No GROUP BY
This is the most impactful change. By eliminating GROUP BY, we avoid:
- Creating temporary tables
- Intermediate sorting of potentially millions of rows
- Disk spill when memory buffers are insufficient
2. Replacing MAX() with targeted subqueries
For product attributes:
- Before: MAX(product_attribute_shop.id_product_attribute), with a JOIN that generates N rows per product
- After: a subquery that directly returns the default attribute
LEFT JOIN
(
SELECT
pa.id_product,
pa.id_product_attribute,
pas.minimal_quantity
FROM
ps_product_attribute pa
INNER JOIN
ps_product_attribute_shop pas
ON (
pas.id_product_attribute = pa.id_product_attribute
AND pas.id_shop = 1
AND pas.`default_on` = 1
)
) pa_default
ON
pa_default.id_product = p.id_product
Advantages:
- The subquery is executed once and materialized
- Returns exactly one row per product (thanks to the `default_on = 1` filter)
- MySQL can use indexes on the `default_on` column
- No aggregate function to compute
For cover images:
- Before: MAX(image_shop.id_image), with a LEFT JOIN on all images
- After: a pre-filtered subquery
LEFT JOIN
(
SELECT
i.id_product,
i.id_image
FROM
ps_image i
INNER JOIN
ps_image_shop image_shop
ON (
image_shop.id_image = i.id_image
AND image_shop.id_shop = 1
AND image_shop.cover = 1 -- Cover only
)
WHERE
i.id_product IN (
SELECT
cp2.id_product
FROM
ps_category_product cp2
WHERE
cp2.id_category = 66 -- Category pre-filter
)
) img_cover
ON
img_cover.id_product = p.id_product
Advantages:
- The `cover = 1` filter selects only the cover image up front
- Pre-filtering on the category's products drastically reduces the rows processed
- The subquery is executed first and its result is reused
- Returns exactly one row per product
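To make the equivalence concrete, here is a self-contained SQLite sketch on a toy schema that echoes ps_image / ps_image_shop (not PrestaShop's real tables), showing that the derived-table pattern returns the same cover image as the MAX() + GROUP BY pattern:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE product (id_product INTEGER PRIMARY KEY);
CREATE TABLE image (id_image INTEGER PRIMARY KEY, id_product INTEGER);
CREATE TABLE image_shop (id_image INTEGER PRIMARY KEY, cover INTEGER);
INSERT INTO product VALUES (1), (2);
-- product 1 has three images; image 11 is its cover
INSERT INTO image VALUES (10, 1), (11, 1), (12, 1), (20, 2);
INSERT INTO image_shop VALUES (10, 0), (11, 1), (12, 0), (20, 1);
""")

# Original pattern: LEFT JOIN over ALL images + MAX() + GROUP BY.
old = conn.execute("""
    SELECT p.id_product, MAX(ish.id_image)
    FROM product p
    LEFT JOIN image i ON i.id_product = p.id_product
    LEFT JOIN image_shop ish ON ish.id_image = i.id_image AND ish.cover = 1
    GROUP BY p.id_product
    ORDER BY p.id_product
""").fetchall()

# Optimized pattern: derived table that already yields one cover per product.
new = conn.execute("""
    SELECT p.id_product, img_cover.id_image
    FROM product p
    LEFT JOIN (
        SELECT i.id_product, i.id_image
        FROM image i
        JOIN image_shop ish ON ish.id_image = i.id_image AND ish.cover = 1
    ) img_cover ON img_cover.id_product = p.id_product
    ORDER BY p.id_product
""").fetchall()

print(old, new)  # same rows, no GROUP BY needed in the second query
```

The second query never multiplies product rows by image rows, so there is nothing to collapse afterwards; that is the whole point of the rewrite.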
3. Strategic pre-filtering
The image subquery includes this preliminary filter:
WHERE
i.id_product IN (
SELECT
cp2.id_product
FROM
ps_category_product cp2
WHERE
cp2.id_category = 66
)
With a catalogue of 10,000 products of which only 150 are in category 66:
- Before: processed the images of all 10,000 products (50,000+ images)
- After: processes only the images of the 150 relevant products (750 images)
A 98.5% reduction in the rows processed at this stage.
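The 98.5% figure follows directly from the numbers above (assuming, as the article's figures imply, an average of 5 images per product):

```python
total_products = 10_000
images_total = 50_000            # "50,000+ images" before the pre-filter
category_products = 150

images_per_product = images_total / total_products          # 5 on average
images_after_filter = int(category_products * images_per_product)  # 750

reduction = (1 - images_after_filter / images_total) * 100
print(f"{images_after_filter} images instead of {images_total:,}: "
      f"{reduction:.1f}% fewer rows")
```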
4. Optimized JOIN order
Before: the query started from ps_category_product with a LEFT JOIN on ps_product
After:
FROM `ps_category_product` cp INNER JOIN `ps_product_shop` product_shop ON ... INNER JOIN `ps_product` p ON ...
By using INNER JOIN for the key tables, we force the query optimizer to:
- Apply filters to the category first
- Consider only active and visible products
- Drastically reduce the dataset before performing optional LEFT JOINs
5. Materialization of subqueries
MySQL/Percona can materialize subqueries in the FROM clause, creating very small, indexable temporary tables. In our case:
- Subquery pa_default: ~500 rows (one per product with variants)
- Subquery img_cover: ~150 rows (only category products with images)
These temporary tables are so small that they remain in memory (InnoDB buffer pool) and subsequent JOINs become O(1) operations via hash join or index lookup.
The subquery paradox: simpler, faster
There's a common misconception that subqueries are always slower than direct joins. This was partially true in older database engines (MySQL 5.1 and earlier), but is no longer true in modern engines.
Why well-written subqueries are faster:
1. Reduced scope: they only process the necessary data
2. Automatic materialization: the query optimizer creates optimized temporary tables
3. Reuse: the subquery is executed once and its result is reused
4. Better use of indexes: simpler queries enable more effective indexing strategies
5. Cartesian product prevention: they avoid a combinatorial explosion of rows
In our specific case, we replaced a query with:
- 0 subqueries
- 11 LEFT JOINs
- 2 MAX()
- 1 GROUP BY
With a query that has:
- 3 subqueries (one of which is nested)
- 8 LEFT JOINs
- 2 INNER JOINs
- 0 MAX()
- 0 GROUP BY
Yet the latter is 600 times faster.
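Both headline numbers, the 600x speedup and the 99.83% improvement quoted at the start of the article, come from the same two measurements:

```python
before_ms = 1_800   # average execution time of the original query
after_ms = 3        # average execution time of the optimized query

speedup = before_ms / after_ms
improvement = (1 - after_ms / before_ms) * 100

print(f"{speedup:.0f}x faster, a {improvement:.2f}% improvement")
```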
The results: from 1,800 ms to 3 ms
The benchmarks were conducted in real-world conditions, on a production database (replicated in a staging environment), obtaining an improvement from 1.8 seconds to 0.003 seconds (3 milliseconds), as shown in the following video where we illustrate the entire testing process.
The Implementation: From Database to PHP Code
Optimizing the SQL query was only half the battle. The real challenge was integrating this query into the existing PrestaShop module while maintaining full compatibility with all the features of the block-layered module.
The blocklayered module is located in: /modules/blocklayered/blocklayered.php
The original query was in the getProductByFilters() method of the BlockLayered class, at approximately line 1996.
PHP code adaptation
The most challenging change was managing the fact that the optimized query returns the same columns but through subqueries with different aliases. We had to:
- Update query construction:

$id_shop = (int)Context::getContext()->shop->id;
$this->products = Db::getInstance(_PS_USE_SQL_SLAVE_)->executeS('
    SELECT
        p.*, ' . ($alias_where == 'p' ? '' : 'product_shop.*,') . '
        pl.`description`,
        pl.`description_short`,
        // … other explicit fields instead of pl.*
        img_cover.id_image,              // Instead of MAX(image_shop.id_image)
        pa_default.id_product_attribute, // Instead of MAX(…)
        // … rest of the query
');

- Check the compatibility of the returned fields: we had to test that all fields used later in the code were present with the same names and types.
- Managing edge cases: Products without variants, products without images, etc. The optimized query with well-placed LEFT JOINs already handles these cases, but we've added specific tests.
- Maintain pagination and sorting: the LIMIT and ORDER BY clauses must continue to work correctly with PrestaShop's dynamic parameters.
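Dynamic ORDER BY and LIMIT values cannot be bound as prepared-statement placeholders, so they must be validated explicitly. Here is a hedged sketch of the whitelist approach in Python (the real PrestaShop code is PHP; the function and field names below are illustrative, not the module's actual API):

```python
# Whitelist of sort fields the layered-navigation page may expose.
# Illustrative mapping: user-facing name -> SQL column expression.
ALLOWED_SORT_FIELDS = {
    "position": "cp.`position`",
    "price": "product_shop.`price`",
    "name": "pl.`name`",
}
ALLOWED_DIRECTIONS = {"ASC", "DESC"}

def build_order_limit(sort_field, direction, page, page_size):
    """Build a safe ORDER BY ... LIMIT fragment from user-supplied values."""
    if sort_field not in ALLOWED_SORT_FIELDS:
        raise ValueError(f"unsupported sort field: {sort_field!r}")
    if direction.upper() not in ALLOWED_DIRECTIONS:
        raise ValueError(f"unsupported direction: {direction!r}")
    offset = max(0, (int(page) - 1) * int(page_size))
    column = ALLOWED_SORT_FIELDS[sort_field]
    return f"ORDER BY {column} {direction.upper()} LIMIT {offset}, {int(page_size)}"

print(build_order_limit("position", "asc", 1, 50))
# → ORDER BY cp.`position` ASC LIMIT 0, 50
```

Anything outside the whitelist is rejected outright, which keeps the dynamic fragments both SQL-injection-safe and guaranteed to reference columns the optimized query actually exposes.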
Reflections and final considerations
Excellence requires time and method
This intervention represents one of the most interesting cases of "surgical" optimization on a system in full production. It was a precise, step-by-step process that required time, expertise, and great attention to detail. The entire process—from initial analysis to final validation—is not something that can be improvised: even for an experienced team, an activity of this type can require at least two full working days, often much more, considering the need to diagnose, test and verify every single change.
The most complex part wasn't the technical intervention itself, but rather the preliminary study phase: understanding where the bottleneck was, isolating the problematic query, and understanding the real impact of each possible change. From there, a long process of refinement began, in which expertise in SQL, in-depth knowledge of the internal workings of MySQL/Percona, and familiarity with PrestaShop's architecture proved essential.
It's not just about writing a more efficient query: you need to understand how it interacts with the PHP code, the data model, and the application's business logic. Every change must be rigorously tested, both to verify performance and to ensure it doesn't introduce regressions or unexpected behavior. Even a small oversight can cause problems in production, so testing and code quality become fundamental phases, on a par with writing the query itself.
When is it really worth it?
Such a profound intervention is not always justified. It only makes sense when specific conditions exist: significant traffic, a concrete economic impact linked to performance, and a real and measurable bottleneck that impacts user experience or conversions. In practice, it makes sense to do this when the most common optimizations have already been applied and the only room for improvement lies within the core of the database and the application code.
In the case of inventivashop.com, these conditions were all present. The site handled a significant volume of traffic, with a high value per conversion. The problem had been precisely identified thanks to monitoring, and the infrastructure had already been optimized in every other respect. In this context, rewriting the most critical query was not only a sensible choice, but the only way to achieve a tangible and lasting improvement.
Alternatives and lessons learned
Before arriving at the rewrite, several approaches were considered, but all had obvious limitations. More superficial solutions might have masked the problem without actually solving it, while a more in-depth database intervention promised structural results, capable of improving long-term performance.
The most important lesson is that you can't optimize blindly: every decision must be based on concrete data, collected with profiling and slow-query analysis tools. In many cases, a single query is responsible for the majority of the CPU load or overall slowdown, and identifying that critical point makes the difference between an efficient system and one that constantly struggles.
We also learned that subqueries, often demonized, can be more efficient than complex JOINs when written with care and filtered correctly. Indexes, too, remain one of the most decisive factors: the best query in the world becomes slow if the database cannot access the data in the right way.
Finally, no result of this kind is sustainable without careful documentation. In our case, all the work was tracked and commented on in over twenty pages of technical documentation, ensuring that every change was understandable and replicable in the future.
Sustainability over time
Once the result is achieved, the work does not stop. An optimization of this level must be constantly monitored to promptly identify any regressions or performance changes. The optimized query has been versioned and documented in the project repository, and a comprehensive testing process is performed before any PrestaShop or database update. Furthermore, the internal development team has been trained to fully understand the new implementation, so they can maintain and adapt it over time.
Conclusions
Optimizing a single SQL query on inventivashop.com produced a dramatic improvement, reducing execution time from 1,800 milliseconds to just 3 milliseconds, an efficiency gain of 99.83%. But more than the numbers, what matters is the effect on the user experience: instant loading, Core Web Vitals fully in the "Good" zone, and significantly reduced server resource usage.
This kind of result does not come by chance. It requires time, method and very specific skills, but above all, a data- and precision-oriented mindset. This isn't an approach to be applied to every project, nor is it the first option when performance issues arise. However, when all other optimizations have already been exhausted and a single bottleneck remains slowing down the entire system, a thorough query analysis can make the difference between a "good" and an "exceptional" site.
In our case, the investment of time and resources was fully repaid: inventivashop.com has consolidated its position on search engines for competitive keywords, improved the user experience for thousands of customers, and seen a direct impact on conversions. The real key to success was the combination of a data-driven approach, strong technical expertise, and the willingness to dedicate the time necessary to do things the right way.
Note: This case study represents a real-world intervention on a production system. The results were measured and validated in a real-world environment, but performance may vary based on specific hardware configuration, software version, database size, and usage patterns.