Why Does the Core Web Vitals Assessment Fail When PSI Performance Is Good?

In the second part of our blog series, we explore why PageSpeed Insights (PSI) can show good performance in lab data while the Core Web Vitals Assessment still fails.

What is the Difference Between Field Data and Lab Data?

Lab data for web vitals is collected in a controlled environment using emulated network and device conditions, while field data is collected from real users in real-world conditions. Lab data is useful for debugging and optimization, while field data reflects the actual user experience. Field data is also what matters for SEO: Google's Core Web Vitals ranking signal is based on field data, while lab data does not directly affect rankings.

What Causes the Discrepancies? Let’s Look at It Step by Step!

Network Speed

Lab tests run under controlled conditions, simulating a slow 4G connection (~1.6 Mbps) on mobile and a slower wired connection (~10 Mbps) on desktop. In reality, better speeds are often available, but real-world conditions vary greatly: extremely slow connections can occur in underground garages, crowded places, large events, on highways, on trains, and so on, significantly affecting average speed.

It’s also important that during tests, only the webpage loads, while in real life many other applications generate background data traffic (email clients, Facebook, TikTok, other social media preloads, location services, etc.), slowing down the page load.
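Because real connection speeds vary so much, one mitigation is to adapt what the page loads to the connection at hand. A minimal sketch, using the (Chrome-only) Network Information API; `chooseQuality` is a hypothetical helper, not part of PSI or any library:

```javascript
// Sketch: pick an image quality tier from the Network Information API's
// effectiveType. chooseQuality() is an illustrative helper (assumption),
// not a real library function.
function chooseQuality(effectiveType) {
  // effectiveType values defined by the spec: 'slow-2g', '2g', '3g', '4g'
  switch (effectiveType) {
    case 'slow-2g':
    case '2g':
      return 'low';    // tiny, heavily compressed images
    case '3g':
      return 'medium';
    default:
      return 'high';   // '4g', or the API is unavailable
  }
}

// Browser usage sketch (navigator.connection is undefined in
// Firefox/Safari, hence the fallback):
// const tier = chooseQuality(
//   (navigator.connection && navigator.connection.effectiveType) || '4g');
```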


CPU Performance

Although Google intentionally uses a lower mid-range device for tests, its CPU time is devoted entirely to loading the page. On real devices, it’s rare for only the browser to be running when a page opens, so CPU performance is shared among the running applications, leading to slower rendering in reality.

Screen Size

Different screen sizes in real life can also alter page layouts. This can cause display issues, but more importantly, the element measured as LCP on a real device may differ from the one identified in the tests. Consequently, the wrong image might be preloaded, or the LCP image might be lazy-loaded on some devices; these issues don’t show up in tests because the test screen sizes are fixed.
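One way to keep the preload in sync with the layout is to choose the preload candidate from the viewport width. A minimal sketch; the breakpoints and file names are made-up examples and must match the CSS that actually decides which hero variant renders:

```javascript
// Sketch: pick which image to <link rel="preload"> based on viewport
// width. Breakpoints and file names are illustrative assumptions.
function lcpPreloadCandidate(viewportWidth) {
  if (viewportWidth < 768) return 'hero-mobile.webp';  // mobile layout
  if (viewportWidth < 1200) return 'hero-tablet.webp'; // tablet layout
  return 'hero-desktop.webp';                          // desktop layout
}

// Browser usage sketch: preload the variant that will actually become
// the LCP element on this device.
// const link = document.createElement('link');
// link.rel = 'preload';
// link.as = 'image';
// link.href = lcpPreloadCandidate(window.innerWidth);
// document.head.appendChild(link);
```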

Different screen sizes often require different image sizes, which can be problematic if images are only optimized for test device sizes. In other sizes, the original image size might be loaded, slowing down real page loads compared to tests.
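The standard fix is a srcset attribute, which lets the browser pick an image sized for the actual device instead of falling back to the original. A minimal sketch; `buildSrcset` and the naming scheme are illustrative assumptions:

```javascript
// Sketch: build a srcset string from a list of generated widths.
// buildSrcset() and the "-{width}w.webp" naming scheme are assumptions,
// not a real API.
function buildSrcset(basePath, widths) {
  return widths
    .map((w) => `${basePath}-${w}w.webp ${w}w`)
    .join(', ');
}

// Example:
// buildSrcset('/img/hero', [480, 960])
// → '/img/hero-480w.webp 480w, /img/hero-960w.webp 960w'
```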

User Interaction

No user interaction occurs during tests, while in real life, users scroll and click, loading completely different elements than during tests. For example, if all images are lazy-loaded, only above-the-fold images load during tests. In real life, users start scrolling, and the browser begins loading images further down. Similarly, if users start a video immediately, it doesn’t load during the test.
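The asymmetry above is exactly why the eager/lazy split matters: lab runs never scroll, so below-the-fold loads stay invisible to the test, while real users trigger them almost immediately. A minimal decision sketch; `shouldLazyLoad` and the margin value are illustrative assumptions:

```javascript
// Sketch: decide whether an image should load eagerly or lazily.
// Elements in (or near) the first viewport — including the likely LCP
// image — load eagerly; everything further down is lazy-loaded.
// shouldLazyLoad() and the 200px margin are assumptions.
function shouldLazyLoad(elementTop, viewportHeight, margin = 200) {
  return elementTop > viewportHeight + margin;
}

// Browser usage sketch:
// img.loading = shouldLazyLoad(img.getBoundingClientRect().top,
//                              window.innerHeight) ? 'lazy' : 'eager';
```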

It’s important to note that CLS (Cumulative Layout Shift) pertains to the entire page lifecycle, but no scrolling or other user interaction occurs during tests. Therefore, some CLS issues remain hidden in tests but cause problems in field data.
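A common guard against scroll-triggered shifts is to reserve space for late-loading elements (ads, embeds, lazy images) before they arrive. A minimal sketch, assuming known slot dimensions; `reservedAspectRatio` is an illustrative helper:

```javascript
// Sketch: compute a CSS aspect-ratio value for a late-loading slot so
// its content cannot shift the layout when it arrives below the fold.
// reservedAspectRatio() is an assumption, not a real API.
function reservedAspectRatio(width, height) {
  return `${width} / ${height}`; // valid CSS aspect-ratio syntax
}

// Browser usage sketch, e.g. for a 300x250 ad slot:
// slot.style.aspectRatio = reservedAspectRatio(300, 250);
```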


Many cache plugins try to game the tests. Instead of genuine JavaScript optimization, they simply don’t load the scripts until the first user interaction. This means scripts don’t load during tests at all, resulting in less data transferred and simpler, faster rendering. In real user scenarios, however, this isn’t acceptable, as functionality and analytics scripts are necessary.

Real users are likely to interact with the page almost immediately, so delayed JavaScript won’t speed up the page in real life.

Moreover, delaying JavaScript can even increase CLS, worsening field data results. A common symptom is the double-click error, where users need to click twice on a video or a mobile menu to activate it.
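The delay-until-first-interaction trick can be sketched in a few lines. `runOnFirstInteraction` is a hypothetical helper illustrating the pattern, not any specific plugin's code; `target` is any EventTarget-like object:

```javascript
// Sketch of the delay-until-first-interaction pattern: scripts are
// injected only after the first user gesture, so lab tests (which never
// interact) never pay their cost. runOnFirstInteraction() is an
// illustrative assumption.
function runOnFirstInteraction(target, loadScripts) {
  const events = ['keydown', 'mousedown', 'touchstart', 'scroll'];
  const fire = () => {
    // Run at most once: detach all listeners before loading.
    events.forEach((e) => target.removeEventListener(e, fire));
    loadScripts();
  };
  events.forEach((e) => target.addEventListener(e, fire, { passive: true }));
}

// Browser usage sketch:
// runOnFirstInteraction(window, () => {
//   /* inject the deferred <script> tags here */
// });
```

Note the downside described above: until that first gesture fires, none of the deferred functionality exists, which is where double-click errors come from.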

What Makes Swift Performance AI Different?

Swift Performance AI automatically detects the user’s internet speed and adapts the webpage loading accordingly. Additionally, Swift’s frontend is lightweight, not using JavaScript for lazy-loading or image loading, and creates perfect critical CSS for optimal CPU usage.

Swift Performance AI can generate appropriate image sizes for all device sizes and recognize and preload the LCP image if present.

It also has an automatic CLS fixer, addressing not only JavaScript-induced CLS but also correcting other potential CLS issues.

Some scripts can and should be delayed, especially those required only after user interaction (e.g., swiper, image lightbox, magnifier, compatibility scripts). Other scripts are loaded in the background at a lower priority, so they don’t slow rendering but genuinely speed up page loads.

Furthermore, Swift AI incorporates Real User Session Monitoring technology from BugMonitor, eliminating double-click issues.