What are Core Web Vitals and how do different tools measure them?

Josh Sciortino

Associate Director, Web Experience

1st November 2022

~ 9 min read

Google announced its Core Web Vitals page speed and experience metrics in 2020, and began using them as direct ranking factors with the Page Experience update in 2021. This has proven greatly beneficial within the field of technical SEO, as Core Web Vitals are the most direct measures of true web experience that we’ve seen in a long time. However, they have also brought some difficulty in understanding how they are measured and reported on.

In this blog, we’ll explain the ins and outs of Core Web Vitals, explore how they are measured, and discuss how different tools report them differently.

What are Core Web Vitals?

Google launched three new metrics, called Core Web Vitals, in 2020. They represent the best and most direct measures of real user experiences of page speed and page render that the industry has. That’s because they’re collected directly from Chrome, and the events they measure are truer benchmarks of actual user experiences, as detailed below. Additionally, they are aggregated, so that sites can be ranked on aggregate experience rather than biased sample sizes.

Each of the three Core Web Vitals has its own ranges for what qualifies as a Good, Needs Improvement, or Poor score.

LCP: Largest Contentful Paint

Largest Contentful Paint (LCP) refers to the total amount of time it takes from when the page first starts loading to the paint/render of the largest content item on the page, which is usually a hero image or a video if present.

It’s important to note that this isn’t just the time it takes to paint the largest content item itself; the clock starts at the beginning of the page render. So, this score will be impacted by anything render-blocking (in the critical render path) before this item loads, which is why render-blocking code is a common cause of Poor LCP scores.

Score ranges:

  • Good: < 2.5 seconds
  • Needs Improvement: 2.5 - 4.0 seconds
  • Poor: > 4.0 seconds
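
If you’d like to watch LCP fire in your own browser, the standard PerformanceObserver API exposes the same underlying events Chrome reports. Here’s a minimal sketch you could paste into a browser console or a bundled script:

```typescript
// Minimal sketch: watching LCP candidates with the standard PerformanceObserver API.
// Chrome emits a new entry each time a larger content item paints; the final
// entry before the first user interaction is the LCP value that gets reported.
const lcpObserver = new PerformanceObserver((entryList) => {
  for (const entry of entryList.getEntries()) {
    // startTime is the render time of the candidate element, in milliseconds
    // from the start of the page load.
    console.log(`LCP candidate rendered at ${entry.startTime.toFixed(0)} ms`);
  }
});

lcpObserver.observe({ type: 'largest-contentful-paint', buffered: true });
```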

FID: First Input Delay

First Input Delay (FID) is the time delay between the first user input (e.g., a mouse click or a mouseover of an on-hover interactive element) and when the browser is able to begin processing and responding to this input (e.g., by changing the color of the link or triggering the on-hover behavior). If the page is still rendering when the first user input happens, the FID will be longer.

This is the only Core Web Vitals metric that can be collected exclusively via aggregate Field metrics; it isn’t measurable in Lab tools, as it requires real human input/interaction. Later in this blog, we’ll explore the difference between Field and Lab measurement.

Score ranges:

  • Good: < 100 milliseconds
  • Needs Improvement: 100 - 300 milliseconds
  • Poor: > 300 milliseconds
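
While Lab tools can’t simulate FID, you can observe it for your own real session in the browser. A minimal sketch using the standard 'first-input' performance entry type:

```typescript
// Minimal sketch: measuring FID for the current session via PerformanceObserver.
// FID is the gap between when the user first interacted and when the browser's
// main thread was free to begin handling that event.
const fidObserver = new PerformanceObserver((entryList) => {
  for (const entry of entryList.getEntries()) {
    // 'first-input' entries are event-timing entries with a processingStart timestamp.
    const firstInput = entry as PerformanceEventTiming;
    const fid = firstInput.processingStart - firstInput.startTime;
    console.log(`FID: ${fid.toFixed(1)} ms`);
  }
});

fidObserver.observe({ type: 'first-input', buffered: true });
```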

CLS: Cumulative Layout Shift

Cumulative Layout Shift (CLS) is a ratio (not a time) measuring how much page elements shift (move around) during page render. It’s reported as a decimal fraction (e.g., 0.25) rather than a percentage of shift (e.g., 25%), but you can think of it as a percentage of shift for ease.

CLS is calculated by multiplying the fraction of the viewport’s area that a shifting element affects across its shifts during render (the impact fraction) by the distance it travels during the shift, as a fraction of the viewport (the distance fraction).

Score ranges:

  • Good: < 0.10
  • Needs Improvement: 0.10 - 0.25
  • Poor: > 0.25

To better understand how CLS is measured, let’s use a hypothetical example. If a full-width element shifts from the top of the viewport to halfway down it, its impact area is roughly 50% (a rough estimate, since we haven’t given the element’s height), and the distance it travels is also roughly 50% of the viewport. Multiplying 0.50 by 0.50 gives a CLS score of 0.25, which sits right at the Poor threshold.
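
In the browser, each individual shift’s impact × distance product is exposed as the value on a 'layout-shift' performance entry. Here’s a simplified sketch that accumulates them; note that Chrome’s reported CLS actually takes the worst “session window” of shifts rather than a simple lifetime sum, so treat this as an illustration:

```typescript
// Simplified sketch: summing layout shifts via PerformanceObserver.
// Each entry's `value` is that shift's impact fraction × distance fraction.
let cls = 0;

const clsObserver = new PerformanceObserver((entryList) => {
  for (const entry of entryList.getEntries()) {
    // Cast needed: LayoutShift isn't in TypeScript's default DOM typings.
    const shift = entry as PerformanceEntry & { value: number; hadRecentInput: boolean };
    // Shifts that happen right after user input are excluded from CLS.
    if (!shift.hadRecentInput) {
      cls += shift.value;
      console.log(`Shift of ${shift.value.toFixed(3)} → running CLS: ${cls.toFixed(3)}`);
    }
  }
});

clsObserver.observe({ type: 'layout-shift', buffered: true });
```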

This is the only Core Web Vitals metric that’s about page experience rather than page speed. It’s Google’s attempt to penalize the commonly frustrating experience where a user tries to click on something and, because the page is still rendering, the element moves just as they click, causing them to click on something else unintentionally.

Other metrics

Historically, a myriad of different metrics have been used to measure different phases of page render, with the intent of understanding what users are really experiencing. They’ve included metrics like:

  • Time to First Byte (TTFB) - the initial server response time
  • First Contentful Paint (FCP) - the time to paint/render the first content item (text or image) on the page
  • DOM Content Loaded (DCL) - the time for the Document Object Model to be completely loaded and parsed by the browser

However, these metrics were considered imperfect, as they’re only rough gauges of the page loading experience and time. Core Web Vitals represent the most direct measures of the true user experience. Along with these ‘legacy’ metrics, Core Web Vitals are collected and aggregated directly from Chrome sessions in the Chrome User Experience Report (CrUX), and given Chrome’s large browser market share, they form a highly representative sample of true user experience across all sessions to your site.
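
Rather than wiring up PerformanceObserver by hand for each metric, Google maintains an open-source web-vitals package that wraps these patterns, legacy metrics included. A hedged sketch using what is, at the time of writing, its v3 API (function names may differ across major versions):

```typescript
// Sketch using Google's `web-vitals` package (v3 API) to log all five metrics.
// Each callback fires with a Metric object containing the measured value and a
// 'good' | 'needs-improvement' | 'poor' rating based on the thresholds above.
import { onTTFB, onFCP, onLCP, onFID, onCLS, Metric } from 'web-vitals';

function report(metric: Metric): void {
  console.log(`${metric.name}: ${metric.value.toFixed(2)} (${metric.rating})`);
  // In production, you'd typically beacon this to your own analytics endpoint.
}

onTTFB(report);
onFCP(report);
onLCP(report);
onFID(report);
onCLS(report);
```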

How are Core Web Vitals measured?

In a later article, I’ll outline my process for auditing Core Web Vitals, but that process relies heavily on understanding how Core Web Vitals are measured, and how different tools and collection methods report on them.

We must first understand how Google aggregates its Field measurements, and how they report on that data in various ways.

Lab vs. Field measurements

The metrics in question are the same across both Lab and Field measurements - the three Core Web Vitals - but how they are measured can yield different scores. For those who aren’t already familiar with this distinction, I like to think of it the way an anthropologist would.

Lab metrics are scores measured ad hoc by tools like Pagespeed Insights at pagespeed.web.dev or its application programming interface (API). These are measured just once each time you run the tool, in ‘lab conditions’ - meaning they are susceptible to whatever internet connection you’re on when running it, how many browser plugins you have, and whether your roommate or partner is streaming the latest Netflix binge in the other room. Because these are ad hoc and not aggregate scores, they can vary wildly from one ‘lab test’ to the next, and thus should only be used as a diagnostic tool rather than an actual benchmark.

Field metrics are the scores that real users experience in their Chrome browsers out in the wild, or ‘in the field.’ This is akin to an anthropologist observing users as they live in the wild and visit your website. Since they’re real-world measurements, these are susceptible to variations in different users’ internet connections (WiFi vs 3G) and locations (at home, or far from a cell tower driving through a tunnel). But overall, they average out to the most accurate aggregate picture of the true experiences your users are getting. These are the metrics, via the CrUX report, that Google uses to score your site, based on the 75th percentile.

CrUX vs Pagespeed Insights vs Google Search Console

One of the most common questions we receive from clients is: ‘What’s the difference between CrUX, Pagespeed Insights and Google Search Console?’ Marketers are also often unsure which of these tools reports on what, and why they report on things differently, given they’re all from Google. Here’s a quick and easy summary.

CrUX

The Chrome User Experience Report (CrUX) collects aggregate Field data from Chrome users, and it is where Google pulls Core Web Vitals Field data for your site. As a result, CrUX is largely synonymous with the terms Core Web Vitals, Real User Monitoring (RUM), and Field data.
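
You can also query CrUX data directly. A hedged sketch against the public CrUX API (this assumes you’ve created an API key in Google Cloud; the shape below reflects the v1 API, which returns a histogram and a 75th-percentile value per metric):

```typescript
// Sketch: pulling p75 Core Web Vitals for an origin from the CrUX API (v1).
const CRUX_ENDPOINT = 'https://chromeuxreport.googleapis.com/v1/records:queryRecord';

async function logCruxP75(origin: string, apiKey: string): Promise<void> {
  const response = await fetch(`${CRUX_ENDPOINT}?key=${apiKey}`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    // Use { url: '...' } instead of { origin } for page-level rather than site-level data.
    body: JSON.stringify({ origin }),
  });
  const { record } = await response.json();
  // Each metric exposes a histogram (Good/NI/Poor densities) and a p75 value.
  for (const [metricName, metric] of Object.entries<any>(record.metrics)) {
    console.log(`${metricName}: p75 = ${metric.percentiles.p75}`);
  }
}

logCruxP75('https://www.example.com', 'YOUR_API_KEY').catch(console.error);
```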

Pagespeed Insights

Pagespeed Insights is a Lab tool, but it does include Field metrics at the top of the report.

The top section of the report shows the aggregated Field metrics from CrUX.

The bottom section shows the Lab test results. Remember that these scores can vary wildly between tests, and are only meant to be used as a diagnostic tool. These scores are not used for search rankings.
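
That same Field/Lab split is visible in the Pagespeed Insights API response: Field (CrUX) data arrives under loadingExperience, and Lab data under lighthouseResult. A hedged sketch against the v5 API (no key is required for low volumes, though one raises your quota):

```typescript
// Sketch: fetching both Field (CrUX) and Lab (Lighthouse) data from the
// Pagespeed Insights API (v5) for a single URL.
const PSI_ENDPOINT = 'https://www.googleapis.com/pagespeedonline/v5/runPagespeed';

async function logPsi(url: string): Promise<void> {
  const response = await fetch(`${PSI_ENDPOINT}?url=${encodeURIComponent(url)}&strategy=mobile`);
  const data = await response.json();

  // Field section: aggregated CrUX data - the part Google actually ranks on.
  console.log('Field (CrUX):', data.loadingExperience?.metrics);

  // Lab section: a one-off Lighthouse run, diagnostic only.
  console.log('Lab LCP:', data.lighthouseResult?.audits['largest-contentful-paint']?.displayValue);
}

logPsi('https://www.example.com').catch(console.error);
```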

Google Search Console

Google Search Console primarily provides high-level reporting on whether your URLs are Good, Needs Improvement, or Poor, aggregating the three Core Web Vitals scores into one on a per-URL basis.

The way it does that is simple: whichever of the three Core Web Vitals scores lowest for a given URL, that’s the score the URL is assigned in the Search Console report. Therefore, if your homepage has a Good LCP and a Good FID (common), but a Poor CLS, then the homepage will be assigned a Poor in the Core Web Vitals report in Search Console.
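
In code terms, the bucketing rule is just ‘worst score wins.’ A toy sketch with hypothetical inputs:

```typescript
// Toy sketch of Search Console's per-URL bucketing: a URL inherits the
// worst rating among its three Core Web Vitals.
type Rating = 'Good' | 'Needs Improvement' | 'Poor';

const SEVERITY: Rating[] = ['Good', 'Needs Improvement', 'Poor'];

function urlBucket(lcp: Rating, fid: Rating, cls: Rating): Rating {
  // The highest severity index among the three metrics wins.
  const worst = Math.max(...[lcp, fid, cls].map((r) => SEVERITY.indexOf(r)));
  return SEVERITY[worst];
}

console.log(urlBucket('Good', 'Good', 'Poor')); // → 'Poor'
```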

That’s why you tend to see a lot more red and yellow in the Search Console report than you may see in the CrUX color bars at the top of Pagespeed Insights.

Again, it’s important to note the way in which Search Console aggregates the already-aggregated Core Web Vitals scores.

Only further down the report does it show how many URLs fall into each bucket, and which of the Core Web Vitals is dragging them into that bucket.

The 75th Percentile Threshold

If you run a test in Pagespeed Insights, at the top of the results you’ll see the ‘Discover what your real users are experiencing’ section - this is the Field metrics section. These aren’t Lab scores.

Notice at the top right, there’s a switcher for ‘This URL’ vs ‘Origin’ – the latter refers to the site as a whole. The Field scores for the whole Origin (domain) are what’s aggregated into Google Search Console.

When you click Expand View, you’ll notice a green-yellow-red bar with a pin on it. That pin sits exactly 75% of the way along the color bar, marking the score that falls at the 75th percentile of all page loads. That’s because the 75th percentile is the threshold Google uses to place you in the Good, Needs Improvement, or Poor bucket.

You want at least 75% of page loads to score in the Good range in order to achieve a Good score for the test page or the Origin domain as a whole. 

In the example shown below, the 75th percentile falls at 2.6 seconds, which for LCP is just outside the Good range. Thus, this page is given a Needs Improvement score.
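
To make the threshold concrete, here’s a small sketch that computes a 75th percentile from hypothetical LCP samples and maps it to a bucket using the ranges given earlier in this article:

```typescript
// Sketch: computing a 75th-percentile LCP from hypothetical page-load samples
// (in seconds) and bucketing it with the LCP thresholds from this article.
function percentile(samples: number[], p: number): number {
  const sorted = [...samples].sort((a, b) => a - b);
  const index = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[index];
}

function lcpBucket(p75: number): string {
  if (p75 < 2.5) return 'Good';
  if (p75 <= 4.0) return 'Needs Improvement';
  return 'Poor';
}

const lcpSamples = [1.9, 2.1, 2.2, 2.4, 2.4, 2.6, 2.6, 3.8]; // hypothetical page loads
const p75 = percentile(lcpSamples, 75);
console.log(`p75 = ${p75}s → ${lcpBucket(p75)}`); // p75 = 2.6s → Needs Improvement
```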

As an important side note, you’ll notice that it says ‘page loads’, not ‘users’ or ‘sessions’. This is because Chrome measures every page load on your site, whichever pages users visit. That means higher-traffic pages have a higher weighted impact on your Core Web Vitals scores, so you’ll want to focus your optimization efforts on those pages.

As a result of this aggregation, CrUX data is not available for every page on your site: lower-traffic pages simply haven’t received enough Chrome user sessions for Google to collect and aggregate data.

This is why it’s important to catch the ‘insufficient real-user data’ note in Pagespeed Insights: in those cases, you will be shown origin-level data instead of URL-level CrUX data for the URL you’ve input. When using the Pagespeed Insights API, for example in Screaming Frog, you’ll just get a blank result.

Summary

Core Web Vitals represent the most direct measures of real user experience used for ranking in SEO today. The data is collected from real user sessions in the Chrome browser, and made available in the CrUX report. Each of the three Core Web Vitals, in addition to being measured differently, has its respective ranges to determine whether a page scores Good, Needs Improvement, or Poor. 

However, at least 75% of page loads to a page must score Good (under real-world Chrome sessions) in order for that page to earn a Good score. Google Search Console groups the three Core Web Vitals together for easier high-level reporting, and assigns each URL the score of the lowest-scoring of the three. That said, Search Console does further break down which Core Web Vital is contributing to a given Poor or Needs Improvement score.

In the next blog of this series, we will explore how marketers can diagnose and fix Google's Core Web Vitals. If you’re interested in learning more about Core Web Vitals, please get in touch with our Organic performance team.
