PageSpeed Insights (PSI) reports on the user experience of a page on both mobile and desktop devices, and provides suggestions on how that page may be improved. It does this by analyzing both lab and real‑world data.
PSI provides both lab and field data about a page. Lab data is useful for debugging issues, as it is collected in a controlled environment. However, it may not capture real‑world bottlenecks. Field data is useful for capturing true, real‑world user experience, but has a more limited set of metrics.
Real‑user experience data in PSI is powered by the Chrome User Experience Report (CrUX) dataset. PSI reports real users’ First Contentful Paint (FCP), Interaction to Next Paint (INP), Largest Contentful Paint (LCP), and Cumulative Layout Shift (CLS) experiences over the previous 28‑day collection period. PSI also reports experiences for the experimental metric Time to First Byte (TTFB).
Make your web pages fast on all devices
PageSpeed Insights tells you how your page performs on both mobile and desktop, and how it compares to other pages.
PSI uses the Chrome User Experience Report to show how real‑world users experience your page, and lab data generated by Lighthouse to show how to improve your page’s performance.
Real‑user experience data
In order to show user experience data for a given page, there must be sufficient data for it to be included in the CrUX dataset. A page might not have sufficient data if it has been recently published or has too few samples from real users. When this happens, PSI will fall back to origin‑level granularity, which encompasses all user experiences on all pages of the website. Sometimes the origin may also have insufficient data, in which case PSI will be unable to show any real‑user experience data.
PSI classifies the quality of user experiences into three buckets: Good, Needs Improvement, or Poor. PSI sets thresholds in alignment with the Web Vitals initiative for First Contentful Paint, Largest Contentful Paint, Cumulative Layout Shift, Interaction to Next Paint, and Time to First Byte (experimental).
PSI presents a distribution of these metrics so that developers can understand the range of experiences for that page or origin. This distribution is split into three categories: Good, Needs Improvement, and Poor, which are represented by green, amber, and red bars. Above the distribution bars, PSI reports the 75th percentile for all metrics, so that developers can understand the most frustrating user experiences on their site.
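As a sketch of how this bucketing works, the function below classifies a 75th‑percentile value against the published Web Vitals thresholds (for example, LCP is Good at 2.5 seconds or less and Poor above 4 seconds). The threshold table comes from the Web Vitals documentation; the function name and shape are illustrative, not part of PSI’s response format.

```javascript
// Web Vitals thresholds: [Good upper bound, Needs Improvement upper bound].
// Values are in milliseconds, except CLS, which is unitless.
const THRESHOLDS = {
  FCP: [1800, 3000],
  LCP: [2500, 4000],
  INP: [200, 500],
  CLS: [0.1, 0.25],
  TTFB: [800, 1800], // experimental
};

// Classify a 75th-percentile value into PSI's three buckets.
function classify(metric, value) {
  const [good, needsImprovement] = THRESHOLDS[metric];
  if (value <= good) return 'Good';
  if (value <= needsImprovement) return 'Needs Improvement';
  return 'Poor';
}

console.log(classify('LCP', 2300)); // Good
console.log(classify('CLS', 0.15)); // Needs Improvement
```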
Core Web Vitals are a common set of performance signals critical to all web experiences. The Core Web Vitals metrics are INP, LCP, and CLS, and they may be aggregated at either the page or origin level. For aggregations with sufficient data in all three metrics, the aggregation passes the Core Web Vitals assessment if the 75th percentiles of all three metrics are Good. Otherwise, the aggregation does not pass the assessment. If the aggregation has insufficient data for INP, then it will pass the assessment if both the 75th percentiles of LCP and CLS are Good. If either LCP or CLS have insufficient data, the page or origin‑level aggregation cannot be assessed.
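The assessment rules above can be expressed compactly. In this sketch, each metric is passed in as its 75th‑percentile rating, with `null` standing in for insufficient data; the function name and input shape are assumptions for illustration.

```javascript
// Assess Core Web Vitals for a page- or origin-level aggregation.
// Each input is 'Good', 'Needs Improvement', 'Poor', or null when the
// aggregation has insufficient data for that metric.
function assessCoreWebVitals({ lcp, cls, inp }) {
  // Without LCP or CLS data, the aggregation cannot be assessed at all.
  if (lcp === null || cls === null) return 'Cannot be assessed';
  // With insufficient INP data, only LCP and CLS are considered.
  const metrics = inp === null ? [lcp, cls] : [lcp, cls, inp];
  return metrics.every((m) => m === 'Good') ? 'Passes' : 'Does not pass';
}

console.log(assessCoreWebVitals({ lcp: 'Good', cls: 'Good', inp: 'Good' })); // Passes
console.log(assessCoreWebVitals({ lcp: 'Good', cls: 'Good', inp: null })); // Passes
```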
Differences between Field Data in PSI and CrUX
The difference between the field data in PSI versus the CrUX dataset on BigQuery is that PSI’s data is updated daily, while the BigQuery dataset is updated monthly and limited to origin‑level data. Both data sources represent trailing 28‑day periods.
Lab diagnostics
PSI uses Lighthouse to analyze the given URL in a simulated environment for the Performance, Accessibility, Best Practices, and SEO categories.
At the top of the section are scores for each category, determined by running Lighthouse to collect and analyze diagnostic information about the page. A score of 90 or above is considered good. 50 to 89 is a score that needs improvement, and below 50 is considered poor.
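These score bands map directly to a small lookup; a minimal sketch, with an illustrative function name:

```javascript
// Map a 0-100 Lighthouse category score to its rating band.
function scoreLabel(score) {
  if (score >= 90) return 'Good';
  if (score >= 50) return 'Needs Improvement';
  return 'Poor';
}

console.log(scoreLabel(92)); // Good
```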
The Performance category also reports the page’s performance on several metrics, including: First Contentful Paint, Largest Contentful Paint, Speed Index, Cumulative Layout Shift, Time to Interactive, and Total Blocking Time. Each metric is scored and labeled with an icon: Good is indicated with a green circle, Needs Improvement with an amber informational square, and Poor with a red warning triangle.
Within each category are audits that provide information on how to improve the page’s user experience. See the Lighthouse documentation for a detailed breakdown of each category’s audits.
Lab conditions and score interpretation
Currently, Lighthouse simulates the page load conditions of a mid‑tier device (Moto G4) on a mobile network for mobile, and an emulated desktop with a wired connection for desktop. PSI runs from Google datacenters, and results can vary based on network conditions; you can check the location where the test was run in the Lighthouse report’s environment block.
Any green score (90+) is considered good, but note that having good lab data does not necessarily mean real‑user experiences will also be good. The performance score can change from run to run due to variability in performance measurement introduced via local network availability, client hardware availability, and client resource contention.
If you have a question about using PageSpeed Insights that is specific and answerable, you can ask it in English on Stack Overflow. If you have general feedback or questions about PageSpeed Insights, or you want to start a general discussion, you can start a thread in the mailing list. If you have general questions about the Web Vitals metrics, you can start a thread in the web‑vitals‑feedback discussion group.
PageSpeed Insights API
The PageSpeed Insights API helps measure webpage performance and provides improvement suggestions for performance, accessibility, and SEO. It offers both real‑world data from the Chrome User Experience Report and lab data from Lighthouse for comprehensive insights.
The API can be used with or without an API key, although a key is recommended for frequent, automated queries. Developers can access the API through an explorer, command‑line tools like cURL, or JavaScript for easy integration.
You can use the PageSpeed Insights API to measure the performance of a web page and get suggestions on how to improve the page’s performance, accessibility, and SEO. The API returns real‑world data from the Chrome User Experience Report and lab data from Lighthouse.
- API Explorer: To make calls to the PageSpeed Insights API without writing any code, you can use the API Explorer.
- Using an API key: You can get an API key on the Credentials page; after you have an API key, your application can append the query parameter key=yourAPIKey to all request URLs.
- Try it with cURL: You can try out the PageSpeed Insights API from the command line using a cURL request against https://www.googleapis.com/pagespeedonline/v5/runPagespeed with your URL and API key.
- Use it in JavaScript: You can fetch PageSpeed Insights data in JavaScript and then display the CrUX and Lighthouse metrics on your page.
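Putting the cURL and JavaScript options together, the sketch below builds a v5 request URL and reads two values from the response: the field (CrUX) 75th‑percentile LCP and the Lighthouse performance score. The endpoint and response field names follow the v5 API; the helper names and the API key placeholder are illustrative.

```javascript
const API = 'https://www.googleapis.com/pagespeedonline/v5/runPagespeed';

// Build the request URL, appending the key=... query parameter.
function buildRequestUrl(url, apiKey) {
  const query = new URLSearchParams({ url, key: apiKey, strategy: 'mobile' });
  return `${API}?${query}`;
}

// Fetch PSI results and pull out one field metric and one lab score.
async function runPagespeed(url, apiKey) {
  const response = await fetch(buildRequestUrl(url, apiKey));
  if (!response.ok) throw new Error(`PSI request failed: ${response.status}`);
  const json = await response.json();
  return {
    // Field (CrUX) data: 75th-percentile LCP in milliseconds, if present.
    fieldLcp: json.loadingExperience?.metrics?.LARGEST_CONTENTFUL_PAINT_MS?.percentile,
    // Lab data: Lighthouse performance score, scaled to 0-100.
    labScore: json.lighthouseResult?.categories?.performance?.score * 100,
  };
}

// Example (requires network access and a valid key):
// runPagespeed('https://example.com', 'yourAPIKey').then(console.log);
```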