The Checkbot Page Speed Guide will teach you how to create high performance web pages that download in less time and start rendering more quickly. Website performance is a crucial factor for increasing conversions as fast pages keep visitors engaged and get them to stay on your site for longer. Search engines even treat page speed as a ranking signal because they know fast websites are important for a good user experience. To accelerate your site, we'll guide you on ways to reduce the amount of data that needs to be sent to the browser, how to take advantage of browser caching, techniques that make pages render more quickly and tips on avoiding redirects that can slow down browsing.
When you speed up your service, people become more engaged - and when people become more engaged, they click and buy more.
A key factor in making pages faster is reducing the size of each page and its resources using compression and minification.
Next to eliminating unnecessary resource downloads, the best thing you can do to improve page-load speed is to minimize the overall download size by optimizing and compressing the remaining resources.
Check compression is working by verifying 1) a
Content-Encoding response header is being returned and 2) the value of that header is set to the name of a compression scheme such as gzip or
br. If you’re checking from your own machine and it runs antivirus software, be aware there have been cases where HTTP scanning features disable compression before responses reach your browser. Also, it isn’t unusual to see misconfigured servers that compress some compressible file types like HTML but forget to do so for others like CSS, so be on the lookout for this.
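As a minimal sketch of that check, the helper below (a name of our own, not from any library) classifies a response as compressed purely from its headers; a real check should also confirm the request sent an Accept-Encoding header:

```python
# Minimal sketch: decide whether a response was served compressed,
# judging only by its Content-Encoding header. The scheme names are
# the standard HTTP content-coding tokens.
COMPRESSION_SCHEMES = {"gzip", "br", "deflate", "zstd"}

def is_compressed(headers: dict) -> bool:
    """Return True if Content-Encoding names a known compression scheme."""
    encoding = headers.get("Content-Encoding", "").strip().lower()
    return encoding in COMPRESSION_SCHEMES

print(is_compressed({"Content-Encoding": "br"}))    # True
print(is_compressed({"Content-Type": "text/css"}))  # False
```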
Avoid recompressing data
The Content-Encoding response header should specify a compression scheme only for compressible files; the header should be omitted for files that are already compressed, such as images, video and archives, as recompressing them wastes CPU time for little or no size reduction.
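The decision can be sketched as a lookup table; the extension lists below are illustrative assumptions rather than a definitive set:

```python
# Sketch: only apply HTTP compression to text-based formats; formats
# like JPEG, WebP, WOFF2 and ZIP are already compressed internally.
COMPRESSIBLE = {".html", ".css", ".js", ".svg", ".json", ".txt", ".xml"}

def should_compress(filename: str) -> bool:
    """Return True if the file type benefits from HTTP compression."""
    ext = "." + filename.rsplit(".", 1)[-1].lower() if "." in filename else ""
    return ext in COMPRESSIBLE

print(should_compress("styles.css"))  # True
print(should_compress("photo.jpg"))   # False
```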
Avoid inline source maps
Source maps are useful for debugging but shouldn’t be inlined into production files as they can greatly increase file size. You can detect inline source maps by searching for //# sourceMappingURL=data: (JavaScript) or /*# sourceMappingURL=data: (CSS) in your files.
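A minimal sketch of such a check, matching both the JavaScript (//#) and CSS (/*#) comment forms:

```python
# Sketch: flag files that embed their source map as a data: URL.
# An external map reference (e.g. "sourceMappingURL=app.js.map")
# is fine and should not be flagged.
import re

INLINE_MAP = re.compile(r"sourceMappingURL=data:")

def has_inline_source_map(text: str) -> bool:
    return bool(INLINE_MAP.search(text))

css = "a{color:red}\n/*# sourceMappingURL=data:application/json;base64,eyJ9 */"
js = "console.log('hi');\n//# sourceMappingURL=app.js.map"
print(has_inline_source_map(css))  # True
print(has_inline_source_map(js))   # False (external map, not inline)
```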
Caching should be used to decrease server load and reduce the amount of data browsers need to download while browsing your site.
Fetching something over the network is both slow and expensive. Large responses require many roundtrips between the client and server, which delays when they become available for the browser to process and also incurs data costs for the visitor. As a result, the ability to cache and reuse previously fetched resources is a critical aspect of optimizing for performance.
Enable caching of your page resources so browsers can reuse resources that have already been downloaded. The
Cache-Control response header is used to specify the caching policy of each URL: the
no-store setting prevents all browser caching and the
no-cache setting forces the browser to check with the server whether a cached version is out-of-date before using the cached copy. For page resources, we recommend that neither no-cache nor
no-store is set. This configuration means a browser will always cache resources and will immediately reuse them when needed without having to contact the server again. Warning: you must have a strategy for how updated versions of your page resources will replace the cached versions. Generally, each updated resource should use a new URL to force browsers to fetch the new version. This is referred to as “cache busting” and is built into many web frameworks.
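A minimal sketch of cache busting, deriving a hypothetical hashed filename from the file’s content so the URL changes whenever the content does:

```python
# Sketch: embed a short content hash in the filename. When the file
# changes, the hash (and so the URL) changes, forcing a fresh fetch;
# unchanged files keep their URL and stay cached.
import hashlib

def busted_name(filename: str, content: bytes) -> str:
    digest = hashlib.sha256(content).hexdigest()[:8]
    stem, _, ext = filename.rpartition(".")
    return f"{stem}.{digest}.{ext}"

print(busted_name("app.css", b"body{margin:0}"))
# e.g. "app.<8 hex chars>.css"; edits to the CSS produce a new name
```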
Use long caching times
Configure page resources to have long caching times so browser caches will retain them for longer. The cache duration of each resource URL can be specified by either 1) setting an
Expires response header which specifies the point in time the response becomes stale such as
Expires: Sat, 10 Aug 2019 20:00:00 GMT or 2) adding a
max-age directive to the
Cache-Control response header that specifies the number of seconds the response is valid for such as
Cache-Control: max-age=3600 for 1 hour. If
max-age and Expires are both used,
max-age takes priority. We recommend setting the cache time of page resources to at least 24 hours.
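As a sketch, both headers can be generated from a single lifetime in seconds; Python’s email.utils.formatdate produces the HTTP-date format that Expires requires:

```python
# Sketch: build the two cache-lifetime headers described above from
# one duration. Since max-age takes priority when both are present,
# Expires mainly serves as a fallback for very old clients.
import time
from email.utils import formatdate

def cache_headers(seconds: int) -> dict:
    return {
        "Cache-Control": f"max-age={seconds}",
        # Absolute point in time the response becomes stale, in
        # HTTP-date format (e.g. "Sat, 10 Aug 2019 20:00:00 GMT").
        "Expires": formatdate(time.time() + seconds, usegmt=True),
    }

headers = cache_headers(24 * 3600)  # the recommended 24-hour minimum
print(headers["Cache-Control"])  # max-age=86400
```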
Avoid duplicate resources
The same page resource should always be served from the same URL to improve caching efficiency. Browsers will only reuse a cached resource if the resource is requested from the exact same URL as before. If the same resource is available over multiple URLs, this can lead to extra browser requests. For example, say a browser cached a resource from
http://example.com/jq.js and later had to request identical content from these URLs while browsing (differences noted in parentheses):
http://www.example.com/jq.js (extra www subdomain),
http://example.com/jquery.js (different filename),
http://example.com/libs/jq.js (extra folder),
http://example.com/jq.js?a=1&v=3.2 (query parameters added). As none of the URLs are exact matches, the same content would need to be downloaded when each URL was first seen, as a cached response from a different URL cannot be used instead. For a cached response to be reused, URLs must have matching protocols, filenames, folders, query parameters and even query parameter ordering. Avoid caching issues like this by making sure each unique page resource is referenced using a single consistent URL.
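The examples above can be checked mechanically: browsers key their cache on the exact URL, and urlsplit shows that none of the variants matches the original:

```python
# Sketch: each variant URL differs from the cached one in some
# component (host, path, filename or query), so none is a cache hit.
from urllib.parse import urlsplit

base = urlsplit("http://example.com/jq.js")
variants = [
    "http://www.example.com/jq.js",        # different host
    "http://example.com/jquery.js",        # different filename
    "http://example.com/libs/jq.js",       # different folder
    "http://example.com/jq.js?a=1&v=3.2",  # query string added
]
for url in variants:
    print(url, urlsplit(url) == base)  # all False: no cache reuse
```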
CSS delivery should be optimised by avoiding excessive inline CSS and avoiding the use of @import.
Before the browser can render content it must process all the style and layout information for the current page. As a result, the browser will block rendering until external stylesheets are downloaded and processed, which may require multiple roundtrips and delay the time to first render.
Avoid excessive inline CSS
Prefer CSS in external files over inlining large amounts of CSS into pages to improve caching efficiency. CSS can be inlined using
style attributes and
<style> tags. This can be convenient during development, but any CSS that is shared between pages won’t be cached by the browser. However, Google does promote inlining just enough CSS at the start of each page for the top portion of the page to render immediately, which makes browsing feel faster. Google’s AMP project for mobile also promotes inlining: each page should have all the CSS it needs inlined to reduce requests, as long as only a modest amount of CSS is involved. We recommend you keep the amount of inlined CSS per page under 50,000 bytes.
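To keep inline CSS under such a budget, a rough check like the sketch below (a naive regex, not a full HTML parser) can total the bytes inside <style> blocks:

```python
# Sketch: sum the UTF-8 byte size of all inline <style> blocks so a
# page can be checked against a budget (50,000 bytes suggested above).
# A naive regex suffices for well-formed pages; it ignores style
# attributes and won't handle pathological markup.
import re

STYLE_BLOCK = re.compile(r"<style[^>]*>(.*?)</style>", re.DOTALL | re.IGNORECASE)

def inline_css_bytes(html: str) -> int:
    return sum(len(m.encode("utf-8")) for m in STYLE_BLOCK.findall(html))

page = "<html><head><style>body{margin:0}</style></head><body></body></html>"
print(inline_css_bytes(page))  # 14
```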
Avoid CSS @import
Avoid using @import in CSS files as this prevents parallel loading of CSS. Inside a CSS file,
@import can be used to include the contents of another CSS file by specifying a URL. This can be convenient but impacts download times because browsers can only start fetching the imported URL after the CSS file containing the
@import has been fetched. To get a page to load CSS files in parallel, you should instead add a
<link rel="stylesheet" href="…"> tag to the page’s HTML for each CSS file you need to load.
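A simple way to find these is to scan CSS files for @import rules; the regex below is a rough illustrative sketch rather than a full CSS parser:

```python
# Sketch: list the URLs pulled in via @import so each one can be
# replaced with a <link rel="stylesheet"> tag in the page's HTML,
# letting the browser fetch the stylesheets in parallel.
import re

IMPORT_RULE = re.compile(r"@import\s+(?:url\()?[\"']?([^\"')\s;]+)", re.IGNORECASE)

def find_imports(css: str) -> list:
    return IMPORT_RULE.findall(css)

css = '@import url("base.css");\n@import "theme.css";\nh1{color:red}'
print(find_imports(css))  # ['base.css', 'theme.css']
```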
Before the browser can render a page it has to build the DOM tree by parsing the HTML markup. During this process, whenever the parser encounters a script it has to stop and execute it before it can continue parsing the HTML. In the case of an external script the parser is also forced to wait for the resource to download, which may incur one or more network roundtrips and delay the time to first render of the page.
You can stop <script> tags from blocking rendering by placing them directly before the closing </body> tag, or by marking external scripts with 1)
<script defer src="…">, which delays script execution until the DOM is ready, or 2)
<script async src="…">, which will execute the script as soon as it has loaded. Note that
defer scripts execute in the order they appear on the page like inline scripts. However,
async scripts execute whenever they have downloaded so their execution order can change. This difference is important when scripts have dependencies.
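A rough sketch of how a tool might flag render-blocking scripts, listing external <script> tags that carry neither attribute (naive regex matching, for illustration only):

```python
# Sketch: find external <script> tags with neither defer nor async;
# these block HTML parsing while they download and execute. A real
# checker would use an HTML parser and inspect attributes properly.
import re

SCRIPT_TAG = re.compile(r"<script\b[^>]*\bsrc=[^>]*>", re.IGNORECASE)

def blocking_scripts(html: str) -> list:
    tags = SCRIPT_TAG.findall(html)
    return [t for t in tags if "defer" not in t and "async" not in t]

page = '<script src="a.js"></script><script defer src="b.js"></script>'
print(blocking_scripts(page))  # ['<script src="a.js">']
```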
Following redirects can significantly slow down network requests so you should avoid using page and resource URLs that trigger redirects.
Redirects trigger an additional HTTP request-response cycle and delay page rendering. In the best case, each redirect adds a single roundtrip (an HTTP request-response), and in the worst it may result in multiple additional roundtrips for the DNS lookup, TCP handshake and TLS negotiation on top of the extra HTTP request-response cycle. As a result, you should minimize use of redirects to improve site performance.
Avoid internal link redirects
To speed up browsing between pages on your site, avoid hyperlinks to URLs that perform redirects. For example, say you had a hyperlink pointing to
/news which now redirects to
/updates because the page was moved. A user clicking the hyperlink would experience a significant page-loading delay while the redirect was followed. You can avoid delays like this by hyperlinking directly to the redirect destination.
Avoid resource redirects
Avoid loading page resources via URLs that perform redirects as redirects will slow down page loading. For example, if a CSS file was loaded using the URL
example.com/styles.css that then redirects to
www.example.com/styles.css, the redirect would introduce a delay when the file was fetched. This can be fixed by linking directly to the redirect destination.
Avoid redirect chains
When a redirect must be used, use a single redirect instead of a chain of several redirects for a faster response. For instance, a redirect chain such as http://example.com →
https://example.com → https://www.example.com can be common when redirect rules haven’t been optimised, because separate protocol and www redirects are applied one after the other. Chains of redirects add significant delays to fetching URLs while browsing, and search bots may give up following long chains, which can impact search rankings. You should optimise your server redirection rules to eliminate chains of redirects when it isn’t possible to eliminate redirects entirely.
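One way to eliminate chains is to flatten a table of single-step redirects so every source maps straight to its final destination; a sketch with hypothetical URLs:

```python
# Sketch: resolve each redirect source to its final destination so a
# single redirect replaces a chain. The seen-set guards against
# accidental redirect loops.
def flatten(redirects: dict) -> dict:
    resolved = {}
    for src in redirects:
        dest, seen = src, set()
        while dest in redirects and dest not in seen:
            seen.add(dest)
            dest = redirects[dest]
        resolved[src] = dest
    return resolved

chain = {
    "http://example.com/": "https://example.com/",
    "https://example.com/": "https://www.example.com/",
}
print(flatten(chain))
# {'http://example.com/': 'https://www.example.com/',
#  'https://example.com/': 'https://www.example.com/'}
```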