If something behaves differently from how it looks, visitors get frustrated.
This is especially true when a potential customer interacts with your website for the first time. A severe case is a website that appears visually ready but doesn’t respond to user input. How do you measure and track this behavior? For that, there’s a Google Lighthouse performance metric: Time To Interactive.
Time To Interactive (TTI) is one of the metrics tracked by Google Lighthouse in the Performance section. It measures how much time passes before the page is fully interactive.
Technically, Lighthouse considers a page fully interactive when:
- First Contentful Paint (FCP) has happened. FCP is the timestamp at which the browser renders the first image, text block, or non-white `<canvas>` element on the page.
- The main thread has been free of long tasks (tasks longer than 50 ms) for a quiet window of time.
- No more than two network GET requests are in flight during that window.
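To make the idea concrete, here is a deliberately simplified model of how Lighthouse picks the TTI timestamp: starting from FCP, it looks for a quiet stretch with no long tasks, then reports the end of the last long task before that stretch (or FCP itself if there were none). This sketch ignores the network-request condition and other details of the real algorithm; the function name and shape are illustrative.

```javascript
// Simplified TTI estimate: scan long tasks after FCP and stop once
// a 5-second gap with no long tasks appears. Not Lighthouse's actual
// implementation -- network quiet and edge cases are omitted.
const QUIET_WINDOW_MS = 5000;

function estimateTTI(fcp, longTasks) {
  // longTasks: array of { start, end } in ms, sorted by start time.
  let candidate = fcp;
  for (const task of longTasks) {
    // A long-enough gap before this task means we found our quiet window.
    if (task.start - candidate >= QUIET_WINDOW_MS) break;
    if (task.end > candidate) candidate = task.end;
  }
  return candidate;
}
```

For example, with FCP at 1000 ms and a single long task from 2000 to 2300 ms, the estimate is 2300 ms, while a long task that starts well after the quiet window no longer moves the result.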
This is how Lighthouse interprets the TTI score:
- From 0 to 3.8 seconds — Green (fast)
- From 3.9 to 7.3 seconds — Orange (moderate)
- More than 7.3 seconds — Red (slow)
These values are based on data from the HTTP Archive. Basically, Lighthouse compares your results to other websites in the database and assigns a score based on which percentile your website falls into. This approach is also used for the other metrics we cover in our Google Lighthouse series.
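The color bands above are easy to encode. A tiny helper, using the thresholds listed in this article:

```javascript
// Map a TTI value in seconds to the Lighthouse color band
// (thresholds as listed above: 3.8 s and 7.3 s).
function ttiBand(seconds) {
  if (seconds <= 3.8) return 'green';
  if (seconds <= 7.3) return 'orange';
  return 'red';
}
```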
A poor TTI score can have several causes. Now that you know the definition of TTI, let’s go over the typical bottlenecks.
The First Contentful Paint event is triggered when the first “meaningful” element appears on the page. Whatever that element is, its rendering can be delayed by render-blocking resources embedded in the page above it, for example, a `<link>` tag pointing to a stylesheet, especially when those resources are large files downloaded over a not-so-good internet connection.
To solve this, I suggest making the code “lighter” by reducing its file size where possible and dropping third-party libraries you can live without. With today’s browser support, you may not need as many language polyfills as a few years ago.
Very often caused by general-purpose plugins or libraries, this happens when a script tries to affect many elements on the page right after the page has loaded (an overinflated jQuery `$(document).ready()` handler is often the culprit here).
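One common mitigation is to break a monolithic initialization routine into smaller batches, so that no single task blocks the main thread for long. A minimal sketch (the batch size and the scheduling shown in the comment are illustrative choices, not a prescription):

```javascript
// Split a flat list of init functions into small batches so each
// batch can run as its own short task instead of one long one.
function toBatches(tasks, batchSize) {
  const batches = [];
  for (let i = 0; i < tasks.length; i += batchSize) {
    batches.push(tasks.slice(i, i + batchSize));
  }
  return batches;
}

// In the browser, schedule each batch as a separate task, e.g.:
// toBatches(initFunctions, 5).forEach(batch =>
//   setTimeout(() => batch.forEach(fn => fn()), 0));
```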
Last but not least, the number of network requests is an important factor to keep in mind. Let’s jump right into an example and talk about images.
Images are not render-blocking. That means that when a browser encounters an `<img>` tag while parsing an HTML file, it starts loading the image and moves on to further tags without waiting for the image to load fully. This may sound good at first.
On the other hand, it also means that heavy, unoptimized images on the page start loading simultaneously, which takes much more time, keeps those request connections open, and extends the Time To Interactive.
A solution here is to lazy-load images and optimize them on demand. At the end of this article, I will describe how to do this with a practical tool, so stay tuned.
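Native lazy loading (`loading="lazy"` on `<img>`) covers the common case; a hand-rolled version typically checks whether an image is close enough to the viewport before assigning its real `src`. A simplified sketch of that check (the 200 px margin is an arbitrary head start, and the function names are mine):

```javascript
// Is the element close enough to the visible area to start loading?
// margin gives the image a head start, in pixels.
function isNearViewport(elementTop, scrollY, viewportHeight, margin) {
  return elementTop < scrollY + viewportHeight + margin;
}

// In the browser, IntersectionObserver does this work for you:
// new IntersectionObserver(onIntersect, { rootMargin: '200px' })
//   .observe(img);
```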
Those gray rectangles with the red diagonal hatching are long tasks. A task itself may consist of several function calls, stacked beneath one another in chronological order. Using this tab, you can drill down and find the root of the problem.
On one of my projects, I had a problem with the WordPress Ninja Forms plugin: it was causing these long tasks. Some forms were hidden in modal windows but still initialized on page load. Applying lazy loading to those forms drastically improved the performance and the TTI score.
This info is similar to what the Performance tab provides. URLs are sorted by the Total CPU Time parameter, which shows how resource-hungry a particular script is. This tab is also useful for investigating third-party scripts, especially analytics and ad-related ones; they tend to accumulate quickly.
You can get the most out of the advice from the previous section if there are no network or payload-related problems on the website. At this point, let me get back to the practical tool I mentioned earlier.
There’s a tool by Uploadcare called Adaptive Delivery. It allows you to:
- Enable auto lazy loading for images
- Optimize image dimensions, size, and quality based on the visitor’s device capabilities (AI-aided)
- Apply image transformations on the fly
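Conceptually, device-aware sizing boils down to picking the smallest available image width that still covers the rendered size at the device’s pixel ratio. The sketch below illustrates that idea only; it is not Uploadcare’s actual API, and the candidate widths are hypothetical:

```javascript
// Pick the smallest candidate width that still covers the CSS width
// at the given device pixel ratio; fall back to the largest one.
// Illustrative only -- not Uploadcare's Adaptive Delivery API.
function pickImageWidth(cssWidth, dpr, candidates) {
  const needed = cssWidth * dpr;
  const sorted = [...candidates].sort((a, b) => a - b);
  for (const width of sorted) {
    if (width >= needed) return width;
  }
  return sorted[sorted.length - 1];
}
```

For a 400 px-wide slot on a 2x display, this would select the 960 px candidate from `[320, 640, 960, 1280]`, since 640 px would be too blurry and 1280 px wastes bandwidth.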
In a nutshell, it’s a little script that will take care of all the heavy frontend lifting when it comes to images. There’s no need to build and manually compress different versions of the same image.
To check whether this solution fits your needs, try auditing your website with another Uploadcare tool: PageDetox. It will show how much bandwidth can be saved in your particular case by optimizing images. And as we now know, a smaller network payload leads to better website performance scores.
Improving a website's performance score usually comes down to a systematic approach: identify problems, then fix them. While doing so, I’d recommend thinking first about the people who will visit your website and its overall ease of use, rather than just the raw numbers.
Good luck on your way towards a high-performing website!