Using Google Lighthouse for Web Pages
This is part one of a three-part series on Lighthouse for Shiny Apps.
- Part 1: Using Google Lighthouse for Web Pages (This post)
- Part 2: Analysing Shiny App start-up Times with Google Lighthouse
- Part 3: Effect of Shiny Widgets with Google Lighthouse
Intro
This blog post was partly inspired by Colin Fay’s talk “Destroy All Widgets” at our “Shiny In Production” conference in 2022. In that talk, Colin spoke about HTML widgets and highlighted how detrimental they can be to the speed of a Shiny app. Speaking of which, the next Shiny In Production conference is taking place on 9th and 10th of October 2024, and recordings for this year’s events are coming soon to our YouTube channel.
I wanted to see if I could measure the speed of a collection of Shiny apps. To do so, I was directed to Google Lighthouse, and this post is dedicated to using and understanding Lighthouse before I start applying it to Shiny apps.
Google Lighthouse
Google Lighthouse is an open-source tool which can be used to test web pages (or web-hosted apps such as Shiny apps). For a specified web page, Lighthouse generates a report summarising several aspects of that page. For Shiny, the most important aspects are summarised in the “Overall Performance Score” and the “Accessibility Score”, and one of the best parts is the feedback the report gives on how you can improve.
Before you can use Lighthouse you must install it (and npm if you don’t already have it):
npm install -g lighthouse
Then, to run a Google Lighthouse assessment from the command line, you simply run:
lighthouse --output json --output-path data/output_file.json url
Where you specify:
- The output format: either json or csv is available. I used json as it stores more information.
- The output path where you would like the data to be stored.
- The URL of the Shiny app you would like to test (the location of your deployed app or, if developing locally, the URL that Shiny prints out when the app starts: Listening on http://127.0.0.1:4780).
One cool feature of Lighthouse is that you can test apps in both desktop and mobile settings. The default is mobile, but you can specify desktop by adding --preset desktop after the URL argument.
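Putting those options together, a desktop run saved as JSON might look something like this (the output path and URL here are just placeholders; swap in your own):

# Run a desktop audit and save the full JSON report
lighthouse --output json --output-path data/desktop_report.json https://www.jumpingrivers.com --preset desktop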
When you run the command, a new Chrome browser will open at the specified URL, where Lighthouse will run the report. This browser will be closed automatically by Lighthouse when it has finished. For all the Lighthouse demos in this blog I am going to use our website, for consistency.
Another way to access Lighthouse is through the Chrome browser itself, by opening the DevTools panel as described in the Chrome Developer documentation. A Lighthouse tab should be visible in the “more tabs” section, where you can run performance checks interactively.
From DevTools, all you do is tick the boxes to choose the device type and the report categories you want to assess, then press “Analyze page load” to start the Lighthouse report generation.
Lighthouse Output
Depending on how you’ve run the Lighthouse report, the way you access the results will differ. Firstly, if you have used the terminal and saved the Lighthouse output, you will have a CSV or JSON file containing the data displayed in the report (the JSON output contains more in-depth data).
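As a quick sketch of working with that file, the JSON report (at least in recent Lighthouse versions) stores each category score as a value between 0 and 1 under categories, so if you have jq installed you can pull out the headline numbers directly. The file path matches the command shown earlier:

# Extract the overall category scores (0-1 scale) from a saved JSON report
jq '{performance: .categories.performance.score, accessibility: .categories.accessibility.score}' data/output_file.json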
Alternatively, from the terminal you can add --view after the URL and the Lighthouse report will open in your browser when it is ready. Here is an example of this:
Lastly, if you have run Lighthouse through DevTools in a Chrome browser, the report will become visible in the DevTools panel. Location aside, the report should look identical to the browser version created with the --view option. It should look similar to this:
You may have noticed that I got different scores in the two screenshots, even though I used the same URL for both. This gives me a great opportunity to bring up one of the drawbacks of Lighthouse: the variability in results. For example, you could run a test on our website and get a different score again. There are a number of reasons for this, including internet or device performance and browser extensions, so the Lighthouse developers recommend running multiple tests. This topic is covered in more detail here.
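If you want to follow that advice from the terminal, one simple approach is to loop over several runs and keep each report for later comparison. A minimal bash sketch, where the run count, URL and file names are arbitrary:

# Run Lighthouse five times against the same URL, saving each report separately
for i in 1 2 3 4 5
do
  lighthouse --output json --output-path "data/run_${i}.json" https://www.jumpingrivers.com
done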
Lighthouse Performance Metrics
Lighthouse scores apps on 5 measures: Performance, Accessibility, Best Practices, SEO (search engine optimization) and PWA (progressive web app).
Here, we will look at the overall performance score. This is based on a weighted combination of several different metrics. As of Lighthouse 10 (the weights were slightly different in version 8), the score is made up of:
- 10% First Contentful Paint - This is the time from the page starting to load until any part of the page’s content is rendered on the screen. “Content” can be text, images, <svg> elements or non-white <canvas> elements.
- 10% Speed Index - This is how quickly the contents of a page are visibly populated.
- 25% Largest Contentful Paint - This metric is the time from the page starting to load until the largest visible image or text block is rendered.
- 30% Total Blocking Time - This is the total amount of time between First Contentful Paint and another metric called Time to Interactive (how long the app takes to become interactive for the user) during which the main thread was blocked for long enough to prevent input responsiveness.
- 25% Cumulative Layout Shift - This is a measure of the largest burst of layout shifts which occurs during the lifespan of a page; a good explanation can be found here.
Performance scores lie in a range between 0 (worst) and 100 (best).
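To make the weighting concrete, here is a toy calculation with made-up metric scores (each metric is itself scored from 0 to 100 before the weights are applied):

# Hypothetical metric scores: FCP 90, Speed Index 80, LCP 75, TBT 60, CLS 100
awk 'BEGIN { print 0.10*90 + 0.10*80 + 0.25*75 + 0.30*60 + 0.25*100 }'
# 78.75, which would be displayed as a performance score of 79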
Lighthouse Performance Suggestions
Another cool feature of Google Lighthouse is the performance improvement suggestions. I am going to use the Surfline website as an example for this section. These suggestions can be found underneath the performance score on the report and should look similar to the image below.
Each suggestion can be expanded for more information and shows the estimated time saving from implementing it. These suggestions can be helpful if you want to improve a particular aspect of your website or just generally streamline it.
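These suggestions are also present in the saved JSON report. As a rough sketch, assuming the report stores them as “opportunity” audits with an overallSavingsMs field (which is how recent Lighthouse versions appear to structure them), you could list them with jq:

# List improvement suggestions and their estimated time savings (in milliseconds)
jq '[.audits[] | select(.details.type? == "opportunity") | {suggestion: .title, estimated_savings_ms: .details.overallSavingsMs}]' data/output_file.json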
This was an overview of Google Lighthouse, covering the various ways to run reports on web pages and some guidelines for interpreting Lighthouse reports. We can also use it to analyse Shiny applications, which will be covered in the next instalment of this blog series.