
Boost the Performance of Fullstack JavaScript Applications

by Laurent Zuijdwijk, July 2022


Part 1 — Performance measurements


In this article series, we will look at the tools and techniques available to measure and improve the performance of your full-stack application.

If you are reading this, you might have a problem already: your users may have to wait too long for relevant content to load, leading to a high bounce rate and missed sales. It doesn’t need to be this way. Don’t make your users wait. A structured, systematic approach to performance improvement is exactly what your site needs.

A modern web application consists of many layers, from databases to legacy systems, a modern frontend, authentication systems, and sometimes a large variety of microservices.

If the user experience is slow, we know that the problem is in one of these many layers, but knowing where to start to make your application blazingly fast can be difficult. In this article, we will look at some ways to measure and monitor performance. One lesson you should take to heart is to document your performance metrics.

It is good to be accountable, and it is part of the learning process to see what worked. In a follow-up, I will present some ways to handle the common performance problems that we might encounter.

In 2018, Google started a push for performance awareness among developers. Of course, they started improving the speed of the web a decade earlier with the launch of the Chrome browser and the V8 JavaScript engine in 2008.

But in 2018, at Google I/O, they rolled out a whole range of tools for developers and launched http://web.dev. In 2020 they introduced Web Vitals, an objective way of measuring user experience, covering loading speed, interactivity, and the visual stability of page content. It is a collection of metrics, which is also available as a browser library (the web-vitals npm package) so you can measure and report these values yourself.

The core Web Vitals are:

  • LCP, or Largest Contentful Paint, is a measure of how fast the main content of your page loads from the server. This value should be 2.5 seconds or less.
  • FID, or First Input Delay, measures whether there is any lag on the page. Can the user interact with the screen without delay?
  • CLS, or Cumulative Layout Shift, shows the visual stability of the page. Are elements on the page shifting position or changing size?
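To track these values for real users, the metrics can be reported straight from the browser. Here is a minimal sketch using the web-vitals npm package (function names are from version 2 of the package; newer versions rename them to onCLS, onFID, and onLCP, and the /analytics endpoint below is a placeholder):

```javascript
import { getCLS, getFID, getLCP } from 'web-vitals';

// Report each metric once it becomes available. The '/analytics' endpoint is
// hypothetical; replace it with your own collection endpoint.
function sendToAnalytics(metric) {
  const body = JSON.stringify({ name: metric.name, value: metric.value, id: metric.id });
  // sendBeacon keeps working while the page is unloading, unlike a normal fetch.
  navigator.sendBeacon('/analytics', body);
}

getCLS(sendToAnalytics);
getFID(sendToAnalytics);
getLCP(sendToAnalytics);
```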

Lighthouse is a tool that comes for free with modern Chromium-based browsers and is a great way to measure performance and get tips for improvements.

It performs a whole range of performance and content measurements from a user-centric viewpoint. Lighthouse can also be run from https://web.dev/measure to get performance data about your website, or it can be integrated into a continuous measuring system using the standalone CLI version.
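Lighthouse can also be driven programmatically from Node.js, which is handy when wiring it into a CI pipeline. A minimal sketch, assuming the lighthouse and chrome-launcher packages are installed and using an example URL:

```javascript
const lighthouse = require('lighthouse');
const chromeLauncher = require('chrome-launcher');

async function audit(url) {
  // Launch a headless Chrome instance for Lighthouse to drive.
  const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });
  const result = await lighthouse(url, { port: chrome.port, onlyCategories: ['performance'] });
  await chrome.kill();
  // Scores are reported on a 0-1 scale.
  return result.lhr.categories.performance.score;
}

audit('https://example.com').then((score) => console.log(`Performance score: ${score * 100}`));
```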

Size matters, especially for mobile devices. The size of your application is about more than your main JavaScript file and the time it takes to load it. It is also about the size of CSS files, analytics libraries, ads, and any other marketing or monitoring tools you are using.

JavaScript files need to be loaded, but the device also needs to compile them from text into machine code. While the V8 engine is very fast and well optimized, the average mobile device costs less than $200.

If you don’t notice any parsing and compile delay on your top-of-the-range device, the experience might be very different for someone with a cheaper phone. Make sure to always test your applications on real devices.

We should take note of the total size of our application. Are we dealing with a fast and nimble <100 KB JavaScript app, or do we have a behemoth that requires >1 MB to load? There are various ways to get the size of our application and, unfortunately, we have to mix and match them:

  • Look at the total bytes downloaded from the network tab in developer tools. Take note of the total number of requests and the total size.
  • Look at the output files from Webpack build scripts. Install webpack-bundle-analyzer to get a treemap view of the JavaScript output (see the configuration sketch after this list).
  • Our Lighthouse report has a "View Treemap" button that shows a bookmarkable overview of all JavaScript files, the packages within them, and even what percentage of code is used in the startup phase.
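A minimal sketch of wiring the analyzer into a Webpack configuration (assuming the webpack-bundle-analyzer package; the entry path and report filename are illustrative):

```javascript
// webpack.config.js
const { BundleAnalyzerPlugin } = require('webpack-bundle-analyzer');

module.exports = {
  mode: 'production',
  entry: './src/index.js',
  plugins: [
    // Writes a static HTML report with an interactive treemap of the bundle contents.
    new BundleAnalyzerPlugin({ analyzerMode: 'static', reportFilename: 'bundle-report.html' }),
  ],
};
```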

Now that we have a baseline idea of the performance of our application, it is time to dig a bit deeper to find out where the bottlenecks are.

Open the Network tab in the developer tools to get an overview of the requests made by the page we are looking at and their timings.

We are most interested in the following metrics:

  • The waiting or time-to-first-byte metric
  • The content download time
  • When the request starts relative to the initial page load

The browser can only start to display anything when it receives the initial HTML. How long does it take to load this? If the waiting time is long, then we should look at our server. Is the server performing authentication before sending the HTML? Is the server rendering the page but having to wait for slow backend services? For each request that the initial page load depends on, we should aim for a very short waiting metric.

If the content download time is large, it means the content of the requests is too large. Imagine how much worse it would be on a slow mobile network. Do we truly need that much data or would it be better to break it up into smaller requests that we can spread over time?
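These timings can also be read from JavaScript through the Navigation Timing and Resource Timing APIs, which is useful when you want to collect them from real user sessions rather than from your own developer tools. A minimal sketch to run in the browser:

```javascript
// Timing for the initial HTML document.
const [nav] = performance.getEntriesByType('navigation');
console.log('Waiting (time to first byte):', Math.round(nav.responseStart - nav.requestStart), 'ms');
console.log('Content download:', Math.round(nav.responseEnd - nav.responseStart), 'ms');

// Timing for every subsequent resource (scripts, styles, fetch/XHR requests).
for (const entry of performance.getEntriesByType('resource')) {
  console.log(entry.name, 'started at', Math.round(entry.startTime), 'ms, took', Math.round(entry.duration), 'ms');
}
```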


The relative start time of fetch/data requests gives us information about dependencies between requests. Does the first data request have to wait 2 seconds for the entire application to load? Can we only start to request A after request B is completed, or can they be done in parallel? It is a good time to create a sequence diagram for the data requests in your application.
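When two requests do not depend on each other, starting them in parallel instead of one after the other is often an easy win. A minimal sketch with hypothetical endpoints:

```javascript
async function loadPageData() {
  // Sequential: the second request only starts once the first has finished.
  const user = await fetch('/api/user').then((res) => res.json());
  const products = await fetch('/api/products').then((res) => res.json());

  // Parallel: both requests start immediately, so the total time is that of the slowest one.
  const [userParallel, productsParallel] = await Promise.all([
    fetch('/api/user').then((res) => res.json()),
    fetch('/api/products').then((res) => res.json()),
  ]);

  return { userParallel, productsParallel };
}
```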

The Performance insights tab in the browser's developer tools gives a more extensive overview of timings in the application. It combines the HTTP requests with Web Vitals and screenshots, so you can get a full overview in one place. Like Lighthouse, it offers actionable suggestions to improve performance. The downside is that it is somewhat more complex and difficult to interpret.

In case you are more interested in a full user journey instead of initial loading times, Chromium-based browsers have the Recorder feature. It allows you to record and replay user actions, which can then be exported to the Performance insights tool. If you are looking to improve performance in complex scenarios, this might be the way to go. I hope to write an article about this subject in the future.

We like to see services that load in the blink of an eye. But what if in the previous steps we saw data endpoints that were performing less than lightning-fast? How do we measure and understand the performance of our Node.js services?

Let’s say that we found that a certain request easily takes 1000ms and we consider that too long. If our understanding of the application is limited, we should first make an architecture diagram and answer the following questions.

What does the service do? Which services or databases does it connect to? What data does it serve and how much of it?

There are three main types of measurement we can use in our Node.js services.

We can run microbenchmarks using console.time(), measure end-to-end performance with a benchmarking tool like wrk or autocannon, or use more complex scenario-based tools like Artillery or JMeter. The type of Node.js service we run dictates, to a certain extent, which kind of benchmarking is useful. If we have a fully elastic setup, like AWS Lambda, load-testing our services might tell us little, since they will just keep scaling up. We have to think about our approach before we get started.
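For the end-to-end approach, autocannon can be run from the command line or from a script. A minimal sketch, assuming the autocannon package is installed and a service is listening on a hypothetical local port:

```javascript
const autocannon = require('autocannon');

async function run() {
  // Fire 10 concurrent connections at the endpoint for 10 seconds.
  const result = await autocannon({
    url: 'http://localhost:3000/api/products',
    connections: 10,
    duration: 10,
  });
  console.log('requests/sec:', result.requests.average);
  console.log('latency p99 (ms):', result.latency.p99);
}

run();
```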

In JavaScript, it is very easy to use console.time('myMeasurement') and console.timeEnd('myMeasurement'). Together they run a basic microbenchmark and output a log entry with the duration. This allows us to measure specific parts of our application, which could be the time it takes for authentication to finish, the speed of a database request, or the amount of blocking time for an intensive calculation. During development, we can use these microbenchmarks to optimize our code.
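A minimal sketch, timing a database lookup inside a request handler (getUserFromDatabase is a hypothetical placeholder for your own code):

```javascript
async function handleRequest(userId) {
  console.time('dbLookup');
  // Placeholder for a real database call.
  const user = await getUserFromDatabase(userId);
  // Prints something like: dbLookup: 42.123ms
  console.timeEnd('dbLookup');
  return user;
}
```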

A more advanced web API is available for making custom measurements and reporting: the Performance API, documented on its own MDN page. When we want to optimize even further, we have to look elsewhere, because timers don't give a statistically significant result when we run them only a few times. A service like https://jsbench.me/ or the benchmark npm package is suitable for benchmarking short snippets of (synchronous) code.
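A minimal sketch of the mark/measure part of the Performance API in Node.js (the same calls exist in the browser); a PerformanceObserver logs each measurement as it completes:

```javascript
const { performance, PerformanceObserver } = require('perf_hooks');

// Log every completed measurement.
const observer = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    console.log(`${entry.name}: ${entry.duration.toFixed(2)}ms`);
  }
});
observer.observe({ entryTypes: ['measure'] });

performance.mark('sort-start');
// The work we want to time.
Array.from({ length: 100000 }, () => Math.random()).sort();
performance.mark('sort-end');
performance.measure('sort', 'sort-start', 'sort-end');
```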

Server monitoring services like Datadog, AWS X-Ray, or New Relic (also see this 2022 Gartner report) give both an overview of total request time and timings per upstream HTTP request. These services show the average duration of service requests, but also include flame graphs.

Flame graphs are hierarchical representations of your application flow. Each operation has a stack showing depth and duration. This allows you to really understand why long-running requests take as long as they do.

The application architecture of your Node.js services should be fairly straightforward and standard. If you stick to best practices, you shouldn’t run into any memory leaks or extensive memory usage problems. If you do run into these problems, your best approach would be to use the Node.js debugger and profile your application.

Profiling is out of scope for this article, but for in-depth information see the article "Easy profiling for Node.js Applications". Memory leaks are mostly caused by non-conventional setups where developers have re-invented the wheel, and a rewrite is often the best remedy.

No matter which database you use, be it Postgres, MySQL, DynamoDB, a graph DB, or something else, you have to make sure you have some monitoring in place. Most often it is taken care of by your cloud provider. This would be good material for another article.

When your database is highly normalized and uses very complex queries or contains a large amount of data, you have to make sure that individual queries are not a bottleneck. Run microbenchmarks for queries and see if you can integrate with a third-party monitoring service.
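A minimal sketch of such a query microbenchmark, assuming a Postgres database accessed through the pg package (the connection string, threshold, and table name are placeholders):

```javascript
const { Pool } = require('pg');

const pool = new Pool({ connectionString: process.env.DATABASE_URL });

async function timedQuery(sql, params) {
  const start = process.hrtime.bigint();
  const result = await pool.query(sql, params);
  const elapsedMs = Number(process.hrtime.bigint() - start) / 1e6;
  // Flag queries that cross an (arbitrary) 100ms threshold so they can be tracked over time.
  if (elapsedMs > 100) {
    console.warn(`Slow query (${elapsedMs.toFixed(1)}ms): ${sql}`);
  }
  return result;
}

// Example usage with a placeholder table name.
timedQuery('SELECT * FROM orders WHERE customer_id = $1', [42]);
```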

If you made it this far, thank you for reading. I hope it provided some good insights.

In the next installment of this series, we will focus on resolving the most common performance issues.


