
Performance is often overlooked as part of user experience (UX). Even UX experts like Paul Boag admit that developers sometimes have a greater impact on user experience than UX teams do, because developers directly affect an application's performance. An innovative, intuitive layout won't drive users to engage if your site is slow, error prone, or unreachable.

Developers have the power to affect user experience, so let's use that power for good! Measuring the user experience of web applications requires reaching beyond typical approaches that only look at application health.

To make sure you're maximizing returns on site design and navigation, you need to know how users experience your site's performance. Once you know how to measure it, you can optimize it. If you're using New Relic synthetic monitoring and browser monitoring, you already have all the data you need.
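
If you want to confirm that the data is already there, one quick NRQL query in the query builder lists every event type your account is collecting; browser monitoring typically reports PageView and PageViewTiming events, and synthetic monitoring reports SyntheticCheck events:

    // List the event types currently being collected in this account.
    SHOW EVENT TYPES SINCE 1 week ago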

1. An engineering view of performance

Imagine reviewing weekly performance on a typical Monday: you open the browser monitoring summary screen for your web application to see how it has been behaving. It might look like this:

There are no steep drop-offs in performance or throughput, so all seems fine. The largest contentful paint (one of Google's Core Web Vitals) is in yellow, but that's just a warning, and the blue line in the user-centric page load chart shows it's always like that. Maybe there's an opportunity to optimize; you can discuss it with the team later.
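
The charts on that summary screen roughly correspond to aggregate NRQL like the following (a minimal sketch; the app name is a placeholder, and PageView durations are reported in seconds):

    // Average page load time and throughput for the past week, hour by hour.
    FROM PageView
    SELECT average(duration) AS 'Avg page load (s)', count(*) AS 'Throughput'
    WHERE appName = 'my-web-app'
    TIMESERIES 1 hour SINCE 1 week ago

Keep this query in mind: it's exactly the kind of averaged, application-wide view that, as the next sections show, can hide a poor experience for a subset of users.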

2. A UX view of performance

Next, look at the same web application in a different view, using a quality foundation dashboard, which highlights four core factors that affect your users' experience (a sample availability query follows the list):

  • Availability (Is it reachable?)
  • Performance (Does it perform well enough to be usable?)
  • Content quality (Does it have what users need and can they find it?)
  • Product and content relevance (Does it have what users care about?)
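
As a sketch of how the first factor, availability, might be measured: synthetic monitoring records a SyntheticCheck event for each monitor run, so the success rate over the past week can be queried along these lines (the monitor name is a placeholder):

    // Percentage of synthetic checks that succeeded for one monitor.
    FROM SyntheticCheck
    SELECT percentage(count(*), WHERE result = 'SUCCESS') AS 'Availability'
    WHERE monitorName = 'homepage-check'
    SINCE 1 week ago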

A large part of the dashboard is focused on Core Web Vitals, which are page load user experience metrics defined by Google. For this blog post, you just need to be familiar with largest contentful paint (LCP), which measures the time from when a user first navigates to a page (usually by clicking a link) to when the largest image or block of content renders.

You're still looking at the performance of the overall application, but you've narrowed the metrics down to the ones that give you the best understanding of your users' experience. Furthermore, you've broken out metrics for desktop versus mobile, because you know how much the experience can differ between them.
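
That breakout can be produced with a query like the one below, patterned on New Relic's documented Core Web Vitals NRQL (whether deviceType is populated depends on your instrumentation):

    // 75th-percentile LCP, split by device type; values are in seconds.
    FROM PageViewTiming
    SELECT percentile(largestContentfulPaint, 75) AS 'p75 LCP (s)'
    WHERE timingName = 'largestContentfulPaint'
    FACET deviceType
    SINCE 1 week ago

The 75th percentile is used rather than the average, matching how Google evaluates Core Web Vitals in the field.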

Here’s that quality foundation dashboard:

In this dashboard, there are some indications that server response times and Ajax response codes could be optimized, but the most interesting and actionable data lies in the largest contentful paint.

According to Google, a largest contentful paint of 2.5 seconds or faster makes for a good user experience. Between 2.5 and 4.0 seconds needs improvement, and anything slower than 4.0 seconds represents a poor experience.
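
Those thresholds translate directly into a query. The sketch below buckets page loads into Google's three LCP ranges (again assuming LCP values in seconds):

    // Share of page loads in each of Google's LCP buckets.
    FROM PageViewTiming
    SELECT percentage(count(*), WHERE largestContentfulPaint < 2.5) AS 'Good',
           percentage(count(*), WHERE largestContentfulPaint >= 2.5 AND largestContentfulPaint < 4.0) AS 'Needs improvement',
           percentage(count(*), WHERE largestContentfulPaint >= 4.0) AS 'Poor'
    WHERE timingName = 'largestContentfulPaint'
    SINCE 1 week ago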

You can see in this example that for desktop web, the largest contentful paint is better than what you saw for the overall application; it's almost in the ideal range. The experience for mobile web users, however, is likely frustrating. In this particular data set, mobile web page loads made up 25% of the overall traffic to this application, yet largest contentful paint is in the red for this user group. That's a high percentage of frustrated users.

So why couldn't you see this in the browser monitoring summary view? The information was lost in averages.

Looking at desktop versus mobile usage is just one way to partition user performance data. You can also break out performance metrics by device, region, product, product journey, or user journey.
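
Any of those partitions is just another FACET clause. For example, assuming the default geo attributes are present on your page view events, a sketch that splits the same metric by country and device might look like:

    // p75 LCP by country and device type; top 20 combinations by volume.
    FROM PageViewTiming
    SELECT percentile(largestContentfulPaint, 75) AS 'p75 LCP (s)'
    WHERE timingName = 'largestContentfulPaint'
    FACET countryCode, deviceType
    SINCE 1 week ago LIMIT 20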

The takeaway here is that to understand the user's view of performance, you need to focus on user experience metrics and partition the data where experience is likely to vary. There's more on this in the next section.

3. A UX view of performance for steps in the user journey

Another way to partition user experience metrics is by different parts of your application. This screenshot shows an example dashboard with an application's performance for user logins:

The most interesting thing here is the largest contentful paint: at 4.25 seconds for desktop users and 6.375 seconds for mobile web users, the login page is too slow for desktop and even slower for mobile.
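
Scoping a Core Web Vitals query to one step of the journey is a matter of filtering on the page URL. Here's a sketch, assuming login pages are identifiable by a /login path segment:

    // p75 LCP for the login page only, split by device type.
    FROM PageViewTiming
    SELECT percentile(largestContentfulPaint, 75) AS 'p75 LCP (s)'
    WHERE timingName = 'largestContentfulPaint'
      AND pageUrl LIKE '%/login%'
    FACET deviceType
    SINCE 1 week ago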

For most applications, logins make up a small percentage of page views, so a sluggish login page is unlikely to move aggregated performance metrics enough to trigger an investigation. But turning users away at the beginning of their journey with your site can greatly affect your overall results. Other important but less frequently accessed pages often occur at the end of a user's journey, also known as the bottom of the funnel: the final page of an insurance application, an ecommerce payment, or a confirmation before an account change.
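
One way to surface these pages is to put traffic volume and LCP side by side per URL, so low-traffic but high-importance pages with poor performance stand out (a sketch; you'd typically filter the URL list down to the journey steps you care about):

    // Views and p75 LCP per page, so low-traffic but slow pages are visible.
    FROM PageViewTiming
    SELECT count(*) AS 'Views', percentile(largestContentfulPaint, 75) AS 'p75 LCP (s)'
    WHERE timingName = 'largestContentfulPaint'
    FACET pageUrl
    SINCE 1 week ago LIMIT 50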

Tips for measuring and improving UX

When monitoring user experience, it's not enough to just look at averages across your application. These tips can help you drill down and find user pain points:

  • Focus on user experience metrics, like Core Web Vitals, to measure the user's experience of page performance.
  • Break out performance metrics by device, region, and product or user journey.
  • Measure parts of the user journey that have a high impact on the user’s experience but don’t have as many page views, such as login or payments.