
New Relic is a software as a service (SaaS) offering, so our users’ primary interaction with us occurs through their web browsers. Unfortunately, we can’t support every browser that’s available, and we also can’t manually test every version of the browsers we do support.

But we do support applications on eight different browsers across three different platforms, so automated cross-browser testing is essential to our ability to ship products with browser-based points of interaction.

What is automated cross-browser testing?

Automated cross-browser testing allows us to automate test runs of our applications across many browser and platform combinations. Testing eight browsers across multiple platforms is tedious for a human being, but easy for a machine.

To automate the process, we write tests, configure the browsers we support, and the testing software runs the tests in the right browsers. Essentially, the tests simulate a user interacting with your software in the browser and let you know when things go wrong.

Benefits of automated testing

Here are several examples of the benefits of automated testing and some pitfalls to avoid.

Key benefit 1: Testing all the pieces working together

Developers frequently use automated cross-browser testing as a form of integration testing. The tests run against the full application running in the browser. These types of tests can catch things missed by unit tests, which only test components of the code in isolation. Testing all the code running together can catch potential problems such as:

  • A bug in complex interactions across several components of the code.
  • A change in a backend API that breaks the client-side code.
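
The second bullet deserves a concrete example. If the backend renames a response field, a unit test running against a stale mock still passes, while a test against the full application fails. All of the names in this sketch are hypothetical:

```python
# Sketch: why full-stack tests catch backend changes that mocked unit tests miss.

def render_username(api_response: dict) -> str:
    # Client-side code, written when the API returned {"user_name": ...}
    return f"Hello, {api_response['user_name']}"

# Unit test: uses a mock frozen at the old contract -- still passes.
mock_response = {"user_name": "alice"}
assert render_username(mock_response) == "Hello, alice"

# Integration test: hits the (simulated) live backend, which now
# returns {"username": ...} after a server-side rename.
def fake_live_backend():
    return {"username": "alice"}

try:
    render_username(fake_live_backend())
    integration_caught_bug = False
except KeyError:
    integration_caught_bug = True  # the full-stack test surfaces the break
```

The mock can only verify the contract it was written against; only a test that exercises the real backend and the real client together notices the drift.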

Key benefit 2: Testing a browser’s full UI

Many teams use unit testing tools that depend on headless browsers, like Headless Chrome, to run tests without loading the graphical UI. Browser emulators, such as jsdom, are also used by teams to run their tests against a subset of a browser.

Such tools are great for most unit tests, but testing without a real UI makes it near-impossible to exercise all the workflows and intricacies of an application in a full browser. Browser automation tests can help cover the cases that are hard to write in such situations. They also help make sure your application really works in a browser’s UI.

Key benefit 3: Scaling your tests

For a long time, browser testing relied strictly on manual tests, where a real human tested your application. Manual testing will always be important because there are things a person notices that can’t be automated. However, development teams often have limited resources available for manual testing.

As an application grows, it’s impractical to cover everything manually, and things start falling through the cracks. Automated cross-browser testing, on the other hand, helps you scale your testing and allows you to focus your manual testing on the areas where it’s most needed.

Common errors with automated testing

The biggest pitfall of automated cross-browser tests is flaky tests. If your tests intermittently fail, people stop trusting them. They start ignoring the failures, comment out tests, and eventually turn them off completely. Below are some common causes of flaky tests and recommendations for avoiding them.

Flaky software

Sometimes your tests are flaky because the thing they are testing is flaky. If your automation tests regularly hit a bug, there’s a reasonable chance your users do, too. The problem here isn’t the automation tests—it’s the bugs.

Solution for testing with lots of bugs

The obvious solution for this issue is to make time to fix the bugs. However, if your team has decided those bugs will never get fixed, you should remove the automation tests that regularly hit them. This allows you to focus on reliable tests for code paths you plan to maintain.

Timing issues

Timing issues are a common problem when content is loaded asynchronously. A user intuitively waits until loading indicators are finished before they try to interact with the page. Automation tests need to be programmed to do this.

Selenium (discussed below) allows you to wait for elements to exist (or not exist) on the page before moving forward. Using these methods thoughtfully allows your automation tests to handle these issues.
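
Under the hood, these explicit waits amount to a polling loop: check a condition, sleep briefly, and give up after a timeout. The stdlib-only sketch below mirrors the shape of Selenium's WebDriverWait (it is not Selenium's actual implementation; the spinner state is simulated):

```python
import time

def wait_until(condition, timeout=10.0, poll_interval=0.5):
    """Poll `condition` until it returns a truthy value or `timeout` elapses.

    This is the core of an explicit wait: the test blocks here instead of
    assuming the page is already in the state it needs.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(poll_interval)
    raise TimeoutError(f"condition not met within {timeout:.1f}s")

# Usage: wait for a (simulated) loading spinner to disappear.
state = {"spinner_visible": True}

def spinner_gone():
    return not state["spinner_visible"]

state["spinner_visible"] = False  # the app finishes loading
assert wait_until(spinner_gone, timeout=1.0) is True
```

The key property is that the test waits for the condition rather than for a fixed amount of time, which is what makes it robust against content that loads asynchronously.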

Solution for timing issues

Timing issues can come from slow applications. Most testing frameworks have timeout limits. If your application is slow, you may hit these timeouts and have test failures as a result. One solution is to increase the timeouts. Another solution is to investigate why your application is slow and take steps to improve the performance.

Historically, there have been timeout issues with specific browsers simply because of how slow they are. At New Relic, we only support modern browsers, so we’re less likely to hit this particular problem.

Unstable data

If an application depends on data that changes often, automation tests can be flaky. For example, if an application in New Relic starts issuing alerts because it has suddenly stopped collecting data, our test will undoubtedly fail.

Solution for unstable data testing

We recommend running automation tests against stable sets of data. If your tests rely on unstable datasets, you’ll just end up ignoring them when they fail.
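
One way to get a stable dataset is to seed it deterministically at the start of each run, rather than depending on whatever data happens to exist at the moment. A hypothetical sketch, using a fixed random seed so every run produces identical fixtures:

```python
import random

def seed_fixture_accounts(seed=1234, count=3):
    """Build the same fake accounts on every run, so assertions never drift."""
    rng = random.Random(seed)  # fixed seed => identical data each run
    return [
        {"name": f"test-account-{i}", "throughput_rpm": rng.randint(100, 1000)}
        for i in range(count)
    ]

run_a = seed_fixture_accounts()
run_b = seed_fixture_accounts()
assert run_a == run_b  # stable across runs: tests can assert exact values
```

Because the data is identical from run to run, a failing assertion means the application changed, not the data, which is exactly the signal you want from an automation test.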

Time-consuming tests

As you build up large applications, the number of automation tests can get large and unwieldy, especially if you run them as part of your pull request process. If a pull request job takes 20 minutes to run, it can be hard to get changes through quickly.

Solution for time-consuming tests

In such cases, you should break your tests up into different sets for different use cases.

A smaller set of tests for critical functionality, often referred to as smoke tests, can be run alongside pull requests. The full list of longer-running tests can then be run against your staging environment after deploys or on a regular interval to notify you if failures occur. This blended approach allows you to catch problems before they make it to production while still maintaining a relatively fast pull request process.
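
One lightweight way to make that split is tagging: mark the critical tests as smoke tests and select by tag depending on where the run happens. The registry below is a hypothetical sketch of the idea (pytest's marker system provides the same capability off the shelf):

```python
# Sketch: tag tests and select a subset for pull requests.
SMOKE = "smoke"
REGISTRY = []

def tagged(*tags):
    """Decorator that registers a test function along with its tags."""
    def register(fn):
        REGISTRY.append((fn, set(tags)))
        return fn
    return register

@tagged(SMOKE)
def test_login_works():
    pass  # critical path: runs on every pull request

@tagged(SMOKE)
def test_dashboard_renders():
    pass

@tagged()
def test_export_to_csv():
    pass  # slower, less critical: runs post-deploy against staging

def select(tag=None):
    """Return all tests, or only those carrying the given tag."""
    return [fn for fn, tags in REGISTRY if tag is None or tag in tags]

pr_suite = select(SMOKE)  # fast subset for pull requests
full_suite = select()     # everything, for staging runs
```

The pull request job runs `pr_suite` in a few minutes, while `full_suite` runs on a schedule or after deploys, matching the blended approach described above.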

The best cross-browser automation testing tools

At New Relic, we use two tools—Selenium and Sauce Labs—to perform our cross-browser automation testing. The best part about these tools is that you don’t need a deep understanding of their internals to use them. Here’s how they work for us:


Selenium

Selenium is a suite of tools for automating web browsers across many platforms, and it includes Selenium WebDriver, an incredibly popular tool for browser automation tests.

WebDriver makes direct calls to each browser using the browser’s native support for automation. Each browser handles these direct calls in its own way and has its own set of supported features, so refer to the documentation for each browser driver.

WebDriver provides an API that isn’t tied to any particular test framework. There are test libraries and frameworks that work with WebDriver for just about every language you might want to write your automated tests in, and at New Relic we use several of them.

In addition to test libraries, Selenium offers a whole ecosystem of services and tools to help you run your automation tests.

Sauce Labs

While it’s totally possible to run tests yourself, managing the infrastructure needed to test all the platforms and browsers you want can be a huge resource drain. Sauce Labs, however, provides the infrastructure so you can write automation tests without having to hire an entire team to manage the resources necessary to run them. 

With a focus on security, Sauce Labs provides single-use VMs from their own data center on which you can run Selenium tests in real browsers. After you run your tests, Sauce Labs destroys any VM you use, so your data is never exposed to future sessions. They also provide a proxy service, so you can safely test applications from behind your firewall.
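
Connecting Selenium tests to Sauce Labs mostly comes down to pointing a remote WebDriver at their service with the right capabilities. The sketch below builds a W3C-style capability payload; treat the exact option names as assumptions and check Sauce Labs' documentation for the current endpoint and fields:

```python
import os

def sauce_capabilities(browser, version, platform, test_name):
    """Build a W3C-style capability payload for a remote Sauce Labs session.

    Option names here are a sketch, not a reference -- verify them
    against Sauce Labs' current documentation.
    """
    return {
        "browserName": browser,
        "browserVersion": version,
        "platformName": platform,
        "sauce:options": {
            "username": os.environ.get("SAUCE_USERNAME", ""),
            "accessKey": os.environ.get("SAUCE_ACCESS_KEY", ""),
            "name": test_name,  # label shown in the Sauce Labs dashboard
        },
    }

caps = sauce_capabilities("chrome", "latest", "Windows 11", "smoke: login")
```

These capabilities would then be handed to a remote WebDriver session pointed at Sauce Labs' hub, so the same test code runs there instead of on your own machines.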

Catch issues before your users do

In an ideal world, testing your application in one browser would mean it works as expected in all browsers. Unfortunately, browsers don’t all behave the same way, which can lead to degraded experiences for users on the browsers and platforms you’re less likely to develop and test on but are still expected to support. Use automated cross-browser testing to catch problems early, before your users catch them.