If you rely on a content delivery network (CDN) to host assets for your frontend web applications, you know how frustrating it can be if your customers start reporting 404 errors or slow load times. It can be impossible to tell whether those errors are coming from your end or from the CDN, which you likely don’t have access to. That’s a problem.

If you’re able to track application transaction errors and load times with a tool like New Relic, shouldn’t you also be able to track performance data from your CDN in that same tool? After all, the ability to correlate error data between your application and your CDN is critical when troubleshooting errors that may impact your customers or business.

At New Relic, we use the Fastly CDN to host the front-end assets (the HTML, JavaScript, and CSS) that make up our website, so visitors can enjoy quick load times wherever they are around the globe. We’ve also heard from New Relic customers who use Fastly, and we all agree that it would be great to quickly match errors coming from Fastly with errors in our web apps. To this end, we did some experimentation and created a Fastly-to-Insights service, which pipes real-time analytics data from your Fastly account to your New Relic account. While New Relic does not officially support the service, it lets you monitor Fastly just as you would any other application. The service is specific to Fastly because it draws from Fastly’s API; to enable a similar service with another CDN, you would need access to that CDN’s API, which may or may not be available.

Using the Fastly-to-Insights pipeline

The Fastly real-time analytics API sends aggregates of metrics—for example, response size, request size, cache hits and misses, and a count of each status code—and returns a new set of results every time you query it. To capture this data, we built a proxy server to query Fastly's API and pipe the results as custom events over to Insights, via the Insights API.
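To make the flow concrete, here’s a rough sketch of a single poll cycle in Python. The endpoint paths, header names, and event fields are assumptions based on Fastly’s public real-time analytics API and New Relic’s Insights insert API; they illustrate the idea rather than reproduce the open source service’s actual code.

```python
import json
import os
import urllib.request

# Assumed endpoints; the real service's paths and fields may differ.
FASTLY_RT = "https://rt.fastly.com/v1/channel/{service}/ts/{ts}"
INSIGHTS = "https://insights-collector.newrelic.com/v1/accounts/{account}/events"

def to_events(aggregates, service_id):
    # Tag each Fastly aggregate with an event type and the service ID,
    # so Insights queries can FACET on service.
    return [dict(agg, eventType="LogAggregate", service=service_id)
            for agg in aggregates]

def poll_once(service_id, last_ts=0):
    # Ask Fastly for the aggregates recorded since the last timestamp.
    req = urllib.request.Request(
        FASTLY_RT.format(service=service_id, ts=last_ts),
        headers={"Fastly-Key": os.environ["FASTLY_KEY"]})
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)

    events = to_events([d["aggregated"] for d in body.get("Data", [])],
                       service_id)
    if events:
        # Forward the batch to Insights as custom events.
        ins = urllib.request.Request(
            INSIGHTS.format(account=os.environ["ACCOUNT_ID"]),
            data=json.dumps(events).encode(),
            headers={"X-Insert-Key": os.environ["INSERT_KEY"],
                     "Content-Type": "application/json"})
        urllib.request.urlopen(ins)
    return body.get("Timestamp", last_ts)  # feed into the next poll
```

The returned timestamp is passed into the next call, so each poll fetches only the aggregates Fastly recorded since the previous one.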

In order to use the Fastly-to-Insights service, you will need the following:

  • A New Relic account ID and an Insights insert key
  • A Fastly API key and the IDs of the Fastly services you want to monitor
  • A server (or other host) that can run Docker

Here’s how it works:

Pull down our Fastly-to-Insights Docker image from Docker Hub, and then run it on your dedicated server with the following environment variables: ACCOUNT_ID, FASTLY_KEY, INSERT_KEY, and SERVICES.

Here’s an example of the command to run:

$ docker run \
  -e ACCOUNT_ID='yourNewRelicAccountId' \
  -e FASTLY_KEY='yourFastlyKey' \
  -e INSERT_KEY='yourNewRelicInsertKey' \
  -e SERVICES='list of services' \
  newrelic/fastly-to-insights

Fastly lets you query only one service at a time, so the value of SERVICES needs to be a single string containing the IDs of the services you’re sending to Insights, separated by spaces. (Note: We use a string because you can’t pass an array to Docker via the command line.)
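Inside the container, the proxy can then split that string back into individual IDs, presumably with something like the following (the service IDs here are made up for illustration):

```python
import os

# 'abc123' and 'def456' are hypothetical Fastly service IDs.
os.environ["SERVICES"] = "abc123 def456"

# str.split() with no arguments splits on any run of whitespace,
# yielding one ID per service to poll.
service_ids = os.environ["SERVICES"].split()
```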

Once you’re up and running, you’ll see the default Fastly Metrics dashboard in Insights, but you’ll need to run some queries against your Fastly data to populate the dashboard.

Default Fastly Insights dashboard

For example, we ran these queries to create the widgets shown above:

  • 4xx status codes by service
    SELECT sum(status_4xx) FROM LogAggregate SINCE 6 hours ago TIMESERIES 3 minutes FACET service
  • 5xx status codes by service
    SELECT sum(status_5xx) FROM LogAggregate SINCE 6 hours ago TIMESERIES 3 minutes FACET service
  • 2xx status codes by service
    SELECT sum(status_2xx) FROM LogAggregate SINCE 6 hours ago TIMESERIES 3 minutes FACET service
  • The number of cache hits by service
    SELECT sum(hits) FROM LogAggregate SINCE 6 hours ago TIMESERIES 3 minutes FACET service
  • The number of cache misses by service
    SELECT sum(miss) FROM LogAggregate SINCE 6 hours ago TIMESERIES 3 minutes FACET service
  • The total amount of time spent processing cache misses (in seconds)
    SELECT sum(miss_time) FROM LogAggregate SINCE 6 hours ago TIMESERIES 3 minutes FACET service
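You can also run these same NRQL queries programmatically against the Insights query API, which is handy for alerting scripts or ad hoc checks. This sketch assumes the standard Insights query endpoint and a query key (which is separate from the insert key the proxy uses); the account ID here is made up:

```python
import os
import urllib.parse
import urllib.request

# Assumed Insights query endpoint; requires an X-Query-Key header.
QUERY_URL = ("https://insights-api.newrelic.com/v1/accounts/"
             "{account}/query?nrql={nrql}")

def build_query_url(account_id, nrql):
    # NRQL goes in the query string, so it must be URL-encoded.
    return QUERY_URL.format(account=account_id,
                            nrql=urllib.parse.quote(nrql))

url = build_query_url(
    "12345",  # hypothetical account ID
    "SELECT sum(status_5xx) FROM LogAggregate SINCE 6 hours ago FACET service")

def run_query(url):
    # Returns the raw JSON response body from Insights.
    req = urllib.request.Request(
        url, headers={"X-Query-Key": os.environ["QUERY_KEY"]})
    with urllib.request.urlopen(req) as resp:
        return resp.read()
```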

Our Fastly-to-Insights service is available on GitHub. We originally created it to monitor our own website performance, so as noted, New Relic does not officially support this open source service. Nevertheless, we want it to be as easy to use as possible, and we welcome any contributions or feedback. Feel free to fork the repo, try it out, and send us your suggestions.