In the years since New Relic entered the Node.js monitoring world, our service has grown significantly in popularity. According to stats from the JavaScript package manager npm, our agent is downloaded more than 400K times a month!
Flexibility and customization are key reasons for this success. The New Relic Node.js agent comes with a powerful set of monitoring capabilities right out of the box, and you can also add custom monitoring tailored to a particular application.
Customizing the Node.js agent
At the recent Node Summit in San Francisco, several Relics got to chatting about new types of custom analysis that can be done with our Node.js agent. In this post, we will share examples of how to use New Relic Insights to monitor and analyze stats related to the Node runtime:
- Memory usage: perhaps the most common custom metric.
- Garbage Collection: a “stop the world” event worth tracking more closely.
- CPU Timing: Node v6.1.0 added a new API for getting CPU metrics that can be analyzed.

Our Node.js agent has a rich API that lets you publish these metrics for deeper analysis. The API brings a ton of flexibility to Node.js monitoring, including the ability to do the following (sketched briefly after the list):
- Customize or modify the transaction name being used.
- Track and time a specific asynchronous callback function.
- Record or increment arbitrary metrics.
- Track discrete events for deeper analysis.
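For illustration, here is a rough sketch of what a few of these calls look like. The method names come from the agent's API documentation, but the metric and event names are placeholders of our own, so check the API docs for your agent version before copying this:

var newrelic = require('newrelic')

// rename the currently active transaction (call this inside a request handler)
newrelic.setTransactionName('orders/checkout')

// record or increment arbitrary custom metrics
newrelic.recordMetric('Custom/Checkout/CartSize', 3)
newrelic.incrementMetric('Custom/Checkout/Attempts')

// record a discrete event with arbitrary attributes for analysis in Insights
newrelic.recordCustomEvent('Checkout', { total: 42.5, items: 3 })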
To explain how this works, we will use the recordCustomEvent() API call, so that you can use Insights to slice and dice the memory, GC, and CPU data. We will use a simple Express application, sketched below, in the following examples.
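A minimal version of such an app might look like the following (the port and route are arbitrary; the important detail is that the newrelic module is required before anything else so the agent can instrument your other dependencies):

// load the agent first so it can instrument modules required after it
var newrelic = require('newrelic')
var express = require('express')

var app = express()

app.get('/', function (req, res) {
  res.send('hello world')
})

app.listen(3000)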
Memory usage
Probably the most common metrics of interest are related to memory usage. The Node core API has a method for getting memory consumption: process.memoryUsage(). This call returns the total memory consumption of the process (RSS) as well as the V8 heap total and heap used.
The code example below shows how to periodically collect memory usage and send it to New Relic Insights for further analysis:
var newrelic = require('newrelic')

// sample memory usage every five seconds and record it as a custom event
setInterval(function sampleMemory() {
  var stats = process.memoryUsage()
  newrelic.recordCustomEvent('NodeMemory', stats)
}, 5000)
The first argument, 'NodeMemory' (in the recordCustomEvent() call), is the name of the custom event that will be collected. The second argument is an object containing the data. In this case, the stats variable will contain the properties rss, heapTotal, and heapUsed.
Once the data is sent to New Relic, you can use the following query to display memory usage over time:
SELECT average(rss), average(heapUsed), average(heapTotal)
FROM NodeMemory SINCE 1 HOUR AGO TIMESERIES

Garbage Collection analysis
Another interesting set of metrics related to memory management comes from garbage collection (GC). The V8 JavaScript engine in Node.js uses garbage collection to determine which memory is no longer being used and can be freed. Because JavaScript runs on a single thread, GC pauses the application while it works, so time spent in GC needs to be kept to a minimum.
The Node core API does not currently include a method for getting GC information, and the Node agent does not collect it out of the box; a native C++ module is required to access the GC data. However, a number of modules from the Node community expose this information, and you can forward it to New Relic using the agent's API.
Let’s assume that you have a module that emits an event every time GC runs. When this event occurs, you can send a custom event to New Relic:
var newrelic = require('newrelic')

// gc is assumed to be an event emitter provided by a community GC-stats module
gc.on('run', function (data) {
  // data contains properties duration and type
  newrelic.recordCustomEvent('NodeGC', data)
})
Furthermore, let’s assume that the data variable contains the properties duration and type. In New Relic Insights, the following query would show GC frequency over time:
SELECT count(*) FROM NodeGC FACET type SINCE 1 HOUR AGO TIMESERIES

And the query below displays average duration (in milliseconds) of GC events:
SELECT average(duration) FROM NodeGC FACET type
SINCE 1 HOUR AGO TIMESERIES
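If you don't already have such a module wired up, one community option is gc-stats from npm. The following is a rough sketch of adapting it to the NodeGC event shape used above; the 'stats' event and the pauseMS and gctype fields are based on that module's documentation and may vary by version, so verify against the version you install:

var newrelic = require('newrelic')
// gc-stats is a community module that reports V8 garbage collection statistics
var gcStats = require('gc-stats')()

gcStats.on('stats', function (stats) {
  // map the module's fields onto the duration/type attributes used above
  newrelic.recordCustomEvent('NodeGC', {
    duration: stats.pauseMS,
    type: stats.gctype
  })
})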

Detailed CPU timing analysis
Traditionally, it has been challenging to get CPU data from Node.js. Although the New Relic Server Agent can report CPU usage at the host level, many users would rather collect the data directly from Node.js. Fortunately, as of Node v6.1.0, there is a core API that can be used to pull in this CPU data.
The example below shows how to collect CPU usage over time. The cpuUsage() call returns the total CPU time consumed since the Node process started, so recording CPU usage periodically requires a little more work:
var newrelic = require('newrelic')

if (process.cpuUsage) {
  var lastUsage

  // sampling interval in milliseconds
  var interval = 60000

  setInterval(function sampleCpu() {
    // get CPU usage (in microseconds) since the process started
    var usage = process.cpuUsage()

    if (lastUsage) {
      // calculate the percentage of the interval spent in user code
      var intervalInMicros = interval * 1000
      var userPercent = ((usage.user - lastUsage.user) / intervalInMicros) * 100
      newrelic.recordCustomEvent('NodeCPU', { userPercent: userPercent })
    }

    lastUsage = usage
  }, interval)
}
In Insights, the following query displays CPU usage over time:
SELECT average(userPercent) FROM NodeCPU TIMESERIES

Analyzing the data further
Since you can send any attributes with the custom event data, it is easy to facet on any custom attribute. For example, when running multiple instances of the same application, you may want to see stats for each process.
The example below includes the process ID (PID) with the CPU custom event, so that you can then facet on this attribute in Insights.
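The original code for that example isn't reproduced here, but a minimal sketch, reusing the sampling loop from the CPU section above and the standard process.pid property, might look like this:

var newrelic = require('newrelic')

if (process.cpuUsage) {
  var lastUsage
  var interval = 60000

  setInterval(function sampleCpuWithPid() {
    var usage = process.cpuUsage()
    if (lastUsage) {
      var intervalInMicros = interval * 1000
      var userPercent = ((usage.user - lastUsage.user) / intervalInMicros) * 100
      // include the process id so Insights can break the data out per process
      newrelic.recordCustomEvent('NodeCPU', {
        userPercent: userPercent,
        pid: process.pid
      })
    }
    lastUsage = usage
  }, interval)
}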
The query would look something like:
SELECT average(userPercent) FROM NodeCPU FACET pid TIMESERIES

Here you can see the CPU detail broken up by process id. Thanks to the power of Insights, it’s easy to slice and dice the data any way you please.
Summary
One wonderful quality of the Node community is that things move quickly: new frameworks and libraries appear all the time, and you may want to include them in your monitoring data. The Node.js agent API is designed to make it easy to add, track, and analyze this extra data.
(Don’t forget to check out the Node.js agent API 2.0 beta, which adds new APIs for instrumenting datastores and the ability to distribute instrumentation modules independently of the agent. Learn more in the documentation!)
Martin Kuba and Tim Krajcar contributed to this post.