The Go programming language’s performance, ease of deployment, and simple concurrency model are common reasons developers switch to Go. I didn’t know much about Go myself, so I recently converted a personal Python project and started monitoring it with the New Relic Go Agent. Using my app as an example, I’ll give an overview of the agent and dig into some of its coolest features, including transactions, segments, and attributes, which give me the same insight into my application’s throughput, error rate, and response time that I used to get from the Python Agent.

See also: The Most Popular Programming Languages of 2016


The Go agent works a little differently from other New Relic agents, such as the PHP or Ruby agents, which you import as a library or module and which then “automagically” collect data. That magic is possible because those languages provide hooks into their virtual machines that the agents use to access and wrap functions. Go, however, is a compiled language; it doesn’t use a virtual machine. This means the best way to monitor Go applications is to use an API.

While this makes the Go agent a little more work to install, it provides tremendous flexibility and control over what gets instrumented. And it still has the magic, too! Simply importing the agent and creating an application gives you useful runtime information about the number of goroutines, garbage collection statistics, and memory and CPU usage.

We even made a special “Go runtime” page for these metrics in New Relic APM. It’s useful, actionable information for any Go developer to have at a glance.

Go runtime metrics

Reporting these metrics for your application is easy. First, download the library from GitHub or via go get. Add it to your application’s import block, then create a config and an application in your main() or init() function:

config := newrelic.NewConfig("Your App Name", "YOUR_LICENSE_KEY")
app, err := newrelic.NewApplication(config)
if err != nil {
     log.Fatal(err) // NewApplication returns an error for an invalid config
}

The agent periodically records information sourced from the MemStats structure in the runtime package. We’ve already had several customers report that the goroutines chart has identified leaks they didn’t know existed. While Go provides several good monitoring tools, they are valuable only when used consistently. Integrating New Relic into your app automates monitoring and helps you quickly spot new issues with each deploy.
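To get a feel for what the agent is recording, you can read the same runtime data yourself. This is a minimal stdlib-only sketch (the `snapshot` helper is my own name, not part of the agent) showing the goroutine count and MemStats fields behind those charts:

```go
package main

import (
	"fmt"
	"runtime"
)

// snapshot reads the same kind of runtime data the agent samples:
// the live goroutine count and the current heap allocation from MemStats.
func snapshot() (goroutines int, heapAlloc uint64) {
	var m runtime.MemStats
	runtime.ReadMemStats(&m)
	return runtime.NumGoroutine(), m.HeapAlloc
}

func main() {
	g, h := snapshot()
	fmt.Printf("goroutines: %d, heap alloc: %d bytes\n", g, h)
}
```

A steadily climbing goroutine count in this data is exactly the kind of leak the goroutines chart surfaces.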

My example

Have you ever heard of the Kevin Bacon Six Degrees of Separation game? In that spirit, I built a Python app called wikiGraph to query the shortest path between any two of Wikipedia’s 4.5 million pages, which I had stuffed into a Neo4j graph database. This lets you answer all sorts of interesting questions, such as: how many clicks does it take to get from Grace Hopper to sushi? [See the bottom of the post for the answer.]

grace hopper chart

When a user inputs two page names, the server queries a smaller SQLite database for page information and queries the graph database for the shortest path. It then asks the graph database for a selection of “neighboring” pages, assembles a graph, and passes it back to the frontend.

go chart

Converting my Python app to Go was fun! I especially enjoyed using structs that could easily unmarshal into JSON and using goroutines to make my database queries concurrent. Once my app was in Go, I was ready to install the New Relic APM agent and start monitoring it.
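The concurrent-queries pattern is simple to express with goroutines and a channel. This is a minimal sketch under my own assumptions (the `result` type and the query functions are stand-ins, not my app’s real code): each database call runs in its own goroutine instead of back to back:

```go
package main

import (
	"fmt"
	"sync"
)

// result is a hypothetical stand-in for a database response.
type result struct {
	name string
	rows int
}

// runConcurrently fires each query in its own goroutine and collects the
// results, so the SQLite and Neo4j lookups can overlap in time.
func runConcurrently(queries map[string]func() int) []result {
	var wg sync.WaitGroup
	out := make(chan result, len(queries))
	for name, q := range queries {
		wg.Add(1)
		go func(name string, q func() int) {
			defer wg.Done()
			out <- result{name: name, rows: q()}
		}(name, q)
	}
	wg.Wait()
	close(out)
	var results []result
	for r := range out {
		results = append(results, r)
	}
	return results
}

func main() {
	results := runConcurrently(map[string]func() int{
		"sqlite": func() int { return 1 },  // e.g., page-name lookup
		"neo4j":  func() int { return 12 }, // e.g., shortest-path query
	})
	fmt.Println(len(results), "queries completed")
}
```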

Instrument all the things!

As mentioned earlier, runtime metrics are only the beginning of the information our Go agent can provide. With so much flexibility, though, how do you decide what to instrument? Should you collect information about every function? What if you want information about only a small block of code?

I suggest starting with your most important areas of concern. I decided to focus on these two questions:

  1. What is the timing information for my server’s routes?
  2. What is the breakdown of time spent in the databases?

Whatever questions you decide to tackle first, the next decision is how to handle the application structure you just created. You’ll need to access it within your handlers in order to create transactions and custom events. Should the application be a global variable or should it be passed around as a parameter? The answer mostly depends on your developer worldview.

Choosing a global structure certainly makes access convenient. However, passing the application around as a parameter makes it an explicit dependency of each function that uses it. The code becomes more modular and easier to test, since you can swap in a different application for a particular test. As a first pass, I recommend making the application global, but I wouldn’t leave it that way long term.
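The explicit-dependency approach can look like this. It’s a sketch with a placeholder `app` type standing in for the real `newrelic.Application` value (the `server` and `banner` names are mine, purely illustrative):

```go
package main

import (
	"fmt"
	"net/http"
)

// app is a placeholder for the newrelic.Application value that
// newrelic.NewApplication would return.
type app struct{ name string }

// server carries the application as an explicit dependency, so tests can
// swap in a different app value without touching a global.
type server struct {
	app *app
}

// banner is a small helper so the handler's output is easy to test.
func (s *server) banner() string { return "served by " + s.app.name }

func (s *server) queryHandler(w http.ResponseWriter, r *http.Request) {
	// s.app is available here for transactions, segments, and events.
	fmt.Fprint(w, s.banner())
}

func main() {
	s := &server{app: &app{name: "wikiGraph"}}
	http.HandleFunc("/query", s.queryHandler)
	// http.ListenAndServe(":8080", nil) // start the server as usual
}
```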


What is a Go transaction? The short answer is that it’s whatever you want it to be. You can start and end a transaction around a route, a function, or just a block of code. In the New Relic realm, transactions are traditionally equated with server requests. For my app, I use half a dozen routes to serve frontend requests, so that paradigm makes sense for me.

With the New Relic APM Go agent, you have the power to (and you must) start and stop the transaction explicitly. If you don’t need to scope the transaction to a particular bit of code, defer statements are a convenient way to end a transaction:

txn := app.StartTransaction("/query", responseWriter, request)

defer txn.End()

But wait, there’s more: the Go agent has built-in support for request handling! If you are using the http standard library package, you can use the agent’s wrappers to automatically start and end transactions with the request and response writer. This is how I created transactions for each route:

http.HandleFunc(newrelic.WrapHandleFunc(app, "/query", queryHandler))

One important restriction is that each transaction should be used in only a single goroutine. If you want to access the transaction in a new goroutine, just start a new transaction for it.
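The one-transaction-per-goroutine pattern can be sketched like this. The `txn` type below is a stub standing in for the agent’s transaction (the real code would call `app.StartTransaction` and `txn.End`); the point is the shape, not the stubs:

```go
package main

import "sync"

// txn is a placeholder for the agent's transaction; the real code would
// use app.StartTransaction / txn.End instead of these stubs.
type txn struct{ name string }

func startTransaction(name string) *txn { return &txn{name: name} }
func (t *txn) End()                     {}

// fanOut gives each goroutine its own transaction rather than sharing one,
// matching the rule that a transaction belongs to a single goroutine.
func fanOut(names []string) int {
	var wg sync.WaitGroup
	for _, name := range names {
		wg.Add(1)
		go func(name string) {
			defer wg.Done()
			t := startTransaction(name) // one transaction per goroutine
			defer t.End()
			// ... this goroutine's work happens here ...
		}(name)
	}
	wg.Wait()
	return len(names)
}

func main() { fanOut([]string{"background-1", "background-2"}) }
```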

You’ll probably want to access the current transaction within the handler to do fun things with segments and attributes. The newrelic.Transaction structure actually embeds the response writer, so be sure to use the transaction in place of the original writer. This means you can get the transaction with a simple type assertion.

func queryHandler(w http.ResponseWriter, r *http.Request) {

     if txn, ok := w.(newrelic.Transaction); ok {

          txn.NoticeError(errors.New("my error message"))

     }

     // ... handle the request as usual ...

}

Once I converted the handlers in my app to use the wrapper, I checked out what that looked like in New Relic APM:

top 5 web transactions

Sadly, it wasn’t a very colorful chart, but it did answer my first question: the vast majority of time is spent in the “/query” route. What is that handler spending all its time doing? Scoping transactions to entire routes can obfuscate important information. So let’s dig into the details using segments.


Segments are meaningful chunks of a transaction. The Go agent currently supports external, datastore, and generic segment flavors. I suspected the bulk of the query time in my app was spent in the datastores, so I started by adding segments for each call.

As with transactions, you are responsible for starting and stopping your segments. Segment calls are safe to use even without checking whether the transaction is nil. If your segment spans an entire function, you can use a rather elegant single-line defer statement:

defer newrelic.DatastoreSegment{

     StartTime:  newrelic.StartSegmentNow(txn),

     Product:    newrelic.DatastoreSQLite,

     Collection: "pagenames_table",

     Operation:  "SELECT",

}.End()

err := db.QueryRow(query, item.value).Scan(&result)

Because I found it easiest to interact with the Neo4j database through its REST API, I needed to make POST requests rather than use a driver. I wrapped the request with a Datastore segment (notice how I added Neo4j as a new product):

segment := newrelic.DatastoreSegment{

     StartTime: newrelic.StartSegmentNow(txn),

     Product:   newrelic.DatastoreProduct("Neo4j"),

     Operation: r.operation,

}

defer segment.End()

response, err := http.Post(url, "application/json", bytes)


Let’s see what that looks like in New Relic. Under the “/query” transaction, I found the breakdown of time. As I expected, most of the time was spent in the graph database. The shortest-path query was about 50% of the total response time, with the neighbor page query making up another 30%. (The SQLite calls aren’t in this transaction since they are made by another handler.)

app server breakdown



This chart answered my second question and I could keep an eye on these numbers as I further optimized the query times.


Finally, I wanted to build a New Relic Insights dashboard to track some of the fun paths users were finding in my app.

For those not familiar with custom attributes, they are key-value pairs you attach to events, which you can then use in Insights. I was particularly interested in the database timing and the pages in the path: if a query has an especially long response time, the start and end pages let me reproduce it.

You get the total database duration for free with the transaction event, but in order to add the query results, I concatenated the pages into a string for the “path” value:

txn.AddAttribute("path", path)
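Building that string is a one-liner with `strings.Join`. This sketch uses my own helper name and an arrow separator of my choosing; the actual delimiter doesn’t matter as long as it’s readable in Insights:

```go
package main

import (
	"fmt"
	"strings"
)

// pathAttribute flattens the pages along the shortest path into a single
// string suitable for use as a custom attribute value.
func pathAttribute(pages []string) string {
	return strings.Join(pages, " -> ")
}

func main() {
	fmt.Println(pathAttribute([]string{"Grace Hopper", "National Medal of Technology", "Refrigeration", "Sushi"}))
}
```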

I added other attributes for the start and end pages, the path length, and the original query. Then I created a dashboard to explore the most recent path found, the longest path found, and the path with the longest response time:

insights dashboard



In case you’re curious, here’s the NRQL query for the longest shortest path found:

SELECT max(length) FROM Transaction WHERE query IS NOT NULL FACET query

My favorite part is the filter I set up on the query attribute, which lets me click on an interesting path to filter the whole dashboard for that path result. Did you know the Louisville Zoo Wikipedia page links to Brazil nuts through just one other page, the Hyacinth macaw?

That’s my tale of getting up and running with the New Relic APM Go agent. Hopefully it will give you ideas of how to use it on your own Go applications. We invite you to try it out and tell us what you think on the New Relic forums. For more details, check out our Go Agent guide and public Docs site.


[Answer] Three clicks. In 1991, Grace Hopper won the National Medal of Technology and Innovation, the same award that Frederick McKinley Jones and Joseph A. Numero won for advances in refrigeration, which had a huge influence on the rise of the Japanese sushi industry.


Gopher image CC 3.0; original by Renee French.