
How to Create a Modern DevOps Toolkit


What makes a successful DevOps team? Two key components are real-time visibility and communication.

Troubleshooting and debugging is difficult without the ability to investigate and immediately understand what’s happening across all your systems and applications. And once issues are identified, developer and operations teams need to communicate effectively in order to quickly find a resolution.

In a DevOps environment, finding the right tools and combining services is critical. When evaluating this process, it's crucial to consider interoperability: will the tools let your teams interface and work more effectively with each other? For example, is your incident management software integrated with your team's group chat? Is everyone working on the same version of software?

Get your questions answered during the live Q&A with DevOps practitioners from Logentries and New Relic.

Join us as we discuss:

  • The Importance of Interoperability: Why it matters

  • The Challenges of Interoperability: How to get your tools and teams talking

  • A Few Favorite Pairings: What tools do modern DevOps teams use

Full Script

Audrey: Hi everyone and welcome to today's webinar "How to Create a Modern DevOps Toolkit."

Moderator: Thanks for joining us today. Just a little bit of housekeeping: please ask questions via the questions panel. There are actually two panels that we'll be playing with today. One is the questions panel and the other is the chat panel. Audrey, what else will happen afterwards?

Audrey: Just a few other housekeeping items. Today's webinar is being recorded, and we will email you a link to the recording tomorrow. If you want to follow us on Twitter and join the DevOps conversation, you can do that @logentries or @newrelic. Today's speakers are Trevor Parsons, Co-Founder and Chief Scientist at Logentries, and Abner Germanow, Senior Director of Solutions Marketing at New Relic. Trevor.

Trevor Parsons: Thanks guys. Hi everybody. Maybe let's give people an overview of what we're going to cover. Today's webinar is all around how to create the modern DevOps toolkit. What we're going to cover initially is some of the challenges we face when we look at today's systems.

Actually, Abner's going to cover initially how managing your software systems today is becoming a big data problem: it really comes down to the many different data sources we need to track, and how systems have changed over the last couple of years. Abner's going to cover that.

He's also going to cover how DevOps is really helping to drive innovation in this space, and how we're trying to break down silos not only between dev and ops but across all different parts of the organization. Then we're going to dive into some examples of DevOps tools, the different categories of DevOps tools, and some of the requirements that people have as they look at this tool stack.

Then we're going to look at the important characteristics you need to consider as you're selecting your different tools. One of the characteristics that we're really going to focus on is interoperability.

What you'll see as we go through the webinar is that the modern DevOps toolkit is really a lot of different specialized tools that tend to work very well together, as opposed to the single tool, or the smaller set of tools, that people may have used in the past to monitor their systems.

We're going to look at interoperability and some of the advantages of having tools that can work well together, and also some of the challenges on things that you should look out for when you're selecting your tools.

Then we're going to give you some examples of the full DevOps stack and how these tools can work together to solve particular problems, and we hope to show some key examples towards the end. Then I think we'll leave about 10 minutes at the end of the webinar for questions and answers.

Abner, I think I'm going to pass control back over to you and let you take us through some of the challenges that we're seeing today.

Abner Germanow: Great. Thanks so much, Trevor. I'm Abner, and what I'd like to start with today is a look at a typical application architecture. When we think about apps today, with your standard three-tier architecture, you've got quite a bit going on. Maybe you're thinking about splitting this up into microservices, or adding new services as microservices off to the side.

That's certainly the buzzword of probably 2015. Then you're also probably thinking about public cloud services. Maybe you're using a variety of different data stores. Then, out in what the customers experience, you have rich web apps, you have mobile applications, and from the challenge of managing software, you have a set of application data.

Typically, this is the realm of the developers and the ops people who are trying to optimize the performance, and the cost, and the efficiency of the application. Then you typically have a set of customer experience data.

Very often this might be Google Analytics, or Omniture, or other tools that show you what the customer is doing, and what they're trying to accomplish, what are they reading, what are they shunning.

Then, of course, what we really care about is the business data. Do the actions that the customers take, and the application itself, make the business go? Does it serve the goals of what you're trying to do? When you look at all of these different data points, all the containers that you're spinning up, and all the various components, managing software has become a big data analytics problem.

Just to give you a little bit of a hint at how big it really is, today New Relic monitors about four million app instances. Those app instances, across all those different components, generate about 800 billion metrics a day. That's a lot of data, and when you think about how you make sense of that for the people you work with, there's a big part of your operational or developer career success that's tied up in not just saying, "Hey, application performance, we're going faster," or not.

It has to be measured in the context of the customer experience. Yeah, we went faster and that speed resulted in some change in the customer experience.
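As a back-of-the-envelope check on the scale Abner quotes, the two figures work out like this (the per-instance rates are my arithmetic, not numbers from the webinar):

```python
# Back-of-the-envelope math on the scale quoted above: roughly four million
# app instances generating about 800 billion metrics a day.
instances = 4_000_000
metrics_per_day = 800_000_000_000

per_instance_per_day = metrics_per_day / instances    # metrics per instance per day
per_instance_per_sec = per_instance_per_day / 86_400  # 86,400 seconds in a day

print(per_instance_per_day)            # 200000.0
print(round(per_instance_per_sec, 1))  # 2.3
```

So each monitored instance is emitting on the order of a couple of metrics per second, around the clock.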

When you look at the old notion of how we build software, it used to be that we would build software, especially in the enterprise for back-office stuff, like this: the business would say, "Hey, I want something," then they'd pass it over to the developers, the developers would pass it over to the ops people, then the app would go to the customer, and the customer would go, "Bah, I don't like this."

How do you get to a point where the business owner has a sense of how the customer is behaving and whether or not they're doing the right things? And then, how can the dev teams and the ops people optimize for that? If we think about these teams in the context of the data that they're consuming, you have that application data between the dev and the ops that I already talked about.

You have the customer experience data that you want everybody to understand so that people understand our best customers are experiencing the platform in one way, and our worst customers in a different way.

One that's really key, and that oftentimes gets left out of many DevOps conversations, is: what is the business view into the customer behaviors? How can you make sure that everyone on the team understands how building software better, faster, and with more stability impacts the overall business, and that everyone's included in that activity?

We know that silos are slowing that innovation. One of the reasons the DevOps moniker has been so hot is that, if you can create a culture that fuels this agility and that is data-driven, then you can start to really move your application and your software toward powering your business.

One of the big things that I think is really fascinating in this shift is that, as software moves from being project-based to being software products, where you behave a lot more like a software company, that is changing a lot of the technologies that we use. It's changing the organizational behaviors and all that.

We're interested in what are the tools that you're already using. If you could all go to the chat panel and type in a couple of the tools that you have in your stack, that'll give everybody else who's on the webinar a feeling for the types of tools that people are already using. We have some ideas on what people are using, but we'd love to hear from all of you.

If you could go to the chat panel and type in what your stack looks like, that would be awesome. What are the types of tools that you're already using?

Trevor: I think, Abner, we'll probably do a roundup of the poll afterwards and send it out, so people have some good insight into the different categories that, at least, we're seeing and which ones are quite useful.

I think one of the really interesting points you've made there is that people often think of DevOps as removing the barriers between dev and ops, but it sounds like it's a lot more now, right? Not just removing that wall between your devs and your operations team, but actually removing silos across the organization, so that dev units...

Abner: Trevor.

Trevor: ...ops units, and analytic...

Abner: Hold on one second.

Trevor: Sure.

Abner: Can you show your screen?

Trevor: Sure, absolutely.

Abner: [laughs] There we are. Now we're back in business. Awesome.

Trevor: OK, great. The point I was making Abner, I think, really just following your point is that DevOps is really about breaking down silos right across the entire organization so that everybody gets access to this data, and everybody's working together to understand the customer experience at the end of the day, right.

Abner: Yeah, for sure. Let's take a look at some of the tools, or maybe "classes of tools" might be a better way of describing it, that people use today.

Trevor: Yeah.

Abner: If we think about DevOps from a marketing perspective, clearly configuration automation, the Puppets and the Chefs, have been at the forefront of being known as tools for people who are trying to continuously improve the way they ship software. There are lots of other tools that also come into play.

I think, clearly, incident and alert management is another: if you're more from the ops side, you tend to experience the software in your infrastructure through alerts. That area has blossomed significantly over the last several years and has started to get very sophisticated. It happens to be a favorite of mine, given the burnout that can occur.

I think the other interesting one on incident alert management is that very often, when people talk about alerting and DevOps, there's this fear from the developers that, "Hey, I'm going to get a pager, and I'm going to have this...and now I'm going to get woken up at two o'clock in the morning all the time."

A lot of these tools, whether it's the collaboration tools or the alert tools, are designed to make sure that nobody gets woken up at two o'clock in the morning. [laughs] The big news flash here is that no one wants to get woken up at two o'clock in the morning, but it does happen on occasion.

Those are two of my favorites, and I know we're going to talk a little bit more about the chat thing in a bit. What are some of your favorites, Trevor?

Trevor: The way I look at it, Abner, is actually from different categories. This probably isn't an exhaustive list; I think it really depends on what your own specific requirements are as a DevOps engineer, based on the systems you're running and the organization you're in. Some of the big categories I do like, and you've called some of them out. I think configuration automation is huge.

In particular, as people are building cloud-based systems and moving to container environments, systems are becoming so dynamic, and so big in terms of the number of instances, that you can't really get by anymore without these types of tools to auto-scale, to automate deployment, and things like that. I think that's a really important area.

It even ties in to tools like New Relic or Logentries, where we see that not supporting or integrating with Puppet and Chef makes it really difficult for people to deploy these tools in an automated way in large-scale environments. That's a really important category.

I think, then, one of the other big categories is monitoring. Whether you're talking server monitoring, APM, log management, or health checks, this is a big part of any operations team's set of tools. One of the big areas you need to focus on in a DevOps role is monitoring your system.

I think one of the really interesting parts is that 5 or 10 years ago you often had one single tool, or maybe a set of tools from a single vendor, that gave you the ability to instrument and log across your whole system.

These days we're seeing a lot of specialized tools in these different areas. I think that's why you see these breakouts into some of the different sub-categories. You mentioned chat.

I think chat is becoming a big part of DevOps, and it's something I know we've embraced at Logentries. You can now get better communication and better visibility using a chat tool across deployments, across errors that are occurring in your system, across even things like when people are signing up and customers are making payments.

You can integrate all of that into your ChatOps tool as well. Then alerting, obviously, is very important to make sure that nothing is going awry across your systems.

The last area I think is also key, and becoming more popular, is where people want to visualize output from a lot of these different tools in a single location: the ability to get data out of systems and send it on to a centralized dashboard.

Whether that's within New Relic or Logentries, or whether that's some external tool like Geckoboard, for example, being able to do it and have a single operations dashboard, I think, is also key.

Abner: Yeah, and it occurs to me as we go through here that one of the big categories we left off was all the continuous integration and deployment tools. I seem to come across a new one of those every day. I see it looks like people have put things into the question panel, which is great. In CI there are tools like Solano, and Chipio, and a bunch of others.

There's CircleCI. Definitely another big chunk of tools there.

Trevor: Yeah, I think that's a really good spot. I think Codeship is probably another one out there. Actually, when we give some of the examples later on, I'll give an example of a workflow and how all these tools fit together. I think we'll cover some of these.

Abner: Yep, all right, let's keep going.

Trevor: One of the big questions, then, is what characteristics you point to when you're looking across your tool chest, and what you really need to look out for. Again, this isn't an exhaustive list, and it depends on the different tool categories, but I'll give some insight from my perspective.

One of the big ones that I see being very important from a DevOps perspective, and something we see from our own user base as they come to try out Logentries, is that time to value is really important. The old enterprise sales model, where it took you a few weeks to get up and running, sign a contract, and get a license...

I think that model is dying, and I think New Relic has done a really good job of showing how it's done, where people can start using the technology right away and start seeing valuable metrics within a couple of minutes. We see that consistently from our own users: people don't have a lot of time on their hands to go evaluating 200 different technologies.

They want something that works, that works well, and that they can get up and running quite quickly. I think an important part of that is, again, something I've seen that New Relic do really well.

I think you guys refer to it as creating opinionated software: creating software that has intelligence built in, that knows the right metrics to surface, that knows the right graphs to show, that gives you the ability to very quickly hone in and focus on the important information.

Monitoring software is becoming a big data problem. One of the big challenges of big data is separating the signal from the noise. I think opinionated software helps with that, where you can very quickly highlight those key pieces of information. It's something that we've also done at Logentries.

One of the things we introduced was what we call community packs, which is this concept of having particular intelligence for different components. For example, if you're sending in data from AWS, from CloudTrail for example, we provide a community pack that will highlight the important events within that particular log data, so that you don't have to spend a whole lot of time writing complex queries and doing all the thinking yourself.
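To make the community-pack idea concrete, here is a minimal sketch of the kind of filtering such a pack might encode over CloudTrail data. The event names are real CloudTrail `eventName` values, but the function, its output shape, and the "notable" list are invented for illustration; this is not how Logentries actually implements packs:

```python
import json

# Events a team would usually want surfaced from CloudTrail logs.
# The list itself is an illustrative assumption, not a Logentries feature.
NOTABLE_EVENTS = {
    "TerminateInstances", "StopInstances", "DeleteSecurityGroup",
    "AuthorizeSecurityGroupIngress", "ConsoleLogin",
}

def highlight_cloudtrail(raw_log: str) -> list:
    """Return only the CloudTrail records worth a human's attention."""
    records = json.loads(raw_log).get("Records", [])
    return [
        {
            "time": r.get("eventTime"),
            "event": r.get("eventName"),
            "user": r.get("userIdentity", {}).get("userName", "unknown"),
        }
        for r in records
        if r.get("eventName") in NOTABLE_EVENTS
    ]

sample = json.dumps({"Records": [
    {"eventTime": "2015-06-01T02:00:00Z", "eventName": "TerminateInstances",
     "userIdentity": {"userName": "ops-bot"}},
    {"eventTime": "2015-06-01T02:01:00Z", "eventName": "DescribeInstances",
     "userIdentity": {"userName": "dev"}},
]})
print(highlight_cloudtrail(sample))
```

The point is the division of labor: the pack encodes the "which events matter" opinion once, so each user doesn't have to rediscover it in query syntax.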

I think the idea with all the DevOps tools is that they should be there to take the hard work away from you and allow you to concentrate on what it is you're good at. I think...sorry, go ahead.

Abner: This has been a fairly hot topic recently around what's opinionated and what's not, and what makes for great software. I think from a DevOps perspective one of the things that we've seen is that when you democratize the data, that opinionated analysis is very important. Then you're also going to have things that are unique to your business.

The balancing act is having tools that are opinionated, and I totally agree with what you said. Then also having the ability for the experts to dive in or be able to customize a view that's unique to your business or your application.

Trevor: Yeah, exactly. I often refer to that as allowing people to go freeform. You've got to allow people, once they've identified a particular issue, to drill down deep and start correlating whatever data sources they find useful. I think it is an important balance: surfacing information while still allowing people to dig in themselves.

Moderator: Their tools?

Trevor: Some of the other characteristics are important too. For us, we always call out real time as a differentiator that we provide at Logentries, so it is something that we are biased on in one sense. Being able to give people, particularly for certain systems, real-time visibility, up to the second, into the latest log events coming from the production system.

What are the latest alerts and notifications? Being able to see that up to the second, in some scenarios, is really important. In other scenarios, where maybe you're doing post-hoc analysis, it's less important. That's one that we always call out: if you can get your hands on real-time data, why not?

That's something for people to consider as they look across their toolset: are they getting that data, what do they need in real time, and how quickly can they get it?

I think one of the other points we call out here is low barrier to entry, but also what I often refer to as low barrier to scale. Very often people will choose a tool at the prototype stage, where they're building out a new system. They look at some tools that can help them manage that system as they're building it out.

It's also important to look ahead and make sure that the tool you've selected will scale with where you're heading with the system, and where you're heading with your business. In some cases people outgrow tools, and it can be quite painful trying to displace an existing tool that isn't cut out for the job down the road.

That can start with just the deployment process. I think I mentioned it a couple of slides back: you've got to make sure that, from a monitoring tool perspective, you can deploy that tool across all of your different types of systems, and make sure that it integrates with all of your different code bases.

For example, New Relic can plug into all sorts of different languages. As systems grow, these things become very important. That's just another one to be aware of, I think, in terms of that barrier to scale.

Finally, the one we're really going to dive in on is interoperability. For me, this is probably the most important of them, given the DevOps tooling landscape these days. As we saw from a couple of slides back and I'm sure we'll see from the poll with everybody on the webinar, there's no single solution.

There's a ton of different solutions out there. Even within the logging space or the incident management space there are lots of options, and the same goes across all the different categories. You no longer have the situation of maybe 10 years ago, where you went to IBM Tivoli, got all of your monitoring tools, and that was that.

Today, you've got lots of specialized solutions. I think from a tool and a people perspective, interoperability is very important: those tools need to be able to integrate together. We'll give some examples later on showing exactly how you can integrate your different tools in some of the different scenarios.

Also, I think, from what you mentioned towards the start of the webinar, Abner, from a people perspective, we now have blurred lines between different parts of the organization, so it's important that they can actually use tools together across the different business units.

I think interoperability so that different tools can plug in together, and interoperability so that different people can use tools for different use cases is also key.

Then, in terms of some of the advantages of interoperability, I really think it goes back to those same two points. Having tools that interoperate really drives efficiency across your organization, and it helps break down those silos: because different people have access to different tools, they can use the information from those different tools.

It means that you're less likely to have silos within your organization. The other big advantage of interoperability comes when you've got your APM solution and your log management solution, like New Relic and Logentries for example, working together. It's probably one of our most popular use cases, I might add. A huge percentage of our users use Logentries and New Relic together, and we'll show an example of that later on.

If you're doing that, it gives you extra data points. If you can also throw server monitoring information into the mix, then you've got another data point. Very often, when you're in a DevOps role, the more data you can get your hands on from different sources, the better and easier it is to troubleshoot different things.

One example I give is correlating server monitoring information with CloudTrail information from your AWS environment, along with, maybe, New Relic information. Say you get a performance issue: you can see within New Relic a spike in response time within one of your components.

You correlate that maybe with the fact that you're running out of server CPU from your server monitoring information, and maybe you correlate that with AWS CloudTrail information where somebody has spun down a number of different instances.
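That correlation step can be sketched in a few lines. Everything below — the data, the field names, and the 1,000 ms spike threshold — is invented to illustrate the idea of lining data sources up by time, not a real integration between these products:

```python
from datetime import datetime

# Toy time series from three sources, keyed by minute (all values invented).
response_ms = {"2015-06-01T02:00": 120, "2015-06-01T02:05": 2400}  # APM
cpu_pct     = {"2015-06-01T02:00": 35,  "2015-06-01T02:05": 97}    # server monitoring
cloudtrail  = {"2015-06-01T02:04": "TerminateInstances"}           # AWS audit log

def parse(ts: str) -> datetime:
    return datetime.strptime(ts, "%Y-%m-%dT%H:%M")

def correlate(window_minutes: int = 5) -> list:
    """For each response-time spike, gather what else happened nearby."""
    findings = []
    for ts, ms in response_ms.items():
        if ms < 1000:  # arbitrary spike threshold for the sketch
            continue
        t = parse(ts)
        nearby = [event for ets, event in cloudtrail.items()
                  if abs((parse(ets) - t).total_seconds()) <= window_minutes * 60]
        findings.append({"time": ts, "response_ms": ms,
                         "cpu_pct": cpu_pct.get(ts), "events": nearby})
    return findings

print(correlate())
```

The single finding ties the slow response to high CPU and the instance termination that preceded it, which is exactly the cross-source story Trevor describes.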

All of a sudden you can not only see what the problem is, but you can very quickly get to the root cause by taking all of these different data sources and correlating them together. Here's a simple example of an everyday workflow from a DevOps professional.

What you'll see from this is a very simple workflow of identifying an issue in your system and working through it to try and find a resolution, but you'll see a number of different tools that are involved in just this simple example.

I think it goes back to what we've shown on previous slides, with all the different categories and how these tools really need to work together. In this simple example you might use chat, for example, to bring up a new set of server instances as part of a new project stack that you're building in advance of a new deployment.

You might use some of your continuous integration tools, that Abner called out, to help with deployments and to continuously deploy new versions of this application as you're building that and as you're adding new features, for example.

If you're smart, you might use something like New Relic to monitor the application performance and watch the regression or improvement of your performance over time as new releases come out. Then maybe at some point down the line you get a notification from New Relic, via PagerDuty, that you've had some significant performance issue after a new release.

You can dig in with New Relic. You might also dig in at a more fine-grained level using error tracking tools like Airbrake, or maybe you want to look in the logs to get even more verbose information, like with Live Tail in Logentries.

You may get a lot of communication via your ChatOps integrations, where you could have New Relic, Jenkins, PagerDuty, and Logentries all sending information into Slack, for example, or HipChat, and you can communicate those issues in real time across your DevOps team.
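The Slack side of that wiring is just an HTTP POST of a small JSON body to an incoming-webhook URL. The URL below is a placeholder (Slack issues the real one when you add an incoming-webhook integration), and the helper names are mine:

```python
import json
import urllib.request

# Placeholder URL -- Slack issues the real one per incoming-webhook integration.
WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXXXXXX"

def build_payload(source: str, message: str) -> dict:
    """Shape the JSON body that Slack incoming webhooks accept."""
    return {"username": source, "text": message}

def notify_slack(source: str, message: str) -> None:
    """POST an alert into the channel the webhook is bound to."""
    body = json.dumps(build_payload(source, message)).encode()
    req = urllib.request.Request(
        WEBHOOK_URL, data=body, headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)

# e.g. notify_slack("New Relic", "Response time spike on the checkout service")
print(build_payload("New Relic", "Response time spike on the checkout service"))
```

Each tool in the chain (CI, alerting, logging) ends up calling something shaped like `notify_slack`, which is why the chat room becomes the shared timeline of what happened.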

Once you've identified and resolved that issue, maybe you keep an eye on things at a high level across a centralized dashboard like Hosted Graphite, for example, or one of these other dashboard tools where you can send all of this information from your different systems.

You can see, even in this very simple example of monitoring a system and identifying an issue in it, we've gone across that entire DevOps toolkit stack and hit about eight different categories of tools just to monitor and manage that situation.

Those are some of the important characteristics. The next thing that's probably important to look at is: why is interoperability hard? What are the things that make it difficult to integrate these different tools? What are the things that can catch you out in a situation where maybe you don't have interoperability?

For me, in particular from a monitoring and logging perspective, there are some things that you really need to look out for. One of the first things is getting data into your system. If you're using a log management solution, or an APM or server monitoring solution, they're not much use if you can't get the data in.

It's very important to make sure that the solution you select can handle data in different formats from different systems. Again, this comes down to scale: you may be OK when you get started, when maybe you have a small code base using a single language, or you're deployed on a relatively small hardware footprint, or you've got a small amount of log data streaming in.

As that system grows, chances are your code base is going to grow. The languages you use are going to grow. Your server footprint is going to grow. Maybe the operating systems you use are going to grow. It's important that the tools you choose can handle those different formats.

It's important to be forward looking as you're selecting your DevOps tools to make sure that your tools are interoperable with your different platforms and with your different languages that you're using as you're building your system.

Then, once you've got the data into the system, you can analyze it, you can visualize it; you can do all sorts of cool stuff, usually by logging in to the UI of the system. In many cases, again, as you scale, you want to automate a lot of these actions.

It's important to make sure that any of the DevOps tools that you're using have open APIs and they'll allow you to connect via the API so you can pull out that information, so you can automatically analyze it, so you can integrate it with other systems.
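As a sketch of what "pulling data out over an open API" looks like in practice — the host, path, header name, and parameters here are all hypothetical, standing in for whatever your monitoring tool's REST API actually exposes:

```python
import urllib.parse
import urllib.request

# Every identifier below (host, path, header, parameters) is invented for
# illustration; consult your tool's API docs for the real ones.
API_HOST = "https://monitoring.example.com"

def build_metrics_request(api_key: str, app: str, since: str):
    """Build an authenticated GET for one app's metrics since a timestamp."""
    query = urllib.parse.urlencode({"app": app, "since": since})
    return urllib.request.Request(
        f"{API_HOST}/v1/metrics?{query}",
        headers={"X-Api-Key": api_key})

req = build_metrics_request("secret-key", "checkout", "2015-06-01T00:00:00Z")
print(req.full_url)
# To actually fetch: urllib.request.urlopen(req).read()
```

The shape is what matters: a key, a resource path, and query parameters you can script against, so another system can consume the data without a human logging into the UI.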

I think New Relic is a good example of a tool that's done a good job of providing a nice API for integration purposes. I think tools like Slack have done a really good job of making sure that you can integrate data in and data out very, very well.

It's something that we've tried to do at Logentries as well, by creating a RESTful API that people can use both from a configuration perspective and for pulling data out of the system so they can use it in an automated way. When looking for APIs, RESTful APIs in particular are usually recommended.

Things like WebHooks can be nice little simple integration points as well. Maybe not as powerful as a full API, but can be really useful for integration of alerts and things like that with other systems and with other third party providers.

One of my favorite integrations I've seen from some of our own users is integrating our own webhooks, which get generated when you get an alert notification, for example, with a tool like Twilio. I've seen developers integrate with Twilio such that other developers get phone calls in the middle of the night when a particular exception occurs.

I think it was more of a prank than anything else, but I found it a pretty useful, and pretty funny, way to use webhooks. I think it's a great example of the cool ways in which you can use your different endpoints to integrate with other systems.
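A rough sketch of that webhook-to-phone-call wiring might look like the following. The endpoint shape (`/2010-04-01/Accounts/{sid}/Calls.json` with `From`/`To`/`Url` form fields over basic auth) is Twilio's documented REST API, but every credential, number, URL, and the alert payload format here is a placeholder:

```python
import base64
import json
import urllib.parse
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# All placeholders -- substitute real Twilio credentials and phone numbers.
ACCOUNT_SID = "ACxxxxxxxxxxxxxxxx"
AUTH_TOKEN = "your_auth_token"
FROM_NUMBER = "+15550001111"
ON_CALL_NUMBER = "+15550002222"

def twilio_call_request(alert_name: str) -> urllib.request.Request:
    """Build the POST that asks Twilio's REST API to place a voice call."""
    url = f"https://api.twilio.com/2010-04-01/Accounts/{ACCOUNT_SID}/Calls.json"
    data = urllib.parse.urlencode({
        "From": FROM_NUMBER,
        "To": ON_CALL_NUMBER,
        # Twilio fetches TwiML from this URL to decide what the call says;
        # the endpoint here is a made-up placeholder.
        "Url": "https://example.com/twiml/say.xml?alert="
               + urllib.parse.quote(alert_name),
    }).encode()
    auth = base64.b64encode(f"{ACCOUNT_SID}:{AUTH_TOKEN}".encode()).decode()
    return urllib.request.Request(
        url, data=data, headers={"Authorization": f"Basic {auth}"})

class AlertHook(BaseHTTPRequestHandler):
    """Tiny endpoint that receives an alert webhook and dials the on-call dev."""
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        req = twilio_call_request(payload.get("alert", "unknown"))
        # urllib.request.urlopen(req)  # uncomment to actually place the call
        self.send_response(204)
        self.end_headers()

# To run the receiver: HTTPServer(("", 8080), AlertHook).serve_forever()
print(twilio_call_request("disk-full").full_url)
```

The webhook's job is just to translate one system's alert payload into another system's API call, which is why even a simple webhook can stitch tools together without a full integration.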

From a team perspective, I think it's also very important to think about interoperability. One of the real challenges that I've seen with trying to introduce new tools is actually getting universal agreement across an individual team or, in particular, across different business units.

I've seen this, in particular, with chat tools, where maybe your dev team is used to using HipChat and another team is used to using Slack. Getting universal agreement can be challenging, because nobody likes change. I've seen that be a challenge.

The way I've seen it best overcome is to start small. As you're prototyping a new system, you can work with a single team on a small project and start using a particular tool. As that system grows, you can get more people onto it, and you can grow it out over time.

A really good opportunity for this often arises where organizations are in the middle of, or just about to start, moving legacy systems to the cloud, or are rebuilding these old systems into, maybe, microservice architectures. That's very often a good opportunity to start using some of these new tools and to start prototyping them as these projects get going.

Then you can slowly roll them out across the organization. Abner, I'm not sure if you have any comments on the points that I just hit there. If not, we can move on to some of the integrations that you guys have done with New Relic, and we can start covering some of these different integrations.

Abner: I think the biggest thing, as we move into some of the examples, is to go back to that point that there's no one right toolset for every company. There's always going to be multiple tools. There's always going to be multiple data sets. We've taken the assumption from way back that we're going to coexist with multiple tools.

The challenge isn't in running multiple tools. The challenge is making sure that the people who are running the software have access to the right visibility. Maybe we can go into what it means to do end-to-end, full-stack monitoring. For those of you who haven't seen this, this is a very common overview in New Relic's APM tool, which is the application performance monitoring dashboard.

What you see here is basically what's happening within a particular application. Then up in the upper right there, there's the ability to transition from this tool to searching the logs with Logentries. That ability to give everyone on the team access to this sort of information is pretty key, as is the ability for them to very easily transition from one view of the data, which is...

This is agent data that comes out of an agent that we embed in the application that compiles with an application to go to the logs, which come out of the various infrastructure and application components. That ability to transition back and forth and making sure that people can have that visibility is pretty key.

Maybe Trevor, if you want to talk a little bit more about what you get when you go through there? Actually, I think we're going to do that in a second, so maybe...One piece is having this access. The other piece is, how are you going to collaborate with these tools? There are a few different ways of collaborating.

The buzzword for a lot of the collaboration is ChatOps. When you think about all the changes that occur within your application, and where people hang out and collaborate, chat rooms have become the place to do that.

Trevor, why don't you talk a little bit about that?

Trevor: Certainly. I love this concept. What it's done for us at Logentries is really opened up visibility. It started, I guess, with the dev and ops teams, but now it's across the whole company.

For example, what you're looking at here is deployment information. We've got, I think it's Jenkins, hooked up to one of these chat tools, so every time somebody makes a deployment you can see what happens. Every time there's an issue in the system, we can see what that issue is across the entire organization, as well as things like when customers sign up.

You can see who signed up and when, and you can very easily contact different people and have conversations with different parts of the business. What it does is open up both the actual running of the systems and how the business is doing to the entire organization.

It's a really nice way of improving communication between the teams and the systems used to run your business. One of the other things it does is cut down hugely on internal email flow between your teams. It's a really nice way of improving communication and reducing random noise.

One of the other things I've seen people do is use chat bots, things like Hubot, as part of this, to automate commands through chat windows. They can resolve issues. They can kick off servers. They can kick off deployments. They can do all sorts of cool things, almost using the chat tool as a command line.
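The listen-match-dispatch pattern behind that "chat tool as a command line" idea can be sketched in a few lines. The `!ops` trigger and the command vocabulary below are invented for illustration; real bots in this style each define their own trigger and routing conventions.

```python
def parse_chat_command(line: str, trigger: str = "!ops"):
    """Parse a chat message like '!ops deploy web-app staging' into a
    (command, args) pair, or return None if it isn't addressed to the bot.

    The trigger word and command names are hypothetical examples.
    """
    parts = line.strip().split()
    if len(parts) < 2 or parts[0] != trigger:
        return None  # ordinary chatter, not a bot command
    return parts[1], parts[2:]


# Messages addressed to the bot get dispatched; everything else is ignored.
print(parse_chat_command("!ops deploy web-app staging"))  # ('deploy', ['web-app', 'staging'])
print(parse_chat_command("anyone up for lunch?"))         # None
```

A real bot would map the parsed command name onto a handler (restart a server, trigger a deploy), which is what turns the chat room into a shared, auditable command line.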

It's just something that I think is really changing. It's something that seems very simple, but it's really changing the way teams are working together across these different divisions.

Abner: Yes, I was noticing, however, that this particular screenshot seems to have a distinct lack of GIFs and emoji in it, which on one hand can be a really fun part of the collaboration and culture process. I was talking with one of our ops people the other day.

They had a problem with the thumbs-up emoji in our chat tool because there was some confusion: does it mean "Hey, that's great"? Does it mean "We're good to go"? Does it mean "Hey, I heard you and I agree that's a problem"? That's definitely one thing you want to keep an eye out for when you're collaborating with chat tools.

Trevor: Yeah, I know. It seems trivial, but if you watch the amount of GIFs and thumbs up and thumbs down from some of our own engineers and ops guys, it's really become how these DevOps teams communicate. It's great for morale. It's great for guys just having a bit of fun on a daily basis.

Audrey: Yeah.

Abner: Guys and girls. All right. Let's move on here to configuration automation.

Trevor: Yeah, absolutely. Actually, I think we're moving on to tracking deployments. After integration, we're tracking deployments.

Abner: Oh, OK.

Trevor: I think this is something that you guys have actually done quite a bit around. I know you've introduced some new capabilities around being able to track deployments and see them within the New Relic dashboard. It's something that we've seen as well.

We've seen a lot of people actually do at a log level as well, where their continuous integration systems can log when deployments have occurred. Just being able to correlate that with other application log data is a very simple and very easy way to see when there's been a regression. You made a deployment. All of a sudden you're seeing a bunch of errors get generated off of that.

Just having that information together from different sources, again, shows how and why interoperability is so important. Abner, you might want to explain some of the different things that New Relic does here.

Abner: This is a picture of a good deployment outcome. We see people posting these to Twitter and other social networks on a regular basis. What you see there is that little green line: that's a deployment marker. The team can basically celebrate that they had a problem in their response times and they've now made forward progress.

If we go to the next slide: when we grab a deployment, we do a couple of different things. If you enable deployment markers, and we'll show you a couple of different ways of doing that, one of the things that we do is track all your deployments over time.

We take a snapshot of the performance five minutes after every deployment marker so that you can then go show progress to your management teams and other people you need to show progress to.

If we go to the next slide, there are a couple of different types of integration points. I've pulled these in here not necessarily to show you how to integrate New Relic deployment markers, but rather because when you're evaluating a DevOps tool you should look for the ability to integrate in a couple of different ways.

You see the example with curl there, where you might have a continuous integration tool, or something like Jenkins, that runs a process and then sends a notice to New Relic through the API. Alternatively, inside our agents there are options for doing that same kind of signaling directly out of the agent.

When the agent senses a new deployment, it's able to send a note to New Relic to put a marker in the graph. Finally, tools like Capistrano have ways of putting those same markers in. If you think about all the various events that occur within the infrastructure, every time you deploy, you want to be able to capture the fact that a change occurred.
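The API route Abner describes, a CI job calling out to record a deployment marker, might look something like the sketch below. The endpoint URL and payload fields follow the general shape of New Relic's REST deployments API at the time of this webinar, but treat the exact URL, header, and field names as illustrative and check the current documentation before using them.

```python
import json
import urllib.request


def deployment_request(app_id: str, api_key: str, revision: str, user: str):
    """Build (but don't send) an HTTP request recording a deployment marker.

    Endpoint and payload shape are illustrative approximations of a
    REST deployments API, not a verified current contract.
    """
    payload = {"deployment": {"revision": revision, "user": user}}
    return urllib.request.Request(
        f"https://api.newrelic.com/v2/applications/{app_id}/deployments.json",
        data=json.dumps(payload).encode(),
        headers={"X-Api-Key": api_key, "Content-Type": "application/json"},
        method="POST",
    )


req = deployment_request("12345", "YOUR_API_KEY", revision="abc123", user="george")
print(req.get_method(), req.full_url)
# Sending would be: urllib.request.urlopen(req)
```

Recording the revision and the user is what drives the accountability Abner mentions next: the marker tells you not just that a change happened, but what shipped and who shipped it.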

One of the things I don't think we put in this deck, but there's a whole slew of information you can collect when you're putting this deployment marker in. It can be everything from details about what the change context was to the person who did it.

That drives a lot of accountability so that people can say, "Hey, George just made this change and we either need to give him a high five or make sure we run out to the parking lot and don't let him go home because it blew everything up." Those types of integrations are pretty key.

Let's talk a little bit about what happens when we look at deployment through a logging view.

Trevor: Absolutely. It's very much complementary, Abner. We do very simple things, and we've seen a lot of our customers do this as well: they simply log when a new deployment occurs. You can see here this green indicator, for example, shows, "Hey, a deployment occurred at 13:44."

Subsequently you see that there are already some critical errors starting to occur in the logs. It's a very simple approach: if you have a deployment log and you correlate it with your application logs, you can very quickly see, very similar to what you guys are showing, that a deployment happened at this point.

What happened directly after this? Did good or bad things happen subsequently? It's just a really nice, clean way of debugging, taking information from multiple different data sources.
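The correlation Trevor walks through, a deployment marker in the log stream followed by errors, is simple enough to sketch directly. The `DEPLOY` and `ERROR` tokens here are placeholders for whatever your CI system and application actually write to their logs.

```python
def errors_after_last_deploy(log_lines):
    """Given time-ordered log lines, return the error lines occurring
    after the most recent deployment marker.

    'DEPLOY' and 'ERROR' are stand-ins for your real log markers.
    """
    last_deploy = -1
    for i, line in enumerate(log_lines):
        if "DEPLOY" in line:
            last_deploy = i
    return [l for l in log_lines[last_deploy + 1:] if "ERROR" in l]


logs = [
    "13:40 ERROR timeout on /old-endpoint",
    "13:44 DEPLOY web-app revision abc123",
    "13:45 ERROR NullPointerException in checkout",
    "13:46 INFO request served",
]
print(errors_after_last_deploy(logs))
# ['13:45 ERROR NullPointerException in checkout']
```

Note that the pre-deploy error at 13:40 is excluded: only regressions introduced after the marker are surfaced, which is exactly the question you're asking after a release.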

Great. I think that brings us to the wrap-up stage, Abner. Maybe we'll start with you, if you have any parting advice for the audience when it comes to building a DevOps toolkit.

Abner: One of the big things that I always come back to is: figure out what the problem is that you're solving for, and then choose the tools that solve for that. When I talk to, especially, larger enterprise customers who maybe are fairly early in their DevOps endeavors, it's not as bad now.

For a while there was this notion of "I can deploy Puppet and start using Puppet, and then I'll be doing DevOps." That's not the goal. The goal isn't the word DevOps. The goal is better software, faster, with less drama.

When we think about the things that will enable your teams to reduce the amount of drama, to increase the amount of visibility and accountability across the entire team, including...I describe the team as not just the application developers and the ops engineers, but also the business people who own the app. Make sure that you empower your teams to do this.

The other thing that we didn't really talk about a whole lot is that all of these tools, and I've yet to come across an exception at a serious software shop, are things that you can try very easily. The old-school way of buying and trying out technology involved steak dinners and golf courses.

One of the things we always talk about is adoption before golf. Seek out tools that you can try. See if they help your culture. See if they help empower your teams. That's typically what we like to direct people toward.

Trevor, maybe you want to talk about Zapier a little bit.

Trevor: Before I do, my advice on choosing tools is similar to what you said. Start by identifying the problem. Don't start by trying to use a particular tool, like you mentioned. Once you identify the problem, look at what tools are available to solve it. As you said, it's so easy these days to go out and try a few tools and see which one you like.

Most tools have free versions, or free trials, or unlimited trials, or what not. You can very quickly get up and running with them, and the proof of the pudding is in the eating. Check them out. Give them a try for yourself. Don't listen to guys like us saying [laughs] what tools to use. Find out for yourselves and figure it out.

The other thing to think about is, "Hey, is this thing going to scale?" I think we touched on some of this. Will it interoperate with different platforms or other tools? Will it scale as my systems and platforms grow? That's where I often see people fall down: choosing the wrong tool up front and then having to rip it out later on.

One of the other things I think we didn't hit on was Zapier. If we're talking about integration and interoperability, it's really a piece of glue, if you will, that lets you plug different tools together. It's something I've used recently as part of our ChatOps operation, where you can very easily link payment systems with Slack or with HipChat.

You can very easily link lots of different tools together that maybe don't have direct integrations. If you go to their site, you'll see a whole list of integrations that can be really, really useful in this type of scenario.

Abner: I think this might actually be a good place, and that comment about Zapier is probably a good place to take one of our first questions, which was on ChatOps. The question is, I could see how it would eliminate a lot of extraneous communications and email, but it seems like there'd be a lot going on all the time. How do you avoid it becoming a firehose of distractions from all directions and business units all day long?

Trevor, I think you've had your head in ChatOps more than I have.

Trevor: Yes.

Abner: I know that people do things like create separate rooms for different tools, but what's your experience been?

Trevor: That's exactly it. You can easily ignore a lot of the conversations that are going on, but you can see them popping up on your screen now and again, and if you want to dive in, you can. What I do, personally, is join the rooms I'm interested in: we have a room for support, a room for operations, a room for new accounts.

There are lots of different rooms available. You can sign up for the ones you're interested in, and then you're part of the conversation. If you want to drop in and out of a room, you can easily do that as well. There are nice ways to filter in and filter out.

To be honest, I prefer to see that noise in a chat client, not in my inbox, where you really have to dig into everything. It can be a much more painful process when you've got a flood of emails to go through every night and you know the vast majority of them are of no interest to you.

Abner: Cool. Let's go to some of our other questions. Audrey.

Audrey: Yeah, we've got a few questions in. Someone asked, I'm new to DevOps. What can I do to start working with DevOps?

Abner: We've got a bunch of really great content if you go to the site. There are a couple of pieces. One called DevOps 101 is a really great overview of the why and how you get started, so that's probably a great place to begin.

Generally, the advice that I give people is to pick a software project within your company that has some significance. Maybe it's not the whole business, but it has a senior executive sponsor. Something that's not tiny, but not huge either.

Then, make sure that you put together a team that is directed to get that software out running and evolving quickly over time, and put the emphasis on the ability to change and experiment, giving them the elbow room to operate however they see fit. One of my favorite examples of people getting started is a small team who said, "Hey, we need to experiment. We'll take on a small project; but as part of taking it on, we want to use our own technology. We want to make those choices. We want to essentially split from what's been done before."

We've seen some people be very successful doing that, at least to get started and to prove out the value of iterating software very quickly. The other advice I usually give is to focus on a piece of software that is a product, where it's a living, breathing thing for a long period of time.

You're running a marathon, not a project that people walk away from with no responsibility or accountability once it's done.

Audrey: Trevor, did you have anything to add to that?

Trevor: No, I think Abner's done a pretty good job of covering it. The only thing I would add is that DevOps is not really a tool. It's more a way of thinking about things and a way of doing things. As Abner said, there are lots of resources out there, so you can very quickly get up to speed on that way of thinking.

Then, actually implementing it on a particular project is the only way you can really learn how to get involved, and the right tools are really only part of the picture.

Audrey: We have a few questions around best practices. One in particular around security. Someone wants to know, can you touch on privacy and security compliance when it comes to managing the data generated by DevOps tools? Notably user behavior analytics and detailed application logs?

Abner: Trevor, you want to take that one first, or do you want me to take that?

Trevor: Yeah, absolutely. We always recommend that people try not to put sensitive information into log data, but you can never be certain. Sensitive information can leak into logs. It can even leak into APM information.

What we've done at Logentries is work with some of our bigger customers to put in place capabilities that make sure sensitive information is protected in the right way. For example, any information that's sent to us is encrypted on the wire. Anything stored in our system is encrypted at rest.

But also, we provide users with tools that let them obfuscate sensitive information if it gets into their logs. For example, within a customer's environment, when they're collecting log data, we have a collector that looks at the log events before they leave your network, identifies any sensitive information as defined by the user, and can obfuscate that information.

It will actually strip out anything sensitive before it leaves your network. You can choose to store that in an encrypted database within your environment.

Then the log data, without that sensitive information, is sent to Logentries. What it usually does is replace that sensitive information with a one-way hash, so the rest of the log data is still quite useful for you to work with.

On top of that, you can actually reverse that one-way hash if you wish. We have the ability to look up the original information by talking to a database within your environment, so you can see the original value if you need to. It's password protected, so that information always resides within your environment and never comes into the Logentries cloud.

In short, make sure you're using a service that is encrypted and uses all the best practices. But also look out for capabilities that allow you to manage and deal with sensitive information, or look for sensitive information that you may not want in your logs, but that may have crept in.
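The obfuscation step Trevor describes, replacing a sensitive value with a one-way hash before logs leave the network, can be sketched as below. The email-only regex and the fixed salt are deliberate simplifications; a real collector would cover many more patterns (card numbers, tokens) and manage the salt securely, as would the lookup database that supports reversal.

```python
import hashlib
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")


def obfuscate(line: str, secret: bytes = b"per-customer-salt") -> str:
    """Replace email addresses in a log line with a keyed one-way hash.

    Because the hash is deterministic, the obfuscated line stays
    searchable and groupable without exposing the address itself.
    Pattern coverage and salt handling are simplified for illustration.
    """
    def _hash(match):
        digest = hashlib.sha256(secret + match.group(0).encode()).hexdigest()
        return "<email:" + digest[:12] + ">"

    return EMAIL.sub(_hash, line)


line = "user alice@example.com failed login from 10.0.0.7"
print(obfuscate(line))
```

Determinism is the key design point: the same address always maps to the same token, so you can still count failed logins per user in the obfuscated stream, while a separate, access-controlled mapping inside your own network is what would allow "reversing" the hash.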

Abner: I agree with all that. The one piece I would add is that one of the things that we've seen is a number of these DevOps tools are SaaS services like New Relic.

Very often, while the business has gone off and deployed Salesforce or Workday, or a number of different SaaS applications, very often, ops and IT are late to that SaaS adoption game. In many cases, a lot of people don't understand what they should be looking for from a security and privacy standpoint.

We could do a whole webinar just on this, but for me, the biggest things are to look at the maturity of the security team that works for the service. Because the larger and more mature that security team is, the better job they're going to be doing.

The other piece is to look at how the service has built security functionality in from the ground up to do things like what Trevor was talking about. In the case of APM data, we don't take parameter data. It's very easy for you to audit and dive into the data so that you can see what's there and what's not.

I think we have time for maybe one more question.

Audrey: Yeah, I know we're at the top of the hour here, now, but I want to make sure we ask one more question since I think this is top of mind for a lot of people on the webinar. You guys did not mention DevOps for Docker. Can you touch a little bit on that?

Abner: Docker's really interesting on two fronts. One is that it's awesome, because it abstracts the application from the underlying infrastructure, which is great for development, application architecture, isolation, and all sorts of things like that.

On the other hand, it also tends to break monitoring in some instances, especially application monitoring. That's something that we've fixed recently. The other thing to think about with Docker is that these containers often don't last for more than a few minutes.

The issue there is you want to monitor both the application and the type of container. Not necessarily each individual container. Otherwise, your head will explode. Definitely a big shift in the way that people are building applications and deploying applications from a DevOps perspective, and clearly something to go check out and think about when you're choosing tools.
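Abner's point about monitoring the type of container rather than each short-lived instance amounts to aggregating per-container metrics by image. The sample shape below (`image`, `cpu_percent` keys) is invented for illustration; real numbers would come from something like Docker's stats API.

```python
from collections import defaultdict


def cpu_by_image(samples):
    """Aggregate per-container CPU samples by image name.

    Individual containers come and go within minutes, so rolling
    metrics up to the image level keeps dashboards stable. The sample
    dict shape here is hypothetical.
    """
    totals = defaultdict(float)
    for s in samples:
        totals[s["image"]] += s["cpu_percent"]
    return dict(totals)


samples = [
    {"container": "a1", "image": "web-app", "cpu_percent": 12.5},
    {"container": "b2", "image": "web-app", "cpu_percent": 7.5},
    {"container": "c3", "image": "worker", "cpu_percent": 3.0},
]
print(cpu_by_image(samples))  # {'web-app': 20.0, 'worker': 3.0}
```

The two `web-app` containers collapse into one series, which is what keeps your head from exploding: the dashboard tracks the service, not the churn of container IDs underneath it.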

Trevor: Yeah, I completely agree. Abner, we could do a whole webinar on security, and we could certainly do one on Docker and container systems. In fact, if people are interested in resources, we have a whole bunch on this on the blog.

We did a recent webinar with the guys at CoreOS on Docker, and on monitoring and logging in particular in these environments, and you hit on some really good points. Because these systems are so dynamic and have so many container instances, they have, over the last while, caused problems for APM vendors and logging vendors, even from a UI perspective.

It's something that we've had to address over the last while, but there have been some great advances in logging and monitoring for these container systems as well. I think in Docker 1.5 they brought out a new stats API, and they've also made some significant improvements from a logging perspective.

If people want to log in those environments, we've actually brought out, and I think most logging vendors will have, a logging container: a specialized container that handles logging from all your other containers. Again, I think a lot of the APM vendors are doing something similar.

If you're looking for resources, check out our blog and you'll find some old webinars and some resources there that might be quite useful.

Audrey: Great. Thank you, Trevor. Thank you, Abner. If we didn't get to your question, someone from Logentries and/or New Relic will follow up with you. Another reminder today's webinar was recorded, so we will be sharing a link to that recording in an email tomorrow.

Thanks, again, for attending today's webinar, and look out for more webinars from us in the future. Thanks so much.
