In this episode, Zain Asgar and Ishan Mukherjee of Pixie Labs talk about joining forces with New Relic to accelerate the Pixie Community. Pixie's goal has been to provide a unified experience between what you see on the command-line and what you see in a web UI or on a mobile phone, to let you instantly troubleshoot applications on Kubernetes with no instrumentation needed, and to support debugging as code: running community, team, or custom scripts, and publishing and sharing sessions as code with teammates and community members.

Should you find a burning need to share your thoughts or rants about the show, please spray them at devrel@newrelic.com. While you’re going to all the trouble of shipping us some bytes, please consider taking a moment to let us know what you’d like to hear on the show in the future. Despite the all-caps flaming you will receive in response, please know that we are sincerely interested in your feedback; we aim to appease. Follow us on the Twitters: @ObservyMcObserv.

Jonan Scheffler: Hello and welcome back to Observy McObservface, the observability podcast we let the internet name and got exactly what we deserve. My name is Jonan. I’m on the developer relations team here at New Relic, and I will be back every week with a new guest and the latest in observability news and trends. If you have an idea for a topic you’d like to hear me cover on this show, or perhaps a guest you would like to hear from, maybe you would like to appear as a guest yourself, please reach out. My email address is jonan@newrelic.com. You can also find me on Twitter as thejonanshow. We are here to give the people what they want. This is the people’s observability podcast. Thank you so much for joining us. Enjoy the show.

Jonan: I am joined today by Zain & Ishan of Pixie Labs, a recent acquisition of New Relic. We now have Pixie by New Relic, thanks to the two of you. How are you doing today?

Zain Asgar: I'm doing well. Thanks a lot for having us here, Jonan.

Ishan Mukherjee: Same here. I'm super excited to be on the podcast.

Jonan: I'm super excited to have you. I am in love with your product. I had heard of Pixie from a friend, and I took a peek at it but didn't have time to play with it. And then I found out that New Relic was acquiring Pixie Labs, and I went through the roof. I was playing with the CLI, and the one I've seen demoed online has graphs built right into it. I can get all of the data right there on the command-line; that's a thing, right? Am I mistaken?

Zain: Yeah, I think we have a mode in the CLI where you can basically look at tables and all the data. Part of our goal at Pixie has been to try to provide a unified experience between what you see on the command-line and what you see on a web UI or even your mobile phone, and hopefully, it will accomplish that for the most part.

Jonan: It's amazing. And I feel like now we've gone too far into talking about what this thing even is without telling anyone, but it's all right. I'm going to give my explanation of what Pixie is, and then you, the people who built it and just sold a company about it, you probably can correct me. But I think Pixie is a way to do observability specifically for Kubernetes clusters in a very low overhead way that uses eBPF, eBPF being a way to run some code in a safe way inside a kernel. It's not a rootkit, although it's technically a rootkit, and it lets you get observability in your Kubernetes cluster without writing all of the code. So, true or false, you just sold New Relic a rootkit.

Zain: False, I would say, but close.

[Laughter]

Jonan: Okay. So tell me a little bit more about what I missed technically there, I guess. Can you break down the actual technical implementation here? eBPF is a thing. What does that even stand for?

Zain: eBPF is the extended Berkeley Packet Filter, and at this point, it's pretty much outgrown its original name. It was originally written as a way to do packet filtering in the Linux kernel for firewalls. But I think part of what they initially realized is that packet filtering can sometimes be a lot more complicated, so you want to start inserting code to actually do it. You don't really want to add kernel modules, though, because that's a lot of work and a lot of security issues. So they built this framework for secure code execution, where you usually write a restricted form of C that's verified to be safe to execute within the Linux kernel. What happened over time is that eBPF itself evolved, and now it lets you access other things inside the Linux kernel. Specifically, you can attach various types of probes that let you read kernel data structures, read user memory, and look at system calls. So what we ended up doing was using eBPF for the observability use case: intercepting the appropriate system calls and functions and then making that data available to the developer.
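
For readers who want to see what an eBPF probe looks like in practice, here is a minimal sketch using the open-source bcc toolkit, not Pixie itself. It attaches a kprobe to the clone syscall and prints a line each time it fires; the syscall and the message are arbitrary choices for illustration, and it needs root to run.

```python
# Minimal bcc example: attach a kprobe to the clone() syscall and log each call.
# Requires root and the bcc package (https://github.com/iovisor/bcc).
from bcc import BPF

prog = """
int hello(void *ctx) {
    bpf_trace_printk("clone() called\\n");
    return 0;
}
"""

b = BPF(text=prog)                                    # compile and load the restricted-C program
b.attach_kprobe(event=b.get_syscall_fnname("clone"),  # resolve the kernel symbol for clone()
                fn_name="hello")
b.trace_print()                                       # stream the kernel trace pipe to stdout
```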

Jonan: So I, as a developer, I can just read my users' passwords directly out of memory rather than having to compromise my own database publicly and let people download that. This is the kind of thing, though, that technically, eBPF has access to. I’m joking, of course. But if these were not in this isolated VM inside the kernel and accessing things in a safe way, that's exactly the sort of thing you'd be able to do, right?

Zain: Yeah, that's right to some degree. eBPF basically guards the kernel against you, but it doesn't guarantee that you're not going to mess up or be able to read application state. So part of what we do at Pixie is make sure that all of that access is federated. You can only access certain pieces of information, so you can't just go read random pieces of memory. We make sure you're only allowed to read certain messages and certain SSL traffic, and we guard all of that through Pixie.

Jonan: And this is exactly why companies probably would hesitate to try and implement something like Pixie themselves. Even if people were able to really catch up to where Pixie is in technology, it's not a great idea for people to set out and create their own amateur version of Pixie. Does that sound right, Ishan?

Ishan: So, in terms of the technological capabilities, as Zain talked about, eBPF is extremely powerful; some developers call it God Mode for observability because you can access pretty much as much data as you want, and security is one vector to worry about. The other side of it is just the difficulty of implementing eBPF programs. If you are an individual developer, you can definitely go out and build eBPF probes of your own, but it's prohibitively difficult to do so right now. Even before you get to the security question, it's hard to build scalable eBPF probes that you can deploy reliably in production. Once you have that, there's the question of how much visibility you get and the governance around it, and Pixie does address that. From a community standpoint, there are leading engineering teams who've contributed to eBPF and leverage it on a day-to-day basis, like the teams at Cloudflare, Segment, and Netflix. So if you have the right engineering chops and governance practices, you can go out and build it. Pixie obviously lowers the bar: dev teams that don't have the bandwidth or the skill set to invest in this can essentially leverage the technology in minutes. You can just install Pixie and get all this power without much work.

Jonan: That's exactly the thing. It's partly about security and doing things securely, and partly about not having to write all of the code yourself, but mostly it's just so much easier and faster. Have you ever timed yourselves? Have you had a race to see who can get Pixie installed and get data out as fast as possible? How long does it take, start to finish, if you've got a Kubernetes cluster and no Pixie Labs code running on it, to get some metrics in my CLI? How long?

Zain: So we had this joke for a long time where we were like, okay, it's got to be data in seconds, but the reality is we kind of landed somewhere between one and two and a half minutes, depending on the Kubernetes cluster.

Jonan: That's amazing.

Zain: And our goal is to get to five minutes to joy: within five minutes, get enough data that it delights somebody. That's what we are targeting. But realistically, we're mostly bounded by the amount of time it takes Kubernetes to deploy all of our services.

Jonan: And that is how you surprise and delight your customers, five minutes to joy, and you halved it. The new bar is two and a half minutes to joy. All the other tech companies have to keep up now.

Zain: Yup. Yup. And we always try to make it faster. We're pretty much up against that number.

Ishan: And to build on that, eBPF is in the service of delivering that joy. The premise for starting Pixie wasn't to innovate with eBPF; it was: how do we get developers' data in seconds? Or, I guess, 60 to 90 seconds, as Zain mentioned, and eBPF was an enabling factor for that. On top of that was this idea of, now that we have this fire hose of data, how can we build a data system that runs in your cluster without having to ship all of it off-prem, plus all of the programmatic interfaces on top? So delivering on this developer-experience point was the North Star, and eBPF enables that.

Zain: To be fair, inside of Pixie we have a lot more than just eBPF, even from a data-capture perspective. eBPF is one source of data that allows us to get lots and lots of instant visibility. And as Ishan just mentioned, there's an entire data system that enables doing all of this in real time and being able to capture data without sampling. We look at every single piece of traffic that goes by.

Jonan: And so you look at all of it. But when I install this in two and a half minutes, what I get, for example, is a Pixie script that I could run to see all of the HTTP requests going out of my cluster or coming into my cluster. That's a thing that I can do with Pixie.

Zain: Right, or within your cluster. Or you can even write code to find out what is the traffic between any two services. Pixie is completely scriptable, so you can go write scripts, and then we'll figure out and give you the results.

Jonan: And I could filter those by request or response code or anything that I'm looking for with these scripts, and many of them already exist. You already have custom scripts that have been developed by the community.

Zain: That's right. So a lot of our scripts, actually, all of our scripts are open source, and people can contribute new changes to the scripts, which means you can add different types of filtering. You can even create dashboards and views based on the scripts. So if you have a service that talks to Kafka and you want to have a specific monitoring need, like, you want to know how the data is sharded across different topics, that's something that you could actually do and visualize and create a view for it all using a Pixie script and then share that with the community.
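
As a rough illustration of the kind of script being described, here is a sketch in the PxL style that counts recent HTTP server errors per service. The table and column names (http_events, resp_status, ctx['service']) are assumptions based on the public Pixie docs and may not match your version exactly, so treat this as the shape of a script rather than a copy-paste example.

```python
# Hypothetical PxL script: HTTP 5xx responses per service over the last five minutes.
# Table/column names are assumptions; check the Pixie docs for the current schema.
import px

df = px.DataFrame(table='http_events', start_time='-5m')   # recent HTTP traffic captured by Pixie
df = df[df.resp_status >= 500]                              # keep only server errors
df.service = df.ctx['service']                              # resolve the Kubernetes service name
errors = df.groupby(['service']).agg(error_count=('resp_status', px.count))
px.display(errors, 'errors_by_service')                     # render as a table in the CLI or UI
```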

Ishan: And these scripts are already open source. The idea, as we open up Pixie more to the broader CNCF ecosystem, is essentially to codify these debug workflows from across the world so that this repository of scripts becomes a canonical source, which lowers the bar for developers across an engineering team to debug autonomously. And these scripts aren't just codifying the visualization. They're codifying the debug flow and, in some regards, the capture heuristics as well.

Jonan: So these scripts, what are they written in?

Zain: So the scripts are written in PxL, which is a Python dialect based on Pandas. If you're not familiar with it, Pandas is a data processing library that a lot of data people use to work with data, and PxL is essentially a Python dialect following the Pandas API.
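
For anyone who hasn't used Pandas, the snippet below is plain Pandas on made-up sample data, not PxL. It just shows the DataFrame idiom (filter, group, aggregate) that PxL scripts follow.

```python
# Plain Pandas, the library whose API PxL mirrors. The data here is invented for illustration.
import pandas as pd

requests = pd.DataFrame({
    "service": ["checkout", "checkout", "catalog"],
    "resp_status": [200, 503, 200],
    "latency_ms": [12.0, 950.0, 8.5],
})

errors = requests[requests.resp_status >= 500]                # filter rows, as a PxL script would
per_service = requests.groupby("service").latency_ms.mean()   # aggregate per service
print(errors)
print(per_service)
```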

Jonan: So it's like a DSL written in Python for writing these scripts specifically.

Zain: Yeah. So, it just makes it easy for you to manipulate tons and tons of data. And ultimately, we can then figure out the best way to execute it in the Pixie system.

Jonan: If I wanted to make modifications to this DSL, I want to modify PxL itself. Do I have access to do that? That's open source as well, the actual language the scripts are written in.

Zain: All the specifications for the language are open source, but in order to actually add new functionality to the language, you'd have to create new functions and operators, which is currently not open source, but we plan to make that part of our open core model.

Jonan: Oh, okay. So the new functions and operators because that script ties directly into the eBPF piece that's happening inside of the kernel. Or am I mistaken?

Zain: Yeah. So some of it ties into the code that gets generated for eBPF, and some of it is part of our data processing system. So when you write code in PxL, we'll figure out where the data is located. And we'll also figure out if you want to add new instrumentation, how to generate the code for that instrumentation, so it's like our single point of encapsulation.

Jonan: And this is the code that's going to run on your cluster. So I sit down with my new Kubernetes cluster. And I played a lot with Raspberry Pis and Kubernetes on Raspberry Pis.

So I set up a brand new fresh Kubernetes cluster on my Raspberry Pis, and I install this agent, and it has access to all of these scripts. But the scripts aren't downloaded; I need to go and grab them myself. I want to just describe what it's doing briefly. What it's doing is it's installing a DaemonSet on the nodes themselves. Is that it? Is that the whole story? That this DaemonSet gets installed on each node. I'm not putting any code in my applications. I know that part. But what else gets installed when I install Pixie?

Zain: Yes. Your applications don't need to be modified to use Pixie. Basically, what happens when you install Pixie is that we deploy a Kubernetes operator. The Kubernetes operator will go install a DaemonSet, which means there is one thing running on every node, and that's what's responsible for adding the eBPF instrumentation; it's also responsible for storing the really hot data. We don't ship all the data off after we capture it because that's way too expensive, given the volume of data we capture. So the lowest layer is these things called PEMs, which run one per node in the DaemonSet. At the next layer up, we have our query engine, which consists of a small set of services that get deployed on Kubernetes. The query engine works together with the PEMs to execute queries and serve the results.
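
To see that layout from the Kubernetes side, a sketch like the following lists the DaemonSets in a cluster using the official Kubernetes Python client. The "pem" substring check is only an assumption about how Pixie names its per-node agents; adjust it to whatever you actually see after installing.

```python
# List DaemonSets to spot per-node agents such as the ones Pixie deploys.
# Assumes a working kubeconfig and the `kubernetes` Python client installed.
from kubernetes import client, config

config.load_kube_config()                 # use the local kubeconfig/context
apps = client.AppsV1Api()

for ds in apps.list_daemon_set_for_all_namespaces().items:
    name = ds.metadata.name
    namespace = ds.metadata.namespace
    ready = ds.status.number_ready or 0
    desired = ds.status.desired_number_scheduled or 0
    # "pem" is an assumption about Pixie's naming convention, not a documented fact.
    hint = "  <- per-node agent?" if "pem" in name.lower() else ""
    print(f"{namespace}/{name}: {ready}/{desired} ready{hint}")
```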

Jonan: What is a PEM? P-E-M?

Zain: PEM, yeah. It stands for Pixie Edge Module.

Jonan: Pixie Edge Module, okay. So it installs the eBPF code, and then we have the PEM, and these scripts are already there for me, or I have to go and grab them somewhere else. How do I get the scripts?

Zain: When you use the CLI or our user interface, like our web-based UI, there are already a bunch of pre-baked scripts. So you can use one of those pre-baked scripts, and we'll ship it off to the cluster for you and then execute it. Otherwise, you can write your own custom scripts and run them through the command-line.

Jonan: So there's a possibility, as we move forward with the project, you said, with this open core piece, that this ecosystem of scripts will continue to grow and the people who are writing them will continue to grow, and you may end up moving towards a standard, the way OpenTelemetry did, where there's one agreed way that people write scripts. But it sounds like PxL is pretty tightly bound to Pixie specifically. What if I come up with Jonanixie, a new clone of your product? Am I able to actually implement these PxL scripts?

Zain: To be fair, we decided that we didn't really want to be in the business of creating programming languages and really complex data processing APIs. So we pretty much airlifted Pandas, and our entire data processing system is actually based on Pandas and this project called Arrow, which is a way to represent data. We used those to build our entire data processing system. So our entire API right now is already based on an open standard, the Pandas API.
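
The Pandas-plus-Arrow pairing Zain mentions is easy to try outside Pixie. The snippet below is not Pixie code; it simply shows the two open-source pieces interoperating, with Pandas as the user-facing API and Arrow as the columnar in-memory representation.

```python
# Pandas for the API surface, Apache Arrow (pyarrow) for the in-memory columnar format.
import pandas as pd
import pyarrow as pa

df = pd.DataFrame({"service": ["checkout", "catalog"], "latency_ms": [12.0, 8.5]})

table = pa.Table.from_pandas(df)   # convert the DataFrame into Arrow's columnar layout
print(table.schema)                # Arrow carries an explicit, language-neutral schema
print(table.to_pandas())           # and converts back to Pandas when you need the familiar API
```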

Jonan: So it's already open because Pandas already exists out there. I know the community is a priority for you because I've read about you online, and I've spoken to you about it. I want to know what the world holds for Pixie and the developer ecosystem generally moving forward. We have this open core version. New Relic acquired Pixie for a reason. They're going to have an enterprise version where they offer enterprise-specific features, but with the open core, you'll always be able to run that and monitor your Kubernetes cluster, your self-hosted Kubernetes cluster with your own data backend. That's a real thing.

Zain: Yeah. So the goal is to allow anyone to be able to self-host Pixie and run it as a local offering. So everything that's required to do that will get open-sourced. In addition, there'll be a hosted offering, which will be the Pixie community offering where you can just use Pixie like you do today.

Jonan: And why did you end up here? What is the motivation? Many people start software companies every day and leave that whole part out: they build a thing, they keep it closed, they sell it. I can see part of the value in open sourcing, to put a very cynical take on it: well, now we've got everybody else writing these PxL scripts. But I'm assuming that's not your motivation. What motivated you?

Zain: Ishan should talk about this on his side as well, but I think we're pretty aligned on the fact that the big motivation for us to join forces with New Relic was actually just to speed up the adoption of Pixie and get it out there in more developers' hands. Part of what open sourcing a lot of our core and everything allows us to do is actually just give back to the community and have this be a big part of the future of observability systems. And that's actually the big motivation for us moving forward.

Ishan: Absolutely. When making such a big, somewhat audacious attempt to redefine our product category, the North Star for us as builders is always to drive ubiquity. Getting to a point where Pixie hopefully becomes one of the defaults for observability in the Kubernetes and cloud-native ecosystem is a really exciting North Star for us and our engineering team, both within New Relic's existing customer base and, beyond that, expanding to the broader developer community.

Jonan: So why New Relic and not Tumblr? What made you pick us?

Ishan: Yeah, I can take that. I think obviously we were on a path to build it out independently. Our launch went great, our Pixienaut community is really growing, and we want to continue to grow that. But if you go back to the origin days of New Relic, there was this rabid developer love; they had the agenda and the mindshare, and their DNA is very, very developer-focused. We obviously love ops and SREs, and infra engineers are champions too. But ultimately, our goal is to make developers self-sufficient in observing and debugging their production systems. This obsessive focus on developers, going back to New Relic's roots, and really driving for developer ubiquity was a cultural and vision alignment, which is usually very rare.

Jonan: It is, actually. I have had the great privilege of working with companies who are there for the right reasons, in my opinion. They build things to make developers' lives better. I was talking to a friend about this the other day. If you think about it on a high level, I have a problem with my application, and it makes me sad in that moment. And you are contributing sadness to the world by having written the code. If I'm the developer of the framework that made some developer's day bad, then I've contributed sadness, like, net pain to the universe. And on the opposite side of that, you have developer tools like New Relic and Pixie, removing that and removing it sometimes from thousands of people a day. I don't like to go down the road where tech gets to toot their own horn so very much, but I think that it actually has the capacity to make the world significantly better. You get to make a lot of people less sad every day if you do this thing well, and I think that Pixie has done it really, really well. So I want to give people an idea of how they would get started with this. They've got a Kubernetes cluster. Can you run it in minikube?

Zain: Yeah. So Pixie runs on minikube and several other environments. There are some restrictions for running on things like Docker for Mac, mostly because of the way we do observability, but it does work on most environments, including all hosted environments.

Jonan: So if people have a home lab set up, or they've got a couple of Raspberry Pis, they could throw some Kubernetes on, and one way or another, they end up with a Kubernetes cluster, and then they're two minutes away from having Pixie. And right now, you were talking about the functions that exist within eBPF, and they are somewhat limited. They're bound by the progress of eBPF as a technology. There are certain things that this sandbox VM environment allows us to do in the kernels, certain things that we're able to read and hook into, and certain things we are not today. Am I understanding that correctly?

Zain: Well, eBPF itself, in some ways, is quite limited in what it allows you to do, because you just can't run that many instructions; there's a limit in there so you don't ruin the performance of the applications you're monitoring, or even the kernel. So what we actually do in Pixie, to some degree, is run a lot of code both in eBPF and in userspace-land, and we fill the gaps for most of the limitations. So right now, once you deploy Pixie, you'll get instant access to several popular database protocols like MySQL, Postgres, Cassandra, and Redis. You'll get access to SSL or non-SSL HTTP or gRPC traffic, plus all the system information, like disk and network, provided to you right away.

Jonan: And including things like Kafka sometimes.

Zain: Yeah. So for Kafka, we are working on deeper protocol support. What typically happens with Pixie is that since we can understand most network traffic and HTTP traffic, we'll capture a bunch of data, even data that we don't necessarily understand yet. What we typically do over time is add protocol parsers. So for Kafka, we'll add specific protocol parsers so you can deeply understand the Kafka traffic. To take a more concrete example, with MySQL you might see a whole bunch of data go across the wire, but it's going to look like gibberish to you. So what we do is parse the protocol so you can say, "Oh, this is a MySQL prepared statement. And now this MySQL prepared statement got executed, and here's the data it returned." Over time, we build more and more protocol parsers, which lets you access all of this in a more canonical, developer-centric way.
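
As a toy illustration of what a protocol parser does, here is a sketch that decodes the header of a single MySQL client packet. The framing (a 3-byte little-endian length, a 1-byte sequence id, then a command byte such as 0x03 for COM_QUERY) comes from the published MySQL client/server protocol, but this ignores handshakes, responses, compression, and TLS, so it is nowhere near what a production parser like Pixie's actually handles.

```python
# Toy MySQL client-packet decoder; real parsers handle far more of the protocol.
COMMANDS = {0x01: "COM_QUIT", 0x03: "COM_QUERY",
            0x16: "COM_STMT_PREPARE", 0x17: "COM_STMT_EXECUTE"}

def parse_client_packet(buf: bytes) -> dict:
    """Decode one command packet: 3-byte length, 1-byte sequence id, then the payload."""
    if len(buf) < 5:
        raise ValueError("need at least a 4-byte header and a command byte")
    length = int.from_bytes(buf[0:3], "little")
    seq = buf[3]
    payload = buf[4:4 + length]
    command = COMMANDS.get(payload[0], f"0x{payload[0]:02x}")
    return {"seq": seq, "command": command, "body": payload[1:].decode("utf-8", "replace")}

# Example: a COM_QUERY packet carrying "SELECT 1".
query = b"SELECT 1"
packet = (len(query) + 1).to_bytes(3, "little") + b"\x00" + b"\x03" + query
print(parse_client_packet(packet))   # {'seq': 0, 'command': 'COM_QUERY', 'body': 'SELECT 1'}
```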

Jonan: That's awesome. And when you open up this, do you write this MySQL protocol parser, and then at the end of each packet, it says like, "Brought to you by Oracle." Is that written at the --

Zain: Yeah. Yeah. [Chuckles]

Jonan: I'm just teasing because PostgreSQL is the one true database, and everyone should use Postgres, and MySQL is dead to me forever.

Zain: MariaDB then.

Jonan: Yes, MariaDB. Yeah, we need PixieDB. Are you going to write a database next? You should do that.

Ishan: Maybe someday. We already have a database, which is the entire Pixie system that runs on the cluster. We just don't quite have the same guarantees a full database would, you know, like being able to save terabytes of data.

Jonan: It's really funny because one of the jokes in developer communities is when you're working at a company and then someone on the team says, "Well, we should probably write our own database to solve this problem," it's a good time to leave the company. But it does not apply in the observability space because it turns out that's a thing.

[Crosstalk]

Jonan: Yeah. Well, I'm really excited to keep playing with Pixie and to watch it grow. And over time, you expect to support more technologies, similar to the way you've written these protocol parsers. What else can we expect? What's coming that people may not be able to predict? I think today you support Kafka; tomorrow, you'll support JonanDB as soon as I finish writing it. In the future, beyond that protocol support, what might we expect?

Zain: One of the things we've been working on over the past few months is being able to add a lot more dynamic logging into Pixie. What this basically means is that you can take a running binary and say, "Every single time you hit this line of code, print out these variables." We've actually gotten to the point where we can do a bunch of different dynamic logging. And now we are working on features that let you profile entire applications and give you a flame chart showing where the application spends time. Over time, we think we'll be able to stitch these things together, so we can say, okay, this is where your application spends its time, and then, very specifically, take a look at this function, and whenever these values change, the performance is a lot worse.

Jonan: I want to make sure I understand that you're able to say, "When you get to line 17 of this file, then dump out these variables from memory into a log file somewhere." That's a thing that could happen.

Zain: Right. So, specifically, it's hard for us to do exact line numbers because we don't have the source code. But if you tell us that on this function you want to dump out all the arguments, or dump all the arguments when the function returns, or whatever it is, we'll be able to capture that and dump it out to a log, which is basically an entry inside the Pixie data store, and you'll be able to see it over there.
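
The "attach to a function and dump its arguments" idea can be sketched by hand with bcc uprobes, which is roughly the mechanism being described, though Pixie generates this kind of instrumentation for you. The binary path and symbol name below are placeholders to replace with a real binary and exported function; reading the first argument via PT_REGS_PARM1 assumes a simple integer or pointer argument.

```python
# Sketch of dynamic logging with a bcc uprobe: log the first argument of a user-space function.
# /tmp/myapp and handle_request are placeholders; requires root and the bcc package.
from bcc import BPF

prog = r"""
#include <uapi/linux/ptrace.h>

int on_call(struct pt_regs *ctx) {
    // PT_REGS_PARM1 reads the first integer/pointer argument at function entry.
    bpf_trace_printk("handle_request arg1=%ld\n", (long)PT_REGS_PARM1(ctx));
    return 0;
}
"""

b = BPF(text=prog)
b.attach_uprobe(name="/tmp/myapp", sym="handle_request", fn_name="on_call")
b.trace_print()   # stream the logged lines, similar in spirit to Pixie's dynamic logging output
```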

Jonan: This is one of the things I love about Ruby. I once wrote something like this that used the meta-programming features of Ruby to just go and hook every -- and of course, it was not at all performant, and everyone hated it, and they refused to merge my code. But I thought it was very interesting as just a thought experiment to play with. And it brings me back to this performance question because one thing I can promise is that the nonsense I wrote was not at all performant. So how much overhead am I introducing to my Kubernetes cluster by running Pixie on it?

Zain: So our target goal is to stay under 5% CPU, and we're usually well under 2%. Part of the reason for our low overhead is that we are very careful about the data we copy in and out of kernel space. We're also very, very careful about the data we send over the network. Most of the data is just persisted locally for short periods of time while we analyze it. And we've spent a lot of time making sure the code paths for that are pretty heavily optimized.

Ishan: Yeah. And just to add onto that, the existing alternatives are in the 5%-or-more overhead range for sampled approaches, so that's with sampled collection. We're basically seeing everything and collecting everything at less than 5%, mostly under 2%, and there's no dependency on scale. We operate at thousand-node scale at large content-streaming companies, and they see similar overhead.

Jonan: The overhead doesn't go up if I run it on more nodes because all of the data is being held locally until you determine what you care about, and then you ship it off. And as with most things in software, I/O is the slow part. It's actually sending it over the network or writing it down to disk. You're just holding it in memory, figuring out which parts you actually care about, and effectively tossing the rest.

Zain: That's right. I mean, most of the cost, as you correctly pointed out, comes from I/O, or even copying memory and moving data around, so our goal is to minimize that. There's obviously some level of data aggregation that happens upstream, so adding more nodes will eventually lead to more data getting shipped off to some semi-central nodes inside of the Pixie cluster. But that growth is a lot slower than what you'd typically get where every node is just fire-hosing data to the cloud.

Jonan: This actually raises an interesting question for me. You have some kind of election then. If you have nodes that are sending data to other nodes, then someone picked that node. That’s the boss node. How do you—because Kubernetes is just killing these willy-nilly, right? I'm not talking about Pods, I guess. You're running at the node level, and the nodes are not coming up and down as they could.

Zain: Yeah, so the DaemonSets are running on the nodes, and then we have another Pod, a service that's called Kelvin. We'll run enough replicas of that to make sure the data won't go away.

Jonan: So there's actually no need for any kind of election. There's no one point of failure in this design.

Zain: That's right. There are some cases where we have a primary node, and then we use Kubernetes to leader-elect it, but it all happens behind the scenes. You don't really have to worry about it. And when you write a PxL script, you don't actually have to worry about where the data is located; we figure it out for you.

Jonan: Yeah, that sounds totally fine. Just don't even worry about it, but in practice, it actually works great. I'm very impressed with this product. So we have a couple of minutes left. If you are looking to share something with developers at large or with your Pixienaut community, now is the time. What do you want to say to people about this acquisition or about the future for Pixie?

Ishan: First thing is the North Star. We talked a lot about it. Ultimately, we would like our roadmap and our plans to orient towards this North Star of developer ubiquity through an open core model. The idea is to actively work with the CNCF and contribute major parts of Pixie so that developers can use Pixie in a self-managed fashion, so you will see news around that coming out very soon. In parallel, our Pixienaut community is something that we really, really want to keep focusing on and amplifying. The first Pixienaut meetup post-acquisition is on the 28th of this month, so if you're a listener, or you're part of the Pixienaut community, please do attend. We will be building the product out in the open, so we will show demos and talk more about our open source roadmap. And then, as developers get engaged, if you find Pixie helpful, writing and contributing scripts is the first way to add to the collective knowledge we're building up. It sounds like marketing, but attend our meetups, because that's the best way to get involved, and we'll ramp you up. And hopefully, we can get Pixie to become a default observability system.

Jonan: I'm sold on the dream. If I have anything to say about it, I will do my best to make that happen. I want to point people somewhere where they can find existing resources to learn about this, and that's still the Pixie Labs site.

Ishan: Absolutely. Yeah. So the site to visit is pixielabs.ai. That site will continue to exist as the homepage for the open-source project. The homepage has most of the information and links to try it out; as we said, magic in five minutes, or hopefully less than two minutes. There's a page for the Pixienaut community, and if you click on that, you'll see the upcoming events. Subscribe to our group, join our Slack group, and access our Google Drive folder, which has all of the documentation as well. So pixielabs.ai is the best place to learn about the Pixie project.

Jonan: Well, thank you both for coming. I really appreciate you coming on the show to talk to me about Pixie. Like I said several times, I'm a little bit gushing at this point, but you did a really good job. This is so obviously a tool built for developers by developers. It has been a joy to use, and I really appreciate your hard work on this. If people wanted to find you personally on the internet where you say other words about things that might not even be Pixie related, where would they go?

Zain: I guess we both have Twitter and LinkedIn profiles. I've got to say I'm not the most active Twitter user, but I'm @ZainAsgar on Twitter.

Ishan: Yeah, same here. We're more like builders who are now getting some publicity, so we're not that active. I'm @Ishanmkh on Twitter. And then LinkedIn is probably where my network's larger, which I don't think is a great thing.

Zain: Then there's the Pixie Twitter as well, which is @pixie_run.

Ishan: Exactly.

Jonan: There's a Pixie Twitter?

Ishan: Pixie, the open-source project, is @pixie_run. The best place to get involved: Zain and I are available maybe 24/7 on our Slack. So if you go to the Pixie community Slack, we're always there. If you have questions outside of Pixie, ping us one-on-one, and we're happy to jump on a Zoom call or an email thread and chat.

Jonan: I would like to point out that building a community on Twitter is also relevant work that counts as being a builder. I'm just trying to make myself feel better as a DevRel because I don't get to write that much code anymore, but I look forward to watching your follower explosion as you both become more active on Twitter. Well, this is the end of our episode for today. I encourage you all to go and look up Pixie. And as Ishan alluded to, there may be some opportunity at the end of the month on the 28th to watch a demo about Pixie and rather a lot of other things. So if you are out there looking for some news or content around Pixie and New Relic, stay tuned. Maybe check out the New Relic blog on or around the 28th of this month. We've got some exciting stuff coming up. So you all take care. Thank you so much for joining me.

Zain: Thank you.

Jonan: Thank you so much for joining us for another episode of Observy McObservface. This podcast is available on Spotify and iTunes, and wherever fine podcasts are sold. Please remember to subscribe, so you don’t miss an episode. If you have an idea for a topic or a guest you would like to hear on the show, please reach out to me. My email address is jonan@newrelic.com. You can also find me on Twitter as @thejonanshow. The show notes for today’s episode, along with many other lovely nerdy things, are available on developer.newrelic.com. Stop by and check it out. Thank you so much. Have a great day.