
We're back with another episode of Observy McObservface, joined by our esteemed guest Alex Ellis, CNCF Ambassador and founder of OpenFaaS and Inlets.

Alex takes us on a tour of modern cloud ecosystems and tells us how organizations like the CNCF are making decisions that will shape our collective future. He also shares some thoughts on open-core licensing and other strategies that open source maintainers can use to ensure their work is sustainable.

As always, we'd love to hear your feedback on the show. Send us an email at devrel@newrelic.com to let us know where you'd like us to take future episodes, or to suggest potential guests. If you have ideas about the name of the podcast or potential alternatives, please send them to definitelynotafakeaddress@newrelic.com so we can be sure to give them the attention they deserve.

Enjoy!

Jonan Scheffler: Hello and welcome back to Observy McObservface, the observability podcast that reminds me anew every week why the internet should never be trusted to name anything. I am joined today by my guest, Alex Ellis. How are you today, Alex?

Alex Ellis: I'm doing well, thank you. It's FaaS Friday, like every Friday, and it's coming up on 3:00 p.m., which means I can go out on a bike ride soon.

Jonan: FaaS or fast Friday?

Alex: FaaS as in OpenFaaS, or Functions-as-a-Service.

Jonan: That's another name for what is known as serverless technology. How do you feel about the name serverless as opposed to FaaS?

Alex: I like it. The way I look at it—and I'm writing up a course at the moment, which we're hoping to publish sometime in September—this is one of the things I had to write. And you always feel a little bit like, "Who am I to say and define this?" But plenty of other people on the internet do, so why not? I've got a bit of experience with it. I think serverless is a category of computing, like cloud computing, and FaaS is, if you like, a way of doing that—just in the same way that AWS is cloud computing and is how you get VMs, and containers are how you run your stuff in an isolated fashion. FaaS is how you run your code on serverless.

Jonan: That's an excellent description. I don't think I've heard it explained so succinctly, and I really quite like it. I am like many developers: frequently critical of terms like serverless, simply for the fact that it runs on servers. Serverless does not seem like a very descriptive name, but that makes a lot more sense to me. So you mentioned just now that you were writing a course—what is your course about?

Alex: It's an introduction to serverless. We're looking at it from the point of view of the CNCF: what's going on in terms of different activities and working groups, and what the various projects are that you can install on a self-hosted or cloud cluster, versus cloud-based functions, which can be very useful and easy to use but are quite bounded as to where you can go with them.

Jonan: So you mentioned just now the CNCF—I suppose we should introduce that acronym for our listeners. CNCF stands for the Cloud Native Computing Foundation. Maybe you could tell us a bit about who they are and what they do?

Alex: Yeah. So the CNCF—if you've come anywhere near containers or Kubernetes in the last couple of years, you will have come across it. It's basically a foundation that's an offshoot of the Linux Foundation, and its mission is to provide a home for software projects that is independent of any particular vendor—funding the projects' longevity, security, and marketing needs, but also fostering collaboration. Now, the first project that came into the CNCF was Kubernetes itself, donated by Google. And since that initial donation, the CNCF has organized an event called KubeCon—or at least it was a community event that was then adopted by the CNCF—and that event started off pretty small but has exploded in recent times. We're looking at thousands of people attending these events now. It is absolutely taking the world by storm.

Jonan: KubeCon is an impressive conference. Were you in attendance at the first KubeCon?

Alex: No, I wasn't. I was lagging behind a little bit because I was a Docker captain, and I was more in that swim lane. And then in about 2017, I moved over and started really going deep into Kubernetes, learning more about it, implementing it in my own project.

Jonan: So I work a lot of events, and often when we are planning our adventures for the year—which events we're going to attend—we start with KubeCon now. It's the one that everyone just agrees on out of the gate, because year-over-year attendance at this conference has been doubling or more since its creation. Would you credit that to the popularity of Kubernetes? There's also this shift, I think, that is happening at the same time to DevOps, instead of, I guess, what did we call that before? Systems administration?

Alex: IT.

Jonan: IT. So would it have more to do with the popularity of the Kubernetes project itself or a general methodology shift?

Alex: In 2018, at a GOTO conference in Copenhagen, I gave a talk called "Serverless Beyond the Hype." And whilst that was mainly focused on serverless, it was really looking at the hype cycle. Gartner has this idea they've trademarked, called the "hype cycle": these projects or ideas in technology come along—even things like electric vehicles—and there's so much promise in them. People don't really understand what they can do, and they get very overhyped. Everyone gets really excited until they reach peak excitement, where it climaxes and very quickly falls off a cliff, basically worse than before it started. And then, if it manages to creep out of that Valley of Death toward a promise line on this hockey stick, it emerges in a new way that is kind of a bit boring, very plain. And now we're looking at, "What can we actually do with it?"

Alex: I think about things like VR. Maybe in the '80s and '90s, when I was growing up, VR was all the rage, but it didn't go anywhere—that's a very long hype cycle. Now we're actually seeing people buying it, commodity hardware in homes. Just before Kubernetes, we had something else called Docker, which was growing with a similar popularity, where everybody had to be at every DockerCon every year, and it was growing at a similar scale. And whilst we're not quite ready for Kubernetes++, the next thing, we're just seeing the same again.

Jonan: That's an excellent way to put it. In the growth cycle of a new technology, the marketing space tends to adopt a new term like serverless or Kubernetes or whatever the term of the day is, and very quickly ramp it up and hype it. And I think that process of fast mind-share growth is a bit off-putting to developers. I think that we are suspicious by nature about new technologies, maybe because we hear of so many, and everyone who comes to us seems to have the innovative, life-changing solution to all of our problems. What do you feel like serverless specifically serves? Because I'm not a big believer in the idea that all software should be written in any particular way. And many people say things like, "Serverless is eating the world; serverless is how applications will be built entirely in the future." I wonder what types of applications you think are particularly well-suited for serverless architecture?

Alex: Yes. So one of the things that you start to understand as you probe the technology and put it through its paces is, "What is it good for, and where is it not a good match?" The way that I look at it—and I would stretch that to say people, perhaps people in the CNCF, who are involved with projects that you can install and host yourself, so Knative, OpenFaaS, OpenWhisk, Cloud Foundry, the likes of that—what we're really looking at is a compute platform at the end of the day. They're great for web servers, they're great for APIs, and that's really where they fit in. And when you look at Kubernetes, the first thing most companies will try to do is not, "Can we run MySQL there instead, or a Cassandra DB?" It's, "Can we run a web server? Can we run our website? Can we host this? Can we host that? Is it perfect for that?"

Alex: You can lift and shift pretty much anything you want, Ruby on Rails, WordPress, ASP.NET. You can even now reach back in time and get a monolithic Windows app, run that on Kubernetes in a container and experience the whole ecosystem from monitoring to metrics to security. So I think when we look at it, serverless is really just like a specialized version of Kubernetes, where the developer experience means you don't have to think about the servers, you don't have to think about managing them. It's an API, we've got deploy, update, invoke, query—whatever you like—and the same for secrets. And that means that developers can just move a bit quicker than if they had to learn all the ins and outs of Kubernetes manifests, every primitive that you need, or the projects that build up basic things like TLS.
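To make that "deploy, update, invoke" experience concrete, here is a minimal, hypothetical sketch of the workflow using the faas-cli. It assumes OpenFaaS is already installed on a cluster; the gateway URL, function name, and language template are placeholders.

```bash
# Hypothetical sketch of the developer experience described above.
# Assumes OpenFaaS is installed and the gateway has been port-forwarded locally.
export OPENFAAS_URL=http://127.0.0.1:8080   # placeholder gateway address

# Scaffold a new function from a language template
faas-cli new hello --lang python3

# Build the container image, push it, and deploy the function in one step
faas-cli up -f hello.yml

# Invoke the function through the gateway's REST API
echo "world" | faas-cli invoke hello
```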

Jonan: I am a huge fan of that dream. I think that as a developer, I want to focus on my code. I want to focus on shipping features for my users. That's the part of it that really gives me joy: knowing that people are out there using the software and enjoying it. But I often find myself in a position where, even with these tooling ecosystems, I sit down to start writing a Rails app, and every time I'm updating a dozen versions and pulling new tools into things before I begin. The speed of an ecosystem, I think, sometimes slows down the ability of developers to live that dream you're describing, where I'm able to just focus on my code and ship it out there.

Jonan: This is certainly something that I want, but I think right now—in the course of the growth of Kubernetes—this cliff that you were describing for us earlier maybe hasn't come yet. And because of that, there's this explosive growth that can cause a lot of pain for developers. I wonder what sort of advice you would have for developers who are new to the Kubernetes ecosystem and trying to keep up with that curve, so they can actually achieve that reality you described, where they just get to focus on their code.

Alex: So the way that I'd approach it, if that was me, is with a bit of hindsight. I know that Kubernetes can be intimidating, but it's actually designed in a very modular way. You start off with effectively having something like Docker or containerd on a computer, and that's it. That's your most basic primitive of Kubernetes: a computer that happens to be running Docker. The next step is to install the kubelet. The kubelet is a tiny component, and its job is to look at a YAML file on the disk and try to make that happen on that computer. And then it goes a bit bigger and says, "Right, now we have this API server and we have the scheduler, and now Alex can talk to the API server and say, 'I want you to do this,'" then the scheduler looks at all of the nodes in the cluster and decides where to put it.
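As a rough illustration of the "kubelet watching a YAML file on disk" idea, here is a hypothetical static pod manifest. The directory shown is a common default, but it's configurable via the kubelet's staticPodPath setting, and the pod name and image are placeholders.

```bash
# Hypothetical sketch: a static pod the kubelet runs straight from a file on disk,
# with no API server or scheduler involved. Path, name, and image are illustrative.
sudo tee /etc/kubernetes/manifests/hello.yaml > /dev/null <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: hello-static
spec:
  containers:
    - name: web
      image: nginx:alpine
      ports:
        - containerPort: 80
EOF
# The kubelet notices the new file and makes that pod happen on this machine.
```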

Alex: And that kubelet, which does its one job on its own, goes off and does that. And then you can get bigger and bigger and add all these things back in. I actually think that, looking at it from that perspective, many teams can just benefit from using containers on their own, on a single host. And if that host dies, you've lost all your data. OK, so now I have two hosts, with a very simple NGINX or HAProxy load balancer in front. You don't have to go to full Kubernetes to get the advantages of the cloud. You even have things like Amazon ECS that will run your containers for you. And so if you think about it from that perspective, you don't have to go in with both feet forward. You can build your way up to it, and you can start to get value. And when you feel like there's too much cost and maintenance and complexity for that next step, you can stay where you are.
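For example, that "start smaller" setup—a couple of containers behind a very simple load balancer—might look roughly like this on a single host. The images, names, and ports are placeholders, and it assumes Docker is installed.

```bash
# Hypothetical sketch: two copies of an app behind NGINX acting as a load balancer.
docker network create demo
docker run -d --name app1 --network demo nginxdemos/hello
docker run -d --name app2 --network demo nginxdemos/hello

# A minimal NGINX config that round-robins across the two containers
cat > nginx.conf <<'EOF'
events {}
http {
  upstream app {
    server app1:80;
    server app2:80;
  }
  server {
    listen 80;
    location / { proxy_pass http://app; }
  }
}
EOF

docker run -d --name lb --network demo -p 8080:80 \
  -v "$PWD/nginx.conf:/etc/nginx/nginx.conf:ro" nginx:alpine

# Requests to http://localhost:8080 are now spread across app1 and app2.
```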

Jonan: That's a really interesting perspective to me. I have long lamented the fact that many large corporations, especially, are still lingering in this world where they have their servers in the closet. You don't hear very many startups say things like, “We're getting ready to launch our website, we're just waiting for the servers to arrive from China,” right? Mostly any company that is starting today is already cloud native from the beginning, but there are these huge corporations out there in the world, many of them driving technology choices across the industry that are not quite there yet, and their transition can be incredibly painful.

Jonan: The perspective you give—that you're able to start slow—lets me see the Kubernetes ecosystem in a slightly different light, that you have these small pieces. So I had the chance to play with Kubernetes recently, which is actually how I ended up meeting you, because I was building a Raspberry Pi cluster. And in that process, I went to set up Kubernetes. I was using k3sup—that's k-3-s-u-p. What is the difference between Kubernetes (k8s) and k3s?

Alex: It depends who you're asking.

Jonan: Because there are different projects.

Alex: There's actually a very long thread about this. The CNCF—we go back to that again—they host projects. And so what you can do is say, "Right, I've made this thing. Yes, it's open source, but I don't want to be seen as owning it. And if I contribute it to this foundation, other companies are more likely to contribute, because it's not controlled just by me." And so Rancher came up with a way to really optimize and shrink down Kubernetes. They don't call this a distribution, they kind of call it—I don't know what they call it—but it's sort of a spin on Kubernetes where they make some default assumptions. They compile in some stuff, they take some legacy code out, and it drops down from needing 2GB of RAM to run on one node to 500MB, and each of your additional computers in the cluster needs 50MB to be a member of the cluster.

Alex: So they're trying to submit that to the CNCF as a project. And the old guard are very upset about this. They are saying, "Well, OK, this is a distribution, we don't want distributions." Everyone else is saying, "But this is insanely useful, it's made it so much easier to adopt Kubernetes, we're using it in production," and they're going, "Oh, hand wave, hand wave, it's a distribution." And this is sort of the political nature of the CNCF. It isn't all Kumbaya; there are interests at play. But to sum it up, k3s is a normal, production-ready Kubernetes that's supported commercially by Rancher, that's hit GA, and for the most part it would suit almost everyone's needs for what they want.
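For anyone curious, bootstrapping k3s with k3sup (mentioned earlier) looks roughly like this. The IP addresses and SSH user are placeholders, and it assumes SSH key access to the machines.

```bash
# Hypothetical sketch: a two-node k3s cluster on Raspberry Pis via k3sup.
export SERVER_IP=192.168.0.100   # placeholder: first Pi, becomes the k3s server
export AGENT_IP=192.168.0.101    # placeholder: second Pi, joins as an agent

# Install the k3s server and fetch a kubeconfig into the current directory
k3sup install --ip $SERVER_IP --user pi

# Join the second machine to the cluster
k3sup join --ip $AGENT_IP --server-ip $SERVER_IP --user pi

# Point kubectl at the new cluster
export KUBECONFIG=$PWD/kubeconfig
kubectl get nodes
```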

Jonan: I want to talk a little bit about the way that projects come to be a part of the CNCF, and I guess this political nature of the CNCF that you described. I think it's very common in large software communities, especially ones that are growing quickly, for this sort of thing to emerge. But the CNCF is a place where companies are able to put their open source projects to truly make them part of the community. The CNCF does things like hold the trademarks, so that I can't just pick up Kubernetes and make it my own. For example, Kubernetes started within Google, and the trademark belongs to the CNCF now, so other people are able to use the name Kubernetes in their services and the names of their products.

Jonan: The sort of protection that is offered by this whole arrangement is then counterbalanced against the difficulty of projects rising and succeeding in those ecosystems. And the CNCF ecosystem is quite crowded right now. There are many, many projects—some of them, in my opinion, competitive; in this case, exactly that: k3s and k8s. When you have many people with strong opinions in the same room—and the type of people who start new open source projects have strong opinions—you can often end up in a gridlock situation where little progress is made, and yet the CNCF is still able to move quickly. So why is that? How is that happening?

Alex: I would say that that particular Rancher submission is almost in a deadlock, not a gridlock. It's like just before checkmate—sort of in that position. And I don't know what's going to happen with it. I feel like, as I say, the old guard is just going to keep pushing fear, uncertainty, and doubt, and trying to get it kept out. But ultimately, it's easy to get swept up in the CNCF and all of the goings-on there. One thing a close friend of mine asked me, when Google launched a project that competed heavily and directly with OpenFaaS, and even aimed to sort of replace it, was, "Do your users care?"

Alex: And I think we really have to keep coming back to that. For all the good the CNCF does—and I am an advocate for it, I'm an ambassador—sometimes you have to sort of put your head back in your own one-meter personal space, or your two-meter social distancing, and ask, for my users, "Do they care if k3s is in the CNCF or not?" Most of them probably will not, and they'll adopt it because of the value that it brings to them, or the problems that they can solve with it. And I think we do need to be pragmatic with these things.

Jonan: I absolutely agree that it's easy to lose sight of the greater world when you're in these software ecosystems; of the users out there using our software, 99% of them we will never meet. We go to conferences and we're involved in these open source organizations, and it all seems like a pretty tight-knit community, and you come to know people across these boundaries. But it's important to realize in every moment that you are not there to serve each other's interests or to argue with each other—you are there to promote the success of the people using the software itself. Which I feel like you personally do quite well; projects like OpenFaaS and k3sup have made it really easy for me to get into this ecosystem.

Jonan: I mentioned earlier that I was building a Raspberry Pi cluster at home, and I was finding a lot of tutorials online for running k3s on Raspberry Pis, and most of them referred me to your software, which is how I came to find k3sup. The Kubernetes ecosystem gives you this really easy way to manage your containers and run them across multiple computers, and then you get set up and you come across one very significant barrier, which is that you would then like to actually use HTTP to talk to those applications running inside of your Kubernetes cluster.

Jonan: And you fall deep off of this cliff into the world of Ingress, which, in my humble opinion, is intensely complicated in the Kubernetes ecosystem—but for a good reason: this is a piece that really needs to be designed well in order for Kubernetes to function, because horizontal scaling is very important. So I stumbled across Inlets, a project that you created and maintain. Maybe you could tell us a bit about Inlets.

Alex: Yeah. So just to touch on what you said initially, when it comes to Ingress and Kubernetes, if you take OpenFaaS off the shelf, there's a tutorial that we have, and in five commands you will have TLS for your OpenFaaS gateway and functions, in less than five minutes. Because what we've been able to do is say, "There are all these ways of doing things, and that's what trips us up with Kubernetes, especially as a newcomer." Because all we really want to do is copy and paste from Stack Overflow; we don't really want to learn it. So we codified what we think in the community makes sense. We've also done that for a few other apps, like the Docker registry, Portainer (a Docker dashboard), and a number of others. And that is all wrapped up in a piece of code called arkade.

Alex: You can use arkade to install OpenFaaS, and it will automate whatever that project has decided is how it needs to be installed. So for OpenFaaS, it downloads the right version of helm, it does all the prerequisites, it then runs that and installs our chart. With the Kubernetes dashboard, it finds the latest manifest files, because they're not in helm. It then applies them for you and tells you what to type in to get a password. When you look at Linkerd, another project, they have a CLI. So we just download their CLI binary—the right version for your computer—run it, and then tell you what to do next. So through that, we can get an Ingress controller, which is the thing that lets traffic into your cluster, and then get cert-manager, which is a great project from Jetstack.

Alex: That goes off to Let's Encrypt, a free certificate provider, gets you a certificate, and then the last thing is to put a definition in there for the Ingress controller to read and to cooperate with cert-manager. And again, we have an app for OpenFaaS Ingress. Never once do you have to understand what an Ingress YAML looks like, what a certificate request looks like, or anything else—you simply run those commands. Pretty much, you could just run that in production if you wanted to. When it comes to being inside your home and running that on your Raspberry Pi, the main barrier you have is the way that Kubernetes likes to expose its services—through a load balancer. And it sort of assumes that a load balancer will always be on the cloud with a public IP, and then cert-manager and NGINX Ingress can play together through that load balancer to go off and get a certificate. And when it's checked whether you really own the IP, they set up a little challenge: they call back to your IP and you have to serve it.

Alex: Well, that's clearly going to be 192.168-something within your house, and they won't be able to get to it. So what Inlets does is run a tiny proxy on the internet, on something like a $5 VPS that has a public IP. Then it runs a little container inside your cluster, and they connect together, and suddenly you have this tunnel where anything that touches that public IP hits the service inside the Kubernetes cluster. You've effectively added back a load balancer to your on-premises cluster, to your laptop, to your Raspberry Pi. And the great thing is you can shut the lid on your laptop, open it in a coffee shop, and the IP will stay with you, because it's attached to the tunnel, not to where you are, like when you have a static IP at home. So that's the basic idea of a tunnel. Inlets can be thought of as a reverse proxy load balancer, so it can play that role for Kubernetes.
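A rough sketch of that tunnel, using the open source inlets CLI: a server process on a public VPS and a client next to the private service. The IP, token, and ports are placeholders, and the exact flags may vary between versions.

```bash
# Hypothetical sketch of the tunnel described above.
# On the $5 VPS with a public IP (e.g. 203.0.113.10):
inlets server --port 8080 --token "$TOKEN"

# On the laptop or Raspberry Pi, next to a service listening on port 3000:
inlets client \
  --remote ws://203.0.113.10:8080 \
  --upstream http://127.0.0.1:3000 \
  --token "$TOKEN"

# Traffic hitting the VPS's public IP is now carried over the tunnel
# to the private service, which never needs a public address of its own.
```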

Alex: The other thing that it can do—that typical projects, or products like ngrok, cannot do—is allow you to replace a VPN. Something that I've heard from prospects and from customers of Inlets is that they're using VPNs and finding them hard to maintain, or they need to hire a consultant to provision each one. They have to set up their own subnet at either end of the network, and sometimes they have to open ports for that. And they're coming to us and saying, "We think Inlets will be easier, can it help us?" And that's where something like Ingress can't help you: you've got a service on premises, maybe MySQL.

Alex: Lots of customer data is just too big to migrate, or maybe you can't run that version in the cloud for licensing reasons. What you can do is punch it from your on-premises environment into your EKS cluster without exposing it on the internet. It just becomes part of that Kubernetes cluster, and your API can just talk to MySQL: it will go to the tunnel, the tunnel will do a bit of sleight-of-hand, and it will actually go through the pipe to the on-premises environment.

Jonan: So this allows you to do things like data protection, where you're...

Alex: It could potentially be useful there. I guess the clearest replacement is something like a VPN or AWS Direct Connect.

Jonan: Yeah.

Alex: Whereby you're not putting it on the internet, you're putting it into a cluster that just happens to be reachable through the internet. So it's completely private, completely encrypted, but you're making something appear where it doesn't actually exist so that you can make use of it. Another thing that we've had people come forward and say is, "We've got these really expensive NVIDIA graphics cards in our machines—we'll put a server in the office with one of these really expensive machines. If we run it on AWS, it's going to cost us something like, I don't know, $2 or $5 an hour." Imagine running that 24/7—very expensive. They've already got the machines for free. So they run Inlets, that punches the machine into their cluster, and they can run their CI jobs against it; they can make use of it via Kubernetes, or even if they don't have Kubernetes, it works just the same.

Jonan: So you mentioned helm during that discussion, and I want to give you my layman's overview of the Kubernetes ecosystem. We started off with containers, and they allowed us to run our code in tiny boxes. And then we invented Kubernetes to organize the boxes, sort them about, and direct traffic between them. And then we have a new layer that is emerging now on top of Kubernetes—I guess has emerged on top of Kubernetes—this controller ecosystem, with Kubernetes being an orchestration technology. This is an orchestration layer for the orchestration technology, where we have a helm chart that is able to describe the state we want the system to be in. We want to have Ingress set up through Inlets, and then we want to be running OpenFaaS on this cluster, and all of the applications that we have on our Kubernetes cluster can be controlled with these helm charts.

Jonan: So helm functions at that layer, and I suspect in the near future—or maybe the somewhat distant future, in fact the next couple of years—we will have another layer up above those helm charts. And that level of orchestration will again be simplified as software grows. And I want you to talk about that. I'm asking you to predict the future a little bit for us. What do you think is coming as far as adding usability to the ecosystem by simplifying orchestration?

Alex: I think there are a lot of ways of looking at that, but certainly what we've described with arkade is a view into that: just making Kubernetes a bit easier to approach. You've got all these knobs and whistles and dials and things that you can touch and move about on sliders, but most of the time you don't need them. And so by using something like arkade, we're not abstracting, we're just giving you some sane defaults, giving you a way of getting productive quickly—and OpenFaaS did the same for pods. So with OpenFaaS, when you install it, you're effectively automating Ingress, services, deployments, pods, auto-scaling, metrics, logging, a UI. There's so much that's bundled in for you that you then just have a little REST API, a CLI, and this way of firing stuff into your cluster and serving traffic to customers.
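As a sketch of those "sane defaults"—and of the five-command TLS setup Alex mentioned earlier—an arkade session might look roughly like this. The domain and email are placeholders, and app names can vary between arkade versions.

```bash
# Hypothetical sketch: installing OpenFaaS plus TLS-ready ingress with arkade.
arkade install openfaas        # wraps the helm chart with sensible defaults
arkade install ingress-nginx   # the Ingress controller that lets traffic in
arkade install cert-manager    # fetches certificates from Let's Encrypt

# Wire the OpenFaaS gateway up to a domain with a TLS certificate
arkade install openfaas-ingress \
  --domain faas.example.com \
  --email you@example.com
```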

Alex: We'll see a bit more of that. We're going to see Kubernetes—well, there's something that's been happening with Kubernetes recently: people are promoting APIs to GA. There was a long time where everything was beta or alpha and we couldn't quite commit. And now you're seeing breaking changes that are really inconvenient for users coming in, because they're saying, "Well, we actually think this is the way it's going to be now." And that happened with apps and Deployments, it's starting to happen with networking, and it's going to happen with Ingress. And then that also gives you a way of doing a V2. So we might even see Kubernetes itself bring down, or suck in, some of the stuff we've done on top as first-level primitives.

Alex: Things like scale from zero—we actually know how useful they are to customers. We have it in OpenFaaS, but that's the kind of thing that you might see come into Kubernetes proper. Istio, again—service meshes, some of the technology there: having a gateway, having routes, having specific policy and rules about who can talk to whom—that's going to get sucked down into Ingress 2.0, and we'll see these projects that we've built in the community start influencing the core project.

Jonan: So I want to discuss your newsletter—I wanted to get the name of it again.

Alex: Yeah. So the newsletter came about because I used to work for a big enterprise company, which is where I made OpenFaaS in my free time. It became very interesting to a lot of people—companies were using it in production, and people in my circle were sort of saying, "Quit your job, just go and work on it full time, you need to." And eventually I did, but it didn't bring in any income. Users of the project weren't paying for it, even when they had it in production. And I was tempted to mention a couple of massively VC-funded companies that ask for free support but don't contribute and won't even pay for support. But I won't. Anyway, what we have is this imbalance between the maintainers and creators on one side and the consumers on the other. The creators are producing a lot of value, but the value is being captured by the consumers, not by the vendor or the creator.

Alex: The creator could be living on pea soup whilst the company with millions of dollars of funding or public share capital is eating it up. And so in the early days, I saw a lot of that, and I still continue to see it now. And what I tried to do is think about, "How can I survive, and how can the project survive, given that this unbalanced state of affairs is probably the way things are going to continue to be, because there's no reason to pay?" And so the first thing companies tend to do is think, "We could make it paid-only—we could delete the code and make it paid-only. Then they'll have to pay us, those awful people." The likelihood is they'll just find the next project or cloud software, they'll migrate, and they'll consider it a risk. They probably won't pay you.

Alex: The other option is to say, "Rather than compete with that community and those businesses, we'll find out what they would pay for and we'll build that extra, and if they want it, they can pay for it." And that's open-core. And the third is sort of a mix: we'll build something they want, we'll keep it from them—they won't ever be able to get it unless they pay us for it—but they'll pay us per month, and it will involve us hosting all of it, or part of it, for them. I think of things like logging platforms, where you have an agent that runs in your cluster that may even be open source. It gets data and sends it to a public SaaS product where numbers are crunched, there are pretty UIs, you can put in all kinds of users and groups that you want, and maybe even generate reports and invoices and things like that.

Alex: That's sort of the third model. And each has its own challenges. Open-core and SaaS both require a huge amount of development and money, and who's to say that anyone actually buys what you built at the end of it? What I've tried to do is initially see if I could get companies like these to sponsor. Very few are willing to. It sort of beggars belief, really, when you think about how critical it is to production for a lot of them.

Jonan: Exactly.

Alex: And they won't pay $10, $20, $100 a month to make sure that people are working on stuff for them. Fair enough. And actually, where most of that sort of support comes from is individuals—probably even like yourself. Someone like you will come along and they'll go, "Yeah, this isn't fair, but actually I really like this, and I respect Alex for being independent and not taking $400,000, $500,000 a year from VMware or Google. He's living on a shoestring in a small house in the countryside, trying to just make the software and do good stuff." And they're like, "Well, we'll put up $520." And that was OK, but then I thought this just seems imbalanced. Some people feel like they're donating, and I tried to change that. And I thought, "What I'll do is spend two hours a week sat down, writing to them."

Alex: I will give an insider's update, an insider's view into my life as an independent software developer and consultant, but also into all the projects that I'm building and maintaining, any tutorials that I've written, and any premium content that I might put together. And up to now, you've been able to get it every week just by subscribing through GitHub. Then, more recently, I created this thing called the Treasure Trove. That's a simple Go app, and it has a link to every past issue. It's $25 at the moment; if you pay at that tier, you then get access to a year's worth of writing about how to do open source, how to be a maintainer, how to get a promotion at work, all of these various things. I'm trying to sort of make it a bit more about me, and see whether I can deliver some value to potential readers.

Jonan: I think this is a brilliant model for individual developers. I want to revisit a statement you made during that discussion, where you said a company doesn't want to pay $10 or $20 a month for the open source software that enables their billion-dollar business, and you said, "Fair enough." I think it is not fair enough—not even close. I think—

Alex: You and I could debate this all day, but at the end of the day, they are not willing to pay. I've been doing this for three-and-a-half years, and—save for a couple that pay a small amount of cash every month—I've not got companies to pay up and put their hands in their pockets. And that tells me, with everything that I've tried and what I've seen, that it will not happen. And we just have to take that on the chin and think, "OK, what can we do?"

Jonan: I think that the idea of adding paid content is a good one. I don't see a lot of other developers doing similar things. I do see a lot of people with a pro model, where they'll end up adopting more of an open-core style of licensing. The Sidekiq project in the Ruby community is a famous example of this. Mike Perham lives locally here in Portland, and he has had tremendous success with the idea that you have the product out there in the open that is free, it's open source, available for anyone to use, and then you have a tier that includes some premium features. That open-core model, though, I think very much depends on the altruism of the maintainer in keeping valuable features on the open source side of the fence, in a company that is based on open-core.

Jonan: You very often see them make a splash with their open source product—they start to gain notoriety, people start using it—and they will immediately introduce an enterprise version. And for the next two years, any new feature of any consequence lives on that side of the house. So there's a balance, but do you think that that means there's a fundamental flaw in the open-core model?

Alex: No. I mean, I don't know what examples you're thinking of—I'm sure that you have some. At the end of the day, if I haven't been a good test case for this, then look at some of the others, like the author of Caddy, and many other similar projects like NGINX Ingress that are adopted in production by dozens, if not hundreds, of companies, where it's really in the critical path for them. They just won't pay, because there's no incentive—humans need incentives to do things. Companies need a price to do things; it is the only way that they will pay. Now, occasionally there's a company like Rancher or DigitalOcean that will put up a homepage sponsorship for OpenFaaS. Occasionally there's a company that will pay $50 or $100 a month—but to be fair, that's not enough; nobody can live on that.

Alex: So I think we just have to be real about this. If we want companies to pay some money, we need to create a business. Companies purchase, I guess, two or three things: goods and services. Maybe support might be another one, but it's kind of a service. So we need to think about what we can offer them—because you're right, companies aren't altruistic, they have a P&L. If your software is free, they will never pay for it; there's no way. Creating open-core is a proven model, and it's been highly successful. SaaS is another one.

Alex: To be fair, there are companies out there that have no open source at all. I think of companies like Humio, which I did a bit of consulting for—they were a client of mine last year. They have no open source, and they have no shame about it. They're very successful and they have a really good product. We should probably think about this more. You've got something of value, yes—if you give it away, you might get a lot of traction. But actually, with my maintainer hat on, and the experience of these three or four big projects: you will not get money, and nobody will thank you for maintaining them.

Alex: With Inlets, I got so far with it and I was like, "There's this thing I want, and it's going to take me two solid weeks of development to build." It's expensive—that would cost quite a lot as a consultant. So I thought, "I'm not going to give it away, but I will make it available." And so I built Inlets PRO. It has a place next to the open source Inlets, and for the people who only use your things because they're free, they can continue doing that. But for the people who are willing to pay for the value, it's available.

Jonan: I like that approach, specifically because it enables hobbyists like me to have access to a much higher caliber of software. So to be clear, I am pretty confident that open-core is the way that open source maintainers move forward and that we continue to grow these communities, because I know far too many maintainers who have quit maintaining large open source projects because of this lack of support—because companies are unwilling to just reach into their pockets and pay even a small amount of money. That being said, there are many maintainers who are employed full time by those companies. And I want to just take a moment and appreciate the companies who are doing good work. Please continue hiring open source maintainers and creating full-time positions for people to work in open source.

Jonan: The community sees you, and we appreciate you, and we will continue to contribute to the success and health of your organizations if you do your fair share in paying back. Also, find this gentleman, Alex Ellis, on the internet and buy literally everything he sells, including his newsletter. So Alex, we've touched on quite a lot. I think that we also have rather a lot of listeners who are a bit behind you technically—it's difficult to compete with the Alex Ellises of the world—but if you had a bit of advice to share with an up-and-coming software developer, what would you share? What would you tell yourself 10 years ago, earlier in your career?

Alex: I think if it was just generic advice, I would say you probably need to move jobs every two to three years—unless you're absolutely, astoundingly happy with your manager and have the most amazing relationship, in which case it might be worth staying where you are on a lower salary. But if you want to get to your market worth, you do need to move jobs quite often. Now, I stayed at one company, ADP, for about eight or nine years, and they screwed me over financially with compensation. I mean, it was a joke by the time I left, but then I was able to go on and get almost four times my salary in the next step. And that's a special case, because of what I built and what was happening there. But when it comes to everyday developers: don't get stuck in a rut, don't think that's the only thing that you could ever do.

Alex: I have a very close friend who has been at a similar company for 15 years, and he feels like he could never move out of that. I think he could, and there are exciting opportunities. It's not as easy as if you live in Silicon Valley, where people are sort of scrambling to get new candidates in, but those jobs do exist. And I think lockdown has shown us that you can move easily and you can work remotely. If you're in Europe, a company on the East Coast has a big overlap with Europe. And if you're on the East Coast, the West Coast has a big overlap with you as well. Consider your career and what you want to get out of it. I was just so shocked when I found out, back then, what people in the US earn—really shocked.

Alex: And so have a look around, make sure that you're always getting what you deserve. And some people will probably feel like they're too busy, they maybe have commitments with family and children, everybody's different, but if you have the ability and the privilege to step out a bit, try to learn something new, don't try and get it all at once. Be kind to yourself. I think when you look at the internet and look at social media, people portray perfection, they project it, but everyone's got a life just as complicated as our own. And we kind of just need to swim in our own swim lane, I'd say.

Jonan: That's brilliant advice. I often talk with newcomers in the industry about the fact that they're seeing the final polished version when they're looking at some of the software that is produced in open source, for example, and they think, “Well, this is incredible, I could never write this.” I point out to them that they don't see the previous hundred iterations of that software—they're seeing the bit that was released.

Alex: Overnight success, for instance.

Jonan: Exactly. The overnight success. Alex, it has been an absolute pleasure talking to you. I really enjoyed our conversation, and I now have to go back through it and figure out all of these acronyms and technologies you described, to try to catch up with the Kubernetes ecosystem, because it is fascinating and it is growing so quickly. I'm so glad that you were able to join us today. Thank you for coming on the show.

Alex: You're welcome. And if you hire someone to do a transcript, I think they might struggle.

Jonan: I think that they might struggle, but we've got some good people and we will revise the transcript as necessary. But I hope to have you back again someday soon. In the meantime, good luck with OpenFaaS.

Alex: Yeah. Thank you. Appreciate it.

Jonan: Bye.

Jonan: Thank you for joining us for another exciting episode of Observy McObservface, the observability podcast with the absolutely rubbish name. You can find us on iTunes or Spotify or wherever fine podcasts are sold. You can also find all of the episodes and their accompanying transcripts and show notes on developer.newrelic.com, alongside all sorts of other interesting things that I’m sure you will enjoy. Our guest today, Alex Ellis, is on Twitter as @alexellisuk, and I am also on Twitter as @thejonanshow. Please look me up and let me know who you would like to see on this podcast, what sorts of things you would like to learn about, and I will do my very best to accommodate you. We are here to give the people what they want. For more about observability, read about observability for Kubernetes. Thank you so much. We will see you next time. Have a great day.

Listen to more Observy McObservface episodes. For related New Relic integrations, see our Infrastructure monitoring platform.