Since its introduction in 2014, Kubernetes has revolutionized the ways in which development and operations teams deploy, test, and scale container-based applications.
If you’re new to Kubernetes, it’s important to understand how the various components of a Kubernetes cluster relate so you can get a sense of its full potential.
But first, some refreshers: what are containers? And by extension, why is Kubernetes on the rise?
Getting started with Kubernetes
New to Kubernetes? You're stepping into a powerful set of orchestration and automation capabilities that change how containerized applications are managed. Whether you're a seasoned DevOps professional or a developer looking to scale your applications efficiently, Kubernetes offers a rich set of features to meet your needs. It can seem like a complex ecosystem at first, but we're here to help you navigate it. In the following sections, we'll break down what Kubernetes is at its core, as well as key architectural components like clusters, nodes, and pods, to give you a clear picture of how everything fits together.
What are containers?
Containers enable developers to package an application with all of its required parts and ship it as one standard, lightweight, secure unit. This gives DevOps teams confidence that the application they’re building and supporting will run properly in any environment—whether a virtual machine, bare metal, or the cloud. Containers essentially eliminate the “works on my machine” problem that plagues traditional deployments.
What is Kubernetes?
Kubernetes is an open-source container orchestration platform that automates the deployment, management, and scaling of containerized applications. Originally developed at Google, drawing on lessons from its internal Borg system, and backed by major players such as Google, AWS, Microsoft, IBM, Cisco, and Intel, Kubernetes is the flagship project of the Cloud Native Computing Foundation and is now the de facto standard for container orchestration.
Kubernetes simplifies the deployment and operation of containerized applications by introducing an abstraction layer on a group of hosts. DevOps teams can focus on building container-delivered applications while Kubernetes manages a whole range of other tasks.
How Kubernetes clusters, nodes, and pods work together
As with many new technologies, Kubernetes comes with its own vocabulary. In this article, we’ll focus on the highest-level constructs of Kubernetes: clusters, nodes, and pods. Understanding their relationship in supporting container-delivered applications will help clarify the value of Kubernetes to enterprises.
Kubernetes clusters vs nodes
In the Kubernetes ecosystem, the terms "clusters" and "nodes" refer to different levels of the organizational hierarchy, each with its own specific role and function.
When you deploy Kubernetes, you are managing a cluster. A cluster is made up of nodes that run containerized applications. Each cluster also has a control plane (historically called the master) that manages the nodes and pods (more on pods below) of the cluster. A node represents a single machine in a cluster, typically a physical or virtual machine located either on-premises or hosted by a cloud service provider.
In short, the cluster is the whole orchestration system, and nodes are the individual members of this system that actually run the tasks. The cluster makes the orchestration decisions, and the nodes execute those decisions by running containers.
By conceptualizing a machine as a “node,” we introduce a layer of abstraction. We no longer need to worry about the specific characteristics or location of an individual machine. Instead, we can think about each machine as CPU and RAM resources waiting to be utilized. This allows any machine to substitute any other machine in a cluster.
Each node hosts groups of one or more containers (which run your applications), and the control plane tells nodes when to create or destroy containers and how to re-route traffic as containers move.
The control plane (historically called the master) is the access point from which administrators and other users interact with the cluster to manage the scheduling and deployment of containers.
A cluster will always have at least one control plane node, but production clusters typically run several for high availability.
So, here’s how the relationship works:
- Nodes pool their individual resources together to form a powerful machine or cluster.
- When an application is deployed onto a cluster, Kubernetes automatically distributes workloads across individual nodes.
- If nodes are added or removed, the cluster will then redistribute work.
It’s worth noting, too, that which individual nodes happen to be running the code shouldn’t affect the application’s performance.
What are Kubernetes pods?
A pod is the basic unit of scheduling for applications running on your cluster. As discussed above, these applications run in containers, and each pod comprises one or more containers.
While pods are able to house multiple containers, one-container-per-pod is the most common model. In some situations, containers that are tightly coupled and need to share resources could sit in the same pod. Pods can quickly and easily communicate with one another as if they were running on the same machine. They do still, however, maintain a degree of isolation. Each pod is assigned a unique IP address within the cluster, allowing the application to use ports without conflict.
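As a sketch, a two-container pod might look like the manifest below. All names and images are illustrative, not from the original article. Because both containers share the pod’s network namespace, the helper container can reach the main application over localhost.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar       # hypothetical pod name
spec:
  containers:
  - name: app                  # the main application container
    image: my-app:1.0          # placeholder image
    ports:
    - containerPort: 8080
  - name: log-shipper          # tightly coupled helper container
    image: fluent/fluent-bit:2.2
    # Shares the pod's IP and port space, so it can reach the
    # app on localhost:8080 without any Service in between.
```

In practice, this sidecar pattern is the most common reason to co-locate containers in one pod: the helper’s lifecycle is tied to the application it supports.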
Pods are designed as relatively ephemeral, disposable entities. When a pod gets created, it is scheduled to run on a node. The pod remains on that node until the process is terminated, the pod object is deleted, the pod is evicted for lack of resources, or the node fails.
In Kubernetes, pods are the unit of replication. If an application becomes overly popular and a pod can no longer facilitate the load, Kubernetes can deploy replicas of the pod to the cluster. Even if the app isn’t under heavy load, it’s standard practice to create several copies of a pod in a production system to enable load balancing and mitigate the risk of failure.
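In practice you rarely create bare pods. A Deployment declares how many replicas you want, and Kubernetes keeps that many pods running, spread across the cluster’s nodes. Here’s a minimal sketch (names and images are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment     # hypothetical name
spec:
  replicas: 3              # Kubernetes keeps three pod copies running
  selector:
    matchLabels:
      app: web
  template:                # pod template stamped out for each replica
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25  # placeholder image
```

If a pod in this Deployment dies or its node fails, Kubernetes automatically schedules a replacement to restore the declared replica count.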
Kubernetes pods vs containers
In Kubernetes, the relationship between pods and containers is integral to understanding how applications are deployed and managed on the platform. A container is a standalone executable package that includes everything needed to run a piece of software, including the code, runtime, and system libraries. It serves as an isolated environment in which an application operates. While containers encapsulate a single application and its dependencies, a Kubernetes pod is the smallest deployable unit within the platform that can contain one or more containers. Pods provide the environment for containers to run in a coordinated way, sharing the same network IP, port space, and storage, which allows for easy communication and data sharing between containers in the same pod.
In essence, a pod acts as a wrapper around one or more containers, ensuring they are scheduled together onto the same node and providing features like shared volumes and network configurations. This abstraction allows you to manage, scale, and replicate groups of containers as a single entity, making it easier to handle complex applications with multiple interacting components. So, while containers are the atomic units that package your application and its dependencies, pods are the organizational units that bring these containers together in a Kubernetes environment.
Kubernetes pods vs nodes
Both pods and nodes are fundamental components of Kubernetes architecture, but they serve different roles and operate at different layers of the system. As mentioned, a node is an individual machine, either physical or virtual, that serves as the worker unit in a Kubernetes cluster. It's where the containers actually run. A pod is the smallest deployable unit in Kubernetes and serves as a wrapper for one or more containers. Unlike nodes, pods don't run on their own; they run on nodes. A pod encapsulates a single instance of an application or service and may consist of a single container or multiple, tightly coupled containers that share storage and network resources. While nodes can exist without pods, pods cannot run without being scheduled onto nodes.
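Scheduling pods onto nodes is driven largely by resources: a pod’s spec can declare the CPU and memory it needs, and the scheduler places it on a node with enough free capacity. A minimal sketch, with illustrative names and values:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-app          # hypothetical name
spec:
  containers:
  - name: web
    image: nginx:1.25    # placeholder image
    resources:
      requests:          # what the scheduler reserves on a node
        cpu: "250m"      # a quarter of one CPU core
        memory: "128Mi"
      limits:            # hard ceiling enforced at runtime
        cpu: "500m"
        memory: "256Mi"
```

A pod whose requests can’t be satisfied by any node stays Pending until capacity frees up, which is why sizing requests and limits well matters for cluster capacity.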
Putting it all together
By understanding the various building blocks of Kubernetes—containers, pods, nodes, and clusters—you unlock the full extent of its capabilities. From automating deployment to efficiently managing resources and scaling applications, Kubernetes offers an arsenal of tools tailored to meet the nuanced needs of today's DevOps teams. As you venture deeper into the world of Kubernetes, you'll find that its complexities are stepping stones to operational excellence and scalability. And remember, New Relic is here to support your Kubernetes monitoring needs, empowering you with actionable data as you navigate this robust platform. So go forth and orchestrate!
Ready to get started?
As mentioned above, this is a simplified overview of the core components of Kubernetes—a sophisticated, powerful, potentially game-changing platform that could turn how you develop, operate, and manage your applications on its head.
Adopting containers and Kubernetes for container orchestration requires DevOps teams to rethink and adapt their monitoring strategies to account for new layers of infrastructure and application abstraction that are introduced in a distributed microservices environment.
Learn some best practices for doing that in our guide, “A Complete Introduction to Monitoring Kubernetes with New Relic.”
Learn how to manage cluster capacity with Kubernetes requests and limits!
The views expressed on this blog are those of the author and do not necessarily reflect the views of New Relic. Any solutions offered by the author are environment-specific and not part of the commercial solutions or support offered by New Relic. Please join us exclusively at the Explorers Hub (discuss.newrelic.com) for questions and support related to this blog post. This blog may contain links to content on third-party sites. By providing such links, New Relic does not adopt, guarantee, approve, or endorse the information, views, or products available on such sites.