This e-book will show you seven things to consider to ensure your containers are production-ready.
Published Date: June 1, 2021
Kubernetes, whose name comes from the Greek word for "helmsman," is open source software that performs container orchestration. Kubernetes (frequently shortened to "K8s") can deploy, manage and scale containerized applications, such as those packaged with Docker. Kubernetes users define the container architecture they want, and the software automatically schedules containers to run within those parameters based on the compute resources available, even when the containers span multiple applications and hosts.
The sheer level of automation Kubernetes provides sets it apart in the container field. It has allowed teams to revolutionize their architectures as they move toward cloud-native applications, and has become the industry standard for container orchestration. In fact, according to a 2019 report by the Cloud Native Computing Foundation (CNCF), as the use of containers has risen rapidly, so has the adoption of Kubernetes; 78% of respondents to the survey use it in production, and the number of certified Kubernetes service providers that assist enterprises in getting started with Kubernetes has risen by nearly 95% since June 2019.
Kubernetes is now hosted by the Cloud Native Computing Foundation (CNCF) as an open source project, and it is especially useful for teams transitioning to cloud-native architecture: developers can build an application and bundle it within a container, then move it to the cloud without worrying about how it will behave in another environment.
But what do you need to know about Kubernetes before choosing to adopt it? And how does Kubernetes fit in with Docker? In the following sections, we’ll take a look at how Kubernetes works and how your organization might benefit from using it.
Kubernetes offers many features for its users, including:
- Automatic bin packing
- IPv4/IPv6 dual-stack
- Batch execution
- Load balancer
- Scheduler
- Service discovery
Kubernetes also performs a number of other automated functions. It checks your applications' health (and can even roll back a change if something goes wrong during a deployment), mounts your preferred storage system, scales your applications and performs self-healing: based on your specifications, it automatically restarts, replaces or reschedules failed containers and kills unresponsive ones.
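The health checks mentioned above are configured per container as probes. The following is a minimal, illustrative pod spec (the name and image are placeholders, not from this e-book) showing a liveness probe, which tells Kubernetes when to restart an unhealthy container, and a readiness probe, which gates traffic until the container can serve it:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: probe-demo          # illustrative name
spec:
  containers:
    - name: web
      image: nginx:1.21
      livenessProbe:        # restart the container if this check fails
        httpGet:
          path: /
          port: 80
        initialDelaySeconds: 5
        periodSeconds: 10
      readinessProbe:       # only route traffic once this check passes
        httpGet:
          path: /
          port: 80
        periodSeconds: 5
```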
Containers are software units into which you can pack an application, along with everything required to run that application, to make it more portable. To better understand how containers function, imagine an empty box. In this box (or container), you can put an application’s code, its system tools, configuration files and any other dependencies required for it to work, and then later unpack it (or, rather, deploy it) elsewhere — on a local machine, a public cloud or a private data center.
Containers are often compared to virtual machines (VMs) because they are similarly able to isolate parts of the application and abstract them from its infrastructure. Unlike VMs, however, containers are much smaller and take up far fewer resources. Because they’re so portable and lightweight, and because developers can spin containers up and down as needed, containers allow teams to implement a microservices-style approach by which they can make changes and redeploy a single service rapidly as opposed to re-launching the entire application — which slows down code releases, creates more productivity headaches and can potentially introduce new errors into the system.
Containerized environments offer a number of benefits, including the following:
- Consistency: All the application’s dependencies are in the container regardless of its location, so it’s easier for developers to focus on functionality rather than worrying about troubleshooting in a new environment.
- Flexibility: Containers are very portable, able to run on Linux, macOS or Windows, on cloud services, and on both bare-metal and virtual servers. This provides flexibility not only within your own environment but also across external environments, letting you migrate between vendors with ease and avoid vendor lock-in.
- Reduced downtime: Applications that are broken up into containers enable organizations to place them on different physical and virtual machines, in the cloud or on-premises, which in turn increases system fault tolerance.
- Scalability: Unlike VMs, which can be multiple gigabytes, containers usually stay in the megabyte range. Therefore, you can run a lot of containers on one operating system and scale much more efficiently than with VMs.
- Speed: Containers are lightweight and take less than a second to spin up. Because they use fewer server resources, they work fast. Developers can create them quickly, too, resulting in increased productivity and faster time to market.

Like other container orchestration tools, Kubernetes helps manage the networking, scaling, scheduling and deployment of containers. It’s especially helpful in a scaled environment that needs to run thousands — or hundreds of thousands, or even billions — of containers. Teams that have integrated DevOps practices like continuous integration/continuous delivery (CI/CD) benefit because Kubernetes provides more agility in application development, giving developers the ability to run the same application in diverse environments and more easily implement microservices. These tools are also often what’s known as “declarative” — meaning you declare the parameters of your system’s behavior, and the tool makes it happen.
As a container orchestration tool, Kubernetes can be used for automating and managing tasks such as:
- Container deployment
- Container availability
- Resource allocation
- Health monitoring
- Load balancing
- Securing container interaction
- Adjustment of container size or location based on the host’s resources or health
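Several of these tasks are expressed declaratively in manifests. As an illustrative sketch (the names are placeholders), resource allocation is driven by per-container requests and limits, which the scheduler uses to place pods on nodes with sufficient capacity, and basic load balancing by a Service that spreads traffic across matching pods:

```yaml
# Resource allocation: the scheduler places this pod on a node
# with at least the requested CPU and memory available.
apiVersion: v1
kind: Pod
metadata:
  name: resource-demo
  labels:
    app: web
spec:
  containers:
    - name: web
      image: nginx:1.21
      resources:
        requests:
          cpu: "250m"
          memory: "128Mi"
        limits:
          cpu: "500m"
          memory: "256Mi"
---
# Load balancing: a Service spreads traffic across pods
# whose labels match its selector.
apiVersion: v1
kind: Service
metadata:
  name: web-svc
spec:
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 80
```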
The advantages of using Kubernetes are similar to those of using containers. Kubernetes is portable, so you can use it flexibly in hybrid, cloud, on-premises or multicloud ecosystems. If a container fails or nodes die, they can be automatically replaced or rescheduled due to the “self-healing” nature of Kubernetes. And perhaps most importantly, Kubernetes is scalable, able to run billions of containers (or far fewer), all based on your team’s needs. Kubernetes has an enthusiastic contributor community and there are plenty of Kubernetes-supported tools across the industry to help maximize its use.
Using Kubernetes may have some disadvantages. Although it allows teams to manage incredibly complex environments, it also introduces its own complexity for users unfamiliar with its development workflow. Containerized environments, including microservices, can also be challenging to monitor, secure and troubleshoot because they have many components.
Kubernetes can be used by a wide range of industries — any organization that uses containers in its computing environment, be it e-commerce, finance, healthcare or technology.
Kubernetes can be particularly useful for:
- Transitioning to a cloud platform, such as Amazon Web Services (AWS), Google Cloud or Microsoft Azure
- Scaling your platform
- Implementing machine learning or deploying IoT devices
- Streamlining microservices-based application management
For example, Ancestry.com is the website for the world’s largest consumer genomics DNA network. With billions of historical records to host and millions of paying subscribers to serve, the website’s increasingly complex architecture was becoming more cumbersome and less agile. To address these challenges, the development team began moving to a cloud-native infrastructure, becoming early adopters of Kubernetes for container orchestration. The choice to migrate to a cloud provider and a containerized environment orchestrated by Kubernetes led to productivity gains, in some cases going from code deployments that took 50 minutes to deployments that took under a minute. Such time savings were a big win for the development teams.
A Kubernetes pod is the most basic unit for scheduling containers. It comprises one container or multiple containers wrapped together with the ability to share resources (including network, IP address, hostname, etc.) and communicate with each other, all deployed to a node as a single unit. Many containers can live in a pod, and they always scale together, but to optimize efficiency, avoid putting more containers than necessary into a pod. In addition to application containers, a pod can hold ephemeral containers (temporary containers for inspecting an application; some clusters may not offer this) and init containers, which run to completion before the application container begins to run. Pods themselves are considered ephemeral, too — they’re not meant to run forever, and once you delete a pod (or once it fails), you can’t bring it back. (Though due to the self-healing nature of Kubernetes, a pod from a failed node may be replaced on a different node.) You define your pod configuration and the specifications you’d like it to run on in a YAML or JSON file.
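As a minimal illustration of such a file (the name, labels and image are placeholders), a single-container pod defined in YAML might look like this:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod      # illustrative name
  labels:
    app: hello
spec:
  containers:
    - name: hello
      image: nginx:1.21
      ports:
        - containerPort: 80
```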
There are a few other architectural components of Kubernetes that can be helpful to know about:
- Nodes can be physical or virtual compute machines and their job is to run the pods with all the necessary elements. If a node dies during its run, the cluster adjusts so that the containers still meet the specifications you’ve set.
- A cluster is a group of nodes; these are managed by the control plane.
- A kubelet is an agent that runs on each node; it starts, stops and otherwise manages the pod's containers according to the specifications you set in the YAML or JSON configuration file.

Kubernetes pods can be used in a number of ways, but there are two main ones:
- Single-container pods: A pod with only one container in it is the most common use case for Kubernetes. In the box analogy, a single-container pod is like gift wrap around one box: wrapping just one is the easiest and most efficient approach.
- Multi-container pods: You can put more than one container in a pod when the containers need to communicate with each other and share resources. Creating pods with multiple containers is a slightly more advanced use case. For example, you might need to do this if you're running a helper application, such as a data pusher or proxy, alongside your primary application. The pod wraps all of these containers and other relevant accompanying resources together. Gift wrapping multiple boxes as one is harder than wrapping a single box; the same is true in Kubernetes.
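A multi-container pod along these lines could be sketched as follows, with a hypothetical log-pushing helper (all names and images are illustrative) sharing a scratch volume with the main application:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar   # illustrative name
spec:
  volumes:
    - name: shared-logs
      emptyDir: {}          # scratch volume shared by both containers
  containers:
    - name: app
      image: nginx:1.21
      volumeMounts:
        - name: shared-logs
          mountPath: /var/log/nginx
    - name: log-pusher      # hypothetical helper that reads the app's logs
      image: busybox
      command: ["sh", "-c", "tail -F /logs/access.log"]
      volumeMounts:
        - name: shared-logs
          mountPath: /logs
```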
In general, you don’t need to create pods manually. Instead, you’ll usually rely on workload resources, which manage the pod life cycles, including creating a replacement pod if a node fails.
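A Deployment is one such workload resource. This illustrative manifest (names are placeholders) asks Kubernetes to keep three replicas of a pod running, replacing any that fail:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment
spec:
  replicas: 3              # desired pod count; failed pods are replaced
  selector:
    matchLabels:
      app: web
  template:                # pod template the Deployment creates pods from
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.21
          ports:
            - containerPort: 80
```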
Also, if your container runtime allows it, you can grant your containers administrative capabilities for an operating system by enabling privileged mode.
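As a sketch of what enabling privileged mode looks like in a pod spec (the name and image are illustrative, and support depends on your container runtime):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: privileged-demo     # illustrative name
spec:
  containers:
    - name: admin-tool
      image: busybox
      command: ["sleep", "3600"]
      securityContext:
        privileged: true    # broad host access; use sparingly
```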
The Docker versus Kubernetes debate is less about choosing between the two and more about finding ways to use them in tandem. The two platforms serve different functions: Docker is an open source containerization platform that creates and deploys containers, while Kubernetes is a container orchestration platform. Launched in 2013, Docker popularized the container image format that has become the de facto standard, and its name has essentially become synonymous with containers themselves. It's compatible with both Linux and Windows, and can run on-premises and in the cloud.
Docker and Kubernetes work well together — Docker creates and runs the containers, while Kubernetes is the controller-manager that schedules, scales and moves them. In collaboration, you can easily use Docker to create and run your containers, as well as store container images, then use Kubernetes to orchestrate those containers (and their resources) from one Kubernetes control plane. Using Docker and Kubernetes in tandem streamlines the experiences and makes it easier for developers to create scalable applications, as well as allows teams to build cloud-native architectures or microservices more efficiently.
Kubernetes offers a web-based user interface called Dashboard, which can perform the following functions:
- Troubleshoot containerized applications
- Manage cluster resources
- Modify Kubernetes-specific resources (e.g., Jobs or DaemonSets)
- Deploy containerized applications to a cluster with a wizard
- See an overview of a cluster’s applications
You can change the views on Dashboard to gain insight into many aspects of the system, including:
- All logs from a single pod's container
- The volume of resources applications are using for data storage
- All applications running in the selected namespace, grouped by workload kind
- An admin overview with detailed options to examine nodes, namespaces and persistent volumes
Monitoring application performance once it's deployed in a cluster is important. Kubernetes doesn't ship with a single built-in monitoring solution, but it does expose data about a cluster's characteristics that you can use to determine resource usage and application performance. Two metrics pipelines are available: the resource metrics pipeline, which reports statistics for individual containers but shares only limited information about cluster components, and the full metrics pipeline, which, as its name suggests, provides access to a richer set of metrics.
You may still want a more in-depth monitoring solution to gain visibility into your Kubernetes setup. Luckily, a number of third-party companies have developed tools for Kubernetes monitoring; your organization’s budget and needs will help determine which solution to choose.
Once you’ve decided to use containers in your environment, you can get started with your Kubernetes deployment. There are a number of factors to consider when selecting an installation type, including your available resources, various security needs and the level of maintenance you’re comfortable with. Fortunately for first-time Kubernetes learners, there’s thorough documentation available on the Kubernetes website, and the Kubernetes community — comprising users and contributors — is an invaluable resource for discovering more about the platform. Furthermore, if you plan to use Kubernetes in production, you can manage it yourself or get assistance from a tutorial or one of the many certified Kubernetes providers.
Though organizations were already moving toward the cloud at the beginning of 2020, the COVID-19 pandemic accelerated those transitions exponentially. And as cloud-native computing becomes the norm, the demand for supporting infrastructure — including containers — will grow, too. Forrester predicts that 30% of developers will be using containers by the end of 2021. Additionally, microservices, which allow teams to be more agile, are also on the rise. As such, current investment in Kubernetes is likely future-proof, especially considering how many out-of-the-box solutions are available for Kubernetes from major cloud vendors. And as more practitioners become comfortable with Kubernetes, innovation will inevitably follow.
If the recent past has taught us anything, it’s that agility is key. The nimble nature of DevOps culture and the cloud-driven reality that organizations are facing as industries move toward containers, serverless infrastructure and microservices call for an even greater need to provide reliable (and scalable) applications to their customers. Though implementing Kubernetes at first may present a learning curve, its versatility and potential to increase efficiency and offer an agile, competitive advantage are undeniable. As your team considers its architecture — and its future — it’s important to keep in mind how the tools you use are empowering your developers, and whether a container orchestration platform like Kubernetes might be right for you.
