Container orchestration is the process of managing containers using automation. It allows organizations to automatically deploy, manage, scale and network containers and hosts, freeing engineers from having to complete these processes manually.
As software development has evolved away from monolithic applications, containers have become the preferred choice for developing new applications and migrating old ones. Containers are popular because they are quick and easy to create and deploy, regardless of the target environment. A single, small application can be composed of a dozen containers, and an enterprise may deploy thousands of containers across its apps and services.
The more containers an organization has, the more time and resources it must spend managing them. You could conceivably upgrade 25 containers manually, but it would take a considerable amount of time. Container orchestration can perform this and other critical life cycle management tasks in a fraction of the time and with little human intervention. Container orchestration is often a critical part of an organization’s approach to SOAR (security orchestration, automation and response).
In this blog post, we’ll explain the concept of container orchestration and how it works, look at common orchestration use cases, identify the most popular container orchestration platforms and tools, and offer guidance on how to get started.
What is container orchestration?
Orchestration describes the process of managing multiple containers that work together as part of an application infrastructure. Just as a musical conductor coordinates the instruments of an orchestra so they perform a composition harmoniously, a container orchestrator coordinates the configuration, deployment and scaling of container-based applications so that they operate correctly and run smoothly.
How does container orchestration work?
Container orchestration is fundamentally about managing the container life cycle and the containerization of your environment. In general, the container life cycle follows the build-deploy-run phases of traditional software development, though the specific steps may vary slightly depending on the container orchestration tool being used. A typical life cycle might look like this:
- Build: In this first step, developers decide which capabilities the application needs and how to combine them to build it.
- Acquire: In this next step, containerized applications are usually acquired from public or private container image repositories. Developers start with a base image from one application and extend its functionality by placing a layer from another application over it. While using existing code from multiple sources in this way is more efficient and productive than creating everything from scratch, it also introduces the challenge of tracking the interdependencies among the various images.
- Deploy: This includes the process of placing and integrating the tested application into production.
- Maintain: In this step, developers continuously monitor the application to ensure it performs correctly. If it deviates from its desired state or fails, they try to understand the problem and fix it.
Container orchestration allows organizations to streamline the life cycle process and manage it at scale. Developers can also automate many of the tasks required to deploy and scale containerized applications through the use of container orchestration tools.
To start the orchestration process, the development team writes a configuration file. The file describes the app’s configuration and tells the orchestration tool where to find or build the container image, how to mount storage volumes, where to store container logs and other important information. The configuration file should be version-controlled so developers can deploy the same application across different development and testing environments before pushing it to production.
From there, the configuration files are handed over to the container orchestration tool, which schedules the deployment. When it’s time to deploy a container into the cluster, the tool chooses a suitable host (or collection of hosts) in which to place the container based on CPU, available memory and other resource criteria defined in its configuration file.
Once the container is running, the container orchestrator monitors and manages the container life cycle. If something doesn’t match the container’s configuration or leads to a failure, the tool will automatically try to fix it and recover the container.
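As a concrete sketch of the configuration file described above, here is what a minimal Kubernetes Deployment manifest might look like. The application name, image path and resource values are illustrative assumptions, not from any real project:

```yaml
# Illustrative Kubernetes Deployment manifest; all names and values are hypothetical
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 3                  # desired number of identical pods
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
        - name: web
          image: registry.example.com/web-app:1.4.2  # where to find the container image
          ports:
            - containerPort: 8080
          resources:
            requests:          # scheduling criteria: the orchestrator places the pod
              cpu: "250m"      # on a host with enough free CPU and memory
              memory: "128Mi"
          volumeMounts:
            - name: app-logs
              mountPath: /var/log/app   # where container logs are written
      volumes:
        - name: app-logs
          emptyDir: {}
```

Checked into version control, the same file can be applied to development, test and production clusters with `kubectl apply -f deployment.yaml`.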
What Container Orchestration Is Used For
Container orchestration is used to automate and manage tasks across the container life cycle. This includes:
- Configuration and scheduling
- Provisioning and deployment
- Health monitoring
- Resource allocation
- Redundancy and availability
- Updates and upgrades
- Scaling or removing containers to balance workloads across the infrastructure
- Moving containers between hosts
- Load balancing and traffic routing
- Securing container interactions
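To make one of these tasks, health monitoring, concrete: a hedged sketch of how an orchestrator like Kubernetes can be told to probe a container and restart it when checks fail. The image name, health endpoint and timings are assumptions for illustration:

```yaml
# Hypothetical liveness probe: the orchestrator restarts the container if checks fail
apiVersion: v1
kind: Pod
metadata:
  name: api-server
spec:
  containers:
    - name: api
      image: registry.example.com/api:2.0  # illustrative image name
      livenessProbe:
        httpGet:
          path: /healthz           # assumed health endpoint exposed by the app
          port: 8080
        initialDelaySeconds: 10    # give the app time to start before probing
        periodSeconds: 15          # probe every 15 seconds
```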
One big advantage of container orchestration is that you can implement it in any environment where you can run containers, from on-premises servers to public, private or multi-cloud environments such as AWS, Microsoft Azure or Google Cloud Platform.
The Importance of Container Orchestration
Container orchestration is important because it streamlines the complexity of managing containers running in production. A microservice-based application can require thousands of containers running across public clouds and on-premises servers. Once that’s extended across all of an enterprise’s apps and services, managing the entire system manually becomes a herculean effort that is nearly impossible without container orchestration processes.
Container orchestration makes this complexity much more manageable. It allows you to deploy, scale and secure containers with minimal hands-on intervention, increasing speed, agility and efficiency. For that reason, it’s a great fit for DevOps teams and can be easily integrated into CI/CD workflows.
Container Orchestration is Critical at Scale
Container orchestration is required to effectively manage the complexity of the container life cycle, usually for a significant number of containers. A single application deployed across a half-dozen containers can be run and managed without much effort or difficulty. Enterprise applications, however, may run across more than a thousand containers, making management exponentially more complicated. Few enterprises, if any, have the time and resources to attempt that kind of colossal undertaking manually.
Container orchestration is a necessity for managing containers in large, dynamic environments. The container life cycle encompasses a multitude of tasks, including provisioning and deployment, allocating resources among containers, scaling and shifting containers between hosts, load balancing, and monitoring container health.
Container orchestration automates these tasks, ensuring they’re done correctly and quickly and allowing development teams to use their resources more efficiently.
Common Benefits of Container Orchestration
Container orchestration offers developers and administrators many benefits. These include:
- Increased productivity: Container orchestration tools remove the burden of individually installing and managing each container in your system, in turn reducing errors and freeing development teams to focus on application improvement.
- Faster deployments: Container orchestration tools make deploying containers more user-friendly. New containerized applications can quickly be created as needed to address increasing traffic.
- Reduced costs: One of the biggest advantages of containers is that they have lower overhead and use fewer resources than traditional virtual machines.
- Stronger security: Container orchestration tools help users share resources more securely. Containers also isolate application processes, improving overall application security. (Read this article to learn more about container security.)
- Easier scalability: Container orchestration tools enable users to scale applications with a single command.
- Faster error recovery: Container orchestration platforms can detect issues like infrastructure failures and automatically work around them, helping maintain high availability and increase the uptime of your application.
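The “single command” scalability above corresponds in Kubernetes to something like `kubectl scale deployment/web-app --replicas=10`. Scaling can even be automated entirely; as a hedged sketch, a HorizontalPodAutoscaler can grow or shrink a deployment based on load (the deployment name, replica bounds and CPU threshold here are illustrative assumptions):

```yaml
# Illustrative autoscaling policy: Kubernetes adjusts the replica count
# to keep average CPU utilization near the target
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-app       # hypothetical deployment to scale
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out when average CPU exceeds 70%
```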
Container orchestration offers numerous benefits that will help you meet business goals and increase profitability.
Container Orchestration Tools & Platforms
On its own, container orchestration is just an idea. You need a container orchestration tool to put that idea into practice. These tools provide the framework for creating, deploying and scaling containers. Here are some of the more popular options.
- Kubernetes: Originally developed by Google and now maintained by the Cloud Native Computing Foundation, the open source platform Kubernetes was at one point the fastest-growing project in the history of open source software and has become the de facto standard. Kubernetes automates a host of container management tasks including deployment, rollouts, storage provisioning, load balancing and scaling, service discovery, and “self-healing,” the ability to restart, replace or remove a failed container. With broad functionality and an expanding ecosystem of open source supporting tools, Kubernetes is widely supported by leading cloud providers, many of whom now offer fully managed Kubernetes services.
- Mesos Marathon: Marathon is a container orchestration framework for Apache Mesos, an open source cluster manager developed at the University of California, Berkeley. It lets you scale container infrastructure by automating the bulk of management and monitoring tasks, much like Kubernetes and Docker Swarm, although it has been around longer than both.
- Amazon ECS: Amazon Elastic Container Service (ECS) is a powerful container orchestration tool that lets you deploy and manage Docker containers and run containerized applications on Amazon Web Services (AWS). It works seamlessly with AWS apps and services such as AWS IAM, Amazon CloudWatch, AWS CloudFormation, Amazon Virtual Private Cloud, Amazon Elastic Container Registry, AWS CloudTrail and AWS CodeStar, making it an attractive choice for users already on AWS. However, containers orchestrated by Amazon ECS can run only on AWS infrastructure such as Amazon EC2, as ECS currently offers no support for third-party infrastructure.
- Nomad: Unlike most container orchestrators that were specifically designed for Docker-containerized applications, HashiCorp’s free and open source cluster management and scheduling tool supports other standalone, virtualized or containerized applications on all major operating systems across all on-premises infrastructure, as well as in the cloud. That flexibility lets teams support just about any type and level of workload.
- Azure Kubernetes Service: Azure Kubernetes Service (AKS) is Microsoft’s container orchestration solution for Azure, its cloud computing service. AKS succeeded the earlier Azure Container Service, whose architecture was based on Apache Mesos and which let users choose among three container orchestrators: Kubernetes, Docker Swarm and Mesosphere DC/OS (Data Center Operating System). AKS now focuses on fully managed Kubernetes, simplifying cluster deployment, configuration and container management for teams already invested in Azure.
- Docker Swarm: Docker Swarm is another popular open source container orchestration platform. Like Kubernetes, it automates the deployment of containerized applications but was designed specifically to work with Docker Engine, a technology for building and containerizing applications. This and its ability to easily integrate with other Docker tools make it a popular choice for teams already working in Docker environments, although it’s more limited in functionality, customization and extensions than Kubernetes.
The Container Orchestration War
The “container orchestration war” refers to a period of heated competition between three container orchestration tools — Kubernetes, Docker Swarm and Apache Mesos. While each platform had specific strengths, the complexity of switching among cloud environments required a standardized solution. The “war” was a contest to determine which platform would establish itself as the industry standard for managing containers.
In 2015, when both Docker Swarm and Kubernetes were released, Apache Mesos was the most widely adopted container management tool, with Twitter, Verizon and Yelp among its most high-profile users. Although Apache Mesos and its component frameworks could perform container orchestration, its broader range of capabilities made it complex to implement for developers who just wanted to manage their containers. Kubernetes and Docker Swarm, on the other hand, took a more focused and lightweight approach.
Eventually, Kubernetes emerged as the winner, thanks largely to its robust open source community. According to the CNCF’s 2020 survey, “91% of respondents report using Kubernetes, 83% of them in production. This continues a steady increase from 78% last year and 58% in 2018.” Today, it is clearly the dominant container orchestration platform, with each of the major cloud providers offering their own managed Kubernetes service. (Explore how Kubernetes won the container orchestration war in this article from Hacker Noon.)
The Role of Kubernetes Container Orchestration
Kubernetes container orchestration refers to the use of the Kubernetes open source platform to manage the container life cycle. Kubernetes does not create containers, but it can dramatically simplify container management by automating processes and minimizing downtime so development teams can focus on improving and adding new features to their applications. To better understand how, let’s look at Kubernetes’s basic components and how they work together.
The Kubernetes engine, its core architecture, is structured hierarchically and uses its own terminology. While a complete breakdown of the platform’s vocabulary is beyond the scope of this article, you can get an understanding of how Kubernetes orchestrates containers by looking at how it organizes a deployment. Kubernetes building blocks include:
- Pods: The smallest deployable Kubernetes unit, a pod is a grouping of one or more containers packaged together and deployed to a node. All containers within a pod share a local network and other resources. They can talk to each other as if they were on the same machine but remain isolated from other pods. At the same time, the pod abstracts network and storage away from its underlying containers.
- Nodes: Any single physical or virtual server running one or more pods is a node. There are two types of nodes: worker nodes and primary nodes. Originally called “minions,” worker nodes receive and perform tasks assigned by the primary node and contain all the services required to manage and assign resources to containers. If a worker node goes down, the primary node can intervene by rescheduling its pods onto functioning nodes to minimize disruption.
- Clusters: A primary node and several worker nodes together form a cluster. Clusters consolidate all of these machines into a single, powerful unit. Containerized applications are deployed to a cluster, and the cluster’s primary node distributes the workload to the various worker nodes, redistributing the work as nodes are added or removed. A single Kubernetes cluster can scale to 5,000 nodes and 150,000 pods. Using multiple clusters lets you logically separate parts of your infrastructure and application from each other, making the overall system easier to manage and reason about.
- Kubernetes API: The Kubernetes API stores information about the state of the cluster and controls how users interact with the platform. A user enters information about the desired state of a cluster into the API, which sends that information to the primary node. Based on that information, the primary node assigns tasks to the worker nodes, moving the application from its current state to the desired state.
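To illustrate the pod concept above, here is a minimal sketch of a two-container pod. The image names are hypothetical; the point is that both containers share the pod’s network, so the sidecar can reach the main application over localhost:

```yaml
# Illustrative two-container pod; image names are hypothetical
apiVersion: v1
kind: Pod
metadata:
  name: web-with-logger
spec:
  containers:
    - name: web
      image: registry.example.com/web:1.0      # main application container
      ports:
        - containerPort: 8080
    - name: log-forwarder
      image: registry.example.com/logger:1.0   # sidecar sharing the pod's network;
                                               # it can reach the app at localhost:8080
```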
Kubernetes can be used for on-premises servers or in the cloud, including hybrid cloud or multi-cloud environments, and several cloud providers and third parties offer managed Kubernetes services to help flatten the learning curve. However, it may be quicker and more cost-effective to start with Kubernetes in an isolated development/test environment.
When do you need container orchestration?
While it’s simple to create and deploy a single container, assembling multiple containers into a large application like a database or web app is a much more complicated process. Container deployment — connecting, managing and scaling hundreds or thousands of containers per application into a functioning unit — simply isn’t feasible without automation.
In fact, complexity should be the primary rule of thumb for determining when you need a container orchestration tool. Technically, if your application uses more than a couple of containers, it’s a candidate for orchestration.
Another factor is the need for scaling. Container orchestration tools like Kubernetes support declarative configuration, so you can easily spin up new containers and balance loads simply by describing your desired state for the system, making container orchestration a must when you have to deploy more application instances within a matter of seconds.
Finally, container orchestration is worth considering if you're using CI/CD in your software development. It can maximize your CI/CD efforts by shortening release cycles, preventing app outages by reducing dependency errors, and enabling more efficient server utilization. The industry standard for container orchestration is Kubernetes — especially recommended if it’s your first foray into orchestration. The sections below will tell you how to get started.
How do you get started with Kubernetes?
The easiest way to become familiar with Kubernetes concepts and functionality is to just start running it. Fortunately, there are a few ways to jump in:
- Kubernetes distributions: If you want to run Kubernetes on your own hardware or virtual machines, the easiest way is with a packaged Kubernetes distribution. Red Hat OpenShift, Canonical Kubernetes and Mesosphere Kubernetes Service are just a few of the many options.
- Managed Kubernetes services: Kubernetes is a standard offering from many cloud providers including AWS, Google Cloud Platform, and Microsoft Azure. Each runs Kubernetes slightly differently, but all simplify deploying Kubernetes clusters in their environments.
- Minikube: Minikube is a simple tool created by the Kubernetes development team that runs a single-node Kubernetes cluster deployed in your choice of virtualization host. It’s an easy way to run Kubernetes locally on your Windows, Mac or Linux computer.
Once you have Kubernetes running, you can use one of the widely available containerized app demos to familiarize yourself with how Kubernetes deploys and runs applications.
The Bottom Line: Container orchestration is critical for building better apps
As software development continues to embrace the many benefits of containerized applications, container orchestration increasingly becomes a necessity. Container orchestration dramatically reduces the complexity and cost of deploying, managing and scaling apps so your development team can devote more time to creating applications that deliver value to your customers and your business.
This posting does not necessarily represent Splunk's position, strategies or opinion.