Kubernetes 101: How To Set Up “Vanilla” Kubernetes

Kubernetes is an open source platform that, through a central API server, allows controllers to watch and adjust what’s going on. The server interacts with all the nodes to do basic tasks like start containers and pass along specific configuration items such as the URI to the persistent storage that the container requires.

But Kubernetes can quickly get complicated. So, let’s look at Vanilla Kubernetes — the nickname for a K8s setup that’s as basic as it gets.

(See how Splunk supports Kubernetes visibility.)

What is vanilla Kubernetes?

Vanilla Kubernetes is a K8s environment that runs the most basic components required, but not much more than that. It’s great for beginners, and it’s also a great refresher when you’re troubleshooting or scaling these components.

Any core Kubernetes project contains only six individual running pieces:

  1. kube-apiserver 
  2. kube-scheduler
  3. kube-controller-manager
  4. cloud-controller-manager
  5. kubelet
  6. kube-proxy

Let’s look at each.


kube-apiserver

The kube-apiserver is the core: everything talks to this service in order to get anything done. The kube-apiserver is stateless by design, keeping all cluster state in etcd.
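Every client, from kubectl to the kubelet, reaches the kube-apiserver through a kubeconfig file. Here is a minimal sketch of one — the server address, certificate paths, and names below are illustrative placeholders, not values from any real cluster:

```yaml
# Minimal kubeconfig sketch -- all names, paths, and the
# server address are hypothetical examples.
apiVersion: v1
kind: Config
clusters:
- name: vanilla
  cluster:
    server: https://192.168.1.10:6443          # kube-apiserver endpoint
    certificate-authority: /etc/kubernetes/pki/ca.crt
users:
- name: admin
  user:
    client-certificate: /etc/kubernetes/pki/admin.crt
    client-key: /etc/kubernetes/pki/admin.key
contexts:
- name: admin@vanilla
  context:
    cluster: vanilla
    user: admin
current-context: admin@vanilla
```

Every component described below authenticates to the API server with credentials like these; none of them talk to each other directly.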


kube-scheduler

The kube-scheduler matches new pods to the nodes they’ll run on. This decision is influenced by various factors, including labels, resource requirements, affinity rules, and data location — if local volumes are used, for example.
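As an illustration, this hypothetical pod spec gives the scheduler two of those inputs — a resource request and a label-based node selector (the name, label, and image are made up for the example):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web            # hypothetical example name
spec:
  nodeSelector:
    disktype: ssd      # only nodes labeled disktype=ssd qualify
  containers:
  - name: web
    image: nginx:1.25
    resources:
      requests:
        cpu: "250m"    # the scheduler only places this pod on a node
        memory: 128Mi  # with at least this much unreserved capacity
```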


kube-controller-manager

The kube-controller-manager contains four separate controllers that are bundled together for easier deployment:

  • The node controller watches the health of the nodes in the cluster and reacts accordingly.
  • The replication controller compares the running instances of each pod against their desired state and requests starts/stops as required to meet the specification.
  • The service account and token controllers create default service accounts and API access tokens for new namespaces in the cluster.
  • The endpoint controller maintains the pod-to-service mappings (Endpoints objects), which the cluster’s DNS service relies on for cluster-wide discovery.

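The replication behavior is easiest to see in a Deployment spec. The `replicas` field below is the desired state the controller continuously reconciles against (the name and image are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web            # hypothetical example
spec:
  replicas: 3          # desired state: the controller starts/stops
                       # pods to keep exactly 3 running
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web       # must match the selector above
    spec:
      containers:
      - name: web
        image: nginx:1.25
```

If a node dies and takes a pod with it, the controller notices the count has dropped to 2 and asks the scheduler to place a replacement.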

cloud-controller-manager

The final piece of the master control plane is the cloud-controller-manager. It mirrors the structure of the kube-controller-manager, but knows how to delegate to cloud-specific sub-components to request services and nodes from the configured cloud.

Examples of this include:

  • Provisioning EBS storage on AWS
  • Standing up a cloud load balancer and passing its address back to the cluster when a Service of type LoadBalancer is created

kubelet process

The kubelet process is one of two components that go on every node. Kubelet is the supervisor on each node that interacts with the container runtime to manage the state of containers based on their specifications. A specification can be passed along from a controller or through local manifests, as is done on the master control plane nodes for pieces like the kube-apiserver.
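A local (static) manifest is just a pod spec dropped into the kubelet’s manifest directory, conventionally /etc/kubernetes/manifests; the kubelet runs it directly, with no controller involved. A simplified sketch — real control-plane manifests carry many more flags and volume mounts than shown here:

```yaml
# /etc/kubernetes/manifests/kube-apiserver.yaml (simplified sketch;
# the image tag and flags are illustrative, not a complete config)
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  hostNetwork: true    # control-plane pods share the node's network
  containers:
  - name: kube-apiserver
    image: registry.k8s.io/kube-apiserver:v1.29.0
    command:
    - kube-apiserver
    - --etcd-servers=https://127.0.0.1:2379
```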


kube-proxy

The other process that’s on every node, kube-proxy is responsible for ensuring that network traffic is both:

  • Routed properly to internal and external services as required.
  • Forwarded according to the Service definitions in the cluster. (Network policies, by contrast, are enforced by the cluster’s CNI networking plugin, not by kube-proxy.)
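kube-proxy’s forwarding rules come from Service objects like this hypothetical one, which maps a stable cluster IP and port to whatever pods currently match the selector:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web            # hypothetical example
spec:
  selector:
    app: web           # traffic goes to pods carrying this label
  ports:
  - port: 80           # stable Service port clients connect to
    targetPort: 8080   # container port on the backing pods
```

On each node, kube-proxy translates this into iptables or IPVS rules so connections to the Service’s cluster IP reach a healthy backing pod.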

Prerequisites for Kubernetes

The most common products that support a Kubernetes environment include:

  • Ubuntu for the Linux base
  • containerd (the runtime underlying Docker) for the container runtime
  • CoreDNS for service discovery (DNS)
  • A CNI plugin to handle the networking layer
  • etcd for the configuration store

This is the most common base configuration and is even used as part of the official Certified Kubernetes Administrator (CKA) exam.


Single node K8s deployment

Kubernetes has an official mini distribution available called minikube. It’s particularly useful when you need to:

  • Run a minimalistic version of Kubernetes due to limited resources
  • Test your application in a container
  • Test the deployment, pod or service specifications you’ve written

minikube can be started and stopped with a single command and contains the core features of Kubernetes in a developer-friendly option. This version is usually run on the developer’s actual workstation.

(There are other community-based single machine distributions — minishift (Red Hat) and microk8s (Canonical) — but those distributions are vendor-specific spins on Kubernetes, so they do drift away from what’s considered a Vanilla install.)

Get started with minikube

To get started using minikube, there are installation instructions on the Kubernetes.io site for macOS, Windows and Linux. These steps on a Windows desktop demonstrate the ease of getting started:

  1. Download and run the installer (linked above).
  2. Have Hyper-V or VirtualBox installed (to check for Hyper-V, run the “systeminfo” command)
  3. Start it up:
minikube start --vm-driver=virtualbox

One optional final step is to verify it’s running by listing all pods:

kubectl get po -A

Multi-node deployments

As most Kubernetes clusters running in the wild today are multi-node, there are benefits to knowing the minimum deployment required to set up such a cluster. There are multiple ways to accomplish this feat.

One way is to follow the 14 hands-on labs that are part of Kubernetes the Hard Way by Kelsey Hightower, who is a Staff Developer Advocate at Google and a long-time Kubernetes evangelist. It’s designed to run on Google Cloud but walks through all the steps — from nothing to a barebones working cluster and then tearing it down again. This is as close to a Vanilla Kubernetes deployment as you can get.

Another way is to follow the Kubernetes documentation for using kubeadm to check the prerequisites and deploy a cluster. The kubeadm method still uses a lot of the same kubectl commands found in Kubernetes the Hard Way, but its value comes from simplifying numerous routine tasks, like adding and removing nodes and creating authentication tokens.

The complete set of steps is:

  1. Prepare the servers running Linux. Windows is possible, but not common.
  2. Install etcd, containerd, and the Kubernetes command line tools.
  3. Generate the certificates for etcd, cluster config, and cluster authentication.
  4. Configure and run etcd. A single node works, but 3, 5 or 7 nodes (an odd number, for quorum) are the recommended best practice, and they are often co-located on the master nodes in most clusters.
  5. Configure and run the master control plane. It’s possible to have only one, but 3+ are required for high availability, and three is the most common initial configuration for clusters.
  6. Configure and run worker nodes. Two is the recommended minimum, but it can be any number, including zero if you’re brave enough to run workloads on the master nodes.
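With kubeadm, much of the above is driven from a single configuration file passed to `kubeadm init --config`. A minimal hypothetical example — the version, endpoint, and subnet below are placeholders you’d adjust for your own environment:

```yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.29.0              # illustrative version
controlPlaneEndpoint: "10.0.0.10:6443"  # load-balanced endpoint for an HA control plane
etcd:
  local:
    dataDir: /var/lib/etcd              # co-located etcd, per step 4 above
networking:
  podSubnet: "10.244.0.0/16"            # must match what your CNI plugin expects
```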

At this point you’ll have a Vanilla Kubernetes install!

If you want to make the cluster useful to developers, you’ll need to configure additional components, including routing and DNS services, which together allow for container-to-container communication and service discovery.

Next steps with Kubernetes

With the 100+ certified Kubernetes distributions and hosted offerings available on the market today, there’s no reason for the average organization to ever deploy Vanilla Kubernetes on servers beyond training purposes or for very specific use cases, like developing plugins.

For most purposes, minikube is adequate when just the basic building blocks of Kubernetes are needed. And, as demonstrated above, a working Kubernetes install is never actually pure: for needs like enterprise networking capabilities or monitoring and alerting, Kubernetes quickly moves away from being a Vanilla install.


This article was originally written by Vince Power. Vince is an Enterprise Architect with a focus on digital transformation built with cloud enabled technologies. He has extensive experience working with Agile development organizations delivering their applications and services using DevOps principles including security controls, identity management, and test automation. You can find @vincepower on Twitter. Vince is a regular contributor at Fixate IO.

This posting does not necessarily represent Splunk's position, strategies or opinion.

Posted by Stephen Watts

Stephen Watts works in growth marketing at Splunk. Stephen holds a degree in Philosophy from Auburn University and is an MSIS candidate at UC Denver. He contributes to a variety of publications including CIO.com, Search Engine Journal, ITSM.Tools, IT Chronicles, DZone, and CompTIA.