Years ago, most software applications were big monoliths, running either as a single process or as a small number of processes spread across a handful of servers. These legacy systems are still widespread today. They have slow release cycles and are updated relatively infrequently. At the end of every release cycle, developers package up the whole system and hand it over to the operations team, who then deploy and monitor it. In case of a hardware failure, the operations team manually migrates the system to the remaining healthy servers.

Today, these big monolithic legacy applications are slowly being broken down into smaller, independently running components called microservices. Because microservices are decoupled from each other, they can be developed, deployed, updated, and scaled individually. This enables developers to change components quickly and as often as necessary to keep up with today's rapidly changing business requirements. However, as applications grew more and more complex and development teams more and more dispersed, it became increasingly difficult to deploy and refine these apps by hand. Manual updates had to give way to automation, and this is where Kubernetes and Docker step in. We will cover Docker in a separate article; this article focuses on Kubernetes.

Kubernetes is a Greek word meaning 'helmsman', and the name is apt. Kubernetes enables developers to deploy their applications themselves, as often as they want, without requiring assistance from anyone else. Besides freeing developers to focus on nothing but their code, Kubernetes also helps the operations team by automatically monitoring apps and rescheduling them in the event of a hardware failure. At the ground level, Kubernetes simplifies the task of building, deploying, and maintaining distributed systems. It does so by abstracting away the hardware infrastructure and exposing the entire data centre as a single computational resource. A typical computer is a collection of CPU, RAM, storage, networking and an operating system. Typical programming languages let developers abstract away most of these details: a software developer is not concerned with which CPU core or memory DIMM their application uses; that is left to the OS, and the developer can concentrate on the code. Kubernetes applies the same idea at a higher level, allowing programmers to deploy and run software components without having to know about the actual servers underneath; in fact, without knowing about any of the underlying data centre resources. Kubernetes lets you view the data centre as just a pool of compute, network and storage, with an over-arching system that abstracts it all. Kubernetes is thus one of an emerging breed of data centre operating systems: it has been inspired by decades of real-world experience building reliable systems and has been designed from the ground up to make that experience enjoyable. With Kubernetes in place, programmers need not care about which server or LUN their containers are running on; they can leave that to the data centre OS.

When you deploy a multi-component application through Kubernetes, it selects a server for each component, deploys it, and enables it to easily find and communicate with all the other components of your application. This makes Kubernetes great for most on-premises data centres, but where it really shines is in the largest data centres, such as the ones built and operated by cloud providers. Kubernetes allows them to offer developers a simple platform for deploying and running any type of application, without requiring the cloud provider's own system admins to know anything about the tens of thousands of apps running on their hardware. With more and more big companies accepting the Kubernetes model as the best way to run apps, it is becoming the standard way of running distributed apps both in the cloud and on local on-premises infrastructure.
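To make this concrete, here is a minimal sketch of how one component of such an application might be described to Kubernetes. All names and the container image below are illustrative, not taken from any real deployment; Kubernetes itself decides which servers the replicas land on.

```yaml
# Hypothetical Deployment for one component of a multi-component app.
# Kubernetes chooses a server for each of the three replicas.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                  # illustrative component name
spec:
  replicas: 3                # run three copies of this component
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web             # label other components use to find this one
    spec:
      containers:
      - name: web
        image: nginx:1.25    # illustrative container image
        ports:
        - containerPort: 80
```

Submitting a manifest like this (for example with `kubectl apply -f web.yaml`) is all a developer needs to do; placement, restarts, and rescheduling after hardware failures are handled by the cluster.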

Let’s take a quick look at the major pieces that make up a Kubernetes cluster.

Kubernetes Cluster: At the hardware level, a Kubernetes cluster is composed of many machines, which can be split into masters and worker nodes. These are Linux hosts running on anything from VMs and bare-metal servers all the way up to private and public cloud instances.

The Control Plane: The control plane is what controls the cluster and makes it function. It consists of multiple components that can run on a single master node or be split across multiple nodes and replicated to ensure high availability. The Kubernetes masters run all of the cluster’s control plane services; the control plane is therefore considered the brain of the cluster, where all the control and scheduling decisions are made.

The API server: The API server is the front end of the Kubernetes control plane. It is the hub of the Kubernetes cluster: it mediates all interactions between clients and the API objects stored in etcd (a distributed data store that persistently holds the cluster configuration). Consequently, it is the central meeting point for all of the various components.

Kubelet: The kubelet is the main Kubernetes agent; it runs on each worker node and communicates with the master. It is installed on a Linux host, which is registered with the cluster as a node. The kubelet watches the API server for new work assignments and ensures that the containers belonging to its Pods (a Pod is a collection of containers and their shared storage, running on a node of a Kubernetes cluster) stay healthy.
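As a sketch of the Pod concept just described, the hypothetical manifest below defines a Pod with two containers sharing one piece of storage. All names and images are illustrative.

```yaml
# Hypothetical Pod: two containers plus the storage they share.
apiVersion: v1
kind: Pod
metadata:
  name: web-with-logger       # illustrative Pod name
spec:
  volumes:
  - name: shared-logs         # storage shared by both containers
    emptyDir: {}              # ephemeral volume that lives as long as the Pod
  containers:
  - name: web
    image: nginx:1.25         # illustrative image
    volumeMounts:
    - name: shared-logs
      mountPath: /var/log/nginx
  - name: log-collector
    image: busybox:1.36       # illustrative sidecar image
    command: ["sh", "-c", "tail -F /logs/access.log"]
    volumeMounts:
    - name: shared-logs
      mountPath: /logs
```

The kubelet on whichever node this Pod is scheduled to watches both containers and restarts them if they fail.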

Kube-proxy: The kube-proxy acts as the networking brain of each node, handling communication between the multiple worker nodes. Every Pod gets its own unique IP address, and kube-proxy maintains the network rules that route traffic to the right Pods, performing lightweight load-balancing on the node.
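A hedged sketch of how that load-balancing is typically put to use: a Service gives a stable address to a set of Pods, and the kube-proxy on each node spreads incoming traffic across them. The names below are illustrative and assume Pods labelled `app: web` already exist.

```yaml
# Hypothetical Service: a stable virtual IP in front of a set of Pods.
apiVersion: v1
kind: Service
metadata:
  name: web                 # illustrative Service name
spec:
  selector:
    app: web                # traffic is balanced across Pods with this label
  ports:
  - port: 80                # port the Service listens on
    targetPort: 80          # port the Pods' containers listen on
```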

Container runtime: The kubelet needs to work with a container runtime to handle all of the container management work, such as pulling images and starting and stopping containers.

Kubernetes aims to simplify the task of building, deploying, and maintaining distributed systems. It is the product of decades of real-world experience in building reliable systems, and it has been designed from the ground up to make that experience pleasant.

While Kubernetes is a container orchestration tool, Docker is a tool designed to make it easier to build, deploy, and run applications using containers. Kubernetes and Docker work well as a team. To learn how they work together, it is a good idea to study them at a reputed IT institute.