Anyone using Docker sooner or later hears about Kubernetes (K8s), an open-source platform for automating the deployment, scaling, and management of containerized applications.

Google designed the platform to simplify working with containers and later donated it to the Cloud Native Computing Foundation (CNCF) for further development.

The name Kubernetes comes from the Greek word κυβερνήτης, meaning ‘helmsman’ or ‘pilot.’


What are Containers?

To understand Kubernetes, you first need to understand the concept of containers.

Essentially, containers are isolated environments where you can build and run an application without fear of harming the host operating system. Each container has its own libraries, binaries, and other runtime components, but shares the kernel of the host OS.


Because containers do not need their own operating systems, they are much lighter than typical virtual machines. They can be as small as 10 MB and are remarkably fast to launch. Their small size also makes it easy to scale out by adding identical containers and to scale back in when demand drops.

Containers are also an excellent fit for continuous integration and continuous delivery (CI/CD), which is incredibly important in today’s competitive, fast-paced world of software development.

Why the Need for Containers in Kubernetes?

Containers are often used to build and package applications. A developer can deploy an application using containers, but depending on the popularity of the app, the developer must scale the resources it needs to run correctly.

However, as the application becomes more widespread, the number of containers grows until it becomes too much for the developer to manage manually. A need for an automated process arises, and this is where Kubernetes comes in.

Kubernetes automates the deployment, scaling, and monitoring of application containers.

Terminology To Get Started With Kubernetes

Before looking into what Kubernetes does, it is useful to get acquainted with the Kubernetes-specific terminology.

Master Node

First and foremost, there is the Master Node. It is the machine that administers all the other (Worker) Nodes, assigning tasks to each, and it is responsible for establishing communication inside the system and managing workloads. A cluster typically has a single Master Node, although production clusters often run several for high availability.

Worker Nodes

The Master Node governs the Worker Nodes. A cluster usually contains multiple Worker Nodes, each performing the tasks it is assigned. They run the workloads using containers and storage volumes.

Pods

A Pod is a group of one or more containers that share storage volumes and a network. Pods contain everything required for deployment and are scheduled as a unit on a single node.
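As an illustrative sketch, a minimal Pod manifest might look like this (the names and image are placeholders, not taken from any real project):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod          # placeholder name
  labels:
    app: demo             # labels let other objects find this Pod
spec:
  containers:
    - name: web
      image: nginx:1.25   # any container image would do
      ports:
        - containerPort: 80
```

In practice you rarely create bare Pods like this; a controller usually manages them for you.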

Deployments

A Deployment is a plan that lays out the template for Pods. It ensures that the Pods are up and running and updates them when needed. A Deployment is not limited to a single Pod; it can manage many identical Pods spread across the cluster.
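To illustrate, here is a hedged sketch of a Deployment manifest that keeps three identical Pods running (names, labels, and the image are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-deployment
spec:
  replicas: 3                 # Kubernetes keeps exactly three Pods running
  selector:
    matchLabels:
      app: demo
  template:                   # the Pod template the Deployment stamps out
    metadata:
      labels:
        app: demo
    spec:
      containers:
        - name: web
          image: nginx:1.25
```

If a Pod crashes or a node fails, the Deployment replaces the missing Pod to restore the declared replica count.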

Services

Pods must be easy to reach inside the network, but locating them is not always simple: they move around the cluster and may be replaced at any time. A Service provides a stable endpoint for a set of Pods and can expose them outside the cluster. Its job is to ensure that each request reaches a healthy Pod.
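A Service that routes traffic to Pods carrying a given label might be sketched like this (names and the label are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: demo-service
spec:
  selector:
    app: demo          # forwards traffic to any Pod carrying this label
  ports:
    - port: 80         # port the Service listens on
      targetPort: 80   # port on the Pods
  type: LoadBalancer   # asks the environment to expose the Service externally
```

The selector, not a fixed list of Pod addresses, is what lets the Service keep working as Pods come and go.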

Kubelet

Also, every Worker Node runs a Kubelet, an agent that ensures all of the node's containers are healthy and running.

Kubectl

Finally, there is kubectl, the command-line tool used to interact with a Kubernetes cluster.
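For example, assuming a running cluster and a manifest file named demo.yaml (a hypothetical file for illustration), typical kubectl usage looks like this:

```shell
# Apply a manifest (Pod, Deployment, Service, ...)
kubectl apply -f demo.yaml

# Inspect what is running
kubectl get pods
kubectl get deployments
kubectl get services

# Stream logs from a Pod and describe it for troubleshooting
kubectl logs demo-pod
kubectl describe pod demo-pod

# Remove the resources defined in the manifest
kubectl delete -f demo.yaml
```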

How Does Kubernetes Work?

Kubernetes does several things that help you deploy and maintain containerized applications without having to worry about managing resources or providing manual support and updates. Instead, you can focus that attention on creating and improving the applications themselves.

Deploys applications

First, Kubernetes is a deployment tool. You give Kubernetes the container image and a specification describing the Pods to create, and it creates a Deployment that runs the required number of containers.

The Deployment holds all the criteria the user sets, such as the amount of CPU, RAM, and file storage. Kubernetes keeps track of these requirements, ensures everything goes according to plan, and keeps your application up and running.

Since an app often requires more than one container, Kubernetes can deploy multi-container applications as well. The system makes sure that all the containers are synchronized and communicating with each other.

Scales applications

Kubernetes balances load between Pods and scales container resources to match the workload. It continually monitors the workloads and scales the application accordingly by adding or removing containers.

Since this process is automated, you do not have to worry about wasting valuable resources: Kubernetes runs only the number of containers the deployment needs. By automatically adding more containers, it also provides headroom in case the application suddenly receives much more traffic.
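As a sketch, scaling can be triggered manually or delegated to a HorizontalPodAutoscaler; the resource name below is a placeholder, and the commands assume a running cluster:

```shell
# Manually scale a Deployment to five replicas
kubectl scale deployment demo-deployment --replicas=5

# Or let Kubernetes scale between 2 and 10 replicas,
# targeting 80% average CPU utilization
kubectl autoscale deployment demo-deployment --min=2 --max=10 --cpu-percent=80
```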

Customizable and Adaptable

Developers can extend the default Kubernetes API with a CustomResourceDefinition (CRD). By installing custom resources, a user can store and retrieve structured data without breaking compatibility with stock Kubernetes.

However, to create a genuinely declarative API, users combine custom resources with a custom controller. The controller reads the desired state recorded in the resource and works to maintain it.
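As a hedged sketch, a CRD that teaches the API server about a new "Backup" resource type might look like this (the group, names, and schema are invented purely for illustration):

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: backups.example.com       # must be <plural>.<group>
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: backups
    singular: backup
    kind: Backup
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                schedule:
                  type: string    # e.g. a cron expression a custom controller would interpret
```

Once applied, users could create Backup objects with kubectl just like built-in resources, and a custom controller would act on them.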

Kubernetes is incredibly flexible when it comes to cloud environments and can work with any environment that supports containers. It is not tied to a specific operating system, and a cluster can even mix Linux and Windows nodes.

Monitors and fixes applications

Kubernetes offers insight into the health of your application. It can provide live information and metrics of your containers and clusters.

Not only does it deploy the application, but it also monitors the deployment to make sure it keeps running. If and when the application goes down, Kubernetes recovers it by spinning up another container with minimal downtime.

Kubernetes automates all the necessary fixes for your application. Its features include deploying self-healing applications with autorestart, autoreplication, autoscaling, and autoplacement.
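One building block of this self-healing is the liveness probe, a per-container health check declared in the Pod spec. A hedged sketch (the endpoint path, image, and timings are placeholders):

```yaml
# Fragment of a container spec inside a Pod template
containers:
  - name: web
    image: nginx:1.25
    livenessProbe:
      httpGet:
        path: /healthz        # hypothetical health endpoint
        port: 80
      initialDelaySeconds: 5  # wait before the first check
      periodSeconds: 10       # check every ten seconds
    # If the probe fails repeatedly, the kubelet restarts the container
```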

How to Experience the Power of Kubernetes

Since Kubernetes is a system for setting up and coordinating containers, a prerequisite for using it is to have a containerization engine.

There are many container solutions, of which Docker is the most popular today. Other container technologies include containerd, LXD, Hyper-V Containers, and Windows Server Containers.

Apart from containers, there are other projects and tools Kubernetes integrates with to give users the full experience. Some of them are:

  • Docker or Atomic Registry (for the official registry)
  • Ansible (for automation)
  • OpenvSwitch and intelligent edge routing (for networking)
  • LDAP, SELinux, RBAC, and OAuth with multi-tenancy layers (for security)
  • Heapster, Kibana, Hawkular, and Elastic (for telemetry)

For beginners with no experience deploying multiple containers, Minikube is a great way to start. Minikube is a tool for running a single-node cluster locally and is excellent for learning the basics before moving on to a full Kubernetes cluster.
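Assuming Minikube and kubectl are installed locally (Minikube also needs a hypervisor or Docker on the machine), getting a first cluster running takes only a few commands:

```shell
# Start a local single-node cluster
minikube start

# Verify the node is ready
kubectl get nodes

# Try a first deployment, then clean up
kubectl create deployment hello --image=nginx
kubectl get pods
minikube delete
```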

Conclusion

After reading this article, you now know what Kubernetes is and why it is essential for application development and maintenance.

Not only does it help deploy an application, but it also maintains and manages it, so that you do not have to deal with such tedious work.
