Kubernetes is an open-source system built for orchestrating, scaling, and deploying containerized apps. If you’ve spent any time working with Kubernetes, you know how useful it is for managing containers.

You’ll also know that containers don’t always run the way they are supposed to. If an error pops up, you need a quick and easy way to fix the problem.

This tutorial will explain how to restart pods in Kubernetes.


Prerequisites

  • Access to a terminal window/ command line
  • A Kubernetes cluster
  • The Kubernetes kubectl command line tool
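Before trying the methods below, you can confirm that kubectl is installed and can reach your cluster:

```shell
# Show the kubectl client version (and the server version, if the cluster is reachable)
kubectl version

# List cluster nodes to confirm kubectl can talk to the control plane
kubectl get nodes
```

If both commands succeed, your cluster connection is configured and you are ready to proceed.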

Restarting Kubernetes Pods

Let’s say one of the pods in your cluster is reporting an error. Depending on its restart policy, Kubernetes might try to restart the pod automatically to get it working again. However, that doesn’t always fix the problem.

If Kubernetes isn’t able to fix the issue on its own, and you can’t find the source of the error, restarting the pod manually is the fastest way to get your app working again.

Note: Modern DevOps teams often have a shortcut to redeploy pods as part of their CI/CD pipeline. While this method is effective, it can take quite a bit of time, since your pods have to run through the whole CI/CD process.

A quick solution is to manually restart the affected pods. It can save you time, especially if your app is running and you don’t want to shut the service down.

Here are three easy ways you can do this.

Method 1: Rolling Restart

As of version 1.15, Kubernetes lets you perform a rolling restart of a deployment. This is the fastest and most straightforward restart method.

kubectl rollout restart deployment [deployment_name]

The command above shuts down and replaces each pod in your deployment one at a time. Your app stays available because most of the pods keep running throughout the process.
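If you want to watch the restart as it happens, kubectl also provides a rollout status subcommand. A minimal sketch, using a hypothetical deployment named my-app:

```shell
# Hypothetical deployment name -- substitute your own
DEPLOYMENT=my-app

# Trigger the rolling restart
kubectl rollout restart deployment "$DEPLOYMENT"

# Block until the new pods have rolled out (or the rollout fails)
kubectl rollout status deployment "$DEPLOYMENT"
```

The status command exits once every pod has been replaced, which makes it convenient to chain after the restart in a script.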


Note: Learn how to monitor Kubernetes with Prometheus. Monitoring Kubernetes gives you better insight into the state of your cluster.

Method 2: Using Environment Variables

Another method is to set or change an environment variable to force pods to restart and sync up with the changes you made.

For instance, you can change the container deployment date:

kubectl set env deployment [deployment_name] DEPLOY_DATE="$(date)"

In the example above, set env applies a change to the environment variables, deployment [deployment_name] selects your deployment, and DEPLOY_DATE="$(date)" sets the DEPLOY_DATE variable to the current timestamp. Because the pod spec changes, Kubernetes rolls out new pods.
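You can confirm that the variable was applied using the --list flag of kubectl set env. A short sketch, again with a hypothetical my-app deployment name:

```shell
# Hypothetical deployment name -- substitute your own
DEPLOYMENT=my-app

# Set (or update) the variable; the spec change triggers a rollout
kubectl set env deployment "$DEPLOYMENT" DEPLOY_DATE="$(date)"

# List the environment variables now defined on the deployment
kubectl set env deployment "$DEPLOYMENT" --list
```

Any change to the variable's value works; the date is just a convenient value that is different on every run.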


Note: To maximize the functionality of your Kubernetes cluster, learn how to build optimized containers for Kubernetes.

Method 3: Scaling the Number of Replicas

Finally, you can use the scale command to change how many replicas of the malfunctioning pod there are. Setting this number to zero effectively shuts the pod down:

kubectl scale deployment [deployment_name] --replicas=0

To restart the pod, use the same command to set the number of replicas to any value larger than zero:

kubectl scale deployment [deployment_name] --replicas=1

When you set the number of replicas to zero, Kubernetes destroys the replicas it no longer needs.

Once you set a number higher than zero, Kubernetes creates new replicas. The new replicas will have different names than the old ones. You can use the command kubectl get pods to check the status of the pods and see what the new names are.
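The full scale-down/scale-up cycle can be sketched end to end; the my-app deployment name below is a placeholder:

```shell
# Hypothetical deployment name -- substitute your own
DEPLOYMENT=my-app

# Scale down to zero -- Kubernetes terminates all of the deployment's pods
kubectl scale deployment "$DEPLOYMENT" --replicas=0

# Scale back up -- Kubernetes creates fresh pods with new names
kubectl scale deployment "$DEPLOYMENT" --replicas=1

# Verify the status and names of the new pods
kubectl get pods
```

Unlike a rolling restart, this method does cause downtime between the scale-down and the scale-up, so it is best suited to deployments that can tolerate a brief outage.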


Conclusion

Kubernetes is an extremely useful system, but like any other system, it isn’t fault-free.

When issues do occur, you can use the three methods listed above to quickly and safely get your app working without shutting down the service for your customers.

After restarting the pods, you will have time to find and fix the true cause of the problem.
