A pod stuck in CrashLoopBackOff is one of the most common errors encountered when deploying applications to Kubernetes. In this state, the container starts, crashes shortly after it runs, and is restarted by the kubelet with an increasing back-off delay. There are three common causes.

Reason 1: Probe failure

The kubelet uses liveness, readiness, and startup probes to check on the container. If the liveness or startup probe fails repeatedly, the kubelet kills and restarts the container. (A failing readiness probe only removes the pod from Service endpoints; it does not trigger a restart.)

Steps to verify:

  1. kubectl describe po <pod name> -n <namespace>
  2. Check the Events section to see whether any of the probes (liveness, readiness, startup) are failing.


Steps to fix:

  1. Check that the probe is correctly configured (endpoint, port, TLS settings, timeout, command).
  2. Check the container logs for errors.
  3. Exec into the pod and run curl or another relevant command to confirm the application is responding.
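As a sketch, the fields to check usually live in the probe block of the pod spec. The container name, image, port, and path below are hypothetical; match them to your application:

```yaml
# Hypothetical deployment fragment; adjust name, port, and path to your app.
containers:
  - name: my-app                # hypothetical container name
    image: my-app:1.0           # hypothetical image
    ports:
      - containerPort: 8080
    livenessProbe:
      httpGet:
        path: /healthz          # must be an endpoint the app actually serves
        port: 8080              # must match containerPort, not the Service port
      initialDelaySeconds: 15   # give the app time to start before probing
      timeoutSeconds: 2
      failureThreshold: 3       # container restarts after 3 consecutive failures
```

Two frequent mistakes are probing the Service port instead of the container port, and setting initialDelaySeconds shorter than the application's actual startup time.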

Reason 2: OOM (out of memory)

If the container exceeds the memory allocated to it while running, the kernel's OOM killer terminates it and the pod keeps crashing. This is reported as an OOMKilled error.

Steps to verify

  1. kubectl describe po <pod name> -n <namespace>
  2. Check the container's Last State for an OOMKilled reason, and the Events section for OOM-related events.


Steps to fix:

Increase the memory allocated to the pod by raising its memory request and limit. If it is a Java application, also check the heap configuration (for example -Xmx): a heap sized larger than the container's memory limit will still get OOM killed.
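A minimal sketch of raising the memory allocation (the container name and sizes below are hypothetical; derive real values from observed usage):

```yaml
# Hypothetical container fragment; size requests/limits from actual usage.
containers:
  - name: my-app                  # hypothetical container name
    resources:
      requests:
        memory: "512Mi"           # what the scheduler reserves for the pod
      limits:
        memory: "1Gi"             # exceeding this triggers OOMKilled
    env:
      - name: JAVA_TOOL_OPTIONS   # for Java apps: keep the heap below the limit
        value: "-Xmx768m"
```

Keeping the JVM heap comfortably below the container limit leaves headroom for off-heap memory (metaspace, thread stacks, native buffers).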

Reason 3: Startup failure/application process crash

At times a pod fails to run because the application process itself keeps crashing on startup.

Steps to verify:

  1. kubectl describe po <pod name> -n <namespace>
  2. Look at the container's Last State in the status section of the pod.
  3. Check whether the reason is 'Error' and note the exit code shown alongside it.


Steps to debug:

  1. kubectl logs -n <namespace> <podName> -c <containerName> --previous
  2. The --previous flag shows the logs of the crashed container instance; the last lines of the log are usually the most helpful in debugging the issue.
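Putting these steps together, a quick triage might look like this (the namespace, pod, and container names are placeholders, and this assumes kubectl access to the cluster):

```shell
# Placeholders: set these to your pod's details.
NS=my-namespace
POD=my-app-5d9c7b6f4-abcde
CTR=my-app

# Exit code of the last terminated container (e.g. 1 = app error, 137 = killed)
kubectl get pod "$POD" -n "$NS" \
  -o jsonpath='{.status.containerStatuses[0].lastState.terminated.exitCode}{"\n"}'

# Last lines of output from the previous (crashed) container instance
kubectl logs "$POD" -n "$NS" -c "$CTR" --previous --tail=50
```

Exit code 137 (128 + SIGKILL) usually points back to the OOM case above, while small codes like 1 typically indicate the application exited on its own.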

If none of the three causes above explains why your pod is in CrashLoopBackOff, you will need to debug the pod in more detail to find the problem.