[kubernetes] How to debug "ImagePullBackOff"?

All of a sudden, I cannot deploy some images that could be deployed before. I get the following pod status:

[root@webdev2 origin]# oc get pods 
NAME                      READY     STATUS             RESTARTS   AGE 
arix-3-yjq9w              0/1       ImagePullBackOff   0          10m 
docker-registry-2-vqstm   1/1       Running            0          2d 
router-1-kvjxq            1/1       Running            0          2d 

The application just won't start. The pod is not even trying to run the container. On the Events page, I get Back-off pulling image "172.30.84.25:5000/default/arix@sha256:d326. I have verified that I can pull the image with that tag using docker pull.

I have also checked the log of the last container. It was closed for some reason. I think the pod should at least try to restart it.

I have run out of ideas for debugging this issue. What else can I check?

Tags: kubernetes, openshift, openshift-origin

Answers:


You can use the 'describe pod' command

For OpenShift use:

oc describe pod <pod-id>  

For vanilla Kubernetes:

kubectl describe pod <pod-id>  

Examine the events in the output. In my case, it shows Back-off pulling image coredns/coredns:latest

In this case, the image coredns/coredns:latest cannot be pulled from the Internet.

Events:
  FirstSeen LastSeen    Count   From                SubObjectPath           Type        Reason      Message
  --------- --------    -----   ----                -------------           --------    ------      -------
  5m        5m      1   {default-scheduler }                        Normal      Scheduled   Successfully assigned coredns-4224169331-9nhxj to 192.168.122.190
  5m        1m      4   {kubelet 192.168.122.190}   spec.containers{coredns}    Normal      Pulling     pulling image "coredns/coredns:latest"
  4m        26s     4   {kubelet 192.168.122.190}   spec.containers{coredns}    Warning     Failed      Failed to pull image "coredns/coredns:latest": Network timed out while trying to connect to https://index.docker.io/v1/repositories/coredns/coredns/images. You may want to check your internet connection or if you are behind a proxy.
  4m        26s     4   {kubelet 192.168.122.190}                   Warning     FailedSync  Error syncing pod, skipping: failed to "StartContainer" for "coredns" with ErrImagePull: "Network timed out while trying to connect to https://index.docker.io/v1/repositories/coredns/coredns/images. You may want to check your Internet connection or if you are behind a proxy."

  4m    2s  7   {kubelet 192.168.122.190}   spec.containers{coredns}    Normal  BackOff     Back-off pulling image "coredns/coredns:latest"
  4m    2s  7   {kubelet 192.168.122.190}                   Warning FailedSync  Error syncing pod, skipping: failed to "StartContainer" for "coredns" with ImagePullBackOff: "Back-off pulling image \"coredns/coredns:latest\""
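
Incidentally, you can also list just the events for one pod instead of the full describe output (the pod name is a placeholder):

kubectl get events --field-selector involvedObject.name=<pod-id>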

Additional debugging steps

  1. Try to pull the Docker image and tag manually on your computer.
  2. Identify the node by doing a 'kubectl/oc get pods -o wide'.
  3. SSH into the node (if you can) that cannot pull the Docker image.
  4. Check that the node can resolve the DNS name of the Docker registry by performing a ping.
  5. Try to pull the Docker image manually on the node.
  6. If you are using a private registry, check that your secret exists and that the secret is correct. Your secret should also be in the same namespace as the pod (see the sketch after this list). Thanks swenzel.
  7. Some registries have firewalls that limit IP address access. The firewall may block the pull.
  8. Some CIs create deployments with temporary Docker secrets, so the secret expires after a few days (you are asking for production failures...).
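
A rough shell walkthrough of steps 1-6 (the node, secret, namespace, and registry names are placeholders; coredns/coredns:latest just reuses the image from the events above):

# 1. Pull the image and tag manually on your workstation
docker pull coredns/coredns:latest

# 2. Find the node the pod was scheduled on
kubectl get pods -o wide

# 3.-5. On that node, check DNS resolution of the registry and retry the pull
ssh <node>
ping index.docker.io
docker pull coredns/coredns:latest

# 6. For a private registry, confirm the pull secret exists in the pod's
#    namespace, or create it if it is missing
kubectl get secret <registry-secret> -n <namespace> -o yaml
kubectl create secret docker-registry <registry-secret> \
  --docker-server=<registry> --docker-username=<user> --docker-password=<password> \
  -n <namespace>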

I forgot to push the image tagged 1.0.8 to ECR (AWS's image registry)... If you are using Helm and upgrading with:

helm upgrade minta-user ./src/services/user/helm-chart

make sure that the image tag inside values.yaml has actually been pushed (to ECR, Docker Hub, etc.). For example, this is my helm-chart/values.yaml:

replicaCount: 1

image:
   repository: dkr.ecr.us-east-1.amazonaws.com/minta-user
   tag: 1.0.8

you need to make sure that the image with tag 1.0.8 has actually been pushed!
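
For completeness, pushing that tag to ECR looks roughly like this (the account ID is a placeholder, and this assumes AWS CLI v2):

aws ecr get-login-password --region us-east-1 | \
  docker login --username AWS --password-stdin <account-id>.dkr.ecr.us-east-1.amazonaws.com
docker tag minta-user:1.0.8 <account-id>.dkr.ecr.us-east-1.amazonaws.com/minta-user:1.0.8
docker push <account-id>.dkr.ecr.us-east-1.amazonaws.com/minta-user:1.0.8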


On GKE, if the pod is dead, it's best to check the events. They will show in more detail what the error is about.

In my case, I had :

Failed to pull image "gcr.io/project/imagename@sha256:c8e91af54fc17faa1c49e2a05def5cbabf8f0a67fc558eb6cbca138061a8400a":
 rpc error: code = Unknown desc = error pulling image configuration: unknown blob

It turned out the image had somehow been corrupted. After repushing it and deploying with the new hash, it worked again.


I faced a similar situation, and it turned out that with an update of Docker Desktop I had been signed out. After I signed back in, everything worked fine again.


Have you tried editing the pod to see what's wrong (I had the wrong image location)?

kubectl edit pods arix-3-yjq9w

or even deleting your pod? (If it is managed by a deployment or replication controller, it will be recreated automatically.)

kubectl delete pod arix-3-yjq9w

Run docker login.

Push the image to Docker Hub.

Re-create the pod.
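
Roughly, assuming the pod is managed by a controller that will recreate it (the image and pod names below are placeholders):

docker login
docker push <user>/<image>:<tag>
kubectl delete pod <pod-name>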

This solved the problem for me. Hope it helps.


I was facing a similar problem, but instead of one pod, all of my pods were not ready, displaying Ready status 0/1.

I tried a lot of things, but at last I found that my kubectl context was not set correctly. Use the following command and ensure you are in the correct context:

kubectl config get-contexts
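
If the active context is wrong, switch to the right one (the context name is a placeholder):

kubectl config use-context <context-name>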


I ran into this issue on GKE, and the reason was missing Docker credentials.

Running this resolved it:

gcloud auth configure-docker
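
After that, Docker uses your gcloud credentials for gcr.io, and you can verify with a manual pull (project and image names are placeholders):

docker pull gcr.io/<project>/<imagename>:<tag>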