[docker] What is the meaning of ImagePullBackOff status on a Kubernetes pod?

I'm trying to run my first Kubernetes pod locally. I've run the following command (from here):

export ARCH=amd64
export K8S_VERSION=v1.2.4
docker run -d \
    --volume=/:/rootfs:ro \
    --volume=/sys:/sys:ro \
    --volume=/var/lib/docker/:/var/lib/docker:rw \
    --volume=/var/lib/kubelet/:/var/lib/kubelet:rw \
    --volume=/var/run:/var/run:rw \
    --net=host \
    --pid=host \
    --privileged \
    gcr.io/google_containers/hyperkube-${ARCH}:${K8S_VERSION} \
    /hyperkube kubelet \
        --containerized \
        --hostname-override=127.0.0.1 \
        --api-servers=http://localhost:8080 \
        --config=/etc/kubernetes/manifests \
        --cluster-dns=10.0.0.10 \
        --cluster-domain=cluster.local \
        --allow-privileged --v=2

Then, I tried to run the following:

kubectl create -f ./run-aii.yaml

run-aii.yaml:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: aii
spec:
  replicas: 2
  template:
    metadata:
      labels:
        run: aii
    spec:
      containers:
      - name: aii
        image: aii
        ports:
        - containerPort: 5144
        env:
        - name: KAFKA_IP
          value: kafka
        volumeMounts:
        - mountPath: /root/script
          name: scripts-data
          readOnly: true
        - mountPath: /home/aii/core
          name: core-aii
          readOnly: true
        - mountPath: /home/aii/genome
          name: genome-aii
          readOnly: true
        - mountPath: /home/aii/main
          name: main-aii
          readOnly: true
      - name: kafka
        image: kafkazoo
        volumeMounts:
        - mountPath: /root/script
          name: scripts-data
          readOnly: true
        - mountPath: /root/config
          name: config-data
          readOnly: true
      - name: ws
        image: ws
        ports:
        - containerPort: 3000
      volumes:
      - name: scripts-data
        hostPath:
          path: /home/aii/general/infra/script
      - name: config-data
        hostPath:
          path: /home/aii/general/infra/config
      - name: core-aii
        hostPath: 
          path: /home/aii/general/core
      - name: genome-aii
        hostPath: 
          path: /home/aii/general/genome
      - name: main-aii
        hostPath: 
          path: /home/aii/general/main

Now, when I run kubectl get pods, I get:

NAME                    READY     STATUS             RESTARTS   AGE
aii-806125049-18ocr     0/3       ImagePullBackOff   0          52m
aii-806125049-6oi8o     0/3       ImagePullBackOff   0          52m
aii-pod                 0/3       ImagePullBackOff   0          23h
k8s-etcd-127.0.0.1      1/1       Running            0          2d
k8s-master-127.0.0.1    4/4       Running            0          2d
k8s-proxy-127.0.0.1     1/1       Running            0          2d
nginx-198147104-9kajo   1/1       Running            0          2d

BTW, docker images returns:

REPOSITORY                                 TAG                 IMAGE ID            CREATED             SIZE
ws                                         latest              fa7c5f6ef83a        7 days ago          706.8 MB
kafkazoo                                   latest              84c687b0bd74        9 days ago          697.7 MB
aii                                        latest              bd12c4acbbaf        9 days ago          1.421 GB
node                                       4.4                 1a93433cee73        11 days ago         647 MB
gcr.io/google_containers/hyperkube-amd64   v1.2.4              3c4f38def75b        11 days ago         316.7 MB
nginx                                      latest              3edcc5de5a79        2 weeks ago         182.7 MB
docker_kafka                               latest              e1d954a6a827        5 weeks ago         697.7 MB
spotify/kafka                              latest              30d3cef1fe8e        12 weeks ago        421.6 MB
wurstmeister/zookeeper                     latest              dc00f1198a44        3 months ago        468.7 MB
centos                                     latest              61b442687d68        4 months ago        196.6 MB
centos                                     centos7.2.1511      38ea04e19303        5 months ago        194.6 MB
gcr.io/google_containers/etcd              2.2.1               a6cd91debed1        6 months ago        28.19 MB
gcr.io/google_containers/pause             2.0                 2b58359142b0        7 months ago        350.2 kB
sequenceiq/hadoop-docker                   latest              5c3cc170c6bc        10 months ago       1.766 GB

Why do I get the ImagePullBackOff?

This question is related to docker kubernetes

The answer is


Despite all the other great answers, none helped me until I found a comment pointing to the Updating images documentation:

The default pull policy is IfNotPresent which causes the kubelet to skip pulling an image if it already exists.

That's exactly what I wanted, but it didn't seem to work.

Reading further, I found the following:

If you would like to always force a pull, you can do one of the following:

  • omit the imagePullPolicy and use :latest as the tag for the image to use.

When I replaced latest with a version (that I had pushed to minikube's Docker daemon), it worked fine.

$ kubectl create deployment presto-coordinator \
    --image=warsaw-data-meetup/presto-coordinator:beta0
deployment.apps/presto-coordinator created

$ kubectl get deployments
NAME                 READY   UP-TO-DATE   AVAILABLE   AGE
presto-coordinator   1/1     1            1           3s

Find the pod of the deployment (using kubectl get pods) and use kubectl describe pod to find out more about it.
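
For example (a sketch; the pod name is a placeholder, and I'm assuming kubectl create deployment labeled the pods with app=presto-coordinator, which it does by default):

$ kubectl get pods -l app=presto-coordinator
$ kubectl describe pod <pod-name>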


I too had this problem. When I checked, the image that I was pulling from a private registry had been removed. If we describe the pod, it will show the pulling event and the image it's trying to pull:

kubectl describe pod <POD_NAME>

Events:
 Type     Reason   Age                  From              Message
 ----     ------   ----                 ----              -------
 Normal   Pulling  18h (x35 over 20h)   kubelet, gsk-kub  Pulling image "registryName:tag"
 Normal   BackOff  11m (x822 over 20h)  kubelet, gsk-kub  Back-off pulling image "registryName:tag"
 Warning  Failed   91s (x858 over 20h)  kubelet, gsk-kub  Error: ImagePullBackOff


The issue arises when the image is not present on the cluster and Kubernetes tries to pull it from the respective registry. Kubernetes supports 3 values of imagePullPolicy:

  1. Always : the kubelet always pulls the image, regardless of whether it already exists on the node
  2. Never : the kubelet never pulls the image; it must already be present on the node
  3. IfNotPresent : the kubelet pulls the image only if it is not already present on the node

Best practice: it is always recommended to tag each new image and update that tag in both the Docker build and the k8s deployment file, so that the intended image is pulled into the container.
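
As an illustration, a minimal container spec following that practice (the registry, image name, and tag are placeholders):

containers:
- name: aii
  # explicit tag instead of :latest
  image: myregistry/aii:1.0.2
  # pull only when this tag is missing from the node
  imagePullPolicy: IfNotPresent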


You can also specify imagePullPolicy: Never in the container's spec:

containers:
- name: nginx
  imagePullPolicy: Never
  image: custom-nginx
  ports:
  - containerPort: 80
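
Note that with Never the image must already exist on the node. With minikube, for example, one way to get it there is to build against the cluster's Docker daemon (a sketch; the image name matches the spec above):

# point the docker CLI at minikube's Docker daemon
eval $(minikube docker-env)
# build the image directly inside the cluster's daemon
docker build -t custom-nginx .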

I had this error when I tried to create a ReplicationController. The issue was that I had misspelled the nginx image name in the template definition.

Note: this error occurs when Kubernetes is unable to pull the specified image from the repository.


By default, Kubernetes looks in the public Docker registry to find images. If your image doesn't exist there, it won't be able to pull it.

You can run a local Kubernetes registry with the registry cluster addon.

Then tag your images with localhost:5000:

docker tag aii localhost:5000/dev/aii

Push the image to the Kubernetes registry:

docker push localhost:5000/dev/aii

And change run-aii.yaml to use the localhost:5000/dev/aii image instead of aii. Now Kubernetes should be able to pull the image.
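
The relevant change in run-aii.yaml looks like this (only the affected lines are shown):

containers:
- name: aii
  image: localhost:5000/dev/aii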

Alternatively, you can run a private Docker registry through one of the providers that offer this (AWS ECR, GCR, etc.), but if this is for local development it will be quicker and easier to get set up with a local Kubernetes Docker registry.


I had the same problem. What caused it was that I had already created a pod from the Docker image via the .yml file, but I had mistyped the tag, i.e. test-app:1.0.1 when I needed test-app:1.0.2 in my .yml file. So I ran kubectl delete pods --all to remove the faulty pod, then redid kubectl create -f name_of_file.yml, which solved my problem.


I had a similar problem when using minikube over Hyper-V with 2048 MB of memory. I found that in Hyper-V Manager the Memory Demand was higher than what was allocated.

So I stopped minikube and assigned somewhere between 4096 and 6144 MB. It worked fine after that, all pods running!

I don't know if this will nail down the issue in every case, but just have a look at the memory and disk allocated to minikube.
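
For reference, a sketch of how to raise the allocation (the values are examples, not recommendations):

minikube stop
minikube start --memory=4096 --disk-size=20g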


One issue that may cause an ImagePullBackOff, especially if you are pulling from a private registry, is the pod not being configured with the imagePullSecrets of that private registry.

An authentication error may then cause an ImagePullBackOff.
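
A minimal sketch of wiring that up (the registry address, credentials, and secret name are placeholders):

kubectl create secret docker-registry regcred \
    --docker-server=<your-registry-server> \
    --docker-username=<your-username> \
    --docker-password=<your-password>

Then reference the secret in the pod spec:

spec:
  imagePullSecrets:
  - name: regcred
  containers:
  - name: aii
    image: <your-registry-server>/aii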