[docker] How to retry an image pull in a Kubernetes Pod?

I am new to Kubernetes. I have an issue with one of my pods. When I run the command

 kubectl get pods

Result:

NAME                   READY     STATUS             RESTARTS   AGE
mysql-apim-db-1viwg    1/1       Running            1          20h
mysql-govdb-qioee      1/1       Running            1          20h
mysql-userdb-l8q8c     1/1       Running            0          20h
wso2am-default-813fy   0/1       ImagePullBackOff   0          20h

Due to an issue with the "wso2am-default-813fy" pod, I need to restart it. Any suggestions?

Tags: docker, kubernetes, coreos

The answers are:


$ kubectl replace --force -f <resource-file>

If all goes well, you should see something like:

<resource-type> <resource-name> deleted
<resource-type> <resource-name> replaced

Details can be found in the Kubernetes documentation, on the "Managing Deployments" and "kubectl Cheat Sheet" pages at the time of writing.
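
For example, assuming the failing pod was created from a file named wso2am-pod.yaml (a hypothetical filename), you should see output along the lines of:

$ kubectl replace --force -f wso2am-pod.yaml
pod "wso2am-default-813fy" deleted
pod "wso2am-default-813fy" replaced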


If the Pod is managed by a Deployment (or another controller, such as a ReplicaSet), deleting it will restart the Pod and, potentially, place it onto another node:

$ kubectl delete po $POD_NAME

Replace it if it's an individual Pod:

$ kubectl get po -n $namespace $POD_NAME -o yaml | kubectl replace -f -
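
Note that for a bare Pod stuck in ImagePullBackOff, a plain replace may not retry the pull by itself, since the spec is unchanged; adding --force deletes and recreates the Pod, which triggers a fresh image pull:

$ kubectl get po -n $namespace $POD_NAME -o yaml | kubectl replace --force -f -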


Most probably the ImagePullBackOff is caused either by the image not being available in the registry or by an issue in the pod's YAML file.
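
To confirm which of the two it is, check the pod's events; failed pulls are reported there (using the pod name from the question):

kubectl describe pod wso2am-default-813fy

The Events section at the bottom will show messages like "Failed to pull image ...", including the registry's error.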

What I would do is this:

kubectl get pod -n $namespace $POD_NAME -o yaml > pod.yaml
kubectl apply -f pod.yaml

I would also inspect pod.yaml to see why the earlier pod didn't work.
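
In pod.yaml, the first things to check are the image name and tag, e.g.:

grep -n 'image:' pod.yaml

A typo in the image reference, or a missing imagePullSecrets entry for a private registry, is a typical cause of ImagePullBackOff.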


First try to see what's wrong with the pod:

kubectl logs -p <your_pod>
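
Note that with ImagePullBackOff the container has never started, so there may be no logs at all; in that case the actual pull error shows up in the pod's events:

kubectl describe pod <your_pod>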

In my case it was a problem with the YAML file.

So, I needed to correct the configuration file and replace it:

kubectl replace --force -f <yml_file_describing_pod>

If you don't have the YAML file:

kubectl get pod PODNAME -n NAMESPACE -o yaml | kubectl replace --force -f -
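
Applied to the pod from the question (assuming it is in the default namespace):

kubectl get pod wso2am-default-813fy -o yaml | kubectl replace --force -f -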


Try deleting the pod; the replacement pod will try to pull the image again.

kubectl delete pod <pod_name> -n <namespace_name>
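
For example, for the pod from the question (default namespace assumed):

kubectl delete pod wso2am-default-813fy

Keep in mind this only helps if the pod is managed by a controller (a ReplicationController, ReplicaSet, or Deployment); a bare pod is simply gone after deletion.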