[docker] Docker: Container keeps restarting again and again

Today I deployed an instance of MediaWiki using the appcontainers/mediawiki Docker image, and I now have a new problem for which I cannot find any clue. I first tried to attach to the MediaWiki front-end container using:

docker attach mediawiki_web_1

which answers Terminated on my setup, for a reason I don't know. Trying also:

docker exec -it mediawiki_web_1 bash

I do get something close to an error message:

Error response from daemon: Container 81c07e4a69519c785b12ce4512a8ec76a10231ecfb30522e714b0ae53a0c9c68 is restarting, wait until the container is running

And here is my new problem: this container never stops restarting. I can see this with docker ps -a, which always shows a STATUS of Restarting (127) x seconds ago.

The thing is, I am able to stop the container (I tested this), but starting it again brings it right back into its restart loop.

Any idea what the issue could be here? The whole thing was working properly until I tried to attach to it...

I am sad :-(


Answers:


The docker logs command will show you the output a container is generating when you don't run it interactively. This is likely to include the error message.

docker logs --tail 50 --follow --timestamps mediawiki_web_1

You can also run a fresh container in the foreground with docker run -ti <your_wiki_image> to see what it does. You may need to map some configuration from your docker-compose.yml to the docker command.
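For example, a foreground run could look like the line below; the --rm flag and the 8080:80 port mapping are assumptions you would adapt from your own docker-compose.yml:

docker run -ti --rm -p 8080:80 appcontainers/mediawiki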

I would guess that attaching to the MediaWiki process caused a crash which has corrupted something in your data.


tl;dr It is restarting with a status code of 127, meaning there is a missing file/library in your container. Starting a fresh container just might fix it.

Explanation:

As far as my understanding of Docker goes, this is what is happening:

  1. The container tries to start up. In the process, it tries to access a file/library which does not exist.
  2. It exits with a status code of 127, which conventionally means "command not found", i.e. a missing file or library.
  3. Normally, this is where the container would have stayed exited, but it restarts instead.
  4. It restarts because the restart policy was set to something other than no (the default) when the container was started, using either the command-line flag --restart or the docker-compose.yml key restart. (You can verify this with the inspect command below.)
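To see which restart policy the container actually carries, docker inspect can print it directly (the container name is taken from the question):

docker inspect --format '{{.HostConfig.RestartPolicy.Name}}' mediawiki_web_1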

Solution: Something might have corrupted your container. Starting a fresh container should ideally do the job.
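If you use Compose, one way to get that fresh container is to force-recreate the service. This is a sketch only; it assumes the service is named web, as inferred from the container name mediawiki_web_1:

docker-compose up -d --force-recreate web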


From personal experience, it sounds like there is a problem within your Docker container that is preventing it from starting cleanly: some process inside the container is causing the startup to hang, or some process is crashing the container on start.

When you start the container, make sure you start it detached with "-d" if you are going to attach to it later (e.g. docker run -d <your_wiki_image>, or docker start mediawiki_web_1 for the existing container).


Try adding a restart policy to your docker-compose.yml file. The valid values are:

restart: "no"
restart: always
restart: on-failure
restart: unless-stopped

The final file should look something like this, with exactly one restart value chosen ("no" will stop the restart loop):

postgres:
  restart: "no"
  image: postgres:latest
  volumes:
    - /data/postgresql:/var/lib/postgresql
  ports:
    - "5432:5432"
  environment:
    POSTGRES_DB: "db_name"
    POSTGRES_HOST_AUTH_METHOD: "trust"

In my case, I had forgotten Minikube was running in the background, and that is what kept restarting the containers.


Try running:

docker stop CONTAINER_ID && docker rm -v CONTAINER_ID

Thanks


Check the partition where Docker stores its data. If that partition is at 100% capacity, containers can fail to start, so you may need to look into that.
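Two quick checks, assuming Docker's default data root of /var/lib/docker:

df -h /var/lib/docker    # free space on the partition holding the data root
docker system df         # disk usage broken down by images, containers and volumes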


First check the logs to see why the container failed, because your restart policy may keep bringing the container back to a restarting status. It is better to fix the underlying issue first; then you can build a new image with the fix. Afterwards, execute the command below to clean up:

docker system prune

https://forums.docker.com/t/docker-registry-in-restarting-1-status-forever/12717/3


When docker kill CONTAINER_ID does not work and docker stop -t 1 CONTAINER_ID does not work either, you can try to delete the container (add the -f flag to force-remove it while it is still restarting):

docker container rm CONTAINER_ID

I had a similar issue today where containers were in a continuous restart loop.

The issue in my case was related to me being a poor engineer.

Anyway, I fixed the issue by deleting the container, fixing my code, and then rebuilding and running the container.
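As a sketch, that flow looks like this (the container, image, and name here are hypothetical placeholders):

docker container rm -f broken_container
docker build -t my_image .
docker run -d --name fixed_container my_image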

Hope this helps anyone stuck with this issue in the future.


In my case I removed

restart: always

added

tty: true

and executed the command below to start the container detached (tty: true keeps the container's shell process alive, so the container no longer exits, and restarts, as soon as its main process finishes):

docker-compose up -d
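For reference, a minimal compose sketch of that change, using the same single-service layout as the postgres example above (service and image names are hypothetical):

app:
  image: my_image
  tty: true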

In my case, an nginx container kept restarting. I checked the nginx container's logs and found that the .crt and .key files of a domain that was no longer needed contained errors, so I removed the corresponding .conf, .crt and .key files and then restarted nginx. That's it: nginx now works fine without restarting.
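If the container stays up long enough to exec into, nginx can validate its own configuration and point at the broken file (the container name here is a placeholder); otherwise fall back to docker logs as shown earlier:

docker exec -it my_nginx nginx -t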


This could also be the case if you have created a systemd service that has:

[Service]
Restart=always
ExecStart=/usr/bin/docker container start -a my_container
ExecStop=/usr/bin/docker container stop -t 2 my_container
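If such a unit is the culprit, stopping and disabling it ends the loop (the unit name is hypothetical):

sudo systemctl stop my_container.service
sudo systemctl disable my_container.service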