[docker] Why docker container exits immediately

I run a container in the background using

 docker run -d --name hadoop h_Service

it exits quickly, but if I run it in the foreground, it works fine. I checked the logs using

docker logs hadoop

and there was no error. Any ideas?

DOCKERFILE

 FROM java_ubuntu_new
 RUN wget http://archive.cloudera.com/cdh4/one-click-install/precise/amd64/cdh4-repository_1.0_all.deb
 RUN dpkg -i cdh4-repository_1.0_all.deb
 RUN curl -s http://archive.cloudera.com/cdh4/ubuntu/precise/amd64/cdh/archive.key | apt-key add -
 RUN apt-get update
 RUN apt-get install -y hadoop-0.20-conf-pseudo
 RUN dpkg -L hadoop-0.20-conf-pseudo
 USER hdfs
 RUN hdfs namenode -format
 USER root
 RUN apt-get install -y sudo
 ADD . /usr/local/
 RUN chmod 777 /usr/local/start-all.sh
 CMD ["/usr/local/start-all.sh"]

start-all.sh

 #!/usr/bin/env bash
 /etc/init.d/hadoop-hdfs-namenode start
 /etc/init.d/hadoop-hdfs-datanode start
 /etc/init.d/hadoop-hdfs-secondarynamenode start
 /etc/init.d/hadoop-0.20-mapreduce-tasktracker start
 sudo -u hdfs hadoop fs -chmod 777 /
 /etc/init.d/hadoop-0.20-mapreduce-jobtracker start
 /bin/bash

This question is related to docker

The answers are:


I would like to extend, or dare I say, improve the answer by camposer.

When you run

docker run -dit ubuntu

you are basically running the container in the background in interactive mode.

When you attach to the container and exit it with CTRL+D (the most common way to do it), you stop the container, because you just killed the main process that the container was started with.

To take advantage of the already running container, I would just start another bash process and get a pseudo-TTY by running:

docker exec -it <container ID> /bin/bash

Since the image is Linux-based, one thing to check is to make sure any shell scripts used in the container have Unix line endings. If they have a ^M at the end, then they have Windows line endings. One way to fix them is to run dos2unix on /usr/local/start-all.sh to convert them from Windows to Unix. Running the container in interactive mode can help you figure out other problems, such as a typo in a file name. See https://en.wikipedia.org/wiki/Newline
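
For example, a quick way to check and fix this (assuming the file and dos2unix utilities are available in the image):

    file /usr/local/start-all.sh      # reports "... with CRLF line terminators" if affected
    dos2unix /usr/local/start-all.sh  # converts CRLF to LF in place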


You need to run it with the -d flag to leave it running as a daemon in the background.

docker run -d -it ubuntu bash


Add this to the end of your Dockerfile:

CMD tail -f /dev/null

Sample Dockerfile:

FROM ubuntu:16.04

# other commands

CMD tail -f /dev/null



Coming from duplicates, I don't see any answer here which addresses the very common antipattern of running your main workload as a background job, and then wondering why Docker exits.

In simple terms, if you have

my-main-thing &

then either take out the & to run the job in the foreground, or add

wait

at the end of the script to make it wait for all background jobs.

It will then still exit if the main workload exits, so maybe run this in a while true loop to force it to restart forever:

while true; do
    my-main-thing &
    # other things which need to happen while the main workload
    # runs in the background, if you have such things
    wait
done

(Notice also how to write while true. It's common to see silly things like while [ true ] or while [ 1 ] which coincidentally happen to work, but don't mean what the author probably imagined they ought to mean.)
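A quick illustration of why the bracketed forms are misleading: [ str ] just tests whether the string str is non-empty, so even the opposite keyword "works":

    [ false ] && echo "still true"   # prints, because "false" is a non-empty string
    while true; do sleep 1; done     # idiomatic: runs the true builtin each iteration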


Whenever I want a container to stay up after the script finishes executing, I add

&& tail -f /dev/null

at the end of the command. So it should be:

/usr/local/start-all.sh && tail -f /dev/null

If you check the Dockerfile of other images, for example fballiano/magento2-apache-php,

you'll see that at the end of the file it adds the following command: while true; do sleep 1; done

Now, what I recommend is that you run

docker container ls --all | grep 127

Then you will see whether your container exited with an error (exit code 127 means "command not found"). If it exits with 0, it probably just needs one of these commands that sleep forever.
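
Alternatively, you can check a specific container's exit code directly:

    docker inspect --format '{{.State.ExitCode}}' <container ID>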


Adding

exec "$@"

at the end of my shell script was my fix!
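
For context, this pattern typically lives at the bottom of an ENTRYPOINT wrapper script, so that the CMD replaces the shell and becomes the container's main process. A minimal sketch (the setup step is just a placeholder):

    #!/usr/bin/env bash
    set -e
    # one-time setup goes here, e.g. generating config files
    exec "$@"   # replace this shell with the CMD so it becomes PID 1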


My practice is to start, in the Dockerfile, a shell that will not exit immediately, CMD [ "sh", "-c", "service ssh start; bash" ], and then run docker run -dit image_name. This way the (ssh) service and the container stay up and running.
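
A minimal Dockerfile sketch of this approach (assuming an Ubuntu base image with the openssh-server package):

    FROM ubuntu:16.04
    RUN apt-get update && apt-get install -y openssh-server
    # start the service, then keep an interactive shell as the main process
    CMD ["sh", "-c", "service ssh start; bash"]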


This did the trick for me:

docker run -dit ubuntu

After that, I checked the running containers using:

docker ps -a

To attach to the container again:

docker attach CONTAINER_NAME

TIP: To exit without stopping the container, type ^P^Q.


There are many possible reasons a docker container might exit immediately. For me, it was a problem with my Dockerfile: there was a bug in that file. I had ENTRYPOINT ["dotnet", "M4Movie_Api.dll] instead of ENTRYPOINT ["dotnet", "M4Movie_Api.dll"]. As you can see, I had missed one quotation mark (") at the end.

To analyze the problem, I started my container and quickly attached to it so that I could see what the exact problem was.

C:\SVenu\M4Movie\Api\Api>docker start 4ea373efa21b


C:\SVenu\M4Movie\Api\Api>docker attach 4ea373efa21b

Where 4ea373efa21b is my container ID. This led me to the actual issue.


After finding the issue, I had to build, restore, and publish my container again.


A nice approach is to start up your processes and services in the background and use the wait [n ...] command at the end of your script. In bash, the wait command forces the current process to:

Wait for each specified process and return its termination status. If n is not given, all currently active child processes are waited for, and the return status is zero.

I got this idea from Sébastien Pujadas' start script for his elk build.

Taking from the original question, your start-all.sh would look something like this...

 #!/usr/bin/env bash
 /etc/init.d/hadoop-hdfs-namenode start &
 /etc/init.d/hadoop-hdfs-datanode start &
 /etc/init.d/hadoop-hdfs-secondarynamenode start &
 /etc/init.d/hadoop-0.20-mapreduce-tasktracker start &
 sudo -u hdfs hadoop fs -chmod 777 /
 /etc/init.d/hadoop-0.20-mapreduce-jobtracker start &
 wait

Why does the docker container exit immediately?

If you want to force the image to hang around (in order to debug something or examine state of the file system) you can override the entry point to change it to a shell:

docker run -it --entrypoint=/bin/bash myimagename

I added a read shell statement at the end. This keeps the main process of the container (the startup shell script) running.
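
A minimal sketch of this, based on the start-all.sh from the question. Note that read blocks on stdin, so the container has to be run with -i (or -it) to keep stdin open; otherwise read returns immediately on EOF and the script exits anyway:

    #!/usr/bin/env bash
    /etc/init.d/hadoop-hdfs-namenode start
    # ... start the other services as before ...
    read   # block waiting on stdin; this keeps the script (and container) alive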