[docker] Docker - Cannot remove dead container

I am unable to remove a dead container; it appears again after I restart the Docker service.

docker ps -a
CONTAINER ID         STATUS          
11667ef16239         Dead

Then

docker rm -f 11667ef16239

Then, when I run docker ps -a, no containers are shown.

docker ps -a
CONTAINER ID         STATUS

However, when I restart the docker service:

service docker restart

And run the docker ps -a again:

docker ps -a
CONTAINER ID         STATUS          
11667ef16239         Dead

Tags: docker

Answers:


Try this; it worked for me on CentOS:

1) docker container ls -a gives you a list of containers; check the status of the one you want to get rid of.

2) docker container rm -f 97af2da41b2b. Not a big fan of the force flag, but it does the work. To check that it worked, just run the list command again.

3) Continue until all dead containers are cleared.


grep 656cfd09aee399c8ae8c8d3e735fe48d70be6672773616e15579c8de18e2a3b3 /proc/*/mountinfo

then find the PID of the process holding the mount for 656cfd09aee399c8ae8c8d3e735fe48d70be6672773616e15579c8de18e2a3b3 and kill it
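For example, a rough sketch of that lookup (my own pipeline, not from the original answer; inspect each PID before killing anything):

CID=656cfd09aee399c8ae8c8d3e735fe48d70be6672773616e15579c8de18e2a3b3
# print the PIDs whose mountinfo still references the container filesystem
grep -l "$CID" /proc/*/mountinfo | sed 's|/proc/\(.*\)/mountinfo|\1|'
# check what each PID is before killing it, e.g. ps -p <pid> -o comm=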


Try running the following commands. It always works for me.

# docker volume rm $(docker volume ls -qf dangling=true)
# docker rm $(docker ps -q -f 'status=exited')

After execution of the above commands, restart docker by,

# service docker restart
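On Docker 1.13 and later, the equivalent prune subcommands do the same cleanup (they ask for confirmation first):

# docker volume prune
# docker container prune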

Actually, things have changed slightly these days. To get rid of those dead containers, you may try to unmount the blocked filesystems to release them.

So if you get message like this

Error response from daemon: Cannot destroy container elated_wozniak: Driver devicemapper failed to remove root filesystem 656cfd09aee399c8ae8c8d3e735fe48d70be6672773616e15579c8de18e2a3b3: Device is Busy

just run this

umount /var/lib/docker/devicemapper/mnt/656cfd09aee399c8ae8c8d3e735fe48d70be6672773616e15579c8de18e2a3b3

and you can remove the container normally after that
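To confirm the unmount worked and nothing still references the filesystem, grep for the same ID again; no output means it is gone:

grep 656cfd09aee399c8ae8c8d3e735fe48d70be6672773616e15579c8de18e2a3b3 /proc/*/mountinfo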


Removing container by force worked for me.

docker rm -f <id_of_the_dead_container>

Notes:

Be aware that this command might throw this error: Error response from daemon: Driver devicemapper failed to remove root filesystem <id_of_the_dead_container>: Device is Busy

The devicemapper mount of your dead container should be removed despite this message. That is, you will no longer be able to access this path:

/var/lib/docker/devicemapper/mnt/<id_of_the_dead_container>
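A quick way to verify (placeholder kept as above; once the mount is removed the listing should fail):

ls /var/lib/docker/devicemapper/mnt/<id_of_the_dead_container>
# expected after removal: No such file or directory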


I had the following error when removing a dead container (docker 17.06.1-ce on CentOS 7):

Error response from daemon: driver "overlay" failed to remove root filesystem for <some-id>: 
remove /var/lib/docker/overlay/<some-id>/merged: device or resource busy

Here is how I fixed it:

1. Check which other processes are also using docker resources

$ grep docker /proc/*/mountinfo

which outputs something like this, where the number after /proc/ is the pid:

/proc/10001/mountinfo:179...
/proc/10002/mountinfo:149...
/proc/12345/mountinfo:159 149 0:36 / /var/lib/docker/overlay/...

2. Check the process name of the above pid

$ ps -p 10001 -o comm=
dockerd
$ ps -p 10002 -o comm=
docker-containe
$ ps -p 12345 -o comm=
nginx   <<<-- This is suspicious!!!

So, nginx with pid 12345 seems to also be using /var/lib/docker/overlay/..., which is why we cannot remove the related container and get the device or resource busy error. (See here for a discussion of how nginx can share the same mount namespace with docker containers and thus prevent their deletion.)

3. Stop nginx; after that, the container can be removed successfully.

$ sudo service nginx stop
$ docker rm <container-id>
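Steps 1 and 2 can be rolled into one rough loop (my own sketch, not from the original answer), printing each PID together with its process name:

for pid in $(grep -l docker /proc/*/mountinfo | sed 's|/proc/\(.*\)/mountinfo|\1|'); do
    echo "$pid $(ps -p "$pid" -o comm=)"
done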

In my case, I had to remove it with

rm -r /var/lib/docker/containers/<container-id>/

and it worked. Maybe that's how you solve it in docker version ~19. My docker version was 19.03.12.
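If you try this, it is safer to stop the daemon first; a sketch assuming systemd and the same path:

systemctl stop docker
rm -r /var/lib/docker/containers/<container-id>/
systemctl start docker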


The best way to get rid of dead container processes is to restart the Docker service. I was unable to remove a container because it was stuck in Restarting status; I restarted the Docker service and it worked for me.


Try this; it worked for me:

docker rm -f <container_name>

eg. docker rm -f 11667ef16239

There are a lot of answers in here but none of them involved the (quick) solution that worked for me.

I'm using Docker version 1.12.3, build 6b644ec.

I simply ran docker rmi <image-name> for the image from which the dead container came. A docker ps -a then showed the dead container was gone completely.

Then, of course, I just re-pulled the image and ran the container again.

I have no idea how it found itself in this state but so it is...


I tried the suggestions above, but they didn't work.

Then

  1. I tried docker system prune -a; it didn't work the first time.
  2. I rebooted the system.
  3. I tried docker system prune -a again. This time it worked: it prints a warning and asks "Are you sure you want to continue? [y/N]". Answer y. It takes a while, and in the end the dead containers are gone.
  4. Verify with docker ps -a.

IMPORTANT - this is the nuclear option, as it removes all stopped containers and all unused images.
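If that is too destructive, gentler variants remove only stopped containers or only dangling images; both still ask for confirmation:

docker container prune
docker image prune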


  1. To delete all dead containers: docker rm -f $(docker ps --all -q -f status=dead)

  2. To delete all exited containers: docker rm -f $(docker ps --all -q -f status=exited)

As far as I have seen, the -f is necessary.
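The two filters can also be combined into one command, since repeating -f status ORs the values:

docker rm -f $(docker ps --all -q -f status=dead -f status=exited)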


You can also remove dead containers with this command

docker rm $(docker ps --all -q -f status=dead)

But I'm really not sure why and how the dead containers get created. Whenever I get dead containers, this error seems related: https://github.com/typesafehub/mesos-spark-integration-tests/issues/34

[Update] With the Docker 1.13 update, we can easily remove both unwanted containers and dangling images:

$ docker system df    # will show used space, similar to the unix tool df
$ docker system prune # will remove all unused data

Running CentOS 7 and Docker 1.8.2, I was unable to use Zgr3doo's solution to umount the devicemapper mount (I think the response I got was that the volume wasn't mounted/found).

I think I also had a similar thing happen with sk8terboi87's answer: I believe the message was that the volumes couldn't be unmounted, and it listed the specific volumes that it tried to umount in order to delete the dead containers.

What did work for me was stopping docker first and then deleting the directories manually. I was able to determine which ones they were from the error output of the previous command to delete all the dead containers.

Apologies for the vague descriptions above. I found this SO question days after I handled the dead containers. However, I noticed a similar pattern today:

$ sudo docker stop fervent_fermi; sudo docker rm fervent_fermi
Error response from daemon: Cannot destroy container fervent_fermi: Driver devicemapper failed to remove root filesystem a11bae452da3dd776354aae311da5be5ff70ac9ebf33d33b66a24c62c3ec7f35: Device is Busy
Error: failed to remove containers: [fervent_fermi]

$ sudo systemctl stop docker
$ sudo rm -rf /var/lib/docker/devicemapper/mnt/a11bae452da3dd776354aae311da5be5ff70ac9ebf33d33b66a24c62c3ec7f35
$

I did notice, when using this approach, that docker re-created the container under a different name:

a11bae452da3     trend_av_docker   "bash"   2 weeks ago    Dead    compassionate_ardinghelli

This may have been due to the container being created with restart=always; however, the container ID matches the ID of the container that previously used the volume I force-deleted. There was no difficulty deleting this new container:

$ sudo docker rm -v compassionate_ardinghelli
compassionate_ardinghelli
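To check whether a restart policy is what keeps resurrecting a container, you can inspect it first (standard inspect template; the name is the one from the listing above):

$ docker inspect -f '{{.HostConfig.RestartPolicy.Name}}' compassionate_ardinghelli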

Try this; it worked for me:

$ docker ps -a
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS               NAMES
4f13b53be9dd        5b0bbf1173ea        "/opt/app/netjet..."   5 months ago        Dead                                    appname_chess

$ docker rm $(docker ps --all -q -f status=dead)
Error response from daemon: driver "devicemapper" failed to remove root filesystem for 4f13b53be9ddef3e9ba281546aef1c544805282971f324291a1dc91b50eeb440: failed to remove device 487b4b73c58d19ef79201cf6d5fcd6b7316e612e99c14505a6bf24399cad9795-init: devicemapper: Error running DeleteDevice dm_task_run failed

su
cd /var/lib/docker/containers
[root@localhost containers]#  ls -l
total 0
drwx------. 1 root root 312 Nov 17 08:58 4f13b53be9ddef3e9ba281546aef1c544805282971f324291a1dc91b50eeb440
[root@localhost containers]# rm -rf 4f13b53be9ddef3e9ba281546aef1c544805282971f324291a1dc91b50eeb440
systemctl restart docker

I got the same issue, and neither of the answers helped.

What helped for me was just creating the missing directories and then removing the container:

mkdir /var/lib/docker/devicemapper/mnt/656cfd09aee399c8ae8c8d3e735fe48d70be6672773616e15579c8de18e2a3b3
mkdir /var/lib/docker/devicemapper/mnt/656cfd09aee399c8ae8c8d3e735fe48d70be6672773616e15579c8de18e2a3b3-init
docker rm 656cfd09aee399c8ae8c8d3e735fe48d70be6672773616e15579c8de18e2a3b3

For Windows:

del D:\ProgramData\docker\containers\{CONTAINER ID}
del D:\ProgramData\docker\windowsfilter\{CONTAINER ID}

Then restart the Docker Desktop


Try killing it and then removing it >:) i.e.
docker kill $(docker ps -q)
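and then remove them all; note this pair sweeps every container on the host, not just the dead one:

docker rm $(docker ps -a -q)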


Tried all of the above (short of rebooting / restarting Docker).

So here is the error on docker rm:

$ docker rm 08d51aad0e74
Error response from daemon: driver "devicemapper" failed to remove root filesystem for 08d51aad0e74060f54bba36268386fe991eff74570e7ee29b7c4d74047d809aa: remove /var/lib/docker/devicemapper/mnt/670cdbd30a3627ae4801044d32a423284b540c5057002dd010186c69b6cc7eea: device or resource busy

Then I did the following:

$  grep docker /proc/*/mountinfo | grep 958722d105f8586978361409c9d70aff17c0af3a1970cb3c2fb7908fe5a310ac
/proc/20416/mountinfo:629 574 253:15 / /var/lib/docker/devicemapper/mnt/958722d105f8586978361409c9d70aff17c0af3a1970cb3c2fb7908fe5a310ac rw,relatime shared:288 - xfs /dev/mapper/docker-253:5-786536-958722d105f8586978361409c9d70aff17c0af3a1970cb3c2fb7908fe5a310ac rw,nouuid,attr2,inode64,logbsize=64k,sunit=128,swidth=128,noquota

This got me the PID of the offending process keeping it busy: 20416 (the number after /proc/).

So I did a ps -p, and to my surprise found:

[devops@dp01app5030 SeGrid]$ ps -p 20416
  PID TTY          TIME CMD
20416 ?        00:00:19 ntpd

A true WTF moment. So I paired problem-solving with Google and found this: https://github.com/docker/for-linux/issues/124

Turns out I had to restart the ntp daemon, and that fixed the issue!!!
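In concrete terms, something like this (the service name may be ntpd or ntp depending on the distro):

sudo systemctl restart ntpd
docker rm 08d51aad0e74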