I installed Docker on a Debian 7 machine in the following way:
$ echo deb http://get.docker.io/ubuntu docker main > /etc/apt/sources.list.d/docker.list
$ sudo apt-get update
$ curl -sSL https://get.docker.com/ubuntu/ | sudo sh
After that, when I first tried creating an image, it failed with the following error:
time="2015-06-02T14:26:37-04:00" level=info msg="[8] System error: write /sys/fs/cgroup/docker/01f5670fbee1f6687f58f3a943b1e1bdaec2630197fa4da1b19cc3db7e3d3883/cgroup.procs: no space left on device"
Here is the output of docker info:
Containers: 2
Images: 21
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 25
Dirperm1 Supported: true
Execution Driver: native-0.2
Kernel Version: 3.16.0-0.bpo.4-amd64
Operating System: Debian GNU/Linux 7 (wheezy)
CPUs: 2
Total Memory: 15.7 GiB
WARNING: No memory limit support
WARNING: No swap limit support
How can I increase the memory? Where are the system configurations stored?
From Kal's suggestions:
When I got rid of all the images and containers, it did free some space, and the image build ran longer before failing with the same error. So the question is: which space is this error referring to, and how do I configure it?
The current best practice is:
docker system prune
Note the output from this command prior to accepting the consequences:
WARNING! This will remove:
- all stopped containers
- all networks not used by at least one container
- all dangling images
- all dangling build cache
Are you sure you want to continue? [y/N]
In other words, continuing with this command is permanent. Keep in mind that the best practice is to treat stopped containers as ephemeral, i.e. you should be designing your work with Docker so that you don't keep these stopped containers around. You may want to consider using the --rm flag at runtime (shown below) if you are not actively debugging your containers.
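A minimal example of that flag in use (the image and command here are just placeholders):
# --rm removes the container automatically when it exits,
# so it never lingers as a stopped container taking up space.
docker run --rm alpine echo "hello from a throwaway container"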
Make sure you read this answer, re: Volumes
You may also be interested in this answer, if docker system prune does not work for you.
You can also use:
docker system prune
or for just volumes:
docker volume prune
Your cgroups have the cpuset controller enabled. This controller is mostly useful in NUMA environments, where it lets you specify exactly which CPUs and memory banks your tasks are allowed to run on.
By default the mandatory cpuset.mems and cpuset.cpus values are not set, which means that there is "no space left" for your task, hence the error.
The easiest way to fix this is to set cgroup.clone_children to 1 in the root cgroup. In your case, it should be:
echo 1 > /sys/fs/cgroup/docker/cgroup.clone_children
This basically instructs the system to automatically initialize a container's cpuset.mems and cpuset.cpus from its parent cgroup.
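A quick way to verify the effect, assuming the same cgroup v1 hierarchy paths as in the error message above:
cat /sys/fs/cgroup/docker/cgroup.clone_children   # should print 1 after the fix
cat /sys/fs/cgroup/docker/cpuset.cpus             # e.g. 0-1 on the 2-CPU host above
cat /sys/fs/cgroup/docker/cpuset.mems             # e.g. 0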
As already mentioned, docker system prune helps, but as of Docker 17.06.1 it no longer prunes unused volumes by default. Since Docker 17.06.1, the following command prunes volumes, too:
docker system prune --volumes
From the Docker documentation: https://docs.docker.com/config/pruning/
The docker system prune command is a shortcut that prunes images, containers, and networks. In Docker 17.06.0 and earlier, volumes are also pruned. In Docker 17.06.1 and higher, you must specify the --volumes flag for docker system prune to prune volumes.
If you want to prune volumes and keep images and containers:
docker volume prune
For me docker system prune did the trick. I'm running macOS.
To remove all unused containers, volumes, networks and images at once (https://docs.docker.com/engine/reference/commandline/system_prune/#related-commands):
docker system prune -a -f --volumes
If that's not enough, you can remove running containers first:
docker rm -f $(docker ps -a -q)
docker system prune -a -f --volumes
Increasing the space available to /var/lib/docker, or using another location with more space, is also a good way to get rid of this error (see How to change the docker image installation directory?).
It may be due to the default storage space being set to 40GB (default path: /var/lib/docker).
You can change the storage location to point to a different path:
DOCKER_STORAGE_OPTIONS='--storage-driver=overlay --graph=CUSTOM_PATH'
If you then run docker info, it should show the storage driver as overlay.
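A hedged sketch of what that looks like in practice (the file /etc/sysconfig/docker-storage is an assumption that depends on how Docker was packaged, and /data/docker is just an example path with more free space):
# /etc/sysconfig/docker-storage (location varies by distribution/packaging)
DOCKER_STORAGE_OPTIONS='--storage-driver=overlay --graph=/data/docker'
# restart the daemon and confirm the driver:
sudo systemctl restart docker
docker info | grep -i 'storage driver'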
1. Remove Containers:
$ docker rm $(docker ps -aq)
2. Remove Images:
$ docker rmi $(docker images -q)
Instead of performing steps 1 and 2, you can run:
docker system prune
This command will remove all stopped containers, all networks not used by at least one container, all dangling images, and all dangling build cache (the same items listed in the warning quoted earlier).
Docker leaves dangling images around that can take up your space. To clean up after Docker, run the following:
docker image prune (add -a -f to force-remove all unused images, not just dangling ones)
or with older versions of Docker:
docker rm $(docker ps -q -f 'status=exited')
docker rmi $(docker images -q -f "dangling=true")
This will remove exited containers and dangling images, which hopefully clears out device space.
If you're using the boot2docker image via Docker Toolbox, then the problem stems from the fact that the boot2docker virtual machine has run out of space.
When you do a docker import or add a new image, the image gets copied into /mnt/sda1, which might have become full.
One way to check what space you have available in the VM is to ssh into it, run df -h, and check the remaining space in /mnt/sda1.
The ssh command is
docker-machine ssh default
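Or as a one-liner, assuming the machine is named default as above:
docker-machine ssh default "df -h /mnt/sda1"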
Once you are sure that it is indeed a space issue, you can either clean up according to the instructions in some of the answers on this question, or you may choose to resize the boot2docker image itself by increasing the space on /mnt/sda1.
You can follow the instructions here to resize the image: https://gist.github.com/joost/a7cfa7b741d9d39c1307
Check that you have free space on /var as this is where Docker stores the image files by default (in /var/lib/docker).
First clean stuff up: use docker ps -a to list all containers (including stopped ones) and docker rm to remove them; then use docker images to list all the images you have stored and docker rmi to remove them.
Next, change the storage location with the -g option on the docker daemon, or by editing /etc/default/docker and adding the -g option to DOCKER_OPTS. -g specifies the location of the "Docker runtime", which is basically all the stuff that Docker creates as you build images and run containers. Choose a location with plenty of space, as the disk space used will tend to grow over time. If you edit /etc/default/docker, you will need to restart the docker daemon for the change to take effect.
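A minimal sketch of that edit (the directory /mnt/docker-data is just an example; pick any path with plenty of free space):
# /etc/default/docker
DOCKER_OPTS="-g /mnt/docker-data"
Then restart the daemon so the change takes effect:
sudo service docker restart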
Now you should be able to create a new image (or pull one from Docker Hub) and you should see a bunch of files getting created in the directory you specified with the -g option.
In my case I didn't have so many images/containers, but the build cache was filling up my Docker Disk.
You can see that this is the problem by running
docker system df
Output:
TYPE            TOTAL   ACTIVE  SIZE      RECLAIMABLE
Images          22      13      7.581GB   3.899GB (51%)
Containers      15      0       2.166GB   2.166GB (100%)
Local Volumes   4       4       550.2MB   0B (0%)
Build Cache     611     0       43.83GB   43.83GB   <-- !!!
The command below solves that issue
docker builder prune
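If pruning dangling cache entries is not enough, docker builder prune also accepts a couple of useful flags (a sketch; check docker builder prune --help on your version):
docker builder prune --all                 # remove all build cache, not just dangling entries
docker builder prune --filter until=48h    # only remove cache entries older than 48 hours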
docker rmi $(docker images -f "dangling=true" -q)
If it's just a test installation of Docker (i.e. not production) and you don't mind doing a nuclear clean, you can:
clean all containers:
docker ps -a | sed '1 d' | awk '{print $1}' | xargs -L1 docker rm
clean all images:
docker images -a | sed '1 d' | awk '{print $3}' | xargs -L1 docker rmi -f
Again, I use this in my EC2 instances when developing with Docker, not in any serious QA or production path. The great thing is that if you have your Dockerfile(s), it's easy to rebuild and/or docker pull.
I also encountered this issue on a RHEL machine. I could not find a suitable solution anywhere on Stack Overflow or in the Docker Hub community. If you are facing this issue even after running the command below:
docker system prune --all
The solution that finally worked:
In my case, the installation of ubuntu-server 18.04.1 [for some weird reason] created an LVM logical volume of just 4GB instead of 750GB. Therefore, when pulling images, I would get this "no space left on device" error. The fix is simple:
lvextend -l +100%FREE /dev/mapper/ubuntu--vg-ubuntu--lv
resize2fs /dev/mapper/ubuntu--vg-ubuntu--lv
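To confirm the filesystem actually grew (a quick check; this assumes the logical volume above is mounted as the root filesystem):
df -h /      # should now show the expanded volume
sudo lvs     # lists the logical volumes and their current sizes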
Don't just run the docker system prune command blindly. It will delete unused Docker networks, stopped containers, and dangling images, so you might end up losing important data as well.
The error says "no space left on device", so we just need to free up some space.
The easiest way to free some space is to remove dangling images.
Dangling images are old images that are no longer being used by anything; there may also be some cached images which you can remove.
Use the commands below. To list the IDs of all dangling images:
docker images -f "dangling=true" -q
To remove the images by image ID:
docker rmi IMAGE_ID
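The two steps can be combined into a single command, as used elsewhere in this thread:
docker rmi $(docker images -f "dangling=true" -q)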
This way you can free up some space and start hacking with docker again :)
I run the commands below. There is no need to rebuild images afterwards.
docker rm $(docker ps -qf 'status=exited')
docker rmi $(docker images -qf "dangling=true")
docker volume rm $(docker volume ls -qf dangling=true)
These remove exited containers, dangling images and dangling volumes.
$ docker rm $(docker ps -aq)
This worked for me
docker system prune appears to be a better option with the latest version.
Clean Docker by using the following command:
docker images --no-trunc | grep '<none>' | awk '{ print $3 }' \
| xargs docker rmi
Seems like there are a few ways this can occur. The issue I had was that the docker disk image had hit its maximum size (Docker Whale -> Preferences -> Disk if you want to view what size that is in OSX).
I upped the limit and was good to go. I'm sure cleaning up unused images would work as well.
The docker system prune and docker system prune --volumes commands suggested in other answers freed up some space each time, but eventually every time I ran anything I would get the error again.
What actually fixed the root issue was deleting the Docker.raw file that Docker for Mac uses for storage, and restarting it.
To find that file open up Docker for Mac and go to*
Preferences > Resources > Advanced > Disk Image Location
*this is for version 2.2.0.5, but on older versions it should be similar
On newer versions of Docker for Mac**, it shows you the actual size of that file on disk right there in the UI, as well as its max allocated size. You'll probably see that it is massive. For example on my machine it was 41GB!
**On older versions, it doesn't show you the actual disk usage in the UI, and MacOS Finder always shows the file size as the max allocated size. You can check the actual size on disk by opening the directory in a terminal and running
du -h Docker.raw
I deleted Docker.raw, restarted Docker for Mac, and the file was automatically created again, back at 0GB.
Everything continued to work as before, though of course I had lost my Docker cache. As expected, after running a few Docker commands the file started to fill up again with a few GB of stuff, but nowhere near 41GB.
Update
A few months later, my Docker.raw file filled back up to a similar size. So this method did work, but it has to be repeated every few months. For me that's fine.
A note on why this works: I have to assume it's a bug in Docker for Mac. It really seems like docker system prune / docker system prune --volumes should entirely clear the contents of this file, but it appears the file accumulates other stuff that can't be deleted by these commands. Anyway, deleting it manually solves the problem!
I just ran into this. I'm on Ubuntu 20.04. What worked? Reinstalling Docker:
sudo apt-get purge docker-ce docker-ce-cli containerd.io
sudo apt-get install docker-ce docker-ce-cli containerd.io
A bit crude, I know. I tried pruning Docker, but it still would not work.
I had the same error and solved it this way:
1. Delete the orphaned volumes in Docker; you can use the built-in docker volume command. The built-in command also deletes any directory in /var/lib/docker/volumes that is not a volume, so make sure you didn't put anything in there you want to save.
Warning: be very careful with this if you have data you want to keep.
Cleanup:
$ docker volume rm $(docker volume ls -qf dangling=true)
Additional commands:
List dangling volumes:
$ docker volume ls -qf dangling=true
List all volumes:
$ docker volume ls
2. Also consider removing all the unused images.
First, get rid of the <none> images (those are sometimes generated while building an image, and if for any reason the image build was interrupted, they stay there).
Here's a nice script I use to remove them:
docker rmi $(docker images | grep '^<none>' | awk '{print $3}')
Then, if you are using Docker Compose to build images locally for every project, you will end up with a lot of images, usually named after your folder (for example, if your project folder is named Hello, you will find images named Hello_blablabla). So also consider removing all these images.
You can edit the above script to remove them, or remove them manually with:
docker rmi {image-name}
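For example, a hedged tweak of the script above for a Compose project whose folder is named Hello (Compose typically lowercases the project name, so the hello_ prefix here is just an illustration following the previous paragraph):
docker rmi $(docker images | grep '^hello_' | awk '{print $3}')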