[docker] How to copy Docker images from one host to another without using a repository

How do I transfer a Docker image from one machine to another without using a repository, whether private or public?

I create my own image in VirtualBox, and once it is finished I want to deploy it to other machines for real use.

Since it is built on my own base image (for example, Red Hat Linux), it cannot simply be recreated from a Dockerfile; my Dockerfile isn't easily portable.

Are there simple commands I can use? Or another solution?



I want to move all images with tags.

```
OUT=$(docker images --format '{{.Repository}}:{{.Tag}}')
OUTPUT=($OUT)
docker save $(echo "${OUTPUT[*]}") -o /dir/images.tar
``` 

Explanation:

Explanation: first, OUT collects all repository:tag names, separated by newlines. Second, OUTPUT turns them into a Bash array. Third, $(echo "${OUTPUT[*]}") expands all of them as arguments to a single docker save command, so every image ends up in one tar file.

Additionally, the resulting tar can be compressed using gzip. On the target host, run:

tar xvf images.tar.gz -O | docker load

The -O option tells tar to extract the contents to stdout, which can then be piped into docker load.
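Putting the pieces together, here is a minimal end-to-end sketch. The paths and the host name remotehost are placeholders, and it assumes docker load on the target can read gzip-compressed archives (recent Docker versions can):

```
# Source host: save every tagged image into one archive and compress it
docker save $(docker images --format '{{.Repository}}:{{.Tag}}' | grep -v '<none>') -o /dir/images.tar
gzip /dir/images.tar

# Copy the archive over (scp is just one option)
scp /dir/images.tar.gz user@remotehost:/tmp/

# Target host: load the compressed archive directly
docker load -i /tmp/images.tar.gz
```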


You can use a one-liner with the DOCKER_HOST variable:

docker save app:1.0 | gzip | DOCKER_HOST=ssh://user@remotehost docker load
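The same trick works in the other direction, assuming Docker 18.09+ on both ends for SSH support; app:1.0 and remotehost are placeholders:

DOCKER_HOST=ssh://user@remotehost docker save app:1.0 | gzip | docker load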

Scripts to perform the Docker save and load steps (tried and tested):

Docker Save:

#!/bin/bash

#files will be saved in the dir 'Docker_images'
mkdir Docker_images
cd Docker_images
directory=`pwd`
c=0
#save the image names in 'list.txt'
docker images | awk 'NR>1 {print $1}' > list.txt
printf "START \n"
input="$directory/list.txt"
#Check and create the image tar for the docker images
while IFS= read -r line
do
     one=`echo $line | awk '{print $1}'`
     two=`echo $line | awk '{print $1}' | cut -c 1-3`
     if [ "$one" != "<none>" ]; then
             c=$((c+1))
             printf "\n $one \n $two \n"
             docker save -o "$two$c.tar" "$one"
             printf "Docker image number $c successfully converted:   $two$c \n \n"
     fi
done < "$input"

Docker Load:

#!/bin/bash

cd Docker_images/
directory=`pwd`
ls | grep tar > files.txt
c=0
printf "START \n"
input="$directory/files.txt"
while IFS= read -r line
do
     c=$((c+1))
     printf "$c) $line \n"
     docker load -i "$line"
     printf "$c) Successfully created the Docker image $line  \n \n"
done < "$input"

First, save the Docker image as a gzip-compressed file:

docker save <docker image name> | gzip > <docker image name>.tar.gz

Then load the exported image into Docker using the command below:

zcat <docker image name>.tar.gz | docker load
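For example, to move a hypothetical image myapp:latest to another host with scp in between:

docker save myapp:latest | gzip > myapp.tar.gz
scp myapp.tar.gz user@remotehost:/tmp/
ssh user@remotehost 'zcat /tmp/myapp.tar.gz | docker load'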

docker-push-ssh is a command line utility I created just for this scenario.

It sets up a temporary private Docker registry on the server, establishes an SSH tunnel from your localhost, pushes your image, then cleans up after itself.

The benefit of this approach over docker save (at the time of writing most answers are using this method) is that only the new layers are pushed to the server, resulting in a MUCH quicker upload.

Oftentimes using an intermediate registry like Docker Hub is undesirable and cumbersome.

https://github.com/brthor/docker-push-ssh

Install:

pip install docker-push-ssh

Example:

docker-push-ssh -i ~/my_ssh_key user@remotehost my-docker-image

The biggest caveat is that you have to manually add your localhost to Docker's insecure-registries configuration. Run the tool once and it will give you an informative error:

Error Pushing Image: Ensure localhost:5000 is added to your insecure registries.
More Details (OS X): https://stackoverflow.com/questions/32808215/where-to-set-the-insecure-registry-flag-on-mac-os

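For reference, a rough manual sketch of the mechanism this tool automates: a throwaway registry on the server plus an SSH tunnel. All names (my-docker-image, temp-registry, user@remotehost) are placeholders, and the insecure-registries caveat above may still apply:

```
# Remote host: start a throwaway registry bound to localhost only
ssh user@remotehost 'docker run -d --name temp-registry -p 127.0.0.1:5000:5000 registry:2'

# Local host: tunnel local port 5000 to the remote registry
ssh -f -N -L 5000:127.0.0.1:5000 user@remotehost

# Push through the tunnel; only layers missing on the remote side are uploaded
docker tag my-docker-image localhost:5000/my-docker-image
docker push localhost:5000/my-docker-image

# Remote host: pull from its local registry, retag, then clean up
ssh user@remotehost 'docker pull 127.0.0.1:5000/my-docker-image && docker tag 127.0.0.1:5000/my-docker-image my-docker-image && docker rm -f temp-registry'
```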


To save an image to any file path or shared NFS location, see the following example.

Get the image name or ID by running:

docker images

Say you have an image named "matrix-data".

Save the image by name:

docker save -o /home/matrix/matrix-data.tar matrix-data

Copy the tar file from that path to the other host. Then import it into the local Docker installation there using:

docker load -i <path to copied image file>

When using docker-machine, you can copy images between machines mach1 and mach2 with:

docker $(docker-machine config <mach1>) save <image> | docker $(docker-machine config <mach2>) load

And of course you can also stick pv in the middle to get a progress indicator:

docker $(docker-machine config <mach1>) save <image> | pv | docker $(docker-machine config <mach2>) load

You may also omit one of the docker-machine config sub-shells to use your current default Docker host:

docker save <image> | docker $(docker-machine config <mach>) load

to copy an image from the current Docker host to mach

or

docker $(docker-machine config <mach>) save <image> | docker load

to copy from mach to the current Docker host.


Transferring a Docker image via SSH, bzipping the content on the fly:

docker save <image> | bzip2 | \
     ssh user@host 'bunzip2 | docker load'

It's also a good idea to put pv in the middle of the pipe to see how the transfer is going:

docker save <image> | bzip2 | pv | \
     ssh user@host 'bunzip2 | docker load'

(More info about pv: home page, man page).


I assume you need to save couchdb-cartridge, which has an image ID of 7ebc8510bc2c:

stratos@Dev-PC:~$ docker images
REPOSITORY                             TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
couchdb-cartridge                      latest              7ebc8510bc2c        17 hours ago        1.102 GB
192.168.57.30:5042/couchdb-cartridge   latest              7ebc8510bc2c        17 hours ago        1.102 GB
ubuntu                                 14.04               53bf7a53e890        3 days ago          221.3 MB

Save the image to a tar file named archiveName.tar; I will use /media/sf_docker_vm/ as the destination. Save by image name rather than by ID so that the repository and tag are preserved:

stratos@Dev-PC:~$ docker save couchdb-cartridge > /media/sf_docker_vm/archiveName.tar

Copy the archiveName.tar file to your new Docker instance using whatever method works in your environment, for example FTP, SCP, etc.

Run the docker load command on your new Docker instance and specify the location of the image tar file.

stratos@Dev-PC:~$ docker load < /media/sf_docker_vm/archiveName.tar

Finally, run the docker images command to check that the image is now available.

stratos@Dev-PC:~$ docker images
REPOSITORY                             TAG        IMAGE ID         CREATED             VIRTUAL SIZE
couchdb-cartridge                      latest     7ebc8510bc2c     17 hours ago        1.102 GB
192.168.57.30:5042/couchdb-cartridge   latest     7ebc8510bc2c     17 hours ago        1.102 GB
ubuntu                                 14.04      4d2eab1c0b9a     3 days ago          221.3 MB



You may use sshfs:

$ sshfs user@ip:/<remote-path> <local-mount-path>
$ docker save <image-id> > <local-mount-path>/myImage.tar
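On the remote host, the image can then be loaded straight from that path (assuming myImage.tar landed under /<remote-path>):

$ docker load -i /<remote-path>/myImage.tar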

For a flattened export of a container's filesystem, use:

docker export CONTAINER_ID > my_container.tar

Use cat my_container.tar | docker import - to import said image.
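Note that export/import flattens the image to a single layer and drops its history and tags. You can give the imported image a name in the same command; my_image:imported below is just an example name:

cat my_container.tar | docker import - my_image:imported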


To transfer images from your local Docker installation to a minikube VM:

docker save <image> | (eval $(minikube docker-env) && docker load)

Run

docker images

to see a list of the images on the host. Let's say you have an image called awesomesauce. In your terminal, cd to the directory where you want to export the image to. Now run:

docker save awesomesauce:latest > awesomesauce.tar

Copy the tar file to a thumb drive or whatever, and then copy it to the new host computer.

Now from the new host do:

docker load < awesomesauce.tar

Now go have a coffee and read Hacker News...


All the other answers are very helpful. I just went through the same problem and figured out an easy way with docker-machine scp.

Since Docker Machine v0.3.0, scp has been available to copy files from one Docker machine to another. This is very convenient if you want to copy a file from your local computer to a remote Docker machine, such as one on AWS EC2 or DigitalOcean, because Docker Machine takes care of the SSH credentials for you.

  1. Save your images using docker save, like:

    docker save -o docker-images.tar app-web
    
  2. Copy images using docker-machine scp

    docker-machine scp ./docker-images.tar remote-machine:/home/ubuntu
    

Assume your remote Docker machine is named remote-machine and the directory you want the tar file to end up in is /home/ubuntu.

  3. Load the Docker image:

    docker-machine ssh remote-machine sudo docker load -i docker-images.tar
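If bandwidth is a concern, a sketch of the same flow with compression added (file names and machine name follow the example above):

    gzip docker-images.tar
    docker-machine scp ./docker-images.tar.gz remote-machine:/home/ubuntu
    docker-machine ssh remote-machine "zcat docker-images.tar.gz | sudo docker load"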