[ssh-keys] Using SSH keys inside a Docker container

I have an app that executes various fun stuff with Git (like running git clone & git push) and I'm trying to docker-ize it.

I'm running into an issue though where I need to be able to add an SSH key to the container for the container 'user' to use.

I tried copying it into /root/.ssh/, changing $HOME, creating a git ssh wrapper, and still no luck.

Here is the Dockerfile for reference:

#DOCKER-VERSION 0.3.4                                                           

from  ubuntu:12.04                                                              

RUN  apt-get update                                                             
RUN  apt-get install python-software-properties python g++ make git-core openssh-server -y
RUN  add-apt-repository ppa:chris-lea/node.js                                   
RUN  echo "deb http://archive.ubuntu.com/ubuntu precise universe" >> /etc/apt/sources.list
RUN  apt-get update                                                             
RUN  apt-get install nodejs -y                                                  

ADD . /src                                                                       
ADD ../../home/ubuntu/.ssh/id_rsa /root/.ssh/id_rsa                             
RUN   cd /src; npm install                                                      

EXPOSE  808:808                                                                 

CMD   [ "node", "/src/app.js"]

app.js runs the git commands like git pull

This question is related to ssh-keys docker

Answers:


One cross-platform solution is to use a bind mount to share the host's .ssh folder to the container:

docker run -v /home/<host user>/.ssh:/home/<docker user>/.ssh <image>

Similar to agent forwarding, this approach makes your keys accessible to the container. An additional upside is that it works with a non-root user too and will get you connected to GitHub. One caveat to consider, however, is that the entire contents of the .ssh folder (including private keys) are shared, so this approach is desirable only for development and only with trusted container images.
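
For example, a minimal sketch of an interactive session as a non-root user (appuser and my_image are placeholder names; the :ro flag keeps the container from modifying your keys):

docker run --rm -it \
        --user appuser \
        -v "$HOME/.ssh":/home/appuser/.ssh:ro \
        my_image /bin/bash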


In later versions of Docker (17.05+) you can use multi-stage builds, which is the safest option: earlier stages are only available to subsequent build stages and are then discarded from the final image.

See the answer to my stackoverflow question for more info


Forward the ssh authentication socket to the container:

docker run --rm -ti \
        -v $SSH_AUTH_SOCK:/tmp/ssh_auth.sock \
        -e SSH_AUTH_SOCK=/tmp/ssh_auth.sock \
        -w /src \
        my_image

Your script will be able to perform a git clone.

Extra: if you want the cloned files to belong to a specific user, you need to use chown, since using a user other than root inside the container will make git fail.

You can do this by publishing some additional variables to the container's environment:

docker run ...
        -e OWNER_USER=$(id -u) \
        -e OWNER_GROUP=$(id -g) \
        ...

After you clone, you must execute chown -R $OWNER_USER:$OWNER_GROUP <source_folder> to set the proper ownership before you leave the container, so the files are accessible by a non-root user outside the container.
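
Putting the pieces together, a one-shot sketch (my_image and the repository URL are placeholders):

docker run --rm \
        -v $SSH_AUTH_SOCK:/tmp/ssh_auth.sock \
        -e SSH_AUTH_SOCK=/tmp/ssh_auth.sock \
        -e OWNER_USER=$(id -u) \
        -e OWNER_GROUP=$(id -g) \
        -v "$PWD":/src -w /src \
        my_image \
        sh -c 'git clone git@example.com:project/repo.git source && chown -R "$OWNER_USER:$OWNER_GROUP" source'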


In my case I had a problem with Node.js and npm i from a remote repository. I fixed it by adding the node user to the nodejs container and setting 700 on ~/.ssh inside the container.

Dockerfile:

# added: run as the node user
USER node
COPY run.sh /usr/local/bin/
CMD ["run.sh"]

run.sh:

#!/bin/bash
chmod -R 700 ~/.ssh/  # added: fix key permissions

docker-compose.yml:

nodejs:
      build: ./nodejs/10/
      container_name: nodejs
      restart: always
      ports:
        - "3000:3000"
      volumes:
        - ../www/:/var/www/html/:delegated
        - ./ssh:/home/node/.ssh #added the part
      links:
        - mailhog
      networks:
        - work-network

After that, it started working.


Docker containers should be seen as 'services' of their own. To separate concerns you should separate functionalities:

1) Data should be in a data container: use a linked volume to clone the repo into. That data container can then be linked to the service needing it.

2) Use a container to run the git cloning task (i.e. its only job is cloning), linking the data container to it when you run it.

3) Same for the SSH key: put it in a volume (as suggested above) and link it to the git clone service when you need it.

That way, both the cloning task and the key are ephemeral and only active when needed.

Now if your app itself is a git interface, you might want to consider using the GitHub or Bitbucket REST APIs directly to do your work: that's what they were designed for.
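
For instance, a hedged sketch of fetching a repository snapshot through the GitHub REST API instead of git-over-SSH (GITHUB_TOKEN and the myorg/myproject path are placeholders):

curl -H "Authorization: token $GITHUB_TOKEN" \
  -L https://api.github.com/repos/myorg/myproject/tarball/master \
  -o myproject.tar.gz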


The simplest way: get a Launchpad account and use ssh-import-id.
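
For example, a sketch in a Dockerfile (the Launchpad account name myuser is a placeholder); ssh-import-id pulls your public key into authorized_keys:

RUN apt-get update && apt-get install -y ssh-import-id
RUN ssh-import-id lp:myuser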


If you are using Docker Compose, an easy choice is to forward the SSH agent like this:

something:
    container_name: something
    volumes:
        - $SSH_AUTH_SOCK:/ssh-agent # Forward the local machine's SSH agent socket into the container
    environment:
        SSH_AUTH_SOCK: /ssh-agent

I ran into the same problem today. With a slightly modified version of the previous posts, I found this approach more useful to me:

docker run -it -v ~/.ssh/id_rsa:/root/.my-key:ro image /bin/bash

(Note the read-only flag, so the container cannot mess with my SSH key in any case.)

Inside container I can now run:

ssh-agent bash -c "ssh-add ~/.my-key; git clone <gitrepourl> <target>"

This way I don't get the Bad owner or permissions on /root/.ssh/.. error which was noted by @kross.


A concise overview of the challenges of SSH inside Docker containers is detailed here. For connecting to trusted remotes from within a container without leaking secrets, there are a few ways.

Beyond these there's also the possibility of using a key-store running in a separate docker container accessible at runtime when using Compose. The drawback here is additional complexity due to the machinery required to create and manage a keystore such as Vault by HashiCorp.

For SSH key use in a stand-alone Docker container see the methods linked above and consider the drawbacks of each depending on your specific needs. If, however, you're running inside Compose and want to share a key to an app at runtime (reflecting practicalities of the OP) try this:

  • Create a docker-compose.env file and add it to your .gitignore file.
  • Update your docker-compose.yml and add env_file for service requiring the key.
  • Access the public key from the environment at application runtime, e.g. process.env.DEPLOYER_RSA_PUBKEY in the case of a Node.js application.

The above approach is ideal for development and testing and, while it could satisfy production requirements, in production you're better off using one of the other methods identified above.
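
A minimal sketch of that setup, assuming the DEPLOYER_RSA_PUBKEY variable mentioned above and a service named app:

# docker-compose.env (add this file to .gitignore)
DEPLOYER_RSA_PUBKEY=ssh-rsa AAAA...

# docker-compose.yml
app:
    build: .
    env_file:
        - docker-compose.env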



Expanding on Peter Grainger's answer, I was able to use the multi-stage build available since Docker 17.05. The official page states:

With multi-stage builds, you use multiple FROM statements in your Dockerfile. Each FROM instruction can use a different base, and each of them begins a new stage of the build. You can selectively copy artifacts from one stage to another, leaving behind everything you don’t want in the final image.

Keeping this in mind, here is my example of a Dockerfile including three build stages. It's meant to create a production image of a client web application.

# Stage 1: get sources from npm and git over ssh
FROM node:carbon AS sources
ARG SSH_KEY
ARG SSH_KEY_PASSPHRASE
RUN mkdir -p /root/.ssh && \
    chmod 0700 /root/.ssh && \
    ssh-keyscan bitbucket.org > /root/.ssh/known_hosts && \
    echo "${SSH_KEY}" > /root/.ssh/id_rsa && \
    chmod 600 /root/.ssh/id_rsa
WORKDIR /app/
COPY package*.json yarn.lock /app/
RUN eval `ssh-agent -s` && \
    printf "${SSH_KEY_PASSPHRASE}\n" | ssh-add $HOME/.ssh/id_rsa && \
    yarn --pure-lockfile --mutex file --network-concurrency 1 && \
    rm -rf /root/.ssh/

# Stage 2: build minified production code
FROM node:carbon AS production
WORKDIR /app/
COPY --from=sources /app/ /app/
COPY . /app/
RUN yarn build:prod

# Stage 3: include only built production files and host them with Node Express server
FROM node:carbon
WORKDIR /app/
RUN yarn add express
COPY --from=production /app/dist/ /app/dist/
COPY server.js /app/
EXPOSE 33330
CMD ["node", "server.js"]

.dockerignore repeats the contents of the .gitignore file (it prevents the node_modules and resulting dist directories of the project from being copied):

.idea
dist
node_modules
*.log

Command example to build an image:

$ docker build -t ezze/geoport:0.6.0 \
  --build-arg SSH_KEY="$(cat ~/.ssh/id_rsa)" \
  --build-arg SSH_KEY_PASSPHRASE="my_super_secret" \
  ./

If your private SSH key doesn't have a passphrase, just specify an empty SSH_KEY_PASSPHRASE argument.

This is how it works:

1) In the first stage, only the package.json and yarn.lock files are copied into the first intermediate image named sources, and the private SSH key is written there from the SSH_KEY build argument. In order to avoid further SSH key passphrase prompts, the key is automatically added to ssh-agent. Finally, the yarn command installs all required dependencies from NPM and clones private git repositories from Bitbucket over SSH.

2) The second stage builds and minifies the source code of the web application and places it in the dist directory of the next intermediate image named production. Note that the source code of installed node_modules is copied from the image named sources produced in the first stage by this line:

COPY --from=sources /app/ /app/

It could probably also be just the following line:

COPY --from=sources /app/node_modules/ /app/node_modules/

Here we have only the node_modules directory from the first intermediate image; the SSH_KEY and SSH_KEY_PASSPHRASE arguments are no longer present. Everything else required for the build is copied from our project directory.

3) In the third stage we reduce the size of the final image that will be tagged as ezze/geoport:0.6.0 by including only the dist directory from the second intermediate image named production and installing Node Express for starting a web server.

Listing images gives an output like this:

REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
ezze/geoport        0.6.0               8e8809c4e996        3 hours ago         717MB
<none>              <none>              1f6518644324        3 hours ago         1.1GB
<none>              <none>              fa00f1182917        4 hours ago         1.63GB
node                carbon              b87c2ad8344d        4 weeks ago         676MB

where the non-tagged images correspond to the first and second intermediate build stages.

If you run

$ docker history ezze/geoport:0.6.0 --no-trunc

you will not see any mentions of SSH_KEY and SSH_KEY_PASSPHRASE in the final image.


In order to inject your SSH key into a container, you have multiple solutions:

  1. Using a Dockerfile with the ADD instruction, you can inject it during your build process

  2. Simply doing something like cat id_rsa | docker run -i <image> sh -c 'cat > /root/.ssh/id_rsa'

  3. Using the docker cp command, which allows you to inject files while a container is running (see the sketch below).
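
A sketch of option 3 (mycontainer is a placeholder name; note that copying from host to container requires a reasonably recent Docker version):

docker cp ~/.ssh/id_rsa mycontainer:/root/.ssh/id_rsa
docker exec mycontainer chmod 600 /root/.ssh/id_rsa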


'you can selectively let remote servers access your local ssh-agent as if it was running on the server'

https://developer.github.com/guides/using-ssh-agent-forwarding/


This line is a problem:

ADD ../../home/ubuntu/.ssh/id_rsa /root/.ssh/id_rsa

When specifying the files you want to copy into the image you can only use relative paths - relative to the directory where your Dockerfile is. So you should instead use:

ADD id_rsa /root/.ssh/id_rsa

And put the id_rsa file into the same directory where your Dockerfile is.

Check this out for more details: http://docs.docker.io/reference/builder/#add


For debian / root / authorized_keys:

RUN set -x && apt-get install -y openssh-server

RUN mkdir /var/run/sshd
RUN mkdir -p /root/.ssh
RUN sed -i 's/#PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
RUN  echo "ssh-rsa AAAA....yP3w== rsa-key-project01" >> /root/.ssh/authorized_keys
RUN chmod -R go= /root/.ssh

You can pass the authorized keys into your container using a shared folder, and set permissions using a Dockerfile like this:

FROM ubuntu:16.04
RUN apt-get update && apt-get install -y openssh-server
RUN mkdir /var/run/sshd
EXPOSE 22
RUN mkdir -p /root/.ssh
RUN cp /root/auth/id_rsa.pub /root/.ssh/authorized_keys
RUN rm -rf /root/auth
RUN chmod 700 /root/.ssh
RUN chmod 400 /root/.ssh/authorized_keys
RUN chown root. /root/.ssh/authorized_keys
CMD /usr/sbin/sshd -D

And your docker run command contains something like the following to share an auth directory on the host (holding the authorized_keys file) with the container, then open up the SSH port, which will be accessible through port 7001 on the host:

-d -v /home/thatsme/dockerfiles/auth:/root/auth --publish=127.0.0.1:7001:22
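
Once the container is running, you can connect from the host through the published port:

ssh -p 7001 root@127.0.0.1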

You may want to look at https://github.com/jpetazzo/nsenter which appears to be another way to open a shell on a container and execute commands within a container.


I'm trying to work the problem the other way: adding public ssh key to an image. But in my trials, I discovered that "docker cp" is for copying FROM a container to a host. Item 3 in the answer by creak seems to be saying you can use docker cp to inject files into a container. See https://docs.docker.com/engine/reference/commandline/cp/

excerpt

Copy files/folders from a container's filesystem to the host path. Paths are relative to the root of the filesystem.

  Usage: docker cp CONTAINER:PATH HOSTPATH

  Copy files/folders from the PATH to the HOSTPATH

You can use a multi-stage build to build containers. This is the approach you can take:

Stage 1: build an image with SSH

FROM ubuntu as sshImage
LABEL stage=sshImage
ARG SSH_PRIVATE_KEY
WORKDIR /root/temp

RUN apt-get update && \
    apt-get install -y git npm 

RUN mkdir /root/.ssh/ &&\
    echo "${SSH_PRIVATE_KEY}" > /root/.ssh/id_rsa &&\
    chmod 600 /root/.ssh/id_rsa &&\
    touch /root/.ssh/known_hosts &&\
    ssh-keyscan github.com >> /root/.ssh/known_hosts

COPY package*.json ./

RUN npm install

RUN cp -R node_modules prod_node_modules

Stage 2: build your container

FROM node:10-alpine

RUN mkdir -p /usr/app

WORKDIR /usr/app

COPY ./ ./

COPY --from=sshImage /root/temp/prod_node_modules ./node_modules

EXPOSE 3006

CMD ["npm", "run", "dev"] 

Add an environment attribute in your compose file:

   environment:
      - SSH_PRIVATE_KEY=${SSH_PRIVATE_KEY}

Then pass the build arg from your build script like this:

docker-compose build --build-arg SSH_PRIVATE_KEY="$(cat ~/.ssh/id_rsa)"

And remove the intermediate image afterwards for security; the LABEL stage=sshImage from the first stage makes it easy to find. Hope this helps. Cheers.
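
For example, a sketch of that cleanup using the stage label defined in the first stage:

docker image prune --force --filter label=stage=sshImage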


Note: only use this approach for images that are private and will always be!

The ssh key remains stored within the image, even if you remove the key in a layer command after adding it (see comments in this post).

In my case this is ok, so this is what I am using:

# Setup for ssh onto github
RUN mkdir -p /root/.ssh
ADD id_rsa /root/.ssh/id_rsa
RUN chmod 700 /root/.ssh/id_rsa
RUN echo "Host github.com\n\tStrictHostKeyChecking no\n" >> /root/.ssh/config

It's a harder problem if you need to use SSH at build time. For example if you're using git clone, or in my case pip and npm to download from a private repository.

The solution I found is to add your keys using the --build-arg flag. Then you can use the experimental --squash flag (added in 1.13) to merge the layers so that the keys are no longer available after removal. Here's my solution:

Build command

$ docker build -t example \
  --build-arg ssh_prv_key="$(cat ~/.ssh/id_rsa)" \
  --build-arg ssh_pub_key="$(cat ~/.ssh/id_rsa.pub)" \
  --squash .

Dockerfile

FROM python:3.6-slim

ARG ssh_prv_key
ARG ssh_pub_key

RUN apt-get update && \
    apt-get install -y \
        git \
        openssh-server \
        libmysqlclient-dev

# Authorize SSH Host
RUN mkdir -p /root/.ssh && \
    chmod 0700 /root/.ssh && \
    ssh-keyscan github.com > /root/.ssh/known_hosts

# Add the keys and set permissions
RUN echo "$ssh_prv_key" > /root/.ssh/id_rsa && \
    echo "$ssh_pub_key" > /root/.ssh/id_rsa.pub && \
    chmod 600 /root/.ssh/id_rsa && \
    chmod 600 /root/.ssh/id_rsa.pub

# Avoid cache purge by adding requirements first
ADD ./requirements.txt /app/requirements.txt

WORKDIR /app/

RUN pip install -r requirements.txt

# Remove SSH keys
RUN rm -rf /root/.ssh/

# Add the rest of the files
ADD . .

CMD python manage.py runserver

Update: If you're using Docker 1.13 and have experimental features on you can append --squash to the build command which will merge the layers, removing the SSH keys and hiding them from docker history.


You can use secrets to manage any sensitive data which a container needs at runtime but you don’t want to store in the image or in source control, such as:

  • Usernames and passwords
  • TLS certificates and keys
  • SSH keys
  • Other important data such as the name of a database or internal server
  • Generic strings or binary content (up to 500 kb in size)

https://docs.docker.com/engine/swarm/secrets/

I was trying to figure out how to add signing keys to a container to use during runtime (not build) and came across this question. Docker secrets seem to be the solution for my use case, and since nobody has mentioned it yet I'll add it.
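
A minimal sketch of runtime key delivery with secrets, assuming swarm mode and placeholder names (my_ssh_key, app, myimage):

docker swarm init                              # secrets require swarm mode
docker secret create my_ssh_key ~/.ssh/id_rsa
docker service create --name app --secret my_ssh_key myimage
# inside the container, the key is available at /run/secrets/my_ssh_key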


Late to the party admittedly, but how about this, which will make your host operating system's keys available to root inside the container, on the fly:

docker run -v ~/.ssh:/mnt -it my_image /bin/bash -c "ln -s /mnt /root/.ssh; ssh [email protected]"

I'm not in favour of using a Dockerfile to install keys, since iterations of your container may leave private keys behind.


Starting from Docker API 1.39+ (check your API version with docker version), docker build allows the --ssh option with either an agent socket or keys, letting the Docker Engine forward SSH agent connections.

Build Command

export DOCKER_BUILDKIT=1
docker build --ssh default=~/.ssh/id_rsa .

Dockerfile

# syntax=docker/dockerfile:experimental
FROM python:3.7

# Install ssh client (if required)
RUN apt-get update -qq
RUN apt-get install openssh-client -y

# Download public key for github.com
RUN --mount=type=ssh mkdir -p -m 0600 ~/.ssh && ssh-keyscan github.com >> ~/.ssh/known_hosts

# Clone private repository
RUN --mount=type=ssh git clone git@github.com:myorg/myproject.git myproject



You can also link your .ssh directory between the host and the container. I don't know if this method has any security implications, but it may be the easiest one. Something like this should work:

$ sudo docker run -it -v /root/.ssh:/root/.ssh someimage bash

Remember that docker usually runs with sudo (unless you've configured it otherwise); if that's the case, you'll be using the root user's SSH keys.


We had a similar problem when doing npm install at docker build time.

Inspired by the solution from Daniel van Flymen and combining it with git URL rewriting, we found a slightly simpler method for authenticating npm install from private GitHub repos: we used OAuth2 tokens instead of keys.

In our case, the npm dependencies were specified as "git+https://github.com/..."

For authentication in the container, the URLs need to be rewritten to be suitable for either SSH authentication (ssh://git@github.com/) or token authentication (https://${GITHUB_TOKEN}@github.com/).

Build command:

docker build -t sometag --build-arg GITHUB_TOKEN=$GITHUB_TOKEN . 

Unfortunately, I'm on Docker 1.9, so the --squash option is not there yet; eventually it needs to be added.

Dockerfile:

FROM node:5.10.0

ARG GITHUB_TOKEN

#Install dependencies
COPY package.json ./

# add rewrite rule to authenticate github user
RUN git config --global url."https://${GITHUB_TOKEN}@github.com/".insteadOf "https://github.com/"

RUN npm install

# remove the secret token from the git config file, remember to use --squash option for docker build, when it becomes available in docker 1.13
RUN git config --global --unset url."https://${GITHUB_TOKEN}@github.com/".insteadOf

# Expose the ports that the app uses
EXPOSE 8000

#Copy server and client code
COPY server /server 
COPY clients /clients

It looks like this is now available in the 18.09 release.

According to the documentation:

The docker build has a --ssh option to allow the Docker Engine to forward SSH agent connections.

Here is an example Dockerfile using SSH in the container:

# syntax=docker/dockerfile:experimental
FROM alpine

# Install ssh client and git
RUN apk add --no-cache openssh-client git

# Download public key for github.com
RUN mkdir -p -m 0600 ~/.ssh && ssh-keyscan github.com >> ~/.ssh/known_hosts

# Clone private repository
RUN --mount=type=ssh git clone git@github.com:myorg/myproject.git myproject

Once the Dockerfile is created, use the --ssh option for connectivity with the SSH agent:

$ docker build --ssh default .

Also, take a look at https://medium.com/@tonistiigi/build-secrets-and-ssh-forwarding-in-docker-18-09-ae8161d066


If you don't care about the security of your SSH keys, there are many good answers here. If you do, the best answer I found was from a link in a comment above to this GitHub comment by diegocsandrim. So that others are more likely to see it, and just in case that repo ever goes away, here is an edited version of that answer:

Most solutions here end up leaving the private key in the image. This is bad, as anyone with access to the image has access to your private key. Since we don't know enough about the behavior of squash, this may still be the case even if you delete the key and squash that layer.

We generate a pre-signed URL to access the key with the AWS S3 CLI, and limit the access for about 5 minutes. We save this pre-signed URL into a file in the repo directory, then in the Dockerfile we add it to the image.

In the Dockerfile we have a RUN command that does all these steps: use the pre-signed URL to get the SSH key, run npm install, and remove the SSH key.

By doing this in one single command, the SSH key is never stored in any layer. The pre-signed URL will be stored, but this is not a problem, because the URL will not be valid after 5 minutes.

The build script looks like:

# build.sh
aws s3 presign s3://my_bucket/my_key --expires-in 300 > ./pre_sign_url
docker build -t my-service .

Dockerfile looks like this:

FROM node

COPY . .

RUN eval "$(ssh-agent -s)" && \
    wget -i ./pre_sign_url -q -O - > ./my_key && \
    chmod 700 ./my_key && \
    ssh-add ./my_key && \
    ssh -o StrictHostKeyChecking=no [email protected] || true && \
    npm install --production && \
    rm ./my_key && \
    rm -rf ~/.ssh/*

ENTRYPOINT ["npm", "run"]

CMD ["start"]

In a running Docker container, you can issue ssh-keygen with docker's -i (interactive) option. The prompts are forwarded into the container, so the key is created inside it.
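
For example, a sketch using docker exec (mycontainer is a placeholder name):

docker exec -it mycontainer ssh-keygen -t rsa -b 4096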


As eczajk already commented on Daniel van Flymen's answer, it does not seem to be safe to remove the keys and use --squash, as they will still be visible in the history (docker history --no-trunc).

Instead with Docker 18.09, you can now use the "build secrets" feature. In my case I cloned a private git repo using my hosts SSH key with the following in my Dockerfile:

# syntax=docker/dockerfile:experimental

[...]

RUN --mount=type=ssh git clone [...]

[...]

To be able to use this, you need to enable the new BuildKit backend prior to running docker build:

export DOCKER_BUILDKIT=1

And you need to add the --ssh default parameter to docker build.

More info about this here: https://medium.com/@tonistiigi/build-secrets-and-ssh-forwarding-in-docker-18-09-ae8161d066


This issue is a really annoying one. You can't add/copy any file outside the Dockerfile's build context, which means it's impossible to just link ~/.ssh/id_rsa into the image's /root/.ssh/id_rsa, even when you definitely need a key to do SSH-based things like git clone from a private repo during the building of your Docker image.

Anyway, I found a workaround. It's not so elegant, but it did work for me.

  1. in your dockerfile:

    • ADD the key file as /root/.ssh/id_rsa
    • do what you want, such as git clone, composer...
    • rm /root/.ssh/id_rsa at the end
  2. a script to do it in one shot (see the sketch below):

    • cp your key to the folder holding dockerfile
    • docker build
    • rm the copied key
  3. anytime you have to run a container from this image with some ssh requirements, just add -v for the run command, like:

    docker run -v ~/.ssh/id_rsa:/root/.ssh/id_rsa --name container image command

This solution results in no private key in either your project source or the built Docker image, so there's no security issue to worry about anymore.
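
A sketch of the one-shot script from step 2 (paths and the image tag are assumptions):

#!/bin/bash
# copy the key next to the Dockerfile, build the image, remove the copy
cp ~/.ssh/id_rsa ./id_rsa
docker build -t my_image .
rm ./id_rsa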


I put together a very simple solution that works for my use case where I use a "builder" docker image to build an executable that gets deployed separately. In other words my "builder" image never leaves my local machine and only needs access to private repos/dependencies during the build phase.

You do not need to change your Dockerfile for this solution.

When you run your container, mount your ~/.ssh directory (this avoids having to bake the keys directly into the image, but rather ensures they're only available to a single container instance for a short period of time during the build phase). In my case I have several build scripts that automate my deployment.

Inside my build-and-package.sh script I run the container like this:

# do some script stuff before    

...

docker run --rm \
   -v ~/.ssh:/root/.ssh \
   -v "$workspace":/workspace \
   -w /workspace builder \
   bash -cl "./scripts/build-init.sh $executable"

...

# do some script stuff after (i.e. pull the built executable out of the workspace, etc.)

The build-init.sh script looks like this:

#!/bin/bash

set -eu

executable=$1

# start the ssh agent
eval $(ssh-agent) > /dev/null

# add the ssh key (ssh key should not have a passphrase)
ssh-add /root/.ssh/id_rsa

# execute the build command
swift build --product $executable -c release

So rather than executing the swift build command (or whatever build command is relevant to your environment) directly in the docker run command, we execute the build-init.sh script, which starts the ssh-agent, adds our SSH key to the agent, and finally executes our swift build command.

Note 1: For this to work you'll need to make sure your ssh key does not have a passphrase, otherwise the ssh-add /root/.ssh/id_rsa line will ask for a passphrase and interrupt the build script.

Note 2: Make sure you have the proper file permissions set on your script files so that they can be run.

Hopefully this provides a simple solution for others with a similar use case.


A simple and secure way to achieve this without saving your key in a Docker image layer, or going through ssh-agent gymnastics, is:

  1. As one of the steps in your Dockerfile, create a .ssh directory by adding:

    RUN mkdir -p /root/.ssh

  2. Below that indicate that you would like to mount the ssh directory as a volume:

    VOLUME [ "/root/.ssh" ]

  3. Ensure that your container's ssh_config knows where to find the mounted key by adding this line:

    RUN echo " IdentityFile /root/.ssh/id_rsa" >> /etc/ssh/ssh_config

  4. Expose your local user's .ssh directory to the container at runtime:

    docker run -v ~/.ssh:/root/.ssh -it image_name

    Or in your docker-compose.yml add this under the service's volumes key:

    - "~/.ssh:/root/.ssh"

Your final Dockerfile should contain something like:

FROM node:6.9.1

RUN mkdir -p /root/.ssh
RUN  echo "    IdentityFile /root/.ssh/id_rsa" >> /etc/ssh/ssh_config

VOLUME [ "/root/.ssh" ]

EXPOSE 3000

CMD [ "launch" ]