[docker] How do I pass environment variables to Docker containers?

I'm new to Docker, and it's unclear how to access an external database from a container. Is the best way to hard-code the connection string?

# Dockerfile
ENV DATABASE_URL amazon:rds/connection?string

Tags: docker, environment-variables, dockerfile

Answers:


If you are using 'docker-compose' as the method to spin up your container(s), there is actually a useful way to pass an environment variable defined on your server to the Docker container.

In your docker-compose.yml file, let's say you are spinning up a basic hapi-js container and the code looks like:

hapi_server:
  container_name: hapi_server
  image: node_image
  expose:
    - "3000"

Let's say the server your Docker project runs on has an environment variable named 'NODE_DB_CONNECT' that you want to pass to your hapi-js container under the new name 'HAPI_DB_CONNECT'. In the docker-compose.yml file, you would pass the local environment variable to the container and rename it like so:

hapi_server:
  container_name: hapi_server
  image: node_image
  environment:
    - HAPI_DB_CONNECT=${NODE_DB_CONNECT}
  expose:
    - "3000"

I hope this helps you avoid hard-coding a database connection string in any file in your container!
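A usage sketch, assuming the variable is exported on the host shell before compose runs (the value here is just the question's placeholder):

export NODE_DB_CONNECT="amazon:rds/connection?string"
docker-compose up -d hapi_server
docker exec hapi_server sh -c 'echo $HAPI_DB_CONNECT'   # prints the value inside the container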


If you have the environment variables in an env.sh file locally and want to set them when the container starts, you could try:

COPY env.sh /env.sh
COPY <filename>.jar /<filename>.jar
ENTRYPOINT ["/bin/bash" , "-c", "source /env.sh && printenv && java -jar /<filename>.jar"]

This entrypoint starts the container with a bash shell (bash is needed because source is a bash builtin), sources the env.sh file (which sets the environment variables), and then executes the jar file.

The env.sh looks like this:

#!/bin/bash
export FOO="BAR"
export DB_NAME="DATABASE_NAME"

I added the printenv command only to verify that the source command actually works. You should remove it once you have confirmed it does, otherwise the environment variables will appear in your docker logs.
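For completeness, a hedged sketch of building and running such an image (my-java-app is just a placeholder name, not from the original answer):

docker build -t my-java-app .
docker run --rm my-java-app    # printenv output should include FOO=BAR and DB_NAME=DATABASE_NAME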


docker run --rm -it --env-file <(bash -c 'env | grep <your env data>') is a way to grep the data stored in a .env file and pass it to Docker, without anything being stored insecurely (so you can't just look at docker history and grab keys).

Say you have a load of AWS stuff in your .env like so:

AWS_ACCESS_KEY=xxxxxxx
AWS_SECRET=xxxxxx
AWS_REGION=xxxxxx

Running docker with docker run --rm -it --env-file <(bash -c 'env | grep AWS_') <image> will grab them all and pass them securely to be accessible from within the container.
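Note that env | grep AWS_ only sees variables that are already exported in your shell; if they live only in the .env file, a hedged sketch for loading them first (set -a auto-exports everything the sourced file defines; <image> stands for whatever image you run):

set -a; source .env; set +a
docker run --rm -it --env-file <(env | grep '^AWS_') <image>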


Using docker-compose, you can inherit environment variables in docker-compose.yml and, subsequently, in any Dockerfile(s) called by docker-compose to build images. This is useful when the Dockerfile RUN command needs to execute commands specific to the environment.

(assume your shell already has RAILS_ENV=development set in the environment)

docker-compose.yml:

version: '3.1'
services:
  my-service: 
    build:
      #$RAILS_ENV is referencing the shell environment RAILS_ENV variable
      #and passing it to the Dockerfile ARG RAILS_ENV
      #the syntax below ensures that the RAILS_ENV arg will default to 
      #production if empty.
      #note that if dockerfile: is not specified, the file name Dockerfile is assumed
      context: .
      args:
        - RAILS_ENV=${RAILS_ENV:-production}
    environment: 
      - RAILS_ENV=${RAILS_ENV:-production}

Dockerfile:

FROM ruby:2.3.4

#give ARG RAILS_ENV a default value = production
ARG RAILS_ENV=production

#assign the $RAILS_ENV arg to the RAILS_ENV ENV so that it can be accessed
#by the subsequent RUN call within the container
ENV RAILS_ENV $RAILS_ENV

#the subsequent RUN call accesses the RAILS_ENV ENV variable within the container
RUN if [ "$RAILS_ENV" = "production" ] ; then echo "production env"; else echo "non-production env: $RAILS_ENV"; fi
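For reference, if you ever build this image directly with docker build rather than through compose, the same ARG can be supplied on the command line (a sketch; the tag name is just a placeholder):

docker build --build-arg RAILS_ENV=staging -t my-rails-image .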

This way, I don't need to specify environment variables in files or docker-compose build/up commands:

docker-compose build
docker-compose up
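For example, with the shell variable already exported (a sketch; the value is only illustrative):

export RAILS_ENV=development
docker-compose build    # the RUN step should echo: non-production env: development
docker-compose up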

You can pass environment variables using -e parameters with the docker run .. command, as mentioned by @errata.

However, the possible downside of this approach is that your credentials will be displayed in the process listing on the machine where you run it.

To make it more secure, you may write your credentials into a configuration file and pass it to docker run with --env-file. Then you can restrict access to that configuration file so that others with access to the machine can't see your credentials.
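A minimal sketch of that pattern (the file name, variable, and image are placeholders):

printf 'DB_PASSWORD=secret\n' > app.env
chmod 600 app.env                 # only you can read the credentials file
docker run --env-file app.env <image>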


Use -e or --env value to set environment variables (default []).

An example from a startup script:

 docker run  -e myhost='localhost' -it busybox sh

If you want to set multiple environment variables from the command line, prefix each one with the -e flag.

Example:

 sudo docker run -d -t -i -e NAMESPACE='staging' -e PASSWORD='foo' busybox sh

Note: Make sure to put the image name after the environment variables, not before them.

If you need to set up many variables, use the --env-file flag

For example,

 $ docker run --env-file ./my_env ubuntu bash
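The env file is plain text with one VARIABLE=value pair per line, no export keyword and no quoting, for example (the variable names here are only illustrative):

DB_HOST=localhost
DB_USER=admin
DB_PASS=secret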

For any other help, look into the Docker help:

 $ docker run --help

Official documentation: https://docs.docker.com/compose/environment-variables/


The problem I had was that I was putting --env-file at the end of the command:

docker run -it --rm -p 8080:80 imagename --env-file ./env.list

Fix (everything after the image name is treated as the container's command and arguments, so options like --env-file must come before it):

docker run --env-file ./env.list -it --rm -p 8080:80 imagename

Here is how I was able to solve it:

docker run --rm -ti -e AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY -e AWS_SESSION_TOKEN -e AWS_SECURITY_TOKEN amazon/aws-cli s3 ls

One more example:

export VAR1=value1
export VAR2=value2

$ docker run --env VAR1 --env VAR2 ubuntu env | grep VAR
VAR1=value1
VAR2=value2

Using jq to convert the env to JSON:

env_as_json=`jq -c -n env`
docker run -e HOST_ENV="$env_as_json" <image>

This requires jq version 1.6 or newer.

This puts the host env into the container as JSON; it is essentially equivalent to this in the Dockerfile:

ENV HOST_ENV  (all env from the host as json)
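Inside the container you can then parse that JSON back out, for example with jq, assuming jq is installed in the image (the PATH key is just an example):

echo "$HOST_ENV" | jq -r '.PATH'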

For Amazon AWS ECS/ECR, you should manage your environment variables (especially secrets) via a private S3 bucket. See blog post How to Manage Secrets for Amazon EC2 Container Service–Based Applications by Using Amazon S3 and Docker.


Another way is to use the powers of /usr/bin/env:

docker run ubuntu env DEBUG=1 path/to/script.sh

We can also pass host machine environment variables using the -e flag and $:

Before running the following command, you need to export (i.e. set) the local env variables.

docker run -it -e MG_HOST=$MG_HOST -e MG_USER=$MG_USER -e MG_PASS=$MG_PASS -e MG_AUTH=$MG_AUTH -e MG_DB=$MG_DB -t image_tag_name_and_version 

By using this method, the environment variables are set in the container automatically under the names you give them (in my case MG_HOST, MG_USER).

Additionally:

If you are using Python, you can access these environment variables inside the container with:

import os

host, username, password = os.environ.get('MG_HOST'), os.environ.get('MG_USER'), os.environ.get('MG_PASS')
auth, database = os.environ.get('MG_AUTH'), os.environ.get('MG_DB')

There is a nice hack for piping host machine environment variables into a docker container:

env > env_file && docker run --env-file env_file image_name

Use this technique very carefully, because env > env_file will dump ALL host machine ENV variables to env_file and make them accessible in the running container.
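If you only need a subset of the host environment, a safer variant of the same hack is to filter before dumping (the MYAPP_ prefix is only an example):

env | grep '^MYAPP_' > env_file && docker run --env-file env_file image_name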

