The problem: You want to back up your container image WITH the data volumes in it, but this does not work out of the box. The straightforward way would be to copy the volume paths, back up the Docker image, reload it, and link the two back together. But that approach is clumsy and neither sustainable nor maintainable - you would need a cron job that repeats the whole flow every time.
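For concreteness, the manual flow would look roughly like the hypothetical script below (the container name, paths, and bucket are placeholders, not part of the original setup):

# manual-backup.sh - hypothetical sketch of the manual flow described above
docker commit service-example service-example-backup       # snapshot the container as an image
docker save service-example-backup | gzip > image.tar.gz   # export the image for later reload
docker cp service-example:/service-example ./volume-data   # copy the volume contents out
tar czf volumes.tar.gz ./volume-data
aws s3 cp image.tar.gz s3://docker-backups.example.com/
aws s3 cp volumes.tar.gz s3://docker-backups.example.com/
# ...plus a crontab entry to repeat the flow, e.g. nightly:
# 0 2 * * * /usr/local/bin/manual-backup.sh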
Solution: Use dockup - a Docker image that backs up your Docker container volumes and uploads them to S3 (Docker + Backup = dockup). dockup uses your AWS credentials to create a new bucket named per the environment variable, picks up the configured volumes, and tarballs, gzips, time-stamps, and uploads them to the S3 bucket.
Steps:
Create a docker-compose.yml and attach the env.txt configuration file to it (a sketch follows the inspect output below). The data will be uploaded to a dedicated, secured S3 bucket, ready to be reloaded on DRP executions. To verify which volume paths to configure, run docker inspect <service-name> and locate the "Volumes" section:

"Volumes": {
    "/etc/service-example": {},
    "/service-example": {}
},
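Here is a minimal docker-compose.yml sketch, assuming the application service is named service-example (the application image is a placeholder; the compose v2 volumes_from key gives dockup access to the service's volumes):

version: "2"
services:
  service-example:
    image: example/service:latest   # placeholder application image
    volumes:
      - /etc/service-example
      - /service-example
  dockup:
    image: tutum/dockup:latest
    env_file: env.txt               # the configuration file described below
    volumes_from:
      - service-example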
Edit the content of the configuration file env.txt and place it in the project path:
AWS_ACCESS_KEY_ID=<key_here>
AWS_SECRET_ACCESS_KEY=<secret_here>
AWS_DEFAULT_REGION=us-east-1
BACKUP_NAME=service-backup
PATHS_TO_BACKUP=/etc/service-example /service-example
S3_BUCKET_NAME=docker-backups.example.com
RESTORE=false
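Note that PATHS_TO_BACKUP is a space-separated list and should match the volume paths found via docker inspect above. dockup creates the bucket for you, but if you happen to have the aws CLI installed, you can optionally sanity-check the credentials and bucket name first:

$ aws s3 ls s3://docker-backups.example.com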
Run the dockup container:
$ docker run --rm \
    --env-file env.txt \
    --volumes-from <service-name> \
    --name dockup tutum/dockup:latest
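On a DRP execution, the same image can restore the backup by flipping RESTORE to true. A sketch, assuming an explicit -e flag (which overrides the value in env.txt):

$ docker run --rm \
    --env-file env.txt \
    -e RESTORE=true \
    --volumes-from <service-name> \
    --name dockup tutum/dockup:latest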