[docker] How to mount a single file in a volume

I am trying to dockerize a PHP application. In the Dockerfile, I download the archive, extract it, etc.

Everything works fine. However, if a new version is released and I update the Dockerfile, I have to reinstall the application, because config.php gets overwritten.

So I thought I could mount the file as a volume, like I do with the database.

I tried it two ways: with a named volume and with a direct host path.

docker-compose:

version: '2'
services:
  app:
    build: src
    ports:
      - "8080:80"
    depends_on:
      - mysql
    volumes:
      -  app-conf:/var/www/html/upload
      -  app-conf:/var/www/html/config.php
    environment:
      DB_TYPE: mysql
      DB_MANAGER: MysqlManager

  mysql:
    image: mysql:5.6
    container_name: mysql
    volumes:
      - mysqldata:/var/lib/mysql
    ports:
      - 3306:3306
    environment:
      MYSQL_ROOT_PASSWORD:
      MYSQL_DATABASE:
      MYSQL_USER:
      MYSQL_PASSWORD:

volumes:
  mysqldata:
  app-conf:

Which results in an error.

I also tried it with a given host path, mounted directly:

/src/docker/myapp/upload:/var/www/html/upload
/src/docker/myapp/upload:/var/www/html/config.php

However, neither way works. With the mounted volume, I can see that the upload directory gets created.

But then it fails with:

/var/www/html/config.php" caused "not a directory"

If I try it with

/src/docker/myapp/upload/config.php:/var/www/html/config.php

Docker creates the upload folder and then a config.php folder inside it, not a file.

How can I mount the config.php file into the container, or is there another way to persist the config?

This question is related to docker and docker-compose.

Answers:


For me, the issue was that I had a broken symbolic link on the file I was trying to mount into the container.
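If you suspect the same thing, one quick check on the host might look like this (a sketch assuming GNU coreutils; ./config.php is just a placeholder path):

readlink -e ./config.php || echo "./config.php is missing or a broken symlink"

readlink -e only succeeds if every component of the path resolves, so the echo firing points at a dangling link (or a missing file), which matches the situation described above.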


I had the same issue: docker-compose was creating a directory instead of a file, then crashing mid-way.

What I did:

  1. run the container without mapping the file

  2. copy the config file to the host location:

    docker cp containername:/var/www/html/config.php ./config.php

  3. remove the container (docker-compose down)

  4. put the mapping back and bring the container up again

docker-compose will then find the config file on the host and map that instead of trying to create a directory; the steps are sketched below.
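A rough sketch of those four steps as shell commands (containername stands for whatever name docker ps reports for the app container; the config path is the one from the question):

# 1. start without the file mapping in docker-compose.yml
docker-compose up -d

# 2. copy the generated config out of the running container to the host
docker cp containername:/var/www/html/config.php ./config.php

# 3. stop and remove the containers
docker-compose down

# 4. re-add the mapping under the service's volumes, e.g.
#      - ./config.php:/var/www/html/config.php
#    then bring everything back up
docker-compose up -d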


Maybe this helps someone.

I had this problem and tried everything. The volume bindings looked fine, and even when I mounted a directory (not individual files), the file names showed up correctly in the mounted directory, but they were mounted as directories.

I tried to re-enable shared drives, and Docker complained that the firewall was active.

After disabling the firewall, everything worked fine.


On Windows, if you need a ${PWD} env variable in your docker-compose.yml, you can create a .env file in the same directory as your docker-compose.yml file and then insert the location of your folder manually.

CMD (pwd_var.bat):

echo PWD=%cd% >> .env

PowerShell (pwd_var.ps1):

$PSDefaultParameterValues['Out-File:Encoding'] = 'utf8'; echo "PWD=$((Get-Location).Path)" >> .env

There are more useful docker-compose .env variables described here: https://docs.docker.com/compose/reference/envvars/, especially the COMPOSE_CONVERT_WINDOWS_PATHS env variable, which allows docker-compose to accept Windows paths with backslashes "\".

When you want to share a file on Windows, the file must exist before you share it with the container.
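For illustration, here is roughly how the generated .env might then be used; the service name app, the example path, and the config.php file name are placeholders I've added, not part of the original answer:

# .env, written by pwd_var.bat / pwd_var.ps1 (with one extra line added by hand)
PWD=C:\Users\alex\myapp
COMPOSE_CONVERT_WINDOWS_PATHS=1

# docker-compose.yml excerpt referencing the variable
services:
  app:
    volumes:
      - ${PWD}\config.php:/var/www/html/config.php

With COMPOSE_CONVERT_WINDOWS_PATHS=1 in place, docker-compose converts the backslashes in the host part of the mapping, so the Windows-style path is accepted.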


The way that worked for me was to use a bind mount:

  version: "3.7"    
  services:
  app:
    image: app:latest
    volumes:
      - type: bind
        source: ./sourceFile.yaml
        target: /location/targetFile.yaml

Thanks to mike breed for the answer over at "Mount single file from volume using docker-compose".

You need to use the "long syntax" to express a bind mount using the volumes key: https://docs.docker.com/compose/compose-file/#long-syntax-3


You can also use a relative path in your docker-compose.yml file like this (tested on Windows host, Linux container):

volumes:
    - ./test.conf:/fluentd/etc/test.conf

For anyone using Windows containers like me: know that you CANNOT bind or mount single files when using Windows containers.

The following examples will fail when using Windows-based containers, as the destination of a volume or bind mount inside the container must be one of: a non-existing or empty directory; or a drive other than C:. Further, the source of a bind mount must be a local directory, not a file.

net use z: \\remotemachine\share

docker run -v z:\foo:c:\dest ...

docker run -v \\uncpath\to\directory:c:\dest ...

docker run -v c:\foo\somefile.txt:c:\dest ...

docker run -v c:\foo:c: ...

docker run -v c:\foo:c:\existing-directory-with-contents ...

It's hard to spot, but it's there.

Link to the GitHub issue regarding mapping files into Windows containers


All the above answers are correct, but one thing I found really helpful is that the mounted file should already exist on the Docker host in advance; otherwise Docker will create a directory instead.

For example:

/a/file/inside/host/hostFile.txt:/a/file/inside/container/containerFile.txt

hostFile.txt should exist in advance; otherwise you will receive an error like: containerFile.txt is a directory
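A minimal sketch of that, using the example paths above (alpine is just an arbitrary image for the demonstration):

# create the file on the host first; otherwise Docker creates a directory with that name
mkdir -p /a/file/inside/host
touch /a/file/inside/host/hostFile.txt

docker run --rm -v /a/file/inside/host/hostFile.txt:/a/file/inside/container/containerFile.txt alpine ls -l /a/file/inside/container/containerFile.txt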


For those who use Docker Desktop for Mac: if the file is present in your local filesystem but it's mounted as a directory inside the container, you probably didn't share the file/directory with Docker Desktop. You need to check the Docker Desktop file-sharing settings:

  1. Go to "Preferences" -> "Resources" -> "File sharing".
  2. If the directory with the desired file is missing, add a path to the directory containing your file.

Note: do not add your root directory or any system directory to the file-sharing settings, as it will cause high CPU load. The issue is described on GitHub, and a comment there gives a workaround.


I had the same issue on Windows, Docker 18.06.1-ce-win73 (19507).

Removing and re-adding the shared drive via the Docker settings panel got everything working again.


Use a mount (--mount) instead of a volume (-v).

More info: https://docs.docker.com/storage/bind-mounts/

Example:

Ensure /tmp/a.txt exists on the Docker host:

docker run -it --mount type=bind,source=/tmp/a.txt,target=/root/a.txt alpine sh

I had the same issue on my Windows 8.1 machine.

It turned out to be due to the case sensitivity of the path. I called docker-compose up from the directory /c/users/alex/, and inside the container the file was turned into a directory.

But when I did cd /c/Users/alex/ (note that Users is capitalized) and called docker-compose up from there, it worked.

On my system both the Users dir and the Alex dir are capitalized, though it seems only the Users dir matters.


As of docker-compose file version 3.2, you can specify a volume mount of type "bind" (instead of the default type "volume") that allows you to mount a single file into the container. Search for "bind mount" in the docker-compose volume docs: https://docs.docker.com/compose/compose-file/#volumes

In my case, I was trying to mount a single ".secrets" file into my application that contained secrets for local development and testing only. In production, my application fetches these secrets from AWS instead.

If I mounted this file as a volume using the shorthand syntax:

volumes:
 - ./.secrets:/data/app/.secrets

Docker would create a ".secrets" directory inside the container instead of mapping to the file outside of the container. My code would then raise an error like "IsADirectoryError: [Errno 21] Is a directory: '.secrets'".

I fixed this by using the long-hand syntax instead, specifying my secrets file using a read-only "bind" volume mount:

volumes:
 - type: bind
   source: ./.secrets
   target: /data/app/.secrets
   read_only: true

Now Docker correctly mounts my .secrets file into the container, creating a file inside the container instead of a directory.


I had been suffering from a similar issue. I was trying to mount my config file into my container so that I could fix it whenever I needed to without rebuilding the image.

I thought the command below would map $(pwd)/config.py from the Docker host to /root/app/config.py inside the container as a file.

docker run -v $(pwd)/config.py:/root/app/config.py my_docker_image

However, it always created a directory named config.py, not a file.

While looking for a clue, I found the reason (from the Docker documentation on bind mounts):

If you use -v or --volume to bind-mount a file or directory that does not yet exist on the Docker host, -v will create the endpoint for you. It is always created as a directory.

Therefore, it is always created as a directory, because my Docker host does not have $(pwd)/config.py.

Even if I create config.py on the Docker host first, $(pwd)/config.py just overrides the container's /root/app/config.py; it does not export the container's /root/app/config.py out to the host.
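One way around that (a sketch; my_docker_image and the paths are taken from the example above, and tmp_cfg is just a throwaway container name) is to copy the image's default config.py to the host once, and only then mount it:

# create a stopped container just to extract the default config
docker create --name tmp_cfg my_docker_image
docker cp tmp_cfg:/root/app/config.py ./config.py
docker rm tmp_cfg

# now the host file exists, so -v mounts it as a file instead of creating a directory
docker run -v $(pwd)/config.py:/root/app/config.py my_docker_image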


You can mount files or directories; it all depends on whether the source is a file or a directory. You also need to provide the full path, or, if you are not sure, you can use ${PWD}. Here is a simple working example.

In this example, I am mounting the env-commands file, which already exists in my working directory:

$ docker run  --rm -it -v ${PWD}/env-commands:/env-commands aravindgv/eosdt:1.0.5 /bin/bash -c "cat /env-commands"
