apache-spark, docker, docker-compose, docker-swarm

Share volumes between docker stacks?


I have two different Docker stacks, one for HBase and one for Spark. I need to get the HBase jars onto the Spark classpath. One way I can do this, without having to modify the Spark containers, is to use a volume. In my docker-compose.yml for HBase, I have defined a volume that points to the HBase home directory (it happens to be /opt/hbase-1.2.6). Is it possible to share that volume with the Spark stack?

Right now, since the two stacks are deployed from different docker-compose files, each project name is prepended to the volume name (hbase_hbasehome and spark_hbasehome), so the stacks end up with two separate volumes and the share fails.
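
Roughly, the relevant parts of the two compose files might look like this (a simplified sketch; the service names are illustrative, not the actual files):

    # hbase stack (sketch): volume holding the HBase install
    services:
      hbase:
        volumes:
          - hbasehome:/opt/hbase-1.2.6
    volumes:
      hbasehome:    # deployed as hbase_hbasehome

    # spark stack (sketch): tries to reuse the same volume name
    services:
      spark:
        volumes:
          - hbasehome:/opt/hbase-1.2.6
    volumes:
      hbasehome:    # deployed as spark_hbasehome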


Solution

  • You could use an external volume. See the official documentation:

    if set to true, specifies that this volume has been created outside of Compose. docker-compose up does not attempt to create it, and raises an error if it doesn’t exist.

    external cannot be used in conjunction with other volume configuration keys (driver, driver_opts).

    In the example below, instead of attempting to create a volume called [projectname]_data, Compose looks for an existing volume simply called data and mounts it into the db service’s containers.

    As an example:

    version: '2'
    
    services:
      db:
        image: postgres
        volumes:
          # mount the pre-existing "data" volume at Postgres's data directory
          - data:/var/lib/postgresql/data
    
    volumes:
      data:
        # created outside of Compose, so no project name is prepended
        external: true
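
    Note that an external volume has to exist before docker-compose up is run, otherwise Compose raises the error mentioned above. With the default local driver it can be created ahead of time, for example:

    docker volume create data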
    

    You can also specify the name of the volume separately from the name used to refer to it within the Compose file:

    volumes:
      data:
        external:
          name: actual-name-of-volume
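
    Applied to the question, a minimal sketch (assuming the shared volume is called hbasehome; the service names and images below are illustrative) would declare the volume as external in both stacks so that neither project name is prepended:

    # create the shared volume once, outside of either stack
    docker volume create hbasehome

    # HBase stack's docker-compose.yml
    version: '2'
    services:
      hbase:
        image: my-hbase-image    # illustrative image name
        volumes:
          - hbasehome:/opt/hbase-1.2.6
    volumes:
      hbasehome:
        external: true

    # Spark stack's docker-compose.yml
    version: '2'
    services:
      spark:
        image: my-spark-image    # illustrative image name
        volumes:
          - hbasehome:/opt/hbase-1.2.6
    volumes:
      hbasehome:
        external: true

    With external: true in both files, Compose looks up the existing hbasehome volume instead of creating hbase_hbasehome and spark_hbasehome, so both stacks mount the same data.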