Tags: docker, tensorflow, keras, docker-swarm, nvidia-docker

Docker service with GPU configured in compose file; no GPU recognized by Keras


I've got a multi-service application configured in a v3.5 docker compose file.

One of the services should have access to the (single) GPU on the (single) node in the swarm. However, if I start the service via the compose file, the GPU does not seem to be accessible, as reported by Keras:

   import keras
   from tensorflow.python.client import device_lib
   print(device_lib.list_local_devices())

prints

Using TensorFlow backend.

[name: "/device:CPU:0"
 device_type: "CPU"
 memory_limit: 268435456
 locality {  }
 incarnation: 10790773049987428954,
 name: "/device:XLA_CPU:0"
 device_type: "XLA_CPU"
 memory_limit: 17179869184
 locality {  }
 incarnation: 239154712796449863
 physical_device_desc: "device: XLA_CPU device"]

If I run the same image from the command line like this:

docker run -it --rm $(ls /dev/nvidia* | xargs -I{} echo '--device={}') $(ls /usr/lib/*-linux-gnu/{libcuda,libnvidia}* | xargs -I{} echo '-v {}:{}:ro') -v $(pwd):/srv --entrypoint /bin/bash ${MY_IMG}

then the output of the same Python snippet does include the GPU:

[name: "/device:CPU:0"
 device_type: "CPU"
 memory_limit: 268435456
 locality {
 }
 incarnation: 3178082198631681841, name: "/device:XLA_CPU:0"
 device_type: "XLA_CPU"
 memory_limit: 17179869184
 locality {
 }
 incarnation: 15685155444461741733
 physical_device_desc: "device: XLA_CPU device", name: "/device:XLA_GPU:0"
 device_type: "XLA_GPU"
 memory_limit: 17179869184
 locality {
 }
 incarnation: 4056441191345727860
 physical_device_desc: "device: XLA_GPU device"]
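
Mapping the device nodes and driver libraries by hand like this is essentially what the NVIDIA runtime automates. With nvidia-docker2 installed, a roughly equivalent invocation (a sketch, not taken from the question above) would be:

docker run -it --rm --runtime=nvidia -v $(pwd):/srv --entrypoint /bin/bash ${MY_IMG}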

Config:

I've installed nvidia-docker and configured the node according to this guide:

/etc/systemd/system/docker.service.d/override.conf:

[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --default-runtime=nvidia --node-generic-resource gpu=GPU-b7ad85d5
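
After changing the override file, systemd has to reload the unit definition and Docker has to be restarted before the new ExecStart takes effect; the advertised GPU should then show up in the swarm node description. A sketch using standard systemd and Docker commands:

sudo systemctl daemon-reload
sudo systemctl restart docker
# the advertised generic resource should now be listed in the node description
docker node inspect self --format '{{ json .Description.Resources.GenericResources }}'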

The NVIDIA container runtime itself is configured in

/etc/nvidia-container-runtime/config.toml:

disable-require = false
swarm-resource = "DOCKER_RESOURCE_GPU"

[nvidia-container-cli]
#root = "/run/nvidia/driver"
#path = "/usr/bin/nvidia-container-cli"
environment = []
#debug = "/var/log/nvidia-container-toolkit.log"
#ldcache = "/etc/ld.so.cache"
load-kmods = true
#no-cgroups = false
#user = "root:video"
ldconfig = "@/sbin/ldconfig.real"

[nvidia-container-runtime]
#debug = "/var/log/nvidia-container-runtime.log"
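
One way to sanity-check this part of the setup, before involving compose at all, is to run nvidia-smi in a plain CUDA container; with --default-runtime=nvidia set as above, no extra flags should be needed (the image tag here is only an example):

docker run --rm nvidia/cuda:10.0-base nvidia-smi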

The relevant part of the docker compose file:

docker-compose.yaml:

version: '3.5'
...
services:
  ...
  my-service:
    ...
    deploy:
      resources:
        reservations:
          generic_resources:
            - discrete_resource_spec:
                kind: 'gpu'
                value: 1
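
The stack is deployed to the swarm with something like (the stack name is a placeholder):

docker stack deploy -c docker-compose.yaml mystack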

Question: What else is needed to get access to the GPU in that docker service?


Solution

  • NVIDIA-Docker only works with Docker Compose file format 2.3; the 3.x format used by docker stack deploy has no way to select the nvidia runtime.

    Change the version to version: '2.3'. Note that docker stack deploy accepts only the 3.x format, so this means running the service with docker-compose rather than deploying it as a swarm stack. A minimal sketch is shown below.

    https://github.com/NVIDIA/nvidia-docker/wiki#do-you-support-docker-compose
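
    For illustration, a minimal 2.3 compose file using the nvidia runtime could look like this (the service and image names are placeholders, not taken from the question):

    version: '2.3'

    services:
      my-service:
        image: my-image   # placeholder image name
        runtime: nvidia   # supported in the 2.3 file format, not in 3.x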