Nvidia docker container runtime doesn't detect my gpu

Specifying the correct Docker version in configuration.nix fixed it.

Originally:
virtualisation.docker.package = pkgs.docker;

After correction:
virtualisation.docker.package = pkgs.docker_25;

Everything else was OK.
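
For anyone else hitting this on NixOS: the Docker package alone isn't enough, since the CDI spec for the GPU also has to be generated. A minimal sketch of the relevant configuration.nix fragment, assuming a recent nixpkgs where the `hardware.nvidia-container-toolkit` option exists (older releases used a differently named option):

```nix
{
  # Docker 25+ is needed for CDI device support
  virtualisation.docker.package = pkgs.docker_25;

  # Generates the nvidia.com/gpu CDI spec from the installed driver
  hardware.nvidia-container-toolkit.enable = true;
}
```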

I’m glad changing it to 25 did the trick. CDI was implemented in 27.1.1 as well, but you might have hit a bug in it that was fixed and backported to 25, and probably to newer 27 releases too.


Hi, I tried that, but it always fails with Error response from daemon: could not select device driver "cdi" with capabilities: [[gpu]]. Here’s my docker-compose.yml.

services:
  panoptic_slam:
    image: "panoptic_slam:latest"
    container_name: panoptic_slam_sys
    environment:
      DISPLAY: $DISPLAY
      PATH: $PATH
      NVIDIA_DRIVER_CAPABILITIES: all
      NVIDIA_VISIBLE_DEVICES: void
    volumes:
      - /tmp/.X11-unix:/tmp/.X11-unix
      - ~/.Xauthority:/root/.Xauthority
      - /dev/bus/usb:/dev/bus/usb
      - ../Dataset:/home/panoptic_slam/Dataset
      - ../Output:/home/panoptic_slam/Output
    device_cgroup_rules:
      - 'c 189:* rmw'
    network_mode: "host"
    privileged: true
    tty: true
    deploy:
      resources:
        reservations:
          devices:
            - driver: cdi
              capabilities: [gpu]
              device_ids:
                - nvidia.com/gpu=all

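One thing worth checking when that error appears: in Docker 25 CDI support is still experimental and gated behind an opt-in feature flag in daemon.json, so the daemon rejects the "cdi" driver unless the flag is set. A sketch of how that could be expressed on NixOS, assuming you use the `virtualisation.docker.daemon.settings` option to manage the generated daemon.json:

```nix
{
  # Opt in to CDI device resolution; rendered into daemon.json as
  # { "features": { "cdi": true } }
  virtualisation.docker.daemon.settings.features.cdi = true;
}
```

The error can also mean no CDI spec was found at all, so it's worth confirming a spec for nvidia.com/gpu exists under /etc/cdi or /var/run/cdi.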
Did it work in your environment?