...
So far, the container does not have access to the GPUs. To give it access, change the runtime to nvidia and explicitly specify which GPUs to expose using the NVIDIA_VISIBLE_DEVICES environment variable. The following example uses the first two GPUs:
```
$ docker run --runtime=nvidia -e NVIDIA_VISIBLE_DEVICES=0,1 --rm lambda-stack:20.04 nvidia-smi
Thu Sep  1 05:40:13 2022
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 515.65.01    Driver Version: 515.65.01    CUDA Version: 11.7     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA A100-SXM...  On   | 00000000:07:00.0 Off |                    0 |
| N/A   48C    P0   166W / 400W |  50691MiB / 81920MiB |     73%      Default |
|                               |                      |             Disabled |
+-------------------------------+----------------------+----------------------+
|   1  NVIDIA A100-SXM...  On   | 00000000:0B:00.0 Off |                    0 |
| N/A   48C    P0   244W / 400W |  34161MiB / 81920MiB |     96%      Default |
|                               |                      |             Disabled |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
+-----------------------------------------------------------------------------+
```
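On Docker 19.03 and later with the NVIDIA Container Toolkit installed, the native `--gpus` flag is an equivalent, more concise way to expose specific GPUs. A minimal sketch, assuming the same `lambda-stack:20.04` image as above:

```
# Equivalent using Docker's native --gpus flag (Docker 19.03+ with the
# NVIDIA Container Toolkit); exposes only GPUs 0 and 1 to the container.
$ docker run --gpus '"device=0,1"' --rm lambda-stack:20.04 nvidia-smi

# Or expose all GPUs on the host:
$ docker run --gpus all --rm lambda-stack:20.04 nvidia-smi
```

Note the nested quoting in `'"device=0,1"'`: the inner double quotes keep the comma-separated device list from being parsed as separate `--gpus` options.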
...