...

The --rm flag ensures that the container is stopped and automatically cleaned up after it exits.

To run an interactive command such as bash or a Python interpreter, add the -it flag:

...

So far, the container does not have access to the GPUs. To give it access, change the runtime to nvidia and explicitly specify a list of GPUs. The following example uses the 4th and 5th GPUs (indices start at 0):

Code Block
languagenone
$ docker run --runtime=nvidia -e NVIDIA_VISIBLE_DEVICES=3,4 --rm lambda-stack:20.04 nvidia-smi
Thu Sep  1 22:02:25 2022
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 515.65.01    Driver Version: 515.65.01    CUDA Version: 11.7     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA A100-SXM...  On   | 00000000:4C:00.0 Off |                    0 |
| N/A   60C    P0   374W / 400W |  33549MiB / 81920MiB |     47%      Default |
|                               |                      |             Disabled |
+-------------------------------+----------------------+----------------------+
|   1  NVIDIA A100-SXM...  On   | 00000000:88:00.0 Off |                    0 |
| N/A   54C    P0   377W / 400W |  33549MiB / 81920MiB |     86%      Default |
|                               |                      |             Disabled |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
+-----------------------------------------------------------------------------+

Note that GPU device numbers are re-mapped to start from 0 inside the container.
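This remapping can be pictured with a short, illustrative shell loop — it only mimics the renumbering that the NVIDIA runtime performs; it does not talk to Docker or the GPUs:

```shell
# Illustrative only: host GPUs 3 and 4 (as selected via NVIDIA_VISIBLE_DEVICES
# above) show up inside the container as devices 0 and 1.
NVIDIA_VISIBLE_DEVICES="3,4"
i=0
for gpu in $(echo "$NVIDIA_VISIBLE_DEVICES" | tr ',' ' '); do
    echo "container GPU $i -> host GPU $gpu"
    i=$((i + 1))
done
```

So a program inside the container that asks for "GPU 0" will actually run on host GPU 3.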

Note

This method does not prevent multiple containers from accessing the same GPUs. Therefore, make sure to check with other users which GPUs they are using.

This method does ensure that your container will not mistakenly use any GPU other than the ones specified.

An alternative method to access GPUs is the --gpus option:

Code Block
$ docker run --gpus 2 --rm lambda-stack:20.04 nvidia-smi

Unless you are using all 8 GPUs, we strongly recommend against this syntax, as it does not let you choose precisely which GPUs to use.
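If you do want to stick with --gpus, note that the Docker CLI also accepts a device= list, which pins specific GPU indices much like NVIDIA_VISIBLE_DEVICES does. A sketch, echoed rather than executed here (check that your Docker version supports this form; the nested quoting is required by the CLI):

```shell
# Dry run: build (but do not execute) a docker command that pins GPUs 3 and 4
# via the device= form of the --gpus option.
GPUS="3,4"
CMD="docker run --gpus \"device=${GPUS}\" --rm lambda-stack:20.04 nvidia-smi"
echo "$CMD"
```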

Access a folder from the container

...

Code Block
languagenone
$ docker run -v $HOME/my_project:/my_project --rm lambda-stack:20.04 ls /
my_project

Any program running in the container can then access and modify files in the /my_project folder.
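As a small aside on the -v syntax itself, the argument has the form host_path:container_path. The following illustrative snippet splits the example mapping into its two sides (nothing is actually mounted here):

```shell
# Dissect the -v argument: the part before the colon is the host directory,
# the part after is where it appears inside the container.
MOUNT="$HOME/my_project:/my_project"
HOST_SIDE="${MOUNT%:*}"
CONTAINER_SIDE="${MOUNT#*:}"
echo "host side:      $HOST_SIDE"
echo "container side: $CONTAINER_SIDE"
```

Both paths refer to the same directory, which is why changes made on one side are immediately visible on the other.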

...

Code Block
$ docker image rm pytorch-transformers
Untagged: pytorch-transformers:latest
Deleted: sha256:432c6be0a999484db090c5d9904e5c783454080d8ad8bc39e0499ace479c4559
Deleted: sha256:623ae3b33709c2fc4c40bc2c3959049345fee0087d39b4f53eb95aefd1c16f7d

Next steps

This document is a very short introduction to using Docker with Lambda Stack. If you want to know more about Docker in general, we recommend this workshop material and the associated recording on YouTube.