...

You should see the lambda-stack image in the list; this is the image we will use for now.

To start a program in the container, use the docker run command with the --rm flag:

...

So far, the container does not have access to the GPUs. To give it access to them, you need to change the runtime to nvidia and explicitly specify the list of GPUs it can see. The following example uses the first 2 GPUs:

Code Block
breakoutModewide
languagebash
$ docker run --runtime=nvidia -e NVIDIA_VISIBLE_DEVICES=0,1 --rm lambda-stack:20.04 nvidia-smi
TODO output
Note

This method does not prevent multiple containers from accessing the same GPUs. Therefore, make sure to check with the other users which GPUs they are currently using.

It does, however, ensure that your container will not mistakenly use any GPU other than the ones you specified.
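
One simple way to see which GPUs are currently in use is to run nvidia-smi directly on the host and check the memory usage and process list reported for each GPU:

Code Block
languagebash
$ nvidia-smi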

Recent versions of Docker (19.03 and later) also provide a --gpus flag, which can be used instead of --runtime=nvidia together with NVIDIA_VISIBLE_DEVICES; see the example below.
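
As a sketch, the following command should be equivalent to the one above, assuming the NVIDIA Container Toolkit is installed on the host:

Code Block
languagebash
$ docker run --gpus '"device=0,1"' --rm lambda-stack:20.04 nvidia-smi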

Access a folder from the container

A container is isolated from the host environment by default. Bind mounts allow you to make a folder from the host machine available inside the container. (A bind mount maps an existing directory of the host into the container, whereas a Docker volume is a storage area created and managed by Docker itself; here we use bind mounts.)

Specify the path to the directory on the host and the corresponding path inside the container using the -v flag:

Code Block
-v <path on host>:<path in container>

For example, assuming you have a project folder in $HOME/my_project and want to access it as /my_project in the container, you would use:

Code Block
languagebash
$ docker run -v $HOME/my_project:/my_project --rm lambda-stack:20.04 ls /
TODO output

Any program running in the container can then access and modify files in the /my_project folder.
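
For instance, assuming the project folder contains a training script (train.py here is just a hypothetical name), you could run it on the first 2 GPUs like this:

Code Block
languagebash
$ docker run --runtime=nvidia -e NVIDIA_VISIBLE_DEVICES=0,1 -v $HOME/my_project:/my_project --rm lambda-stack:20.04 python3 /my_project/train.py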

You can repeat the -v flag to mount multiple folders in the container.
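
For example, to also mount a hypothetical datasets folder next to the project folder:

Code Block
languagebash
$ docker run -v $HOME/my_project:/my_project -v $HOME/datasets:/datasets --rm lambda-stack:20.04 ls /my_project /datasets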

Add packages to the container

The container may not have all the packages you need. To add more packages, you can build a new image based on the Lambda Stack one and start your containers from it.

To build an image, you need a definition file called a Dockerfile. It contains the information about the base image and the installation instructions for the additional packages.

In the following Dockerfile example, the Transformers library (PyTorch version) from Hugging Face is added on top of the Lambda Stack image:

Code Block
FROM lambda-stack:20.04
RUN pip install "transformers[torch]"

To build the corresponding image, first create an empty folder and save the Dockerfile in it:

Code Block
languagebash
$ mkdir hugging_container
$ echo "FROM lambda-stack:20.04" > hugging_container/Dockerfile
$ echo "pip install pip install transformers[torch]" >> hugging_container/Dockerfile

then use the docker build command to build the new image:

Code Block
languagebash
$ cd hugging_container
$ docker build -t $USER/hugging .

You can check that the new image is available by listing the local images:
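
Code Block
languagebash
$ docker images $USER/hugging

As a quick, hypothetical sanity check, you can also verify that the Transformers library can be imported in the new image:

Code Block
languagebash
$ docker run --rm $USER/hugging python3 -c "import transformers; print(transformers.__version__)"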

Next steps

TODO do more with containers