
Commit

Merge branch 'update-readme' into 'main'
Update readme with new config file format and cli

See merge request nvidia/cloud-native/vgpu-device-manager!10
cdesiniotis committed Sep 6, 2022
2 parents 0efd7cf + 5249441 commit b2ead37
Showing 2 changed files with 199 additions and 28 deletions.
175 changes: 147 additions & 28 deletions README.md
@@ -2,54 +2,173 @@

**Note:** This project is under active development and not yet designed for production use. Use at your own risk.

NVIDIA Virtual GPU (vGPU) enables multiple virtual machines (VMs) to have simultaneous, direct access to a single physical GPU, using the same NVIDIA graphics drivers that are deployed on non-virtualized operating systems.
By doing this, NVIDIA vGPU provides VMs with unparalleled graphics performance, compute performance, and application compatibility, together with the cost-effectiveness and scalability brought about by sharing a GPU among multiple workloads.
Under the control of the NVIDIA Virtual GPU Manager running under the hypervisor, NVIDIA physical GPUs are capable of supporting multiple virtual GPU devices (vGPUs) that can be assigned directly to guest VMs.
To learn more, refer to the [NVIDIA vGPU Software Documentation](https://docs.nvidia.com/grid/).

The `NVIDIA vGPU Device Manager` is a tool designed for system administrators to make working with vGPU devices easier.

It allows administrators to ***declaratively*** define a set of possible vGPU device
configurations they would like applied to all GPUs on a node. At runtime, they
then point `nvidia-vgpu-dm` at one of these configurations, and
`nvidia-vgpu-dm` takes care of applying it. In this way, the same
configuration file can be spread across all nodes in a cluster, and a runtime
flag (or environment variable) can be used to decide which of these
configurations to actually apply to a node at any given time.

As an example, consider the following configuration for a node with two NVIDIA Tesla T4 GPUs.

```
version: v1
vgpu-configs:
  # NVIDIA Tesla T4, Q-Series
  T4-1Q:
  - devices: all
    vgpu-devices:
      "T4-1Q": 16
  T4-2Q:
  - devices: all
    vgpu-devices:
      "T4-2Q": 8
  T4-4Q:
  - devices: all
    vgpu-devices:
      "T4-4Q": 4
  T4-8Q:
  - devices: all
    vgpu-devices:
      "T4-8Q": 2
  T4-16Q:
  - devices: all
    vgpu-devices:
      "T4-16Q": 1
  # Custom configurations
  T4-small:
  - devices: [0]
    vgpu-devices:
      "T4-1Q": 16
  - devices: [1]
    vgpu-devices:
      "T4-2Q": 8
  T4-medium:
  - devices: [0]
    vgpu-devices:
      "T4-4Q": 4
  - devices: [1]
    vgpu-devices:
      "T4-8Q": 2
  T4-large:
  - devices: [0]
    vgpu-devices:
      "T4-8Q": 2
  - devices: [1]
    vgpu-devices:
      "T4-16Q": 1
```

Each of the sections under `vgpu-configs` is user-defined, with custom labels used to refer to them. For example, the `T4-8Q` label refers to the vGPU configuration that creates 2 vGPU devices of type `T4-8Q` on all T4 GPUs on the node. Likewise, the `T4-1Q` label refers to the vGPU configuration that creates 16 vGPU devices of type `T4-1Q` on all T4 GPUs on the node. Finally, the `T4-small` label defines a completely custom configuration which creates 16 `T4-1Q` vGPU devices on the first GPU and 8 `T4-2Q` vGPU devices on the second GPU.

Using the `nvidia-vgpu-dm` tool, the following commands can be run to apply each of these configs in turn:
```
$ nvidia-vgpu-dm apply -f examples/config-t4.yaml -c T4-1Q
$ nvidia-vgpu-dm apply -f examples/config-t4.yaml -c T4-2Q
$ nvidia-vgpu-dm apply -f examples/config-t4.yaml -c T4-4Q
$ nvidia-vgpu-dm apply -f examples/config-t4.yaml -c T4-8Q
$ nvidia-vgpu-dm apply -f examples/config-t4.yaml -c T4-16Q
$ nvidia-vgpu-dm apply -f examples/config-t4.yaml -c T4-small
$ nvidia-vgpu-dm apply -f examples/config-t4.yaml -c T4-medium
$ nvidia-vgpu-dm apply -f examples/config-t4.yaml -c T4-large
```
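
The configuration name is an ordinary flag value, so the per-node selection via an environment variable described earlier reduces to plain shell expansion. A minimal sketch, where `VGPU_CONFIG` is an illustrative variable name rather than anything the tool itself reads:
```
nvidia-vgpu-dm apply -f examples/config-t4.yaml -c "${VGPU_CONFIG:-T4-8Q}"
```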

The currently applied configuration can then be asserted with:
```
$ nvidia-vgpu-dm assert -f examples/config-t4.yaml -c T4-large
INFO[0000] Selected vGPU device configuration is currently applied
$ echo $?
0
$ nvidia-vgpu-dm assert -f examples/config-t4.yaml -c T4-16Q
FATA[0000] Assertion failure: selected configuration not currently applied
$ echo $?
1
```
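
Because `assert` reports only through its exit code, it composes directly with shell scripting. A hedged sketch of an idempotent bootstrap step, assuming the example configuration file from above:
```
# Re-apply only when the desired configuration is not already in effect.
if ! nvidia-vgpu-dm assert -f examples/config-t4.yaml -c T4-large; then
    nvidia-vgpu-dm apply -f examples/config-t4.yaml -c T4-large
fi
```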

## Build `nvidia-vgpu-dm`

```
git clone https://gitlab.com/nvidia/cloud-native/vgpu-device-manager.git
cd vgpu-device-manager
make cmd-nvidia-vgpu-dm
```

This will generate a binary called `nvidia-vgpu-dm` in your current directory.

## Usage

#### Prerequisites

- [NVIDIA vGPU Manager](https://docs.nvidia.com/grid/latest/grid-vgpu-user-guide/index.html#installing-configuring-grid-vgpu) is installed on the system.

#### Apply a specific vGPU device config from a configuration file
```
nvidia-vgpu-dm apply -f examples/config-t4.yaml -c T4-1Q
```

#### Apply a specific vGPU device config with debug output
```
nvidia-vgpu-dm -d apply -f examples/config-t4.yaml -c T4-1Q
```

#### Apply a one-off vGPU device configuration without a configuration file
```
cat <<EOF | nvidia-vgpu-dm apply -f -
version: v1
vgpu-configs:
  T4-1Q:
  - devices: all
    vgpu-devices:
      "T4-1Q": 16
EOF
```

#### Assert a specific vGPU device configuration is currently applied
```
nvidia-vgpu-dm assert -f examples/config-t4.yaml -c T4-1Q
```

#### Assert a one-off vGPU device configuration without a configuration file
```
cat <<EOF | nvidia-vgpu-dm assert -f -
version: v1
vgpu-configs:
  T4-1Q:
  - devices: all
    vgpu-devices:
      "T4-1Q": 16
EOF
```

#### Assert only that the configuration file is valid and the selected config is present in it
```
nvidia-vgpu-dm assert -f examples/config-t4.yaml -c T4-1Q --valid-config
```
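
This makes it possible to sanity-check a configuration file before distributing it to nodes. A small sketch that validates every named config from the example file above:
```
for cfg in T4-1Q T4-2Q T4-4Q T4-8Q T4-16Q T4-small T4-medium T4-large; do
    nvidia-vgpu-dm assert -f examples/config-t4.yaml -c "$cfg" --valid-config
done
```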

## Kubernetes Deployment

The [NVIDIA vGPU Device Manager container](https://catalog.ngc.nvidia.com/orgs/nvidia/teams/cloud-native/containers/vgpu-device-manager) manages vGPU devices on a GPU node in a Kubernetes cluster.
The containerized deployment is only supported through the [NVIDIA GPU Operator](https://docs.nvidia.com/datacenter/cloud-native/gpu-operator/overview.html).
It is not meant to be run as a standalone component.
The instructions below are for deploying the vGPU Device Manager as a standalone DaemonSet, for development purposes.

First, create a vGPU devices configuration file. The example file in `examples/` can be used as a starting point:

52 changes: 52 additions & 0 deletions examples/config-t4.yaml
@@ -0,0 +1,52 @@
version: v1
vgpu-configs:
  # NVIDIA Tesla T4, Q-Series
  T4-1Q:
  - devices: all
    vgpu-devices:
      "T4-1Q": 16

  T4-2Q:
  - devices: all
    vgpu-devices:
      "T4-2Q": 8

  T4-4Q:
  - devices: all
    vgpu-devices:
      "T4-4Q": 4

  T4-8Q:
  - devices: all
    vgpu-devices:
      "T4-8Q": 2

  T4-16Q:
  - devices: all
    vgpu-devices:
      "T4-16Q": 1

  # Custom configurations
  T4-small:
  - devices: [0]
    vgpu-devices:
      "T4-1Q": 16
  - devices: [1]
    vgpu-devices:
      "T4-2Q": 8

  T4-medium:
  - devices: [0]
    vgpu-devices:
      "T4-4Q": 4
  - devices: [1]
    vgpu-devices:
      "T4-8Q": 2

  T4-large:
  - devices: [0]
    vgpu-devices:
      "T4-8Q": 2
  - devices: [1]
    vgpu-devices:
      "T4-16Q": 1
