From 5249441c724a36b1e833893fd5ca9f152d4cd222 Mon Sep 17 00:00:00 2001
From: Christopher Desiniotis
Date: Mon, 5 Sep 2022 21:25:40 -0700
Subject: [PATCH] Update readme with new config file format and cli

Signed-off-by: Christopher Desiniotis
---
 README.md               | 175 +++++++++++++++++++++++++++++++++-------
 examples/config-t4.yaml |  52 ++++++++++++
 2 files changed, 199 insertions(+), 28 deletions(-)
 create mode 100644 examples/config-t4.yaml

diff --git a/README.md b/README.md
index c5b3537..7ed3b3c 100644
--- a/README.md
+++ b/README.md
@@ -2,54 +2,173 @@
 
 **Note:** This project is under active development and not yet designed for production use. Use at your own risk.
 
-The `NVIDIA vGPU Device Manager` manages vGPU devices on a GPU node in a Kubernetes cluster.
-It defines a schema for declaratively specifying the list of vGPU types one would like to create on the node.
-The vGPU Device Manager parses this schema and applies the desired config by creating vGPU devices following steps outlined in the
-[NVIDIA vGPU User Guide](https://docs.nvidia.com/grid/latest/grid-vgpu-user-guide/index.html#creating-vgpu-device-red-hat-el-kvm).
+NVIDIA Virtual GPU (vGPU) enables multiple virtual machines (VMs) to have simultaneous, direct access to a single physical GPU, using the same NVIDIA graphics drivers that are deployed on non-virtualized operating systems.
+By doing this, NVIDIA vGPU provides VMs with unparalleled graphics performance, compute performance, and application compatibility, together with the cost-effectiveness and scalability brought about by sharing a GPU among multiple workloads.
+Under the control of the NVIDIA Virtual GPU Manager running under the hypervisor, NVIDIA physical GPUs are capable of supporting multiple virtual GPU devices (vGPUs) that can be assigned directly to guest VMs.
+To learn more, refer to the [NVIDIA vGPU Software Documentation](https://docs.nvidia.com/grid/).
 
-As an example, consider the following configuration for a node with NVIDIA Tesla T4 GPUs.
+The `NVIDIA vGPU Device Manager` is a tool designed for system administrators to make working with vGPU devices easier.
+
+It allows administrators to ***declaratively*** define a set of possible vGPU device
+configurations they would like applied to all GPUs on a node. At runtime, they
+then point `nvidia-vgpu-dm` at one of these configurations, and
+`nvidia-vgpu-dm` takes care of applying it. In this way, the same
+configuration file can be spread across all nodes in a cluster, and a runtime
+flag (or environment variable) can be used to decide which of these
+configurations to actually apply to a node at any given time.
+
+As an example, consider the following configuration for a node with two NVIDIA Tesla T4 GPUs.
 
 ```
 version: v1
 vgpu-configs:
-  default:
-  - "T4-8Q"
-  # NVIDIA Tesla T4, Q-Series
-  T4-16Q:
-  - "T4-16Q"
-  T4-8Q:
-  - "T4-8Q"
-  T4-4Q:
-  - "T4-4Q"
-  T4-2Q:
-  - "T4-2Q"
   T4-1Q:
-  - "T4-1Q"
+  - devices: all
+    vgpu-devices:
+      "T4-1Q": 16
+
+  T4-2Q:
+  - devices: all
+    vgpu-devices:
+      "T4-2Q": 8
+
+  T4-4Q:
+  - devices: all
+    vgpu-devices:
+      "T4-4Q": 4
+
+  T4-8Q:
+  - devices: all
+    vgpu-devices:
+      "T4-8Q": 2
+
+  T4-16Q:
+  - devices: all
+    vgpu-devices:
+      "T4-16Q": 1
 
   # Custom configurations
   T4-small:
-  - "T4-1Q"
-  - "T4-2Q"
+  - devices: [0]
+    vgpu-devices:
+      "T4-1Q": 16
+  - devices: [1]
+    vgpu-devices:
+      "T4-2Q": 8
+
   T4-medium:
-  - "T4-4Q"
-  - "T4-8Q"
+  - devices: [0]
+    vgpu-devices:
+      "T4-4Q": 4
+  - devices: [1]
+    vgpu-devices:
+      "T4-8Q": 2
+
   T4-large:
-  - "T4-8Q"
-  - "T4-16Q"
+  - devices: [0]
+    vgpu-devices:
+      "T4-8Q": 2
+  - devices: [1]
+    vgpu-devices:
+      "T4-16Q": 1
 ```
 
-Each of the sections under `vgpu-configs` is user-defined, with custom labels used to refer to them. For example, the `T4-8Q` label refers to the vGPU configuration that creates vGPU devices of type `T4-8Q` on all T4 GPUs on the node. Likewise, the `T4-1Q` label refers to the vGPU configuration that creates vGPU devices of type `T4-1Q` on all T4 GPUs on the node.
+Each of the sections under `vgpu-configs` is user-defined, with custom labels used to refer to them. For example, the `T4-8Q` label refers to the vGPU configuration that creates 2 vGPU devices of type `T4-8Q` on all T4 GPUs on the node. Likewise, the `T4-1Q` label refers to the vGPU configuration that creates 16 vGPU devices of type `T4-1Q` on all T4 GPUs on the node. Finally, the `T4-small` label defines a completely custom configuration which creates 16 `T4-1Q` vGPU devices on the first GPU and 8 `T4-2Q` vGPU devices on the second GPU.
 
-More than one vGPU type can be associated with a configuration. For example, the `T4-small` label specifies both the `T4-1Q` and `T4-2Q` vGPU types. If the node has multiple T4 cards, then vGPU devices of both types will be created on the node. More specifically, the vGPU Device Manager will select the vGPU types in a round robin fashion as it creates devices. vGPU devices of type `T4-1Q` get created on the first card, vGPU devices of type `T4-2Q` get created on the second card, vGPU devices of type `T4-1Q` get created on the third card, etc.
+Using the `nvidia-vgpu-dm` tool, the following commands can be run to apply each of these configs in turn:
+```
+$ nvidia-vgpu-dm apply -f examples/config-t4.yaml -c T4-1Q
+$ nvidia-vgpu-dm apply -f examples/config-t4.yaml -c T4-2Q
+$ nvidia-vgpu-dm apply -f examples/config-t4.yaml -c T4-4Q
+$ nvidia-vgpu-dm apply -f examples/config-t4.yaml -c T4-8Q
+$ nvidia-vgpu-dm apply -f examples/config-t4.yaml -c T4-16Q
+$ nvidia-vgpu-dm apply -f examples/config-t4.yaml -c T4-small
+$ nvidia-vgpu-dm apply -f examples/config-t4.yaml -c T4-medium
+$ nvidia-vgpu-dm apply -f examples/config-t4.yaml -c T4-large
+```
 
-## Prerequisites
-
-- [NVIDIA vGPU Manager](https://docs.nvidia.com/grid/latest/grid-vgpu-user-guide/index.html#installing-configuring-grid-vgpu) is installed on the system.
+The currently applied configuration can then be asserted with:
+```
+$ nvidia-vgpu-dm assert -f examples/config-t4.yaml -c T4-large
+INFO[0000] Selected vGPU device configuration is currently applied
+
+$ echo $?
+0
+
+$ nvidia-vgpu-dm assert -f examples/config-t4.yaml -c T4-16Q
+FATA[0000] Assertion failure: selected configuration not currently applied
+
+$ echo $?
+1
+```
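+
+After a configuration has been applied, each vGPU is exposed as a mediated (mdev)
+device on the host. As a quick sanity check, the created devices can be listed via
+the kernel's standard sysfs interface for mediated devices (generic Linux
+functionality, not part of `nvidia-vgpu-dm` itself):
+```
+# one entry per vGPU device created by the applied configuration
+ls /sys/bus/mdev/devices/
+```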
+
+## Build `nvidia-vgpu-dm`
+
+```
+git clone https://gitlab.com/nvidia/cloud-native/vgpu-device-manager.git
+cd vgpu-device-manager
+make cmd-nvidia-vgpu-dm
+```
+
+This will generate a binary called `nvidia-vgpu-dm` in your current directory.
 
 ## Usage
 
-**Note:** Currently this project can only be deployed on Kubernetes, and the only supported way is through the [NVIDIA GPU Operator](https://docs.nvidia.com/datacenter/cloud-native/gpu-operator/overview.html). It is not meant to be run as a standalone component and no CLI utility exists. The instructions below are for deploying the vGPU Device Manager as a standalone DaemonSet, for development purposes.
+#### Prerequisites
+
+- [NVIDIA vGPU Manager](https://docs.nvidia.com/grid/latest/grid-vgpu-user-guide/index.html#installing-configuring-grid-vgpu) is installed on the system.
+
+#### Apply a specific vGPU device config from a configuration file
+```
+nvidia-vgpu-dm apply -f examples/config-t4.yaml -c T4-1Q
+```
+
+#### Apply a specific vGPU device config with debug output
+```
+nvidia-vgpu-dm -d apply -f examples/config-t4.yaml -c T4-1Q
+```
+
+#### Apply a one-off vGPU device configuration without a configuration file
+```
+cat <<EOF | nvidia-vgpu-dm apply -f - -c T4-16Q
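+version: v1
+vgpu-configs:
+  # one-off config passed to 'apply' on stdin via '-f -'
+  T4-16Q:
+  - devices: all
+    vgpu-devices:
+      "T4-16Q": 1
+EOF
+```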