diff --git a/README.md b/README.md
index a22f8578f3d64..5bdff29dfd159 100644
--- a/README.md
+++ b/README.md
@@ -1,17 +1,13 @@
-
-
- nm-vllm
-
+# nm-vllm
## Overview
-[vLLM](https://github.com/vllm-project/vllm) is a fast and easy-to-use library for LLM inference that Neural Magic regularly contributes to.
-
-`nm-vllm` is our supported enterprise distribution of vLLM.
+`nm-vllm` is our supported enterprise distribution of [vLLM](https://github.com/vllm-project/vllm).
## Installation
-The [nm-vllm PyPi package](https://pypi.neuralmagic.com/simple/nm-vllm/index.html) includes pre-compiled binaries for CUDA (version 12.1) kernels, streamlining the setup process. For other PyTorch or CUDA versions, please compile the package from source.
+### PyPI
+The [nm-vllm PyPI package](https://pypi.neuralmagic.com/simple/nm-vllm/index.html) includes pre-compiled binaries for CUDA 12.1 kernels, streamlining setup. For other PyTorch or CUDA versions, please compile the package from source.
Install it using pip:
```bash
@@ -30,6 +26,34 @@ cd nm-vllm
pip install -e .[sparse] --extra-index-url https://pypi.neuralmagic.com/simple
```
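+
+As a quick sanity check, you can print the installed version (this assumes `nm-vllm` installs under the standard `vllm` import name):
+
+```bash
+python -c "import vllm; print(vllm.__version__)"
+```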
+### Docker
+
+The [`nm-vllm` container registry](https://github.com/neuralmagic/nm-vllm/pkgs/container/nm-vllm-openai) provides pre-built Docker images.
+
+Launch the OpenAI-compatible server with:
+
+```bash
+MODEL_ID=Qwen/Qwen2-0.5B-Instruct
+docker run --gpus all --shm-size 2g -p 8000:8000 ghcr.io/neuralmagic/nm-vllm-openai:latest --model $MODEL_ID
+```
+
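+Once the server is up, you can send it a standard OpenAI-style chat completion request (a minimal sketch, assuming the default port 8000 is published as above):
+
+```bash
+curl http://localhost:8000/v1/chat/completions \
+    -H "Content-Type: application/json" \
+    -d '{
+        "model": "Qwen/Qwen2-0.5B-Instruct",
+        "messages": [{"role": "user", "content": "Say hello!"}]
+    }'
+```
+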
## Models
Neural Magic maintains a variety of optimized models on our Hugging Face organization profiles: