From 215d5a29aaae764eafa4a731af8ddd0ae9550e34 Mon Sep 17 00:00:00 2001
From: Domenic Barbuzzi
Date: Wed, 24 Jul 2024 18:23:51 +0000
Subject: [PATCH] Update README

---
 README.md | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/README.md b/README.md
index d28141f..0fd3f00 100644
--- a/README.md
+++ b/README.md
@@ -2,18 +2,18 @@
 
 ## Overview
 
-The `nm-vllm` packages published in this repository are Neuralmagic Enterprise Edition of [vLLM](https://github.com/vllm-project/vllm). Packages are versioned Python wheels and docker images. These are released as "production level" official releases and "beta level" Nightly's.
+The `nm-vllm` packages published in this repository are Neuralmagic Enterprise Edition of [vLLM](https://github.com/vllm-project/vllm). Packages are versioned Python wheels and docker images. These are released as "production level" official releases and "beta level" nightly releases.
 
-Official releases are made at the discretion of Neuralmagic, but typically track with `vllm` releases. These wheels are available via "public pypi" as well as ["nm-pypi"](https://pypi.neuralmagic.com).
+Official releases are made at the discretion of Neuralmagic, but typically track with `vllm` releases. These wheels are available via the official PyPI as well as [Neuralmagic's PyPI](https://pypi.neuralmagic.com).
 
-Nightly's are released every night given green runs in automation. The wheels are available at ["nm-pypi"](https://pypi.neuralmagic.com).
+Nightly builds are released every night given green runs in automation. The wheels are available at [Neuralmagic's PyPI](https://pypi.neuralmagic.com).
 
 ## Installation
 
 ### PyPI
 
-The [nm-vllm PyPi package](https://pypi.neuralmagic.com/simple/nm-vllm/index.html) includes pre-compiled binaries for CUDA (version 12.1) kernels. For other PyTorch or CUDA versions, please compile the package from source.
+The [nm-vllm PyPI package](https://pypi.neuralmagic.com/simple/nm-vllm/index.html) includes pre-compiled binaries for CUDA (version 12.1) kernels. For other PyTorch or CUDA versions, please compile the package from source.
 
 Install it using pip:
 ```bash
@@ -40,11 +40,11 @@ docker run --gpus all --shm-size 2g ghcr.io/neuralmagic/nm-vllm-ent:latest --mod
 
 ## Benchmarks
 
-Please see our benchmarking results [here]( https://neuralmagic.github.io/nm-vllm-certs/dev/bench/).
+Please see our benchmark results [here]( https://neuralmagic.github.io/nm-vllm-certs/dev/bench/).
 
 ## Models
 
-Neural Magic maintains a variety of optimized models on our Hugging Face organization profiles:
+Neuralmagic maintains a variety of optimized models on our Hugging Face organization profiles:
 
 - [neuralmagic](https://huggingface.co/neuralmagic)
 - [nm-testing](https://huggingface.co/nm-testing)
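
For anyone trying out the installation section this patch touches, below is a minimal sketch of the install-and-run flow the patched README describes. The `--extra-index-url` flag and the `--model` placeholder are assumptions (the hunk-header context above truncates the docker command after `--mod`), so treat the arguments as illustrative rather than the README's verbatim commands.

```bash
# Illustrative sketch only; the pip flag and the model id below are assumptions,
# not taken verbatim from the patch.

# Install the nm-vllm wheel, pulling from Neuralmagic's PyPI alongside the default index.
pip install nm-vllm --extra-index-url https://pypi.neuralmagic.com/simple

# Run the published enterprise image; replace the placeholder with a real Hugging Face model id.
docker run --gpus all --shm-size 2g \
  ghcr.io/neuralmagic/nm-vllm-ent:latest \
  --model <huggingface-model-id>
```

If a given wheel (e.g. a nightly build) is published only on Neuralmagic's index, `--index-url` rather than `--extra-index-url` may be the more appropriate pip flag; the README mentions both the official PyPI and Neuralmagic's PyPI, so either setup is plausible.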