`nm-vllm` is our supported enterprise distribution of vLLM.
The nm-vllm PyPI package includes pre-compiled binaries for CUDA 12.1 kernels. For other PyTorch or CUDA versions, compile the package from source.
Install it using pip:

```bash
pip install nm-vllm --extra-index-url https://pypi.neuralmagic.com/simple
```
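After installing, you can smoke-test the package from Python. This is a minimal sketch, assuming nm-vllm installs under the standard `vllm` package name and exposes the usual vLLM offline-inference API; the model ID is just an example:

```python
from vllm import LLM, SamplingParams

# nm-vllm is a distribution of vLLM, so the standard
# offline-inference API applies (assumption stated above).
llm = LLM(model="Qwen/Qwen2-0.5B-Instruct")  # example model; any HF model ID works
params = SamplingParams(temperature=0.8, max_tokens=64)

outputs = llm.generate(["What is sparse inference?"], params)
print(outputs[0].outputs[0].text)
```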
To use the weight-sparsity features, install the optional `sparse` extras (quoted so the brackets survive shells like zsh):

```bash
pip install "nm-vllm[sparse]" --extra-index-url https://pypi.neuralmagic.com/simple
```
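With the `sparse` extras installed, a pruned checkpoint can be loaded for sparse inference. A sketch, assuming the `LLM` constructor accepts a `sparsity` keyword and a `sparse_w16a16` kernel name as in nm-vllm's model-loading docs; the checkpoint ID is illustrative:

```python
from vllm import LLM, SamplingParams

# Assumption: nm-vllm's LLM constructor accepts a `sparsity` argument
# selecting the sparse kernel, and "sparse_w16a16" names the 16-bit
# weight/activation sparse kernel. Substitute a real pruned checkpoint.
llm = LLM(
    model="neuralmagic/OpenHermes-2.5-Mistral-7B-pruned50",  # example sparse checkpoint
    sparsity="sparse_w16a16",
)

outputs = llm.generate(["Hello!"], SamplingParams(max_tokens=32))
print(outputs[0].outputs[0].text)
```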
You can also build and install nm-vllm from source (this takes about 10 minutes):

```bash
git clone https://github.com/neuralmagic/nm-vllm.git
cd nm-vllm
pip install -e ".[sparse]" --extra-index-url https://pypi.neuralmagic.com/simple
```
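A quick way to confirm the source build succeeded is to import the package and print its version (assuming, as above, that the package imports as `vllm`):

```python
# Verify the editable install is importable and report its version.
import vllm

print(vllm.__version__)
```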
The `nm-vllm` container registry includes pre-built Docker images.
Launch the OpenAI-compatible server (publishing its default port 8000 to the host) with:

```bash
MODEL_ID=Qwen/Qwen2-0.5B-Instruct
docker run --gpus all -p 8000:8000 --shm-size 2g ghcr.io/neuralmagic/nm-vllm-openai:latest --model $MODEL_ID
```
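Once the container is up, you can query it with any OpenAI-compatible client. A sketch using the official `openai` Python package, assuming the server is reachable on port 8000 as mapped above and no API key is enforced:

```python
from openai import OpenAI

# Point the client at the local nm-vllm server. The API key is unused
# by default, but the client requires a non-empty string.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="Qwen/Qwen2-0.5B-Instruct",  # must match the --model the server was started with
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response.choices[0].message.content)
```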
Neural Magic maintains a variety of optimized models on its Hugging Face organization profiles, [neuralmagic](https://huggingface.co/neuralmagic) and [nm-testing](https://huggingface.co/nm-testing).