SGLang Inference Server
Install and use SGLang on DGX Spark
Verify system prerequisites
Check that your NVIDIA DGX Spark meets all requirements before proceeding. This step runs on your host system and confirms that Docker, the NVIDIA GPU drivers, and the NVIDIA Container Toolkit are properly configured.
# Verify Docker installation
docker --version
# Check NVIDIA GPU drivers
nvidia-smi
# Verify Docker GPU support
docker run --rm --gpus all lmsysorg/sglang:spark nvidia-smi
# Check available disk space
df -h /
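If you prefer a single scripted check, the commands above can be approximated with a short Python sketch. The tool names (`docker`, `nvidia-smi`) are the ones used in this guide; the helper functions are illustrative, not part of any SGLang tooling.

```python
import shutil
import subprocess

def missing_tools(required):
    """Return the subset of required command-line tools not found on PATH."""
    return [tool for tool in required if shutil.which(tool) is None]

def check_prerequisites():
    """Verify the host tools this guide relies on are present and usable."""
    missing = missing_tools(["docker", "nvidia-smi"])
    if missing:
        print(f"Missing tools: {', '.join(missing)}")
        return False
    # Confirm the Docker client can reach the daemon
    result = subprocess.run(["docker", "version"],
                            capture_output=True, text=True)
    return result.returncode == 0
```

Run `check_prerequisites()` before pulling the container; it does not replace the `docker run --gpus all` smoke test above, which is still the definitive check for GPU passthrough.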
Pull the SGLang Container
Download the SGLang container image. This step runs on the host and may take several minutes depending on your network connection.
# Pull the SGLang container
docker pull lmsysorg/sglang:spark
# Verify the image was downloaded
docker images | grep sglang
Launch SGLang container for server mode
Start the SGLang container in server mode to enable HTTP API access. This runs the inference server inside the container, exposing it on port 30000 for client connections.
# Launch container with GPU support and port mapping
docker run --gpus all -it --rm \
  -p 30000:30000 \
  -v /tmp:/tmp \
  lmsysorg/sglang:spark \
  bash
Start the SGLang inference server
Inside the container, launch the HTTP inference server with a supported model. This step runs inside the Docker container and starts the SGLang server daemon.
# Start the inference server with the DeepSeek-V2-Lite model
python3 -m sglang.launch_server \
  --model-path deepseek-ai/DeepSeek-V2-Lite \
  --host 0.0.0.0 \
  --port 30000 \
  --trust-remote-code \
  --tp 1 \
  --attention-backend flashinfer \
  --mem-fraction-static 0.75 &
# Wait for the server to initialize (the first run takes longer while the model downloads)
sleep 30
# Check server status
curl http://localhost:30000/health
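A fixed `sleep 30` may be too short on the first run, when the model weights are still downloading. As an alternative, you can poll the health endpoint until the server answers; the sketch below assumes the default URL and port used in this guide, and the helper names are illustrative.

```python
import time
import urllib.error
import urllib.request

HEALTH_URL = "http://localhost:30000/health"  # port chosen when launching the server

def poll_intervals(max_wait=120, step=2):
    """Sleep intervals used while waiting, totalling at most max_wait seconds."""
    return [step] * (max_wait // step)

def wait_for_server(url=HEALTH_URL, max_wait=120):
    """Return True once the health endpoint responds with HTTP 200."""
    for delay in poll_intervals(max_wait):
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                if resp.status == 200:
                    return True
        except (urllib.error.URLError, OSError):
            time.sleep(delay)
    return False
```

Call `wait_for_server()` before sending any inference requests; it gives up after roughly two minutes, which you may want to extend for large models.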
Test client-server inference
From a new terminal on your host system, test the SGLang server API to ensure it's working correctly. This validates that the server is accepting requests and generating responses.
# Test with curl
curl -X POST http://localhost:30000/generate \
  -H "Content-Type: application/json" \
  -d '{
    "text": "What does NVIDIA love?",
    "sampling_params": {
      "temperature": 0.7,
      "max_new_tokens": 100
    }
  }'
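Besides its native `/generate` route, SGLang also serves OpenAI-compatible endpoints; if your version exposes `/v1/chat/completions`, a chat-style request can be built as below. The URL and response shape follow the OpenAI convention, and the model name must match the one passed to `--model-path`.

```python
import json
import urllib.request

API_URL = "http://localhost:30000/v1/chat/completions"  # OpenAI-compatible route

def build_chat_request(prompt, model="deepseek-ai/DeepSeek-V2-Lite",
                       temperature=0.7, max_tokens=100):
    """Assemble an OpenAI-style chat completion request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
        "max_tokens": max_tokens,
    }

def chat(prompt):
    """POST the request and return the assistant's reply text."""
    body = json.dumps(build_chat_request(prompt)).encode()
    req = urllib.request.Request(
        API_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req, timeout=60) as resp:
        return json.loads(resp.read())["choices"][0]["message"]["content"]
```

This route is convenient when an application already speaks the OpenAI API, since only the base URL needs to change.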
Test Python client API
Create a simple Python script to test programmatic access to the SGLang server. This runs on the host system and demonstrates how to integrate SGLang into applications.
import requests

# Send prompt to server
response = requests.post('http://localhost:30000/generate', json={
    'text': 'What does NVIDIA love?',
    'sampling_params': {
        'temperature': 0.7,
        'max_new_tokens': 100,
    },
})
print(f"Response: {response.json()['text']}")
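For interactive applications you may want tokens as they are produced rather than one final response. If your SGLang version supports streaming on `/generate` (requested with `"stream": true` and delivered as server-sent events), it can be consumed along these lines; the parsing helper is illustrative.

```python
import json

GENERATE_URL = "http://localhost:30000/generate"  # same endpoint as above

def parse_sse_line(line):
    """Extract the JSON payload from one server-sent-events line, or None."""
    if not line or not line.startswith("data:"):
        return None
    chunk = line[len("data:"):].strip()
    if chunk == "[DONE]":  # end-of-stream marker
        return None
    return json.loads(chunk)

def stream_generate(prompt):
    """Print generated text incrementally as the server streams it."""
    import requests  # same client library used in the example above
    with requests.post(GENERATE_URL, stream=True, json={
        "text": prompt,
        "sampling_params": {"temperature": 0.7, "max_new_tokens": 100},
        "stream": True,
    }) as resp:
        for line in resp.iter_lines(decode_unicode=True):
            data = parse_sse_line(line)
            if data is not None:
                print(data.get("text", ""), end="", flush=True)
```

Check the server logs if no chunks arrive: streaming support and chunk format can vary between SGLang releases.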
Validate installation
Confirm that the server is working correctly. This step verifies the complete SGLang setup and ensures reliable operation.
# Check server mode (from host)
curl http://localhost:30000/health
curl -X POST http://localhost:30000/generate -H "Content-Type: application/json" \
  -d '{"text": "Hello", "sampling_params": {"max_new_tokens": 10}}'
# Check container logs
docker ps
docker logs <CONTAINER_ID>
Cleanup and rollback
Stop and remove containers to clean up resources. This step returns your system to its original state.
WARNING
This will stop all SGLang containers and remove temporary data.
# Stop all SGLang containers (-r: do nothing if none are running)
docker ps | grep sglang | awk '{print $1}' | xargs -r docker stop
# Remove stopped containers
docker container prune -f
# Remove SGLang images (optional)
docker rmi lmsysorg/sglang:spark
Next steps
With SGLang successfully deployed, you can now:
- Integrate the HTTP API into your applications using the /generate endpoint
- Experiment with different models by changing the --model-path parameter
- Scale up using multiple GPUs by adjusting the --tp (tensor parallel) setting
- Deploy production workloads using the container orchestration platform of your choice
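As a starting point for application integration, the /generate endpoint can be wrapped in a small client class. The class and method names below are illustrative, not part of SGLang itself.

```python
import json
import urllib.request

class SGLangClient:
    """Minimal wrapper around the SGLang /generate HTTP API (illustrative)."""

    def __init__(self, base_url="http://localhost:30000"):
        self.base_url = base_url.rstrip("/")

    def build_payload(self, prompt, temperature=0.7, max_new_tokens=100):
        """Build the request body in the shape used throughout this guide."""
        return {
            "text": prompt,
            "sampling_params": {
                "temperature": temperature,
                "max_new_tokens": max_new_tokens,
            },
        }

    def generate(self, prompt, **sampling):
        """POST a prompt and return the generated text."""
        body = json.dumps(self.build_payload(prompt, **sampling)).encode()
        req = urllib.request.Request(
            f"{self.base_url}/generate",
            data=body,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req, timeout=60) as resp:
            return json.loads(resp.read())["text"]
```

Usage is then a one-liner, e.g. `SGLangClient().generate("Hello", max_new_tokens=10)`, and the base URL can point at a remote host in a multi-machine deployment.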