Copyright © 2026 NVIDIA Corporation

LM Studio on DGX Spark

30 MIN

Deploy LM Studio and serve LLMs on a Spark device; use LM Link to access models remotely.

Tags: Inference, LM Link, LM Studio, llmster

Step 1
Install llmster on the DGX Spark

llmster is LM Studio's terminal-native, headless daemon.

You can install it on servers, cloud instances, machines with no GUI, or just on your computer. This is useful for running LM Studio in headless mode on DGX Spark, then connecting to it from your laptop via the API.

On your Spark, install llmster by running:

curl -fsSL https://lmstudio.ai/install.sh | bash

On Windows (for example, to install llmster on a different machine), use PowerShell:

irm https://lmstudio.ai/install.ps1 | iex

Once installed, follow the instructions in your terminal output to add lms to your PATH. You can then interact with LM Studio through the lms CLI, the SDK, the LM Studio V1 REST API (new, with enhanced features), or the OpenAI-compatible REST API.
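If lms is not found after installation, you can add it to your PATH by hand. A sketch, assuming the default install directory (the installer prints the exact line for your shell):

```shell
# The directory below is the default lms install location; this is an
# assumption here - use the line printed by the installer if it differs.
export PATH="$HOME/.lmstudio/bin:$PATH"

# Confirm the CLI is now resolvable (prints its path if found):
command -v lms || echo "lms not on PATH yet; open a new shell or re-check"
```

Add the export line to your shell profile (for example ~/.bashrc) to make it persistent.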

Step 2
Download Required Ancillary Files

Run the following curl commands in your local terminal to download files required to complete later steps in this playbook. You may choose from Python, JavaScript, or Bash.

# JavaScript
curl -L -O https://raw.githubusercontent.com/lmstudio-ai/docs/main/_assets/nvidia-spark-playbook/js/run.js

# Python
curl -L -O https://raw.githubusercontent.com/lmstudio-ai/docs/main/_assets/nvidia-spark-playbook/py/run.py

# Bash
curl -L -O https://raw.githubusercontent.com/lmstudio-ai/docs/main/_assets/nvidia-spark-playbook/bash/run.sh

Step 3
Start the LM Studio API Server

Use lms, LM Studio's CLI, to start the server from your terminal. Enable local network access, which allows the LM Studio API server running on your machine to be accessed by all other devices on the same local network (make sure they are trusted devices). To do this, run the following command:

lms server start --bind 0.0.0.0 --port 1234

To test the connectivity between your laptop and your Spark, run the following command in your local terminal:

curl http://<SPARK_IP>:1234/api/v1/models 

where <SPARK_IP> is your device's IP address. You can find your Spark’s IP address by running this on your Spark:

hostname -I
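The two checks above can be combined into a single snippet. A sketch, with two assumptions of mine: the first address printed by `hostname -I` is the Spark's LAN address, and 192.0.2.10 is a placeholder you replace with yours:

```shell
#!/usr/bin/env bash
# Sketch combining the connectivity checks above.

first_ip() {
  # Print the first whitespace-separated token (the first address).
  awk '{print $1}'
}

# On the Spark itself you could run: SPARK_IP=$(hostname -I | first_ip)
# From your laptop, set it by hand instead (192.0.2.10 is a placeholder):
SPARK_IP="${SPARK_IP:-192.0.2.10}"

# --max-time keeps the check from hanging if the server is unreachable.
curl --silent --max-time 5 "http://${SPARK_IP}:1234/api/v1/models" \
  || echo "No response from ${SPARK_IP}:1234 - is the server started?"
```

If the server is reachable, the endpoint returns a JSON list of available models.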

Step 3b. (Optional) Connect with LM Link

LM Link lets you use your Spark’s models from your laptop (or other devices) as if they were local, over an end-to-end encrypted connection. You don’t need to be on the same local network or bind the server to 0.0.0.0.

  1. Create a Link — Go to lmstudio.ai/link and follow Create your Link to set up your private LM Link network.
  2. Link both devices — On your DGX Spark (llmster) and on your laptop, sign in and join the same Link. LM Link uses Tailscale mesh VPNs; devices communicate without opening ports to the internet.
  3. Use remote models — On your laptop, open LM Studio (or use the local server). Remote models from your Spark appear in the model loader. Any tool that connects to localhost:1234 — including the LM Studio SDK, Codex, Claude Code, OpenCode, and the scripts in Step 6 — can use those models without changing the endpoint.

LM Link is in Preview and is free for up to 2 users, 5 devices each. For details and limits, see LM Link.

Step 4
Download a model to your Spark

As an example, let's download and run gpt-oss 120B, one of the strongest open-weight models from OpenAI. This model is too large for many laptops due to memory limitations, which makes it a fantastic use case for the Spark.

lms get openai/gpt-oss-120b

This download will take a while due to its large size. Verify that the model has been successfully downloaded by listing your models:

lms ls

Step 5
Load the model

Load the model on your Spark so that it is ready to respond to requests from your laptop.

lms load openai/gpt-oss-120b

Step 6
Set up a simple program that uses the LM Studio SDK on your laptop

Install the LM Studio SDKs and use a simple script to send a prompt to your Spark and validate the response. To get started quickly, we provide simple scripts below for Python, JavaScript, and Bash. Download the scripts from the Overview page of this playbook and run the corresponding command from the directory containing it.

NOTE

Within each script, replace <SPARK_IP> with the IP address of your DGX Spark on your local network.

JavaScript

Prerequisites: npm and Node.js are installed

npm install @lmstudio/sdk
node run.js

Python

Prerequisites: uv is installed

uv run --script run.py

Bash

Prerequisites: jq and curl are installed

bash run.sh
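All three scripts boil down to one chat request against the Spark's OpenAI-compatible endpoint. A minimal Bash sketch of that request (the model name comes from Step 4; the payload shape follows the OpenAI-compatible chat format; the IP is a placeholder):

```shell
#!/usr/bin/env bash
# Sketch of the request the run.* scripts send.
SPARK_IP="${SPARK_IP:-192.0.2.10}"   # placeholder; use your Spark's IP

build_payload() {
  # Assemble a single-turn chat request body.
  local model="$1" prompt="$2"
  printf '{"model":"%s","messages":[{"role":"user","content":"%s"}]}' \
    "$model" "$prompt"
}

payload=$(build_payload "openai/gpt-oss-120b" "Say hello from the Spark")
echo "$payload"

# Uncomment to send for real (jq extracts just the reply text):
# curl --silent "http://${SPARK_IP}:1234/v1/chat/completions" \
#   -H "Content-Type: application/json" -d "$payload" \
#   | jq -r '.choices[0].message.content'
```

The provided scripts add error handling and streaming on top of this, but the request shape is the same.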

Step 7
Next Steps

  • Try downloading and serving different models from the LM Studio model catalog.
  • Use LM Link to connect more devices and use your Spark’s models from anywhere with end-to-end encryption.

Step 8
Cleanup and rollback

Remove and uninstall LM Studio completely if needed. Note that LM Studio stores models separately from the application. Uninstalling LM Studio will not remove downloaded models unless you explicitly delete them.

If you want to remove the entire LM Studio application, quit LM Studio from the tray first, then move the application to trash.

To uninstall llmster, remove the folder ~/.lmstudio/llmster.

To remove downloaded models, delete the contents of ~/.lmstudio/models/.
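The removal steps above can be run as one script. A sketch using only the paths this playbook documents:

```shell
#!/usr/bin/env bash
# Sketch of the cleanup steps above, using only the paths listed in
# this playbook. Review before running: rm -rf is irreversible.
set -u
LMSTUDIO_HOME="${HOME}/.lmstudio"

# Remove the llmster daemon install.
rm -rf "${LMSTUDIO_HOME:?}/llmster"

# Remove downloaded models (this frees the bulk of the disk space).
rm -rf "${LMSTUDIO_HOME:?}/models"

echo "Removed llmster and downloaded models from ${LMSTUDIO_HOME}"
```

The `${VAR:?}` expansion aborts the script if the variable is unset, a small guard against an accidental `rm -rf /...`.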

Resources

  • LM Studio Documentation
  • LM Link (use local models remotely)
  • DGX Spark Documentation
  • DGX Spark Forum
  • LM Studio Discord