OpenClaw

30 MINS

Run OpenClaw locally on DGX Spark with LM Studio or Ollama

Important: Read security warnings first

CAUTION

Before proceeding, review the security risks in the Overview tab. OpenClaw is an AI agent that can access your files, execute commands, and connect to external services. Data exposure and malicious code execution are real risks. Strongly recommended: Run OpenClaw on an isolated system or VM, use dedicated accounts (not your main accounts), and never expose the dashboard to the public internet without authentication.

Install OpenClaw on your DGX Spark

On your DGX Spark, open a terminal and run the official install script. This installs OpenClaw and its dependencies on your Linux system.

curl -fsSL https://openclaw.ai/install.sh | bash

After dependencies are downloaded, OpenClaw will show a security warning. Read the risks; if you accept them, use the arrow keys to select Yes and press Enter.

Complete the OpenClaw onboarding

Work through the prompts as follows.

  1. Quickstart vs Manual: Choose Quickstart.

  2. Model provider: To use a local model (recommended for DGX Spark), go to the bottom of the list and select Skip for now—you’ll configure the model later. To use a cloud model instead, pick a provider and follow its instructions.

  3. Filtering models by provider: Select All Providers. On the next prompt for the default model, choose Keep Current.

  4. Communication channel: You can connect a channel (e.g., messaging) to use the bot when away from the machine, or select Skip for Now and configure it later.

  5. Skills: We recommend selecting No for now. You can add skills later from the web UI or Clawhub after you’ve tested the basics.

  6. Homebrew: If you are prompted to install Homebrew, select No—Homebrew is for macOS only and is not needed on Linux.

  7. Hooks: We recommend selecting all three for a better experience. Note that this may log data locally; enable only if you’re comfortable with that.

  8. Dashboard URL: The terminal will print a URL for the OpenClaw dashboard. Save this URL (and any access token shown)—you’ll need it to open the web UI.

  9. Finish: Select Yes on the final prompt to complete installation.

You can now open the OpenClaw dashboard in a browser using the URL and token from the installer.

Choose and install a local LLM backend

OpenClaw can use a local LLM served by either LM Studio (best raw performance; built on llama.cpp) or Ollama (simpler to set up and manage). Use a separate terminal on your DGX Spark for the backend so the OpenClaw gateway and the model server can run side by side.

Install one of the following:

Option A – LM Studio

curl -fsSL https://lmstudio.ai/install.sh | bash

Option B – Ollama

curl -fsSL https://ollama.com/install.sh | sh

Select and download a model

Model quality and capability scale with size. Free as much GPU memory as possible (avoid other GPU workloads, enable only the skills you need). DGX Spark has 128GB unified memory, so you can run large models with room to spare.

Suggested models by GPU memory:

| GPU memory | Suggested model | Model size | Notes |
| --- | --- | --- | --- |
| 8–12 GB | qwen3-4B-Thinking-2507 | ~5 GB | |
| 16 GB | gpt-oss-20b | ~12 GB | Lower latency, good for interactive use |
| 24–48 GB | Nemotron-3-Nano-30B-A3B | ~20 GB | |
| 128 GB | gpt-oss-120b | ~65 GB | Best quality on DGX Spark (quantized); leaves ~63 GB for context window and other processes; use 20B/30B if you prefer faster responses |

Quality vs. latency: The 120B model gives the best accuracy and capability but has higher per-token latency. If you prefer snappier replies, use gpt-oss-20b (or a 30B model) instead; both run comfortably on DGX Spark with plenty of memory headroom.
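Before downloading, you can check how much GPU memory is actually free. A minimal sketch (nvidia-smi ships with the NVIDIA driver; the guard keeps it harmless on a machine without one):

```shell
# Report free GPU memory before choosing a model.
# The fallback message keeps this safe to run where nvidia-smi is absent.
if command -v nvidia-smi >/dev/null 2>&1; then
  FREE_MEM=$(nvidia-smi --query-gpu=memory.free --format=csv,noheader)
else
  FREE_MEM="nvidia-smi not available"
fi
echo "Free GPU memory: $FREE_MEM"
```

Compare the reported free memory against the "Model size" column above, leaving headroom for the context window.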

Download the model:

LM Studio

lms get openai/gpt-oss-120b

Ollama

ollama pull gpt-oss:120b

(Use the model name that matches your choice from the table; adjust the lms get or ollama pull command accordingly.)
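To confirm the download completed, both CLIs can list the models stored locally. A sketch, guarded so it runs safely even if only one backend is installed:

```shell
# List locally available models for whichever backend you installed.
# The fallback messages cover the backend you skipped.
LMS_MODELS=$(command -v lms >/dev/null 2>&1 && lms ls || echo "lms not installed")
OLLAMA_MODELS=$(command -v ollama >/dev/null 2>&1 && ollama list || echo "ollama not installed")
echo "$LMS_MODELS"
echo "$OLLAMA_MODELS"
```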

Run the model with a large context window

OpenClaw works best with a context window of 32K tokens or more.

LM Studio

lms load openai/gpt-oss-120b --context-length 32768

Ollama

ollama run gpt-oss:120b

Once the interactive prompt appears, set the context window:

>>> /set parameter num_ctx 32768

Keep this terminal (or process) running so the model stays loaded. You can now chat with the model or press Ctrl+D to exit the interactive mode while keeping the model server running.
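With Ollama, the /set change lasts only for the current session. One way to persist the 32K context is to bake it into a derived model via Ollama's standard Modelfile mechanism; a sketch (the name gpt-oss-32k is our choice, and FROM must match a model you pulled):

```shell
# Bake the 32K context into a derived model so /set isn't needed each run.
# "gpt-oss-32k" is an arbitrary name; FROM must match a pulled model.
cat > Modelfile <<'EOF'
FROM gpt-oss:120b
PARAMETER num_ctx 32768
EOF
```

Then build and run the derived model with ollama create gpt-oss-32k -f Modelfile followed by ollama run gpt-oss-32k.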

TIP

If you see out-of-memory (OOM) errors: Try a smaller context (e.g. 16384) or switch to a smaller model (e.g. gpt-oss-20b). Monitor memory with nvidia-smi while the model is loaded.

Configure OpenClaw to use your local model

If you use LM Studio:

  1. Open the OpenClaw config file in your preferred editor (e.g. nano, vim, or a graphical editor). The config path is:

    ~/.openclaw/openclaw.json
    

    Example with nano:

    nano ~/.openclaw/openclaw.json
    
  2. Add or update the models section so it includes the LM Studio provider. Example for gpt-oss-120b (DGX Spark):

"models": {
  "mode": "merge",
  "providers": {
    "lmstudio": {
      "baseUrl": "http://localhost:1234/v1",
      "apiKey": "lmstudio",
      "api": "openai-responses",
      "models": [
        {
          "id": "openai/gpt-oss-120b",
          "name": "openai/gpt-oss-120b",
          "reasoning": false,
          "input": ["text"],
          "cost": {
            "input": 0,
            "output": 0,
            "cacheRead": 0,
            "cacheWrite": 0
          },
          "contextWindow": 32768,
          "maxTokens": 4096
        }
      ]
    }
  }
}

For gpt-oss-20b or another model, use the same structure but set id and name to match the model you loaded (e.g. openai/gpt-oss-20b). Adjust contextWindow and maxTokens if needed.
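A hand-edited JSON file is easy to break with a stray comma. A quick syntax check before restarting anything (assumes python3 is available, which DGX OS ships by default):

```shell
# Validate the edited config; json.tool reports any syntax error with a
# line number. Uses the config path from the step above.
CONFIG="$HOME/.openclaw/openclaw.json"
if [ -f "$CONFIG" ]; then
  python3 -m json.tool "$CONFIG" >/dev/null && RESULT="valid JSON" || RESULT="syntax error"
else
  RESULT="not found at $CONFIG"
fi
echo "openclaw.json: $RESULT"
```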

If you use Ollama:

Run:

ollama launch openclaw

If the OpenClaw gateway is already running, it should pick up the new configuration automatically. You can add --config to configure without launching the gateway yet.

Verify the setup

  1. In a browser, open the OpenClaw dashboard URL (and use the access token if required).
  2. Start a new conversation and send a short message.
  3. If you get a reply from the agent, the setup is working.

You can also ask OpenClaw which model it’s using. In the gateway chat UI you can switch models by typing: /model MODEL_NAME.
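If the dashboard doesn't respond, you can rule out the model server by querying it directly. A sketch using the default ports from this guide (LM Studio's OpenAI-compatible endpoint on 1234; Ollama's API on 11434):

```shell
# Ask each backend what models it is serving; falls back to a message
# if that server isn't reachable (ports are the defaults used above).
LMS_CHECK=$(curl -s http://localhost:1234/v1/models || echo "LM Studio not reachable on :1234")
OLLAMA_CHECK=$(curl -s http://localhost:11434/api/tags || echo "Ollama not reachable on :11434")
echo "$LMS_CHECK"
echo "$OLLAMA_CHECK"
```

If the backend answers but OpenClaw doesn't, the problem is in the gateway configuration rather than the model server.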

Optional: add skills and learn more

  • Skills add capabilities but also risk; only enable skills you trust (e.g., community-vetted ones). To add a skill:

    • Ask OpenClaw to configure a skill, or
    • Use the sidebar in the web UI to enable skills, or
    • Browse Clawhub for community skills.
  • For more usage and configuration details, see the OpenClaw documentation.