OpenClaw
30 MINS
Run OpenClaw locally on DGX Spark with LM Studio or Ollama
| Symptom | Cause | Fix |
|---|---|---|
| OpenClaw dashboard URL not loading | Gateway not running, or wrong host/port | Restart the OpenClaw gateway. For Ollama, run `ollama launch openclaw` to restart an already-configured gateway; for LM Studio, restart the gateway via the LM Studio UI or restart the OpenClaw service/container. Verify the gateway process is running with `pgrep -f openclaw` or `ps aux \| grep openclaw` |
| "Connection refused" to model (e.g. `localhost:1234` or the Ollama port) | LM Studio or Ollama not running, or wrong port | Start the model in a separate terminal (`lms load ...` or `ollama run ...`) and ensure the port in `openclaw.json` matches (1234 for LM Studio, 11434 for Ollama) |
| OpenClaw says no model available | Model provider not configured or model not loaded | Add the `models` section to `~/.openclaw/openclaw.json` for LM Studio, or run `ollama launch openclaw` for Ollama; ensure the model is loaded and running |
| Out-of-memory or very slow inference on DGX Spark | Model too large for available GPU memory, or other GPU workloads | Free GPU memory (close other apps), choose a smaller model, or check usage with `nvidia-smi` |
| Install script fails or dependencies missing | Missing system packages on Linux | Install `curl` and any required build tools; see the OpenClaw documentation for current requirements |
| Config changes not applied | Gateway not reloaded | Restart the OpenClaw gateway so it reloads `~/.openclaw/openclaw.json` |
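The checks in the table can be run in one pass. The sketch below assumes the default ports from the table (1234 for LM Studio's OpenAI-compatible server, 11434 for Ollama); adjust them if your `openclaw.json` differs.

```shell
# Quick diagnostics for the symptoms above. Each check prints a hint
# instead of failing, so the script runs to completion either way.

# Is the OpenClaw gateway process running?
pgrep -f openclaw || echo "OpenClaw gateway not running"

# Is LM Studio serving on its default port? (OpenAI-compatible endpoint)
curl -sf http://localhost:1234/v1/models || echo "LM Studio not reachable on port 1234"

# Is Ollama serving on its default port?
curl -sf http://localhost:11434/api/tags || echo "Ollama not reachable on port 11434"

# How much GPU memory is in use?
nvidia-smi --query-gpu=memory.used,memory.total --format=csv || echo "nvidia-smi not available"
```

If the gateway check fails, restart it; if a port check fails, start the corresponding model server first, then reload the OpenClaw dashboard.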