OpenClaw

30 MINS

Run OpenClaw locally on DGX Spark with LM Studio or Ollama

| Symptom | Cause | Fix |
| --- | --- | --- |
| OpenClaw dashboard URL not loading | Gateway not running or wrong host/port | Restart the OpenClaw gateway. For Ollama, run `ollama launch openclaw` to restart an already-configured gateway. For LM Studio, restart the OpenClaw gateway via the LM Studio UI, or restart the OpenClaw service/container. Verify the gateway process is running with `pgrep -f openclaw` or `ps aux \| grep openclaw`. |
| "Connection refused" to model (e.g. `localhost:1234` or the Ollama port) | LM Studio or Ollama not running, or wrong port | Start the model in a separate terminal (`lms load ...` or `ollama run ...`) and ensure the port in `openclaw.json` matches (1234 for LM Studio, 11434 for Ollama). |
| OpenClaw says no model available | Model provider not configured or model not loaded | Add the `models` section to `~/.openclaw/openclaw.json` for LM Studio, or run `ollama launch openclaw` for Ollama; ensure the model is loaded and running. |
| Out-of-memory or very slow inference on DGX Spark | Model too large for available GPU memory, or other GPU workloads | Free GPU memory (close other apps), choose a smaller model, or check usage with `nvidia-smi`. |
| Install script fails or dependencies missing | Missing system packages on Linux | Install `curl` and any required build tools; see the OpenClaw documentation for current requirements. |
| Config changes not applied | Gateway not reloaded | Restart the OpenClaw gateway so it reloads `~/.openclaw/openclaw.json`. |
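The "no model available" row says to add a `models` section to `~/.openclaw/openclaw.json`. As a rough illustration only, a provider entry pointing at LM Studio's default OpenAI-compatible endpoint might look like the fragment below — note that the key names (`models`, `providers`, `baseUrl`) and the model name are assumptions for illustration, not OpenClaw's verified schema; check the OpenClaw documentation for the exact format:

```json
{
  "models": {
    "providers": [
      {
        "name": "lmstudio",
        "baseUrl": "http://localhost:1234/v1",
        "model": "your-loaded-model-name"
      }
    ]
  }
}
```

After editing the file, restart the gateway (last row of the table) so the change is picked up.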
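Several of the fixes above boil down to the same two checks: is anything listening on the model server's port, and is the gateway process alive? The sketch below bundles those checks into one script, assuming the default ports from the table (1234 for LM Studio, 11434 for Ollama — adjust if your `openclaw.json` says otherwise):

```shell
#!/usr/bin/env sh
# Quick local health check for an OpenClaw setup.
# Assumes default ports: 1234 (LM Studio), 11434 (Ollama).

check_port() {
  # Report whether anything answers HTTP on localhost:$1.
  if curl -s --max-time 2 "http://localhost:$1/" >/dev/null 2>&1; then
    echo "port $1: reachable"
  else
    echo "port $1: not reachable"
  fi
}

check_port 1234    # LM Studio's default server port
check_port 11434   # Ollama's default API port

# Is the OpenClaw gateway process running?
if command -v pgrep >/dev/null 2>&1 && pgrep -f openclaw >/dev/null 2>&1; then
  echo "gateway: running"
else
  echo "gateway: not running"
fi
```

If a port reports "not reachable", start that backend first; if the gateway reports "not running", restart it as described in the table.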