OpenClaw
Run OpenClaw locally on DGX Spark with LM Studio or Ollama
Basic idea
OpenClaw (formerly Clawdbot & Moltbot) is a local-first AI agent that runs on your machine. It combines multiple capabilities into a single assistant: it remembers conversations, adapts to your usage, runs continuously, uses context from your files and apps, and can be extended with community skills.
Running OpenClaw and its LLMs fully on your DGX Spark keeps your data private and avoids ongoing cloud API costs. DGX Spark is well suited for this: it runs Linux, is designed to stay on, and has 128 GB of unified memory, so it can host large local models for better accuracy and more capable behavior.
What you'll accomplish
You will have OpenClaw installed on your DGX Spark and connected to a local LLM (via LM Studio or Ollama). You can use the OpenClaw web UI to chat with your agent, and optionally connect communication channels and skills. The agent and models run entirely on your Spark—no data leaves your machine unless you add cloud or external integrations.
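Both LM Studio and Ollama expose an OpenAI-compatible HTTP API on localhost, which is how OpenClaw (and any other client) talks to the local model. The sketch below shows that round trip directly; the base URL and model name are assumptions based on common defaults (Ollama on `http://localhost:11434/v1`, LM Studio on `http://localhost:1234/v1`), so substitute whatever your setup actually serves.

```python
# Minimal sketch of chatting with a local model through an OpenAI-compatible
# endpoint, using only the standard library. The base URL and model name are
# assumptions -- use the endpoint and model your local server actually runs.
import json
import urllib.request


def build_payload(prompt: str, model: str) -> dict:
    """Build a chat-completions request body."""
    return {"model": model, "messages": [{"role": "user", "content": prompt}]}


def chat(prompt: str,
         base_url: str = "http://localhost:11434/v1",
         model: str = "gpt-oss:120b") -> str:
    """Send one chat request to the local server and return the reply text."""
    req = urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(build_payload(prompt, model)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["choices"][0]["message"]["content"]

# chat("Say hello") returns the model's reply -- requires the local server
# (Ollama or LM Studio) to be running with the named model loaded.
```

Because the request never leaves localhost, this is also a quick way to confirm the privacy claim above: if the server answers with your network cable unplugged, the model is genuinely local.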
Popular use cases
- Personal secretary: With access to your inbox, calendar, and files, OpenClaw can help manage your schedule, draft replies, send reminders, and find meeting slots.
- Proactive project management: Check project status over email or messaging, send status updates, and follow up or send reminders.
- Research agent: Combine web search and your local files to produce reports with personalized context.
- Install helper: Search for apps/libraries, run installations, and debug errors using terminal access (larger models recommended).
What to know before starting
- Basic use of the Linux terminal and a text editor
- Optional: familiarity with Ollama or LM Studio if you plan to use a local model
- Awareness of the security considerations below
Important: security and risks
AI agents can introduce real risks. Read OpenClaw’s guidance: OpenClaw Gateway Security.
Main risks:
- Data exposure: Personal information or files may be leaked or stolen.
- Malicious code: The agent or connected tools may expose you to malware or attacks.
These risks cannot be fully eliminated; proceed at your own discretion. Critical security measures:
- STRONGLY RECOMMENDED: Run OpenClaw on a dedicated or isolated system (e.g., a clean DGX Spark or VM) and only copy in the data the agent needs. Do not run this on your primary workstation with sensitive data.
- Use dedicated accounts for the agent instead of your main accounts; grant only the minimum access it needs.
- Enable only skills you trust, preferably those vetted by the community. Skills that provide terminal or file system access increase risk significantly.
- CRITICAL: Ensure the OpenClaw web UI and any messaging channels are never exposed to the public internet without strong authentication. Use SSH tunneling or VPN if accessing remotely.
- Where possible, limit internet access for the agent using firewall rules or network isolation.
- Monitor activity: Regularly review logs and commands executed by the agent.
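One quick way to audit the exposure point above is to test which interfaces actually accept connections to the web UI. The sketch below is a generic TCP reachability check; the port number 18789 is an assumption, so substitute whatever port your gateway reports at startup.

```python
# Sketch: check whether a TCP port is reachable on a given interface, to help
# verify the OpenClaw web UI is bound to localhost only. The port (18789) is
# an assumption -- substitute the port your gateway actually listens on.
import socket


def is_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# On the Spark itself, is_open("127.0.0.1", 18789) should be True while the
# gateway runs. From another machine, the same check against the Spark's LAN
# address should be False unless you have deliberately exposed the UI.
```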
Prerequisites
- DGX Spark running Linux, connected to your network
- Terminal (SSH or local) access to the Spark
- For local LLMs: enough GPU memory for your chosen model (see Instructions for size guidance; DGX Spark’s 128GB supports large models)
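A back-of-the-envelope estimate helps decide which models fit in 128 GB before downloading anything. The sketch below covers weights only; the 10% overhead factor is an assumption, and real usage is higher once KV cache and context length are counted.

```python
# Rough sketch: estimate the memory a model's weights occupy, to judge what
# fits in DGX Spark's 128 GB. Weights only -- KV cache and activations add
# more, and the 10% overhead factor is an assumption.
def weight_memory_gb(params_billion: float, bits_per_weight: float,
                     overhead: float = 1.1) -> float:
    """Approximate weight memory in GB: parameters * bits / 8, plus overhead."""
    return params_billion * bits_per_weight / 8 * overhead

# A 120B-parameter model at 4-bit quantization needs on the order of 66 GB,
# consistent with the ~65GB download size noted below; a 70B model at 8 bits
# needs roughly 77 GB. Both fit in 128 GB with room left for context.
```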
Time and risk
- Duration: About 30 minutes for install and first-time model setup; model download time depends on size and network (gpt-oss-120b is ~65GB and may take longer on slower connections).
- Risk level: Medium to High—the agent has access to whatever files, tools, and channels you configure. Risk increases significantly if you enable terminal/command execution skills or connect external accounts. Without proper isolation, this setup could expose sensitive data or allow code execution. Always follow the security measures above.
- Rollback: You can stop the OpenClaw gateway and uninstall via the same install script or by removing its directory; uninstall Ollama or LM Studio separately if desired.
- Last Updated: 03/11/2026
- First Publication