OpenClaw is a local-first AI agent that runs on your machine, combining memory, file access, tool use, and community skills into a persistent assistant. Running it directly on your system means the agent can access your files, credentials, and network—creating real security risks.
NVIDIA OpenShell solves this problem. It is an open-source sandbox runtime that wraps the agent in kernel-level isolation with declarative YAML policies. OpenShell controls what the agent can read on disk, which network endpoints it can reach, and what privileges it has—without disabling the capabilities that make the agent useful.
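To make this concrete, a declarative policy might look something like the sketch below. The field names (filesystem, network, privileges) are illustrative assumptions, not the actual OpenShell schema; consult the project's documentation for the real policy format.

```yaml
# Hypothetical sandbox policy; every key below is an assumption
# meant to illustrate the kinds of controls OpenShell enforces.
filesystem:
  read:
    - /workspace            # only project files are readable
  write:
    - /workspace/output     # writes confined to one directory
network:
  egress:
    - 127.0.0.1:11434       # local inference endpoint only
privileges:
  allow_root: false         # no elevated privileges in the sandbox
```

The important idea is that capabilities are granted explicitly rather than denied piecemeal: anything not listed in the policy is unavailable to the agent.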
By combining OpenClaw with OpenShell on DGX Spark, you get the full power of a local AI agent backed by 128GB of unified memory for large models, while enforcing explicit controls over filesystem access, network egress, and credential handling.
Use a clean environment only. Run this playbook on a fresh device or VM with no personal data, confidential information, or sensitive credentials. Think of it like a sandbox—keep it isolated.
By installing this playbook, you accept responsibility for all third-party components, including reviewing their licenses, terms, and security posture. Read and accept these terms before you install or use the playbook.
The playbook showcases experimental AI agent capabilities. Even with open-source isolation tools like OpenShell in your toolkit, you still need to layer in security measures appropriate to your specific threat model.
Be mindful of these risks with AI agents:
Data leakage – Any materials the agent accesses could be exposed, leaked, or stolen.
Malicious code execution – The agent or its connected tools could expose your system to malicious code or cyber-attacks.
Unintended actions – The agent might modify or delete files, send messages, or access services without explicit approval.
Prompt injection & manipulation – External inputs or connected content could hijack the agent's behavior in unexpected ways.
No system is perfect, but these practices help keep your information and systems safe.
Isolate your environment – Run on a clean PC or isolated virtual machine. Only provision the specific data you want the agent to access.
Never use real accounts – Don't connect personal, confidential, or production accounts. Create dedicated test accounts with minimal permissions.
Vet your skills/plugins – Only enable skills from trusted sources that have been vetted by the community.
Lock down access – Ensure your OpenClaw UI or messaging channels aren't accessible over the network without proper authentication.
Restrict network access – Where feasible, limit the agent's internet connectivity.
Clean up after yourself – When you're done, remove OpenClaw and revoke all credentials, API keys, and account access you granted.
You will install the OpenShell CLI (openshell), deploy a gateway on your DGX Spark, and launch OpenClaw inside a sandboxed environment using the pre-built OpenClaw community sandbox. The sandbox enforces filesystem, network, and process isolation by default. You will also configure local inference routing so OpenClaw uses a model running on your Spark without needing external API keys.
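That flow can be sketched as a handful of CLI calls. Only `gateway stop`, `gateway destroy`, `sandbox delete`, `term`, and `logs` are named in this guide; the `start` and `create` subcommands below are assumed by symmetry and may differ in the real CLI.

```shell
# Hypothetical workflow sketch; 'gateway start' and 'sandbox create'
# are assumed names, not confirmed OpenShell subcommands.
openshell gateway start              # deploy the gateway on the Spark
openshell sandbox create openclaw    # launch the community sandbox (assumed)
openshell term openclaw              # open a shell inside the sandbox
openshell logs                       # inspect activity if something misbehaves
```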
You can open a shell inside the running sandbox with openshell term, then hot-reload updated policies without recreating the sandbox.

Hardware Requirements:

- NVIDIA DGX Spark (128GB unified memory)
Software Requirements:
- Docker: `docker info`
- Python 3: `python3 --version`
- uv package manager: `uv --version` (install with `curl -LsSf https://astral.sh/uv/install.sh | sh`)
- Ollama: `ollama --version`

CAUTION
Risk level: Medium
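The version checks listed under Software Requirements can be bundled into a quick shell sketch that reports which prerequisites are missing before you start:

```shell
# Report whether each required tool is on PATH; version output
# should still be reviewed manually for compatibility.
for tool in docker python3 uv ollama; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: MISSING"
  fi
done
```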
If something goes wrong, check `openshell logs` to diagnose. When you are finished, delete the sandbox with `openshell sandbox delete <sandbox-name>`, stop the gateway with `openshell gateway stop`, and optionally destroy it with `openshell gateway destroy`. Ollama models can be removed with `ollama rm <model>`.