Secure Long-Running AI Agents with OpenShell on DGX Station
Run OpenClaw in an NVIDIA OpenShell sandbox on DGX Station
Basic idea
OpenClaw is a local-first AI agent that runs on your machine, combining memory, file access, tool use, and community skills into a persistent assistant. Running it directly on your system means the agent can access your files, credentials, and network—creating real security risks.
NVIDIA OpenShell solves this problem. It is an open-source sandbox runtime that wraps the agent in kernel-level isolation with declarative YAML policies. OpenShell controls what the agent can read on disk, which network endpoints it can reach, and what privileges it has—without disabling the capabilities that make the agent useful.
By combining OpenClaw with OpenShell on DGX Station (with NVIDIA GB300 GPUs), you get the full power of a local AI agent backed by GPU memory for local models, while enforcing explicit controls over filesystem access, network egress, and credential handling.
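To make the policy model concrete, here is what such a declarative policy might look like. This is an illustrative sketch only: the schema, field names, and defaults shown here are assumptions for the sake of explanation, not OpenShell's actual policy format; consult the OpenShell documentation for the real schema.

```yaml
# Hypothetical OpenShell policy sketch -- field names are illustrative,
# not the real schema. The intent: read-only project files, one allowed
# network endpoint (a local Ollama server), and no privilege escalation.
sandbox: dgx-demo
filesystem:
  readOnly:
    - /workspace/project      # agent can read, not modify
  readWrite:
    - /workspace/scratch      # agent scratch space
network:
  egress:
    default: deny             # everything not listed is blocked
    allow:
      - host: 127.0.0.1
        port: 11434           # local Ollama endpoint only
privileges:
  allowPrivilegeEscalation: false
```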
Notice & Disclaimers
Quick Start Safety Check
Use a clean environment only. Run this playbook on a fresh device or VM with no personal data, confidential information, or sensitive credentials. Think of it like a sandbox—keep it isolated.
By installing this playbook, you're taking responsibility for all third-party components, including reviewing their licenses, terms, and security posture. Read and accept before you install or use.
What You're Getting
The playbook showcases experimental AI agent capabilities. Even with cutting-edge open-source tools like OpenShell in your toolkit, you need to layer in proper security measures for your specific threat model.
Key Risks with AI Agents
Be mindful of these risks with AI agents:
- Data leakage – Any materials the agent accesses could be exposed, leaked, or stolen.
- Malicious code execution – The agent or its connected tools could expose your system to malicious code or cyber-attacks.
- Unintended actions – The agent might modify or delete files, send messages, or access services without explicit approval.
- Prompt injection & manipulation – External inputs or connected content could hijack the agent's behavior in unexpected ways.
Security Best Practices
No system is perfect, but these practices help keep your information and systems safe:
- Isolate your environment – Run on a clean PC or isolated virtual machine. Only provision the specific data you want the agent to access.
- Never use real accounts – Don't connect personal, confidential, or production accounts. Create dedicated test accounts with minimal permissions.
- Vet your skills/plugins – Only enable skills from trusted sources that have been vetted by the community.
- Lock down access – Ensure your OpenClaw UI or messaging channels aren't accessible over the network without proper authentication.
- Restrict network access – Where feasible, limit the agent's internet connectivity.
- Clean up after yourself – When you're done, remove OpenClaw and revoke all credentials, API keys, and account access you granted.
What you'll accomplish
You will install the OpenShell CLI (`openshell`), deploy a gateway on your DGX Station, and launch OpenClaw inside a sandboxed environment using the pre-built OpenClaw community sandbox. The sandbox enforces filesystem, network, and process isolation by default. You will also configure local inference routing so OpenClaw uses a model running on your DGX Station (e.g., via Ollama on the host) without needing external API keys.
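The end-to-end flow can be sketched as follows. The `openshell gateway start` and `openshell sandbox create` subcommand names and the `--template` flag are assumptions mirroring the `stop`/`delete` commands mentioned in the rollback notes; check `openshell --help` for the real syntax. Each step is guarded so the script is a harmless no-op on a machine without the tools installed.

```shell
#!/bin/sh
# Setup sketch: assumed subcommand names, guarded so it is safe to run
# on a machine that does not yet have openshell or ollama installed.
if command -v openshell >/dev/null 2>&1; then
  openshell gateway start                # deploy the gateway (assumed name)
  openshell sandbox create dgx-demo \
    --template openclaw                  # hypothetical community-sandbox flag
else
  echo "openshell not installed yet"
fi
if command -v ollama >/dev/null 2>&1; then
  ollama pull nemotron-3-super           # local model for inference routing
fi
echo "setup sketch finished"
```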
Popular use cases
- Secure agent experimentation: Test OpenClaw skills and integrations without exposing your main filesystem or credentials to the agent.
- Private enterprise development: Route all inference to a local model on DGX Station. No data leaves the machine unless you explicitly allow it in the policy.
- Auditable agent access: Version-control the policy YAML alongside your project. Review exactly what the agent can reach before granting access.
- Iterative policy tuning: Monitor denied connections in real time with `openshell term`, then hot-reload updated policies without recreating the sandbox.
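The tuning loop from the last item might look like the sketch below. `openshell term` and `openshell logs` appear elsewhere in this playbook, but the `--follow` flag and the `sandbox reload` subcommand are assumed names for illustration; verify them against the OpenShell CLI help before use.

```shell
#!/bin/sh
# Policy-tuning loop sketch; flag and subcommand names are assumptions.
if command -v openshell >/dev/null 2>&1; then
  openshell logs dgx-demo --follow &     # watch denied connections (assumed flag)
  LOGS_PID=$!
  # Edit policy.yaml, then hot-reload without recreating the sandbox;
  # the exact reload command is an assumption:
  openshell sandbox reload dgx-demo --policy policy.yaml
  kill "$LOGS_PID"
else
  echo "openshell not installed; skipping tuning loop"
fi
```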
What to know before starting
- Comfort with the Linux terminal and SSH
- Basic understanding of Docker (OpenShell runs a k3s cluster inside Docker)
- Familiarity with Ollama for local model serving
- Awareness of the security model: OpenShell reduces risk through isolation but cannot eliminate all risk. Review the OpenShell overview and OpenClaw security guidance.
Prerequisites
Hardware Requirements:
- NVIDIA DGX Station with GB300 GPU(s)
- Sufficient GPU memory for your chosen model: we recommend Nemotron 3 Super (`nemotron-3-super`) as the default; larger models (e.g., `gpt-oss:120b`) require more VRAM. Check `nvidia-smi` and the model's documented requirements.
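To check whether your GPUs have headroom for the model you plan to pull, you can query memory directly. The `nvidia-smi --query-gpu` flags are standard; the snippet is guarded so it simply reports when no GPU driver is present.

```shell
#!/bin/sh
# Report per-GPU memory so you can size the model appropriately.
if command -v nvidia-smi >/dev/null 2>&1; then
  nvidia-smi --query-gpu=name,memory.total,memory.used --format=csv
else
  echo "nvidia-smi not found; run this on the DGX Station itself"
fi
```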
Software Requirements:
- DGX OS or Ubuntu 24.04 (or compatible Linux)
- Docker Desktop or Docker Engine running: `docker info`
- Python 3.12 or later: `python3 --version`
- `uv` package manager: `uv --version` (install with `curl -LsSf https://astral.sh/uv/install.sh | sh`)
- Ollama (latest recommended): `ollama --version`
- Network access to download Python packages from PyPI and model weights from Ollama
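The version checks above can be rolled into one quick preflight script. It only uses the standard `command -v` lookup, prints found/MISSING for each required tool, and always exits cleanly.

```shell
#!/bin/sh
# Preflight check for the software requirements listed above.
for tool in docker python3 uv ollama; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "found:   $tool"
  else
    echo "MISSING: $tool"
  fi
done
```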
Time & risk
- Estimated time: 20–30 minutes (plus model download time, which depends on model size and network speed).
- Risk level: Low to Medium
- OpenShell sandboxes enforce kernel-level isolation, significantly reducing the risk compared to running OpenClaw directly on the host.
- The sandbox default policy denies all outbound traffic not explicitly allowed. Misconfigured policies may block legitimate agent traffic; use `openshell logs` to diagnose.
- Large model downloads may fail on unstable networks.
- Rollback: Delete the sandbox with `openshell sandbox delete dgx-demo` (or your sandbox name), stop the gateway with `openshell gateway stop`, and optionally destroy it with `openshell gateway destroy`. Ollama models can be removed with `ollama rm <model>`.
- Last Updated: 03/13/2026
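The rollback steps can be collected into one cleanup script. The `openshell` commands below are the ones named in the rollback notes; the script is guarded so it is a safe no-op on a machine where the tools are not installed, and the model name passed to `ollama rm` should match whatever you actually pulled.

```shell
#!/bin/sh
# Cleanup sketch based on the rollback notes; guarded per tool.
if command -v openshell >/dev/null 2>&1; then
  openshell sandbox delete dgx-demo   # use your sandbox name here
  openshell gateway stop
  openshell gateway destroy           # optional: remove the gateway entirely
else
  echo "openshell not installed; nothing to clean up"
fi
if command -v ollama >/dev/null 2>&1; then
  ollama rm nemotron-3-super          # replace with the model you pulled
fi
echo "cleanup complete"
```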