OpenShell

Run any agent more safely. Shape its access, not its capabilities, and help keep inference private.

agents · open source · sandbox · security

View GitHub · Try Early Preview

The Runtime for Autonomous Agents

Modern agents need the autonomy to code, research, and evolve—but they shouldn't have unrestricted access to your host system. OpenShell applies the isolation principles of a web browser to the agentic workflow. Every session is sandboxed, every resource is metered, and every permission is verified by the runtime before execution.

  • Unified Governance: Manage coding agents, research assistants, and AI workflows under one policy layer.
  • Host Agnostic: Run identical security profiles across any operating system.
  • Self-Evolving Safety: Enable agents to learn new skills and install packages without risking system integrity.

The OpenShell Architecture

OpenShell is built on three foundational pillars that bridge the gap between agentic autonomy and enterprise compliance.

1. Programmable Sandboxes

Purpose-built isolation for autonomous agents.

Unlike generic containers, the OpenShell sandbox is designed for agents that modify their own environment. It handles skill verification and network isolation, providing a "break-safe" environment where agents can experiment without touching the host.

  • Live Policy Updates: Grant developer approvals in real time.
  • Full Audit Trail: Every 'allow' and 'deny' decision is logged for forensic-level oversight.
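To make the audit trail concrete, here is a minimal sketch of an append-only log of allow/deny decisions. The field names (`ts`, `action`, `resource`, `decision`) and the helper `record_decision` are illustrative assumptions, not OpenShell's actual log format or API.

```python
# Hypothetical sketch of an append-only audit trail for sandbox decisions.
# Field names and the helper below are assumptions, not OpenShell's format.
import json
import time

audit_log = []  # in a real runtime this would be durable, tamper-evident storage

def record_decision(action, resource, decision):
    """Append one allow/deny decision as a structured, timestamped entry."""
    entry = {
        "ts": time.time(),
        "action": action,        # e.g. "exec", "read", "write", "connect"
        "resource": resource,    # the binary, path, or endpoint involved
        "decision": decision,    # "allow" or "deny"
    }
    audit_log.append(json.dumps(entry))
    return entry

record_decision("exec", "/usr/bin/curl", "deny")
record_decision("write", "/workspace/skills/new_skill.py", "allow")
```

Serializing each entry at append time keeps the trail replayable for forensic review even if the in-memory objects are later mutated.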

2. Granular Policy Engine

Control the "What," "Where," and "How" of execution.

The engine evaluates actions at the binary, path, and method levels. It allows agents to be autonomous where it matters—like installing a verified skill—while blocking unreviewed binaries or unauthorized network calls.

  • Constraint Reasoning: If an agent hits a wall, it can reason about the roadblock and propose a policy update for your final approval.
  • Deep Enforcement: Governance across the filesystem, network, and process layers.
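The binary/path/method evaluation described above can be sketched as a first-match rule table with a default deny. The `Rule` structure, glob-pattern matching, and example rules are assumptions for illustration, not OpenShell's policy language.

```python
# Hypothetical sketch of binary/path/method-level policy evaluation.
# Rule structure and matching semantics are illustrative assumptions.
from dataclasses import dataclass
from fnmatch import fnmatch

@dataclass
class Rule:
    effect: str   # "allow" or "deny"
    binary: str   # glob pattern for the executable
    path: str     # glob pattern for the filesystem path touched
    method: str   # e.g. "read", "write", "exec", "connect"

def evaluate(rules, binary, path, method):
    """Return the effect of the first matching rule; deny by default."""
    for rule in rules:
        if (fnmatch(binary, rule.binary)
                and fnmatch(path, rule.path)
                and fnmatch(method, rule.method)):
            return rule.effect
    return "deny"

rules = [
    Rule("allow", "/usr/bin/pip", "/home/agent/.venv/*", "write"),
    Rule("deny",  "*",            "/etc/*",              "*"),
]

evaluate(rules, "/usr/bin/pip", "/home/agent/.venv/lib/foo.py", "write")  # "allow"
evaluate(rules, "/bin/sh", "/etc/passwd", "read")                         # "deny"
```

Default-deny with explicit allows is what lets an agent stay autonomous inside its workspace while unreviewed binaries and out-of-scope paths are blocked.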

3. Private Inference Router

Keep sensitive data private.

The Inference Router keeps context on-device using local open models, routing to frontier models (like GPT-4 or Claude) only when your specific cost and privacy policies allow.

  • Model Agnostic: Works with any LLM or agent harness.
  • Policy-Driven Routing: Decisions are made based on your rules, not the agent's preferences.
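A minimal sketch of policy-driven routing follows. The policy fields, request fields, and model names are assumptions made for illustration; OpenShell's actual routing rules and configuration are not documented here.

```python
# Hypothetical sketch of routing between a local open model and a frontier
# API based on privacy and cost policy. All field names are assumptions.
def route(request, policy):
    """Pick an endpoint from privacy/cost rules, not the agent's preference."""
    if request["contains_sensitive_data"] and not policy["allow_cloud_for_sensitive"]:
        return "local-open-model"   # sensitive context never leaves the device
    if request["estimated_cost_usd"] > policy["max_cloud_cost_usd"]:
        return "local-open-model"   # stay under the per-request cost ceiling
    return "frontier-model"         # policy permits escalating to a frontier model

policy = {"allow_cloud_for_sensitive": False, "max_cloud_cost_usd": 0.50}

route({"contains_sensitive_data": True, "estimated_cost_usd": 0.10}, policy)   # "local-open-model"
route({"contains_sensitive_data": False, "estimated_cost_usd": 0.10}, policy)  # "frontier-model"
```

Because the privacy check runs before the cost check, sensitive context can never be escalated to a cloud model for cost reasons alone.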

Resources

  • OpenShell Documentation
  • Run OpenShell on DGX Spark
  • Run OpenShell on DGX Station

Copyright © 2026 NVIDIA Corporation