Secure Long-Running AI Agents with OpenShell on DGX Station

30 MINS

Run OpenClaw in an NVIDIA OpenShell sandbox on DGX Station

Confirm your environment

Verify the OS, GPU, Docker, and Python are available before installing anything.

head -n 2 /etc/os-release
nvidia-smi
docker info --format '{{.ServerVersion}}'
python3 --version

Expected output should show Ubuntu (e.g., 24.04), a detected GPU (e.g., NVIDIA GB300), a Docker server version, and Python 3.12+.
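
If you want to script these checks instead of eyeballing them, a small helper can compare dotted versions. This is a sketch, not part of the official tooling; the `version_ge` function is a hypothetical helper built on `sort -V`:

```shell
# Sketch of a preflight check (not part of the official tooling).
# version_ge A B succeeds if dotted version A >= B.
version_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Example: require Python 3.12 or newer
py_ver=$(python3 -c 'import sys; print("%d.%d" % sys.version_info[:2])')
if version_ge "$py_ver" "3.12"; then
  echo "Python $py_ver OK"
else
  echo "Python $py_ver is older than 3.12"
fi
```

The same helper works for the OS check, e.g. `version_ge "$VERSION_ID" "24.04"` after sourcing /etc/os-release.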

Docker Configuration

First, verify that the local user has Docker permissions using the following command.

docker ps

If you get a permission-denied error (permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock), add your user to the system's docker group. This lets you run Docker commands without sudo:

sudo usermod -aG docker $USER
newgrp docker

Note that newgrp docker only applies to the current shell. Log out and back in (or reboot) after adding the user to the group so the change takes effect in all terminal sessions.
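
To check whether the group change is already active in your current session, you can inspect the session's group list. A minimal sketch; `group_active` is a hypothetical helper, not an OpenShell or Docker command:

```shell
# Sketch: check whether a group is active in the current session
# (usermod changes only apply to sessions started afterwards).
group_active() {
  id -nG | tr ' ' '\n' | grep -qx -- "$1"
}

if group_active docker; then
  echo "docker group active in this session"
else
  echo "docker group not active yet -- run 'newgrp docker' or log out and back in"
fi
```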

Now that the user's Docker permissions are verified, configure Docker to use the NVIDIA Container Runtime.

sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker

# Run a sample workload to verify the setup
docker run --rm --runtime=nvidia --gpus all ubuntu nvidia-smi
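
For reference, nvidia-ctk runtime configure writes a runtimes entry into /etc/docker/daemon.json. The result looks similar to the following sketch (exact keys and the runtime path may vary by Container Toolkit version):

```json
{
  "runtimes": {
    "nvidia": {
      "path": "nvidia-container-runtime",
      "runtimeArgs": []
    }
  }
}
```

If the sample workload fails with an unknown-runtime error, check that this entry exists and that Docker was restarted afterwards.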

Install the OpenShell CLI

Create a virtual environment and install the openshell CLI.

cd ~
uv venv openshell-env && source openshell-env/bin/activate
uv pip install openshell 
openshell --help

If you don't have uv installed yet:

curl -LsSf https://astral.sh/uv/install.sh | sh
export PATH="$HOME/.local/bin:$PATH"

Expected output should show the openshell command tree with subcommands like gateway, sandbox, provider, and inference.

Deploy the OpenShell gateway on DGX Station

The gateway is the control plane that manages sandboxes. Since you are running directly on the DGX Station, it deploys locally inside Docker.

openshell gateway start
openshell status

openshell status should report the gateway as healthy. The first run may take a minute while Docker pulls the required images.
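
If you are scripting the setup, you can wait for the gateway instead of polling by hand. A generic retry sketch (the `retry` function is hypothetical, not part of the openshell CLI):

```shell
# Sketch of a generic retry helper.
# Usage: retry <attempts> <delay-seconds> <command...>
retry() {
  attempts=$1; delay=$2; shift 2
  i=0
  while [ "$i" -lt "$attempts" ]; do
    "$@" && return 0   # command succeeded
    i=$((i + 1))
    sleep "$delay"
  done
  return 1             # gave up after all attempts
}
```

For example, `retry 12 5 sh -c 'openshell status | grep -q healthy'` would block for up to a minute until the status output contains "healthy" (assuming, per the description above, that a healthy gateway reports that word).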

NOTE

Remote gateway deployment requires passwordless SSH access. Ensure your SSH public key is added to ~/.ssh/authorized_keys on the DGX Station before using the --remote flag.

TIP

If you want to manage the gateway from a separate workstation, run openshell gateway start --remote <username>@<dgx-station-hostname> from that workstation instead. All subsequent commands will route through the SSH tunnel.

Install Ollama and pull a model

Install Ollama (if not already present) and download a model for local inference.

curl -fsSL https://ollama.com/install.sh | sh
ollama --version

DGX Station with GB300 GPU(s) can run a range of models depending on available VRAM:

GPU memory available | Suggested model | Model size | Notes
Default recommended | nemotron-3-super:120b-a12b | ~90GB | Recommended default model for DGX Station
25–48 GB | gpt-oss:20b | ~12GB | Lower latency, good for interactive use
48–80 GB+ | Nemotron-3-Nano-30B-A3B, gpt-oss:120b | ~20–65GB | Larger models; ensure sufficient VRAM (nvidia-smi)

Verify Ollama is running (it auto-starts as a service after installation). If not, start it manually:

ollama serve &

Next, run a model from Ollama (adjust the model name to match your choice from the Ollama model library). We recommend Nemotron 3 Super as the default; for larger models ensure your GB300 has enough VRAM. ollama run will pull the model if it is not already present.

ollama run nemotron-3-super:120b-a12b

(Optional: to pull without running, use ollama pull nemotron-3-super:120b-a12b.)

Verify the model is available:

ollama list
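
If you want to check for the model from a script rather than reading the table, the output of ollama list can be parsed. A sketch, assuming the default tabular layout with a NAME column first and a header row; `list_models` is a hypothetical helper:

```shell
# Sketch: extract model names from 'ollama list' output
# (assumes a header row followed by one model per line, NAME first).
list_models() {
  awk 'NR > 1 { print $1 }'
}

# Example: warn if the expected model has not been pulled yet
ollama list 2>/dev/null | list_models | grep -qx 'nemotron-3-super:120b-a12b' \
  || echo "model not found -- run: ollama pull nemotron-3-super:120b-a12b"
```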

Create an inference provider

Create an OpenShell provider that points to your local Ollama server so OpenShell can route inference requests to your DGX Station–hosted model. When Ollama runs on the same host as Docker, the command below works as-is (it uses host.docker.internal, which resolves to the host from inside containers). If Ollama is on a different machine, replace host.docker.internal with that machine’s IP or hostname.

openshell provider create \
    --name local-ollama \
    --type openai \
    --credential OPENAI_API_KEY=not-needed \
    --config OPENAI_BASE_URL=http://host.docker.internal:11434/v1

NOTE

If Ollama listens on a different port, use that port in the URL (e.g. http://host.docker.internal:11435/v1).

Configure inference routing

Point the inference.local endpoint (available inside every sandbox) at your Ollama model:

openshell inference set \
    --provider local-ollama \
    --model nemotron-3-super:120b-a12b

(Use the same model name you pulled in Step 5, e.g. gpt-oss:120b for a larger model if your GB300 has enough VRAM.)

Verify the configuration:

openshell inference get

Expected output should show provider: local-ollama and model: nemotron-3-super:120b-a12b (or whichever model you chose).

Deploy OpenShell Sandbox

Create a sandbox using the pre-built OpenClaw community sandbox. This pulls the OpenClaw Dockerfile, bundled policy, and startup scripts from the OpenShell Community catalog:

openshell sandbox create \
  --keep \
  --forward 18789 \
  --name dgx-demo \
  --from openclaw \
  -- openclaw-start

NOTE

Do not pass --policy with a local file path (e.g. openclaw-policy.yaml) when using --from openclaw. The policy is bundled with the community sandbox; a local file path can cause "file not found."

The --keep flag keeps the sandbox running after the initial process exits so you can reconnect later; this is the default behavior, and the flag simply makes it explicit. To terminate the sandbox when the initial process exits, pass --no-keep instead.

NOTE

The sandbox name is displayed in the creation output. You can also set it explicitly with --name <your-name>. To find it later, run openshell sandbox list.

To verify the sandbox and its default policy:

openshell sandbox get dgx-demo

The CLI will:

  1. Resolve openclaw against the community catalog
  2. Pull and build the container image
  3. Apply the bundled sandbox policy
  4. Launch OpenClaw inside the sandbox

Configure OpenClaw within OpenShell Sandbox

The sandbox container will spin up and you will be guided through the OpenClaw installation process. Work through the prompts as follows.

Use the arrow keys and Enter key to interact with the installation.

  • If you understand and agree, select 'Yes' with the arrow keys and press Enter.
  • Quickstart vs Manual: select Quickstart and press the Enter key.
  • Model/auth Provider: Select Custom Provider, the second-to-last option.
  • API Base URL: update to https://inference.local/v1.
  • How do you want to provide this API key?: Paste API key for now.
  • API key: please enter "ollama".
  • Endpoint compatibility: select OpenAI-compatible and press Enter.
  • Model ID: nemotron-3-super:120b-a12b (or the model you set in Step 7).
    • This may take 1-2 minutes as the Ollama model is spun up in the background.
  • Endpoint ID: leave the default value.
  • Alias: nemotron-3-super (optional; match the model ID).
  • Search Provider: Select Skip for now.
  • Channel: Select Skip for now.
  • Skills: Select No for now.
  • Enable hooks: Select No for now and press Enter.

It might take 1-2 minutes to get through the final stages. Afterwards, you should see a URL with a token you can use to connect to the gateway.

The expected output will be similar to the following; your token will be unique.

OpenClaw gateway starting in background.
  Logs: /tmp/gateway.log
  UI:   http://127.0.0.1:18789/?token=9b4c9a9c9f6905131327ce55b6d044bd53e0ec423dd6189e

Open the URL shown in the UI (e.g. right-click and Open Link, or copy the URL into your browser).

NOTE

Accessing the dashboard from a remote system or host: Port 18789 is forwarded into the sandbox; nothing on the host forwards it on to your client. To reach the dashboard from another machine, create an SSH tunnel that forwards local port 18789 to the sandbox using the OpenShell SSH proxy and your sandbox connection details (gateway URL, sandbox ID, token, gateway name). Replace <gateway-url>, <sandbox-id>, <token>, and <name> with values from your environment, and use the full path to the CLI if needed (e.g. /usr/local/bin/openshell):

ssh -o ProxyCommand='openshell ssh-proxy --gateway <gateway-url> --sandbox-id <sandbox-id> --token <token> --gateway-name <name>' \
    -o StrictHostKeyChecking=no \
    -o UserKnownHostsFile=/dev/null \
    -o LogLevel=ERROR \
    -N -L 18789:127.0.0.1:18789 sandbox

Then open http://127.0.0.1:18789/?token=<your-token> in your local browser.

From this page you can now chat with your OpenClaw agent inside the protected runtime that OpenShell provides.

Conduct Inference within Sandbox

Connecting to the Sandbox (Terminal)

Now that OpenClaw has been configured within the OpenShell protected runtime, you can connect directly into the sandbox environment via:

openshell sandbox connect dgx-demo

Once loaded into the sandbox terminal, you can test connectivity to the Ollama model with this command:

curl https://inference.local/v1/responses \
  -H "Content-Type: application/json" \
  -d '{
    "instructions": "You are a helpful assistant.",
    "input": "Hello!"
  }'

Verify sandbox isolation

Open a second terminal and check the sandbox status and live logs:

source ~/openshell-env/bin/activate
openshell term

The terminal dashboard shows:

  • Sandbox status — name, phase, image, providers, and port forwards
  • Live log stream — outbound connections, policy decisions (allow, deny, inspect_for_inference), and inference interceptions

Verify that the OpenClaw agent can reach inference.local for model requests and that unauthorized outbound traffic is denied.

TIP

Press f to follow live output, s to filter by source, and q to quit the terminal dashboard.

Reconnect to the sandbox

If you exit the sandbox session, reconnect at any time (use the name you gave in Step 8, e.g. dgx-demo):

openshell sandbox connect dgx-demo

To transfer files in or out, replace dgx-demo with your sandbox name if different (from the creation output or openshell sandbox list):

openshell sandbox upload dgx-demo ./local-file /sandbox/destination
openshell sandbox download dgx-demo /sandbox/file ./local-destination

Customize the sandbox policy (optional)

The community sandbox ships a default policy. To tighten or loosen access, pull the current policy, edit it, and push it back — all without recreating the sandbox. Use your sandbox name (e.g. dgx-demo from Step 8).

Pull the current policy:

openshell policy get dgx-demo --full > openclaw-policy.yaml

Edit openclaw-policy.yaml to adjust filesystem paths, network endpoints, or binary permissions. For example, to allow the agent to reach a specific internal API:

network_policies:
  internal_api:
    name: internal-api
    endpoints:
      - { host: api.internal.example.com, port: 443 }
    binaries:
      - { path: /usr/bin/curl }

Push the updated policy (hot-reloaded, no restart needed):

openshell policy set dgx-demo --policy openclaw-policy.yaml --wait

Verify the new revision loaded:

openshell policy list dgx-demo

For the full policy schema reference, see the OpenShell documentation.

Cleanup

Stop and remove the sandbox (use the name you gave in Step 8, e.g. dgx-demo):

openshell sandbox delete dgx-demo

Stop the gateway (preserves state for later):

openshell gateway stop

WARNING

The following command permanently removes the gateway cluster and all its data.

openshell gateway destroy

To also remove the Ollama model:

ollama rm nemotron-3-super:120b-a12b

(Use the model name you pulled, e.g. gpt-oss:120b if you used a larger model.)

Next steps

Try other models

DGX Station has 170+ GB VRAM, so you can run very large local models. Switch the active Ollama model and point OpenShell inference at it; then (if using OpenClaw) set the Model ID in the UI to the same name.

GPT-OSS 120B (reasoning, agentic, function calling; ~65GB, 117B params). Best quality for complex and agentic tasks:

ollama pull gpt-oss:120b
ollama list
openshell inference set --provider local-ollama --model gpt-oss:120b

Qwen 3.5 122B (122B total / 10B active, mixture-of-experts; large-scale reasoning and coding):

ollama pull qwen3.5:122b-a10b
ollama list
openshell inference set --provider local-ollama --model qwen3.5:122b-a10b

After changing the inference model, set the Model ID in the OpenClaw UI to the same name — e.g. GPT-OSS (gpt-oss:120b) or Qwen 3.5 122B (qwen3.5:122b-a10b).

Other next steps

  • Add more providers: Attach GitHub tokens, GitLab tokens, or cloud API keys as providers with openshell provider create. When creating a sandbox, pass the provider name(s) with --provider <name> (e.g. --provider my-github) to inject those credentials into the sandbox securely.
  • Try other community sandboxes: Run openshell sandbox create --from base or --from sdg for other pre-built environments.
  • Connect VS Code: Use openshell sandbox ssh-config dgx-demo (or your sandbox name) and append the output to ~/.ssh/config to connect VS Code Remote-SSH directly into the sandbox.
  • Monitor and audit: Use openshell logs dgx-demo --tail (or your sandbox name) or openshell term to continuously monitor agent activity and policy decisions.