Secure Long Running AI Agents with OpenShell on DGX Spark

30 MINS

Run OpenClaw with local models in an NVIDIA OpenShell sandbox on DGX Spark

AI Agent, DGX, OpenShell, Security, Spark
OpenShell on GitHub

Step 1
Confirm your environment

Verify the OS, GPU, Docker, and Python are available before installing anything.

head -n 2 /etc/os-release
nvidia-smi
docker info --format '{{.ServerVersion}}'
python3 --version

Ensure NVIDIA Sync is configured with a custom port: use "OpenClaw" as the Name and "18789" as the port.

Expected output should show Ubuntu 24.04 (DGX OS), a detected GPU, a Docker server version, and Python 3.12+.

Step 2
Docker Configuration

First, verify that the local user has Docker permissions using the following command.

docker ps

If you get a permission denied error (permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock), add your user to the docker group so you can run Docker commands without sudo:

sudo usermod -aG docker $USER
newgrp docker

Note that you should reboot the Spark after adding your user to the group so the change takes effect across all terminal sessions.
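
To confirm the group change has taken effect before relying on it, you can check your group membership (a standard Linux check, independent of OpenShell):

# Prints which state you are in; re-log or reboot if the group is not yet active
id -nG | grep -qw docker && echo "docker group active" || echo "log out or reboot still required"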

Now that we have verified the user's Docker permission, we must configure Docker so that it can use the NVIDIA Container Runtime.

sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker

Run a sample workload to verify the setup:

docker run --rm --runtime=nvidia --gpus all ubuntu nvidia-smi

Step 3
Install the OpenShell CLI

Create a virtual environment and install the openshell CLI.

cd ~
uv venv openshell-env && source openshell-env/bin/activate
uv pip install openshell 
openshell --help

If you don't have uv installed yet:

curl -LsSf https://astral.sh/uv/install.sh | sh
export PATH="$HOME/.local/bin:$PATH"

Expected output should show the openshell command tree with subcommands like gateway, sandbox, provider, and inference.

Step 4
Deploy the OpenShell gateway on DGX Spark

The gateway is the control plane that manages sandboxes. Since you are running directly on the Spark, it deploys locally inside Docker.

openshell gateway start
openshell status

openshell status should report the gateway as Connected. The first run may take a few minutes while Docker pulls the required images and the internal k3s cluster bootstraps.
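
If you prefer to wait for the bootstrap from a script rather than re-running the command by hand, a simple polling loop works; this is only a convenience sketch and assumes the status output contains the word "Connected":

# Poll every 10 seconds until the gateway reports Connected (output wording may vary by version)
until openshell status | grep -qi "connected"; do sleep 10; done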

NOTE

Remote gateway deployment requires passwordless SSH access. Ensure your SSH public key is added to ~/.ssh/authorized_keys on the DGX Spark before using the --remote flag.

TIP

If you want to manage the Spark gateway from a separate workstation, run openshell gateway start --remote <username>@<spark-ssid>.local from that workstation instead. All subsequent commands will route through the SSH tunnel.

Step 5
Install Ollama and pull a model

Install Ollama (if not already present) and download a model for local inference.

curl -fsSL https://ollama.com/install.sh | sh
ollama --version

DGX Spark's 128GB memory can run large models:

GPU memory available | Suggested model       | Model size | Notes
25–48 GB             | nemotron-3-nano       | ~24 GB     | Lower latency, good for interactive use
48–80 GB             | gpt-oss:120b          | ~65 GB     | Good balance of quality and speed
128 GB               | nemotron-3-super:120b | ~86 GB     | Best quality on DGX Spark
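
To check how much GPU memory is currently free before picking a model, you can query the GPU directly (standard nvidia-smi fields; on DGX Spark this reflects the unified memory pool shared with the CPU):

nvidia-smi --query-gpu=memory.total,memory.free --format=csv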

Verify Ollama is running (it auto-starts as a service after installation). If not, start it manually:

ollama serve &

Configure Ollama to listen on all interfaces so the OpenShell gateway container can reach it:

sudo mkdir -p /etc/systemd/system/ollama.service.d
printf '[Service]\nEnvironment="OLLAMA_HOST=0.0.0.0"\n' | sudo tee /etc/systemd/system/ollama.service.d/override.conf
sudo systemctl daemon-reload
sudo systemctl restart ollama

Verify Ollama is running and reachable on all interfaces:

curl http://0.0.0.0:11434

Expected: Ollama is running. If not, start it with sudo systemctl start ollama.
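
You can also confirm that Ollama is bound to all interfaces rather than only loopback (a standard socket check, not specific to this setup):

# Should show 0.0.0.0:11434 or *:11434, not 127.0.0.1:11434
ss -tln | grep 11434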

Next, run a model from Ollama (adjust the model name to match your choice from the Ollama model library). The ollama run command will pull the model automatically if it is not already present. Running the model here ensures it is loaded and ready when you use it with OpenClaw, reducing the chance of timeouts later. Example for nemotron-3-super:

ollama run nemotron-3-super:120b

Type /bye to exit.

Verify the model is available:

ollama list
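
If you would rather preload the model without an interactive session, Ollama also loads a model into memory when it receives an empty generate request (documented Ollama behavior; adjust the model name to match your choice):

curl http://localhost:11434/api/generate -d '{"model": "nemotron-3-super:120b"}'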

Step 6
Create an inference provider

We are going to create an OpenShell provider that points to your local Ollama server. This lets OpenShell route inference requests to your Spark-hosted model.

First, find the IP address of your DGX Spark:

hostname -I | awk '{print $1}'

Then create the provider, replacing {Machine_IP} with the IP address from the command above (e.g. 10.110.106.169):

openshell provider create \
    --name local-ollama \
    --type openai \
    --credential OPENAI_API_KEY=not-needed \
    --config OPENAI_BASE_URL=http://{Machine_IP}:11434/v1

IMPORTANT

Do not use localhost or 127.0.0.1 here. The OpenShell gateway runs inside a Docker container, so it cannot reach the host via localhost. Use the machine's actual IP address.
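
As an optional sanity check, you can confirm that a container can reach Ollama at that address before creating the provider (this example uses the public curlimages/curl image; any image with curl works):

docker run --rm curlimages/curl -s http://{Machine_IP}:11434
# Expected output: Ollama is running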

Verify the provider was created:

openshell provider list

Step 7
Configure inference routing

Point the inference.local endpoint (available inside every sandbox) at your Ollama model. Replace the model name with your choice from Step 5:

openshell inference set \
    --provider local-ollama \
    --model nemotron-3-super:120b

The output should confirm the route and show a validated endpoint URL, for example: http://10.110.106.169:11434/v1/chat/completions (openai_chat_completions).

NOTE

If you see failed to verify inference endpoint or failed to connect (for example because the gateway cannot reach the host IP from inside its container), add --no-verify to skip endpoint verification: openshell inference set --provider local-ollama --model nemotron-3-super:120b --no-verify. Ensure Ollama is running and listening on all interfaces (see Step 5).

Verify the configuration:

openshell inference get

Expected output should show provider: local-ollama and model: nemotron-3-super:120b (or whichever model you chose).
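
You can also exercise the endpoint directly from the host before involving a sandbox (this assumes Ollama's OpenAI-compatible API and the example IP and model above; substitute your own values):

curl http://{Machine_IP}:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "nemotron-3-super:120b", "messages": [{"role": "user", "content": "Say hello"}]}'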

Step 8
Deploy OpenShell Sandbox

Create a sandbox using the pre-built OpenClaw community sandbox. This pulls the OpenClaw Dockerfile, the default policy, and startup scripts from the OpenShell Community catalog:

openshell sandbox create \
  --keep \
  --forward 18789 \
  --name dgx-demo \
  --from openclaw \
  -- openclaw-start

NOTE

Do not pass --policy with a local file path (e.g. openclaw-policy.yaml) when using --from openclaw. The policy is bundled with the community sandbox; a local file path can cause "file not found."

The --keep flag keeps the sandbox running after the initial process exits, so you can reconnect later. This is the default behavior. To terminate the sandbox when the initial process exits, use the --no-keep flag instead.

The CLI will:

  1. Resolve openclaw against the community catalog
  2. Pull and build the container image
  3. Apply the bundled sandbox policy
  4. Launch OpenClaw inside the sandbox

Step 9
Configure OpenClaw within OpenShell Sandbox

The sandbox container will spin up and the OpenClaw onboarding wizard will launch automatically in your terminal.

IMPORTANT

The onboarding wizard is fully interactive — it requires arrow-key navigation and Enter to select options. It cannot be completed from a non-interactive session (e.g. a script or automation tool). You must run openshell sandbox create from a terminal with full TTY support.

If the wizard did not complete during sandbox creation, reconnect to the sandbox to re-run it:

openshell sandbox connect dgx-demo

Use the arrow keys and Enter key to interact with the installation.

  • If you understand and agree, use the arrow keys to select 'Yes' and press Enter.
  • Quickstart vs Manual: select Quickstart and press the Enter key.
  • Model/auth Provider: Select Custom Provider, the second-to-last option.
  • API Base URL: update to https://inference.local/v1
  • How do you want to provide this API key?: Paste API key for now.
  • API key: enter "ollama".
  • Endpoint compatibility: select OpenAI-compatible and press Enter.
  • Model ID: enter the model name you chose in Step 5 (e.g. nemotron-3-super:120b).
    • This may take 1-2 minutes as the Ollama model is spun up in the background.
  • Endpoint ID: leave the default value.
  • Alias: enter the same model name (this is optional).
  • Channel: Select Skip for now.
  • Search provider: Select Skip for now.
  • Skills: Select No for now.
  • Enable hooks: Press spacebar to select Skip for now and press Enter.

It might take 1-2 minutes to get through the final stages. Afterwards, you should see a URL with a token you can use to connect to the gateway.

The output you see should look similar to the following; your token will be unique.

OpenClaw gateway starting in background.
  Logs: /tmp/gateway.log
  UI:   http://127.0.0.1:18789/?token=9b4c9a9c9f6905131327ce55b6d044bd53e0ec423dd6189e

Now that OpenClaw is configured within the OpenShell sandbox, set the sandbox name as an environment variable to simplify the commands that follow. The name was set with the --name flag in the openshell sandbox create command in Step 8.

export SANDBOX_NAME=dgx-demo

To verify the default policy applied to your sandbox, run the following command:

openshell sandbox get $SANDBOX_NAME

NOTE

Step 8’s --forward 18789 already sets up port forwarding from the OpenShell gateway to the sandbox. You do not need a manual ssh command with openshell ssh-proxy for the usual case.

To verify the forward is active, use the following command:

openshell forward list

You should see your sandbox name (e.g. dgx-demo) with port 18789. If it is missing or dead, start it:

openshell forward start --background 18789 $SANDBOX_NAME

Path A: If you are using the Spark as the primary device, right-click on the URL in the UI section and select Open Link.

Path B: If you are using a laptop or workstation that is not on the Spark (e.g. you SSH into the Spark only): Install the OpenShell CLI on that machine.

IMPORTANT

SSH must work from this machine to the Spark before gateway add. Run ssh nvidia@<spark-ip> (or your user/host) and confirm you get a shell without Permission denied (publickey). If that fails, add your public key to the Spark: ssh-copy-id nvidia@<spark-ip> (from the same machine), or paste your ~/.ssh/id_ed25519.pub (or id_rsa.pub) into ~/.ssh/authorized_keys on the Spark. OpenShell uses this SSH session to reach the remote Docker API and extract gateway TLS certificates. If you use a non-default key, pass --ssh-key ~/.ssh/your_key to gateway add (same as Step 4’s remote gateway note).

Register the Spark’s already-running gateway. Do not use openshell gateway add user@ip alone—that is parsed as a cloud URL and will not write mtls/ca.crt.

Per the OpenShell gateway docs, register using hostname openshell, not the raw Spark IP, for HTTPS.

WARNING

The gateway TLS certificate is valid for openshell, localhost, and 127.0.0.1 — not for your Spark’s LAN IP. If you use https://10.x.x.x:8080 or ssh://user@10.x.x.x:8080, openshell status may fail with certificate not valid for name "10.x.x.x".

On your laptop/WSL, map openshell to the Spark (once per machine):

# Replace with your Spark’s IP. Requires sudo on Linux/WSL.
echo "<spark-ip> openshell" | sudo tee -a /etc/hosts
# Example: echo "10.110.17.10 openshell" | sudo tee -a /etc/hosts
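
To confirm the mapping resolves before registering the gateway (a standard name-resolution check):

getent hosts openshell
# Should print the Spark's IP followed by "openshell"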

Then add the gateway (SSH target stays the real IP or hostname; HTTPS URL uses openshell):

openshell gateway add https://openshell:8080 --remote <user>@<spark-ip>

Example:

openshell gateway add https://openshell:8080 --remote nvidia@10.110.17.10

If you already registered with the IP and see the cert error, remove that entry and re-add:

openshell gateway destroy 
openshell gateway add https://openshell:8080 --remote nvidia@10.110.17.10

(Use openshell gateway select if the destroy name differs.)

Complete any browser or CLI prompts until the command finishes (do not Ctrl+C early). Then:

openshell status   # should show Connected, not TLS CA errors
openshell forward start --background 18789 dgx-demo

Then open the following URL in the laptop's browser (use #token= so the UI receives the gateway token):

http://127.0.0.1:18789/#token=<your-token>

Use the token value from the OpenClaw wizard output on the Spark. Path B requires SSH from the laptop to the Spark so the CLI can reach the gateway on :8080.

NVIDIA Sync: Right-click the URL in the UI and select Copy Link. Connect to your Spark in Sync, open the OpenClaw entry, and paste the URL in the browser address bar.

From this page, you can now chat with your OpenClaw agent inside the protected runtime that OpenShell provides.

Step 10
Conduct Inference within Sandbox

Connecting to the Sandbox (Terminal)

Now that OpenClaw has been configured within the OpenShell protected runtime, you can connect directly into the sandbox environment via:

openshell sandbox connect $SANDBOX_NAME

Once loaded into the sandbox terminal, you can test connectivity to the Ollama model with this command:

curl https://inference.local/v1/responses \
  -H "Content-Type: application/json" \
  -d '{
    "instructions": "You are a helpful assistant.",
    "input": "Hello!"
  }'
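
If the responses route is not available in your build, a chat-completions style request against the same endpoint is a reasonable fallback (this assumes the openai_chat_completions route reported in Step 7; substitute your model name):

curl https://inference.local/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "nemotron-3-super:120b", "messages": [{"role": "user", "content": "Hello!"}]}'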

Step 11
Verify sandbox isolation

Open a second terminal and check the sandbox status and live logs:

source ~/openshell-env/bin/activate
openshell term

The terminal dashboard shows:

  • Sandbox status — name, phase, image, providers, and port forwards
  • Live log stream — outbound connections, policy decisions (allow, deny, inspect_for_inference), and inference interceptions

Verify that the OpenClaw agent can reach inference.local for model requests and that unauthorized outbound traffic is denied.
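
One quick way to exercise the policy yourself is to attempt an outbound request from inside the sandbox and watch the decision appear in the log stream (the exact allow/deny outcome depends on the bundled OpenClaw policy):

# From a sandbox shell (openshell sandbox connect dgx-demo):
curl -m 5 https://example.com || echo "outbound request blocked or timed out"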

TIP

Press f to follow live output, s to filter by source, and q to quit the terminal dashboard.

Step 12
Reconnect to the sandbox

If you exit the sandbox session, reconnect at any time:

openshell sandbox connect $SANDBOX_NAME

NOTE

openshell sandbox connect is interactive-only — it opens a terminal session inside the sandbox. There is no way to pass a command for non-interactive execution. Use openshell sandbox upload/download for file transfers, or openshell sandbox ssh-config for scripted SSH (see Step 14).

To transfer files in or out of the sandbox, use the following:

openshell sandbox upload $SANDBOX_NAME ./local-file /sandbox/destination
openshell sandbox download $SANDBOX_NAME /sandbox/file ./local-destination

Step 13
Cleanup

Stop and remove the sandbox:

openshell sandbox delete $SANDBOX_NAME

Remove the inference provider you created in Step 6:

openshell provider delete local-ollama

Stop the gateway (preserves state for later):

openshell gateway stop

WARNING

The following command permanently removes the gateway cluster and all its data.

openshell gateway destroy

To also remove the Ollama model:

ollama rm nemotron-3-super:120b

Step 14
Next steps

  • Add more providers: Attach GitHub tokens, GitLab tokens, or cloud API keys as providers with openshell provider create. When creating the sandbox, pass the provider name(s) with --provider <name> (e.g. --provider my-github) to inject those credentials into the sandbox securely.
  • Try other community sandboxes: Run openshell sandbox create --from base or --from sdg for other pre-built environments.
  • Connect VS Code: Use openshell sandbox ssh-config <sandbox-name> and append the output to ~/.ssh/config to connect VS Code Remote-SSH directly into the sandbox (see the example after this list).
  • Monitor and audit: Use openshell logs <sandbox-name> --tail or openshell term to continuously monitor agent activity and policy decisions.
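
For example, a minimal way to wire up the VS Code connection, assuming the sandbox from this playbook (dgx-demo) and the default SSH configuration location:

openshell sandbox ssh-config dgx-demo >> ~/.ssh/config
# Then in VS Code, run "Remote-SSH: Connect to Host..." and pick the new host entry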

Resources

  • NVIDIA OpenShell Documentation
  • OpenShell PyPI
  • OpenClaw Documentation
  • OpenClaw Gateway Security
  • DGX Spark Documentation
  • DGX Spark Forum