
Run NemoClaw on DGX Station

60 MINS

Run OpenClaw in an OpenShell sandbox on DGX Station with Ollama (Nemotron)

AI Agent, DGX Station, GB300, NemoClaw, Ollama, OpenShell
NemoClaw on GitHub

Step 1
Docker configuration

Verify Docker permissions and configure the NVIDIA runtime. OpenShell's gateway runs k3s inside Docker; on some systems (including DGX Station with cgroup v2), the gateway needs a cgroup setting to start correctly.

Verify Docker:

docker ps

If you get a permission-denied error, add your user to the docker group:

sudo usermod -aG docker $USER

Log out and back in for the group change to take effect.

Configure Docker for the NVIDIA runtime and set cgroup namespace mode for OpenShell on DGX Station:

sudo nvidia-ctk runtime configure --runtime=docker

sudo python3 -c "
import json, os
path = '/etc/docker/daemon.json'
d = json.load(open(path)) if os.path.exists(path) else {}
d['default-cgroupns-mode'] = 'host'
json.dump(d, open(path, 'w'), indent=2)
"

sudo systemctl restart docker

Verify:

docker run --rm --runtime=nvidia --gpus all ubuntu nvidia-smi

NOTE

On DGX Station (and other systems using cgroup v2), OpenShell's gateway embeds k3s inside Docker and may need host cgroup namespace access. Without default-cgroupns-mode: host, the gateway can fail with "Failed to start ContainerManager" errors.
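The daemon.json merge above can be rehearsed on a temporary file before touching /etc/docker/daemon.json. A minimal sketch, assuming only that python3 is available; the sample runtimes entry stands in for whatever nvidia-ctk wrote:

```shell
# Sketch: rehearse the daemon.json edit on a temp copy.
# The sample content stands in for an existing /etc/docker/daemon.json.
tmp=$(mktemp)
echo '{"runtimes": {"nvidia": {"path": "nvidia-container-runtime"}}}' > "$tmp"
python3 - "$tmp" <<'EOF'
import json, sys
path = sys.argv[1]
d = json.load(open(path))             # existing keys (e.g. runtimes) are kept
d['default-cgroupns-mode'] = 'host'   # required by OpenShell's embedded k3s
json.dump(d, open(path, 'w'), indent=2)
EOF
grep -q '"default-cgroupns-mode": "host"' "$tmp" && echo "merge OK"   # prints "merge OK"
rm -f "$tmp"
```

Merging with json.load/json.dump preserves any existing settings (such as the runtimes block written by nvidia-ctk), which is why the playbook edits the file rather than overwriting it.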

Step 2
Install Node.js

NemoClaw is installed via npm and requires Node.js.

curl -fsSL https://deb.nodesource.com/setup_22.x | sudo -E bash -
sudo apt-get install -y nodejs

Verify: node --version should show v22.x.x.
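If you script the setup, the version check can be made explicit. A small sketch; check_node_major is an illustrative helper name, not part of Node or this playbook's tooling:

```shell
# Sketch: parse a `node --version` string (e.g. "v22.11.0") and require major >= 22.
check_node_major() {
  major=$(printf '%s' "$1" | sed 's/^v//' | cut -d. -f1)
  [ "${major:-0}" -ge 22 ]
}

check_node_major "v22.11.0" && echo "Node.js version OK"   # prints "Node.js version OK"
# In a real script: check_node_major "$(node --version)" || exit 1
```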

Step 3
Install Ollama and download a model

Install Ollama:

curl -fsSL https://ollama.com/install.sh | sh

Verify it is running:

curl http://localhost:11434

Expected: Ollama is running. If not, start it: ollama serve &

DGX Station has 170+ GB VRAM, so you can run large models. Download Nemotron 3 Super 120B (~87GB; may take several minutes):

ollama pull nemotron-3-super:120b

Run it briefly to pre-load weights (type /bye to exit):

ollama run nemotron-3-super:120b
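To confirm the download without an interactive session, you can query Ollama's /api/tags endpoint and check the model list. A sketch against a hard-coded sample response; in practice, pipe the real curl output instead:

```shell
# Sketch: check that the model appears in Ollama's tag list.
# resp is a sample; in practice: resp=$(curl -s http://localhost:11434/api/tags)
resp='{"models":[{"name":"nemotron-3-super:120b"}]}'
echo "$resp" | python3 -c '
import json, sys
names = [m["name"] for m in json.load(sys.stdin)["models"]]
print("present" if "nemotron-3-super:120b" in names else "missing")
'   # prints "present"
```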

Configure Ollama to listen on all interfaces so the sandbox container can reach it:

sudo systemctl edit ollama.service

Add the following near the top of the override file, above the line that reads "Edits below this comment will be discarded":

[Service]
Environment="OLLAMA_HOST=0.0.0.0"

Save and exit (in nano: Ctrl+X, then Y, then Enter), then restart:

sudo systemctl daemon-reload
sudo systemctl restart ollama

Step 4
Install the OpenShell CLI

The OpenShell binary is distributed via GitHub releases. You need the GitHub CLI and access to the NVIDIA organization.

sudo apt-get install -y gh
gh auth login

If you authenticate via SSH, gh shows a one-time code. Visit https://github.com/login/device in a browser, enter the code, and authorize access for the NVIDIA organization.

Configure git for NVIDIA SAML SSO and download OpenShell:

gh auth setup-git

ARCH=$(uname -m)
case "$ARCH" in
  x86_64|amd64) ARCH="x86_64" ;;
  aarch64|arm64) ARCH="aarch64" ;;
esac
gh release download --repo NVIDIA/OpenShell \
  --pattern "openshell-${ARCH}-unknown-linux-musl.tar.gz"
tar xzf openshell-${ARCH}-unknown-linux-musl.tar.gz
sudo install -m 755 openshell /usr/local/bin/openshell
rm -f openshell openshell-${ARCH}-unknown-linux-musl.tar.gz
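The arch normalization in the script above can be wrapped as a function if you reuse it elsewhere; a sketch (normalize_arch is our name, not an OpenShell utility):

```shell
# Sketch: the `uname -m` normalization from the download script as a function.
normalize_arch() {
  case "$1" in
    x86_64|amd64)  echo "x86_64" ;;
    aarch64|arm64) echo "aarch64" ;;
    *)             echo "$1" ;;   # pass unknown values through unchanged
  esac
}

normalize_arch "$(uname -m)"   # on DGX Station (Grace CPU) this prints aarch64
```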

Verify: openshell --version

Step 5
Install NemoClaw

Clone the NemoClaw plugin into your home directory and install it globally:

cd ~
git clone https://github.com/NVIDIA/openshell-openclaw-plugin.git
cd openshell-openclaw-plugin
sudo npm install -g .

Verify: nemoclaw --help

NOTE

OpenClaw (the AI agent) is installed automatically inside the sandbox during onboarding. You do not install it on the host.

Step 6
Run the NemoClaw onboard wizard

Ensure Ollama is running (curl http://localhost:11434 should return "Ollama is running"). From the plugin directory (you should be in ~/openshell-openclaw-plugin after Step 5), run:

nemoclaw onboard

The wizard walks you through seven steps:

  1. NVIDIA API key — Paste your key from build.nvidia.com (starts with nvapi-). Only needed once.
  2. Preflight — Checks Docker and OpenShell. On DGX Station with GB300, nvidia-smi typically shows the GPU; if the wizard reports "No GPU detected," it can still proceed and use Ollama for inference.
  3. Gateway — Starts the OpenShell gateway (30–60 seconds on first run).
  4. Sandbox — Enter a name or press Enter for the default. First build takes 2–5 minutes.
  5. Inference — The wizard auto-detects Ollama (e.g. "Ollama detected on localhost:11434 — using it").
  6. OpenClaw — Configured on first connect.
  7. Policies — Press Enter or Y to accept suggested presets (pypi, npm).

When complete, you will see output like:

  Dashboard    http://localhost:18789/
  Sandbox      my-assistant (Landlock + seccomp + netns)
  Model        nemotron-3-nano (ollama-local)

Step 7
Configure inference for Nemotron 3 Super

The onboard wizard defaults to nemotron-3-nano. Switch the inference route to the Super model you downloaded in Step 3:

openshell inference set --provider ollama-local --model nemotron-3-super:120b

Verify:

openshell inference get

Expected: provider: ollama-local and model: nemotron-3-super:120b.

Step 8
Start the OpenClaw web UI

Connect to the sandbox (use the name you chose in Step 6, e.g. my-assistant):

openshell sandbox connect my-assistant

You are now inside the sandbox. Run these commands in order.

Set API key placeholders so the gateway uses local Ollama. Use the literal value local-ollama for both variables; it is a placeholder that routes requests to local Ollama, not your NVIDIA API key from build.nvidia.com:

export NVIDIA_API_KEY=local-ollama
export ANTHROPIC_API_KEY=local-ollama

Initialize NemoClaw (this may drop you into a new shell when done):

nemoclaw-start

After the "NemoClaw ready" banner, re-export the environment variables:

export NVIDIA_API_KEY=local-ollama
export ANTHROPIC_API_KEY=local-ollama

Create memory files and start the web UI:

mkdir -p /sandbox/.openclaw/workspace/memory
echo "# Memory" > /sandbox/.openclaw/workspace/MEMORY.md

openclaw config set gateway.controlUi.dangerouslyAllowHostHeaderOriginFallback true

nohup openclaw gateway run \
  --allow-unconfigured --dev \
  --bind loopback --port 18789 \
  > /tmp/gateway.log 2>&1 &
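If you script this step, you can poll until the gateway answers instead of guessing at a delay. A minimal sketch; wait_for_url is an illustrative helper, not an openclaw or openshell command:

```shell
# Sketch: poll a URL until it responds (or give up after N tries).
wait_for_url() {
  url=$1; tries=${2:-30}
  i=0
  while [ "$i" -lt "$tries" ]; do
    curl -fsS "$url" > /dev/null 2>&1 && return 0
    i=$((i + 1)); sleep 1
  done
  return 1
}

# Usage inside the sandbox, after starting the gateway:
#   wait_for_url http://127.0.0.1:18789/ && openclaw dashboard
```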

Wait a few seconds, then get your dashboard URL:

openclaw dashboard

This prints something like:

Dashboard URL: http://127.0.0.1:18789/#token=YOUR_UNIQUE_TOKEN

Save this URL. Type exit to leave the sandbox (the gateway keeps running).

Step 9
Open the chat interface

Open the dashboard URL from Step 8 in your browser (on the DGX Station or via port forwarding if you connect remotely):

http://127.0.0.1:18789/#token=YOUR_UNIQUE_TOKEN

IMPORTANT

The token is in the URL as a hash fragment (#token=...), not a query parameter (?token=). Paste the full URL including #token=... into the address bar.

You should see the OpenClaw dashboard with Version and Health: OK. Click Chat in the left sidebar and send a message to your agent.

Try: "Hello! What can you help me with?" or "How many rs are there in the word strawberry?"

NOTE

Nemotron 3 Super 120B responses may take 30–90 seconds. This is normal for a 120B parameter model running locally on DGX Station.

Step 10
Using the agent from the command line

Connect to the sandbox:

openshell sandbox connect my-assistant

Run a prompt:

export NVIDIA_API_KEY=local-ollama
export ANTHROPIC_API_KEY=local-ollama
openclaw agent --agent main --local -m "How many rs are there in strawberry?" --session-id s1

Test sandbox isolation (this should be blocked by the network policy):

curl -sI https://httpbin.org/get
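The same check in script form, with an explicit pass/fail message (check_egress is an illustrative name, not an openshell command):

```shell
# Sketch: scripted isolation check. Inside the sandbox, outbound requests
# not covered by policy should fail or time out.
check_egress() {
  if curl -m 5 -sI "$1" > /dev/null 2>&1; then
    echo "reachable (isolation NOT in effect)"
  else
    echo "blocked"
  fi
}

check_egress https://httpbin.org/get   # expect "blocked" inside the sandbox
```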

Type exit to leave the sandbox.

Step 11
Monitoring with the OpenShell TUI

In a separate terminal on the host:

openshell term

Press f to follow live output, s to filter by source, q to quit.

Step 12
Cleanup

Remove the sandbox and destroy the NemoClaw gateway:

openshell sandbox delete my-assistant
openshell gateway destroy -g nemoclaw

To fully uninstall NemoClaw:

sudo npm uninstall -g nemoclaw
rm -rf ~/.nemoclaw

Step 13
Clean slate (start over)

To remove everything and start again from Step 5:

cd ~
openshell sandbox delete my-assistant 2>/dev/null
openshell gateway destroy -g nemoclaw 2>/dev/null
sudo npm uninstall -g nemoclaw
rm -rf ~/openshell-openclaw-plugin ~/.nemoclaw

Verify:

which nemoclaw        # Should report "not found"
openshell status      # Should report "No gateway configured"

Then restart from Step 5 (Install NemoClaw).

Step 14
Optional: Remote access via SSH

If you access the DGX Station remotely, forward port 18789 to your machine.

SSH tunnel (from your local machine, not the DGX Station):

ssh -L 18789:127.0.0.1:18789 your-user@your-dgx-station-ip

Then open the dashboard URL in your local browser.

Cursor / VS Code: Open the Ports tab in the bottom panel, click Forward a Port, enter 18789, then open the dashboard URL in your browser.

Step 15
Useful commands

  • openshell status: Check gateway health
  • openshell sandbox list: List all running sandboxes
  • openshell sandbox connect my-assistant: Shell into the sandbox
  • openshell term: Open the monitoring TUI
  • openshell inference get: Show current inference routing
  • openshell forward list: List active port forwards
  • nemoclaw my-assistant connect: Connect to sandbox (alternate)
  • nemoclaw my-assistant status: Show sandbox status

Resources

  • openshell-openclaw-plugin (NemoClaw)
  • OpenClaw Documentation
  • OpenShell (PyPI)
  • DGX Station Documentation
  • NVIDIA GB300