NemoClaw

60 MINS

Run OpenClaw in an OpenShell sandbox on DGX Spark with Ollama (Nemotron)

Docker configuration

Verify Docker permissions and configure the NVIDIA runtime. OpenShell's gateway runs k3s inside Docker, and on DGX Spark it requires an additional cgroup setting so the gateway can start correctly.

Verify Docker:

docker ps

If you get a permission denied error, add your user to the docker group:

sudo usermod -aG docker $USER

Log out and back in for the group to take effect.
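To confirm the change took effect after logging back in, a quick check like the following can help (the in_group helper is illustrative, not part of the tooling):

```shell
# Illustrative helper: is a given group active in the current shell?
in_group() { id -nG | grep -qw -- "$1"; }

# Example (after logging back in, this should succeed):
# in_group docker && echo "docker group active"
```

If the group is still missing, log out fully and back in; `newgrp docker` also gives a temporary subshell with the new group.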

Configure Docker for the NVIDIA runtime and set cgroup namespace mode for OpenShell on DGX Spark:

sudo nvidia-ctk runtime configure --runtime=docker

sudo python3 -c "
import json, os
path = '/etc/docker/daemon.json'
d = json.load(open(path)) if os.path.exists(path) else {}
d['default-cgroupns-mode'] = 'host'
json.dump(d, open(path, 'w'), indent=2)
"

sudo systemctl restart docker

Verify:

docker run --rm --runtime=nvidia --gpus all ubuntu nvidia-smi

NOTE

DGX Spark uses cgroup v2. OpenShell's gateway embeds k3s inside Docker and needs host cgroup namespace access. Without default-cgroupns-mode: host, the gateway can fail with "Failed to start ContainerManager" errors.
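With both changes applied, /etc/docker/daemon.json should contain the NVIDIA runtime entry and the cgroup namespace setting. The exact file varies by system, but roughly:

```json
{
  "runtimes": {
    "nvidia": {
      "path": "nvidia-container-runtime",
      "runtimeArgs": []
    }
  },
  "default-cgroupns-mode": "host"
}
```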

Install Node.js

NemoClaw is installed via npm and requires Node.js.

curl -fsSL https://deb.nodesource.com/setup_22.x | sudo -E bash -
sudo apt-get install -y nodejs

Verify: node --version should show v22.x.x.

Install Ollama and download a model

Install Ollama:

curl -fsSL https://ollama.com/install.sh | sh

Verify it is running:

curl http://localhost:11434

Expected response: "Ollama is running". If not, start it manually: ollama serve &
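If curl is unavailable, or you want a scriptable check, bash can probe the port directly via its built-in /dev/tcp pseudo-device (a sketch; the port_open name is ours):

```shell
# Sketch: test whether a TCP port accepts connections,
# using bash's /dev/tcp redirection (no curl or nc required).
port_open() {
  (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null
}

# Example: check the Ollama API port.
# port_open localhost 11434 && echo "Ollama port 11434 is open"
```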

Download Nemotron 3 Super 120B (~87GB; may take several minutes):

ollama pull nemotron-3-super:120b

Run it briefly to pre-load weights (type /bye to exit):

ollama run nemotron-3-super:120b

Configure Ollama to listen on all interfaces so the sandbox container can reach it:

sudo systemctl edit ollama.service

In the editor that opens, add the following above the "Edits below this comment will be discarded" comment:

[Service]
Environment="OLLAMA_HOST=0.0.0.0"

Save and exit (in nano: Ctrl+X, then Y, then Enter to confirm), then restart:

sudo systemctl daemon-reload
sudo systemctl restart ollama
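Ollama can take a few seconds to come back after a restart. A small polling helper (illustrative name wait_for) avoids racing it in scripts:

```shell
# Sketch: retry a command once per second until it succeeds or we give up.
wait_for() {
  local tries=$1; shift
  local i
  for ((i = 0; i < tries; i++)); do
    "$@" >/dev/null 2>&1 && return 0
    sleep 1
  done
  return 1
}

# Example: wait up to 30 seconds for the Ollama API to answer.
# wait_for 30 curl -fsS http://localhost:11434
```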

Install the OpenShell CLI

The OpenShell binary is distributed via GitHub releases. You need the GitHub CLI and access to the NVIDIA organization.

sudo apt-get install -y gh
gh auth login

If you are connected over SSH, gh will show a one-time code instead of opening a browser. Visit https://github.com/login/device in a browser on your local machine, enter the code, and authorize access for the NVIDIA organization.

Configure git for NVIDIA SAML SSO and download OpenShell:

gh auth setup-git

ARCH=$(uname -m)
case "$ARCH" in
  x86_64|amd64) ARCH="x86_64" ;;
  aarch64|arm64) ARCH="aarch64" ;;
esac
gh release download --repo NVIDIA/OpenShell \
  --pattern "openshell-${ARCH}-unknown-linux-musl.tar.gz"
tar xzf openshell-${ARCH}-unknown-linux-musl.tar.gz
sudo install -m 755 openshell /usr/local/bin/openshell
rm -f openshell openshell-${ARCH}-unknown-linux-musl.tar.gz

Verify: openshell --version

Install NemoClaw

Clone the NemoClaw plugin and install it globally:

git clone https://github.com/NVIDIA/NemoClaw
cd NemoClaw
sudo npm install -g .

Verify: nemoclaw --help

NOTE

OpenClaw (the AI agent) is installed automatically inside the sandbox during onboarding. You do not install it on the host.

Run the NemoClaw onboard wizard

Ensure Ollama is running (curl http://localhost:11434 should return "Ollama is running"). From the directory where you cloned the plugin in Step 5 (e.g. ~/NemoClaw), in the same terminal or a new one, run:

cd ~/NemoClaw
nemoclaw onboard

The wizard walks you through seven steps:

  1. NVIDIA API key — Paste your key from build.nvidia.com (starts with nvapi-). Only needed once.
  2. Preflight — Checks Docker and OpenShell. "No GPU detected" is normal on DGX Spark (GB10 reports unified memory differently).
  3. Gateway — Starts the OpenShell gateway (30–60 seconds on first run).
  4. Sandbox — Enter a name or press Enter for the default. First build takes 2–5 minutes.
  5. Inference — The wizard auto-detects Ollama (e.g. "Ollama detected on localhost:11434 — using it").
  6. OpenClaw — Configured on first connect.
  7. Policies — Press Enter or Y to accept suggested presets (pypi, npm).

When complete you will see something like:

  Dashboard    http://localhost:18789/
  Sandbox      my-assistant (Landlock + seccomp + netns)
  Model        nemotron-3-nano (ollama-local)

Configure inference for Nemotron 3 Super

The onboard wizard defaults to nemotron-3-nano. Switch the inference route to the Super model you downloaded in Step 3:

openshell inference set --provider ollama-local --model nemotron-3-super:120b

Verify:

openshell inference get

Expected: provider: ollama-local and model: nemotron-3-super:120b.

Start the OpenClaw web UI

Connect to the sandbox (use the name you chose in Step 6, e.g. my-assistant):

openshell sandbox connect my-assistant

You are now inside the sandbox. Run these commands in order.

Set the API key environment variables (required for the gateway). For local Ollama, use the value local-ollama — no real API key is required. If you use a different inference provider later, replace with your API key:

export NVIDIA_API_KEY=local-ollama
export ANTHROPIC_API_KEY=local-ollama

Initialize NemoClaw (this may drop you into a new shell when done):

nemoclaw-start

After the "NemoClaw ready" banner, re-export the environment variables:

export NVIDIA_API_KEY=local-ollama
export ANTHROPIC_API_KEY=local-ollama

Create memory files and start the web UI:

mkdir -p /sandbox/.openclaw/workspace/memory
echo "# Memory" > /sandbox/.openclaw/workspace/MEMORY.md

openclaw config set gateway.controlUi.dangerouslyAllowHostHeaderOriginFallback true

nohup openclaw gateway run \
  --allow-unconfigured --dev \
  --bind loopback --port 18789 \
  > /tmp/gateway.log 2>&1 &

Wait a few seconds, then get your dashboard URL:

openclaw dashboard

This prints something like:

Dashboard URL: http://127.0.0.1:18789/#token=YOUR_UNIQUE_TOKEN

Save this URL. Type exit to leave the sandbox (the gateway keeps running).
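If you later want to script against the dashboard, the token can be peeled off the saved URL with plain parameter expansion (a sketch; YOUR_UNIQUE_TOKEN is the placeholder from above):

```shell
# Sketch: extract the hash-fragment token from a saved dashboard URL.
url='http://127.0.0.1:18789/#token=YOUR_UNIQUE_TOKEN'
token="${url#*#token=}"   # strip everything up to and including "#token="
echo "$token"             # YOUR_UNIQUE_TOKEN
```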

Open the chat interface

Open the dashboard URL from Step 8 in your Spark's web browser:

http://127.0.0.1:18789/#token=YOUR_UNIQUE_TOKEN

IMPORTANT

The token is in the URL as a hash fragment (#token=...), not a query parameter (?token=). Paste the full URL including #token=... into the address bar.

You should see the OpenClaw dashboard with Version and Health: OK. Click Chat in the left sidebar and send a message to your agent.

Try: "Hello! What can you help me with?" or "How many rs are there in the word strawberry?"

NOTE

Nemotron 3 Super 120B responses may take 30–90 seconds. This is normal for a 120B parameter model running locally.

Using the agent from the command line

Connect to the sandbox:

openshell sandbox connect my-assistant

Run a prompt:

export NVIDIA_API_KEY=local-ollama
export ANTHROPIC_API_KEY=local-ollama
openclaw agent --agent main --local -m "How many rs are there in strawberry?" --session-id s1

Test sandbox isolation (this should be blocked by the network policy):

curl -sI https://httpbin.org/get

Type exit to leave the sandbox.

Monitoring with the OpenShell TUI

In a separate terminal on the host:

openshell term

Press f to follow live output, s to filter by source, q to quit.

Cleanup

Remove the sandbox and destroy the NemoClaw gateway:

openshell sandbox delete my-assistant
openshell gateway destroy -g nemoclaw

To fully uninstall NemoClaw:

sudo npm uninstall -g nemoclaw
rm -rf ~/.nemoclaw

Clean slate (start over)

To remove everything and start again from Step 5:

cd ~
openshell sandbox delete my-assistant 2>/dev/null
openshell gateway destroy -g nemoclaw 2>/dev/null
sudo npm uninstall -g nemoclaw
rm -rf ~/NemoClaw ~/.nemoclaw

Verify:

which nemoclaw        # Should report "not found"
openshell status      # Should report "No gateway configured"

Then restart from Step 5 (Install NemoClaw).

Optional: Remote access via SSH

If you access the Spark remotely, forward port 18789 to your machine.

SSH tunnel (from your local machine, not the Spark):

ssh -L 18789:127.0.0.1:18789 your-user@your-spark-ip

Then open the dashboard URL in your local browser.

Cursor / VS Code: Open the Ports tab in the bottom panel, click Forward a Port, enter 18789, then open the dashboard URL in your browser.

Useful commands

Command                                  Description
openshell status                         Check gateway health
openshell sandbox list                   List all running sandboxes
openshell sandbox connect my-assistant   Shell into the sandbox
openshell term                           Open the monitoring TUI
openshell inference get                  Show current inference routing
openshell forward list                   List active port forwards
nemoclaw my-assistant connect            Connect to the sandbox (alternate)
nemoclaw my-assistant status             Show sandbox status