Verify Docker permissions and configure the NVIDIA runtime. OpenShell's gateway runs k3s inside Docker; on some systems (including DGX Station with cgroup v2), the gateway needs a cgroup setting to start correctly.
Verify Docker:
docker ps
If you get a permission denied error, add your user to the Docker group:
sudo usermod -aG docker $USER
Log out and back in for the group change to take effect (or run newgrp docker to apply it in the current shell).
Configure Docker for the NVIDIA runtime and set cgroup namespace mode for OpenShell on DGX Station:
sudo nvidia-ctk runtime configure --runtime=docker
sudo python3 -c "
import json, os
path = '/etc/docker/daemon.json'
d = json.load(open(path)) if os.path.exists(path) else {}
d['default-cgroupns-mode'] = 'host'
json.dump(d, open(path, 'w'), indent=2)
"
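If you want to see what this merge does before it touches /etc/docker/daemon.json, the same logic can be exercised on a scratch file first. The sample runtimes entry below is illustrative (it mirrors the kind of key nvidia-ctk writes); the /tmp path is throwaway:

```shell
# Exercise the same merge logic on a scratch file instead of the real
# /etc/docker/daemon.json, so you can preview the result safely.
tmp=$(mktemp)
echo '{"runtimes": {"nvidia": {"path": "nvidia-container-runtime"}}}' > "$tmp"
python3 - "$tmp" <<'EOF'
import json, sys
path = sys.argv[1]
d = json.load(open(path))
d['default-cgroupns-mode'] = 'host'  # same key the command above sets
json.dump(d, open(path, 'w'), indent=2)
EOF
cat "$tmp"  # existing keys survive; the new key is added alongside them
```

Note that the merge preserves whatever is already in the file, which is why it is safer than overwriting daemon.json wholesale.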
sudo systemctl restart docker
Verify:
docker run --rm --runtime=nvidia --gpus all ubuntu nvidia-smi
NOTE
On DGX Station (and other systems using cgroup v2), OpenShell's gateway embeds k3s inside Docker and may need host cgroup namespace access. Without default-cgroupns-mode: host, the gateway can fail with "Failed to start ContainerManager" errors.
NemoClaw is installed via npm and requires Node.js.
curl -fsSL https://deb.nodesource.com/setup_22.x | sudo -E bash -
sudo apt-get install -y nodejs
Verify: node --version should show v22.x.x.
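If you script your setup, the version check can be made pass/fail. The parsing below is plain shell parameter expansion; the variable names and messages are our own, not part of any tool:

```shell
# Fail loudly if the installed Node.js major version is not 22.
required=22
v=$(node --version 2>/dev/null || echo "v0.0.0")
major=${v#v}        # strip the leading "v"
major=${major%%.*}  # keep only the major component
if [ "$major" -eq "$required" ]; then
  echo "Node.js $v looks good"
else
  echo "Expected Node.js ${required}.x, found $v" >&2
fi
```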
Install Ollama:
curl -fsSL https://ollama.com/install.sh | sh
Verify it is running:
curl http://localhost:11434
Expected response: "Ollama is running". If not, start it: ollama serve &
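Because ollama serve can take a moment to come up, scripted setups benefit from a small polling helper instead of a fixed sleep. wait_for is a name introduced here, not an Ollama or OpenShell command:

```shell
# Poll an HTTP endpoint until it responds, giving up after a timeout (seconds).
wait_for() {
  url=$1
  limit=${2:-30}
  t=0
  until curl -fsS "$url" >/dev/null 2>&1; do
    t=$((t + 1))
    if [ "$t" -ge "$limit" ]; then
      return 1
    fi
    sleep 1
  done
}
wait_for http://localhost:11434 15 || echo "Ollama not reachable yet"
```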
DGX Station has 170+ GB VRAM, so you can run large models. Download Nemotron 3 Super 120B (~87GB; may take several minutes):
ollama pull nemotron-3-super:120b
Run it briefly to pre-load weights (type /bye to exit):
ollama run nemotron-3-super:120b
Configure Ollama to listen on all interfaces so the sandbox container can reach it:
sudo systemctl edit ollama.service
Add the following at the top of the override file, above the line reading "### Edits below this comment will be discarded":
[Service]
Environment="OLLAMA_HOST=0.0.0.0"
Save and exit (in nano: Ctrl+X, then Y, then Enter), then restart:
sudo systemctl daemon-reload
sudo systemctl restart ollama
The OpenShell binary is distributed via GitHub releases. You need the GitHub CLI and access to the NVIDIA organization.
sudo apt-get install -y gh
gh auth login
If you are working over SSH (no local browser), gh shows a one-time code. Visit https://github.com/login/device in a browser, enter the code, and authorize for the NVIDIA org.
Configure git for NVIDIA SAML SSO and download OpenShell:
gh auth setup-git
ARCH=$(uname -m)
case "$ARCH" in
x86_64|amd64) ARCH="x86_64" ;;
aarch64|arm64) ARCH="aarch64" ;;
esac
gh release download --repo NVIDIA/OpenShell \
--pattern "openshell-${ARCH}-unknown-linux-musl.tar.gz"
tar xzf openshell-${ARCH}-unknown-linux-musl.tar.gz
sudo install -m 755 openshell /usr/local/bin/openshell
rm -f openshell openshell-${ARCH}-unknown-linux-musl.tar.gz
Verify: openshell --version
Clone the NemoClaw plugin into your home directory and install it globally:
cd ~
git clone https://github.com/NVIDIA/openshell-openclaw-plugin.git
cd openshell-openclaw-plugin
sudo npm install -g .
Verify: nemoclaw --help
NOTE
OpenClaw (the AI agent) is installed automatically inside the sandbox during onboarding. You do not install it on the host.
Ensure Ollama is running (curl http://localhost:11434 should return "Ollama is running"). From the plugin directory (you should be in ~/openshell-openclaw-plugin after Step 5), run:
nemoclaw onboard
The wizard walks you through seven steps, including entering your NVIDIA API key (it starts with nvapi- and is only needed once) and GPU detection (nvidia-smi typically shows the GPU; if the wizard reports "No GPU detected," it can still proceed and use Ollama for inference). When complete you will see something like:
Dashboard http://localhost:18789/
Sandbox my-assistant (Landlock + seccomp + netns)
Model nemotron-3-nano (ollama-local)
The onboard wizard defaults to nemotron-3-nano. Switch the inference route to the Super model you downloaded in Step 3:
openshell inference set --provider ollama-local --model nemotron-3-super:120b
Verify:
openshell inference get
Expected: provider: ollama-local and model: nemotron-3-super:120b.
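For scripted setups, the same verification can be made pass/fail by grepping the output. The field layout is assumed from the expected output above, so adjust the pattern if your version prints differently:

```shell
# Exit message depends on whether the Super model is the active route.
if openshell inference get 2>/dev/null | grep -q 'nemotron-3-super:120b'; then
  echo "inference route OK"
else
  echo "inference route not set; re-run the openshell inference set command" >&2
fi
```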
Connect to the sandbox (use the name you chose in Step 6, e.g. my-assistant):
openshell sandbox connect my-assistant
You are now inside the sandbox. Run these commands in order.
Set API key placeholders so the gateway uses local Ollama. Use the literal value local-ollama for both; it tells the gateway to route to local Ollama and is not your NVIDIA API key from build.nvidia.com:
export NVIDIA_API_KEY=local-ollama
export ANTHROPIC_API_KEY=local-ollama
Initialize NemoClaw (this may drop you into a new shell when done):
nemoclaw-start
After the "NemoClaw ready" banner, re-export the environment variables:
export NVIDIA_API_KEY=local-ollama
export ANTHROPIC_API_KEY=local-ollama
Create memory files and start the web UI:
mkdir -p /sandbox/.openclaw/workspace/memory
echo "# Memory" > /sandbox/.openclaw/workspace/MEMORY.md
openclaw config set gateway.controlUi.dangerouslyAllowHostHeaderOriginFallback true
nohup openclaw gateway run \
--allow-unconfigured --dev \
--bind loopback --port 18789 \
> /tmp/gateway.log 2>&1 &
Wait a few seconds, then get your dashboard URL:
openclaw dashboard
This prints something like:
Dashboard URL: http://127.0.0.1:18789/#token=YOUR_UNIQUE_TOKEN
Save this URL. Type exit to leave the sandbox (the gateway keeps running).
Open the dashboard URL from Step 8 in your browser (on the DGX Station or via port forwarding if you connect remotely):
http://127.0.0.1:18789/#token=YOUR_UNIQUE_TOKEN
IMPORTANT
The token is in the URL as a hash fragment (#token=...), not a query parameter (?token=). Paste the full URL including #token=... into the address bar.
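One consequence of the fragment form is that the token never leaves the browser in the HTTP request itself. If you need the token for scripting, plain shell parameter expansion can peel it off the URL (YOUR_UNIQUE_TOKEN is the placeholder from above):

```shell
# Split the dashboard URL at "#token=" and keep everything after it.
url='http://127.0.0.1:18789/#token=YOUR_UNIQUE_TOKEN'
token=${url#*"#token="}
echo "$token"  # → YOUR_UNIQUE_TOKEN
```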
You should see the OpenClaw dashboard with Version and Health: OK. Click Chat in the left sidebar and send a message to your agent.
Try: "Hello! What can you help me with?" or "How many rs are there in the word strawberry?"
NOTE
Nemotron 3 Super 120B responses may take 30–90 seconds. This is normal for a 120B parameter model running locally on DGX Station.
Connect to the sandbox:
openshell sandbox connect my-assistant
Run a prompt:
export NVIDIA_API_KEY=local-ollama
export ANTHROPIC_API_KEY=local-ollama
openclaw agent --agent main --local -m "How many rs are there in strawberry?" --session-id s1
Test sandbox isolation (this should be blocked by the network policy):
curl -sI https://httpbin.org/get
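The manual check above can be turned into a pass/fail probe by bounding the request with a timeout; the 5-second limit and the messages are our own choices, not part of OpenShell:

```shell
# A blocked network shows up as curl failing or hanging; timeout bounds it.
if timeout 5 curl -sI https://httpbin.org/get >/dev/null 2>&1; then
  echo "WARNING: outbound traffic escaped the sandbox"
else
  echo "outbound traffic blocked, as expected"
fi
```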
Type exit to leave the sandbox.
In a separate terminal on the host:
openshell term
Press f to follow live output, s to filter by source, q to quit.
Remove the sandbox and destroy the NemoClaw gateway:
openshell sandbox delete my-assistant
openshell gateway destroy -g nemoclaw
To fully uninstall NemoClaw:
sudo npm uninstall -g nemoclaw
rm -rf ~/.nemoclaw
To remove everything and start again from Step 5:
cd ~
openshell sandbox delete my-assistant 2>/dev/null
openshell gateway destroy -g nemoclaw 2>/dev/null
sudo npm uninstall -g nemoclaw
rm -rf ~/openshell-openclaw-plugin ~/.nemoclaw
Verify:
which nemoclaw # Should report "not found"
openshell status # Should report "No gateway configured"
Then restart from Step 5 (Install NemoClaw).
If you access the DGX Station remotely, forward port 18789 to your machine.
SSH tunnel (from your local machine, not the DGX Station):
ssh -L 18789:127.0.0.1:18789 your-user@your-dgx-station-ip
Then open the dashboard URL in your local browser.
Cursor / VS Code: Open the Ports tab in the bottom panel, click Forward a Port, enter 18789, then open the dashboard URL in your browser.
| Command | Description |
|---|---|
| openshell status | Check gateway health |
| openshell sandbox list | List all running sandboxes |
| openshell sandbox connect my-assistant | Shell into the sandbox |
| openshell term | Open the monitoring TUI |
| openshell inference get | Show current inference routing |
| openshell forward list | List active port forwards |
| nemoclaw my-assistant connect | Connect to the sandbox (alternate) |
| nemoclaw my-assistant status | Show sandbox status |