# Vibe Coding in VS Code

Use DGX Spark as a local or remote vibe-coding assistant with Ollama and Continue.dev.
## Install Ollama

Install the latest version of Ollama using the following command:

```sh
curl -fsSL https://ollama.com/install.sh | sh
```
Start the Ollama service:

```sh
ollama serve
```
Once the service is running, pull the desired model:

```sh
ollama pull gpt-oss:120b
```
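To confirm the service is up and the model was pulled, you can query Ollama's REST API. The sketch below (a minimal example, assuming the default port 11434 and the `/api/tags` model-listing endpoint) degrades gracefully if the server is not running:

```python
import json
import urllib.request
import urllib.error

# Quick sanity check: list the models a local Ollama instance is serving.
# 11434 is Ollama's default port; /api/tags returns the pulled models.
def list_models(base_url="http://localhost:11434"):
    try:
        with urllib.request.urlopen(f"{base_url}/api/tags", timeout=3) as resp:
            data = json.load(resp)
        return [m["name"] for m in data.get("models", [])]
    except (urllib.error.URLError, OSError):
        return None  # server not reachable

models = list_models()
if models is None:
    print("Ollama is not reachable -- is `ollama serve` running?")
else:
    print("Available models:", models)
```

If the pull succeeded, `gpt-oss:120b` should appear in the printed list.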
## (Optional) Enable Remote Access

To allow remote connections (e.g., from a workstation running VS Code and Continue.dev), modify the Ollama systemd service:

```sh
sudo systemctl edit ollama
```

Add the following lines beneath the commented section:

```ini
[Service]
Environment="OLLAMA_HOST=0.0.0.0:11434"
Environment="OLLAMA_ORIGINS=*"
```
Reload and restart the service:

```sh
sudo systemctl daemon-reload
sudo systemctl restart ollama
```

If you are using a firewall, open port 11434:

```sh
sudo ufw allow 11434/tcp
```
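Before moving on, it is worth confirming that the port is actually reachable from the workstation. A small probe like the following can check this (the `SPARK_IP` value is a placeholder -- substitute your DGX Spark's address):

```python
import socket

# Probe the Ollama port from a remote workstation.
SPARK_IP = "127.0.0.1"  # placeholder -- replace with your DGX Spark IP

def port_open(host, port=11434, timeout=3):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print("Ollama port reachable:", port_open(SPARK_IP))
```

If this prints `False`, recheck the `OLLAMA_HOST` setting and the firewall rule above.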
## Install VS Code

On DGX Spark (ARM-based), download and install the Arm64 build of VS Code (note the quotes -- the `&` in the URL must not be interpreted by the shell):

```sh
wget "https://code.visualstudio.com/sha/download?build=stable&os=linux-deb-arm64" -O vscode-arm64.deb
sudo apt install ./vscode-arm64.deb
```

If you are working from a remote workstation, install the VS Code build that matches your system architecture.
## Install the Continue.dev Extension

Open VS Code and install the Continue.dev extension from the Marketplace. After installation, click the Continue icon on the right-hand bar. Skip login and open the manual configuration via the gear (⚙️) icon. This opens `config.yaml`, which controls model settings.
## Local Inference Setup

- In the Continue chat window, press Ctrl/Cmd + L to focus the chat.
- Click Select Model → + Add Chat Model.
- Choose Ollama as the provider.
- Set Install Provider to default.
- For Model, select Autodetect.
- Click Connect.

You can now select your downloaded model (e.g., gpt-oss:120b) for local inference.
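If you prefer to pin the model by hand rather than rely on Autodetect, a local entry in `config.yaml` can look like the sketch below (it mirrors the remote configuration shown later, minus `apiBase`, so Continue talks to Ollama on localhost; adjust the model name to whatever you pulled):

```yaml
models:
  - model: gpt-oss:120b
    title: gpt-oss:120b
    provider: ollama
```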
## Remote Setup for DGX Spark

To connect Continue.dev to a remote DGX Spark instance, edit `config.yaml` in Continue and add:

```yaml
models:
  - model: gpt-oss:120b
    title: gpt-oss:120b
    apiBase: http://YOUR_SPARK_IP:11434/
    provider: ollama
```

Replace YOUR_SPARK_IP with the IP address of your DGX Spark. Add additional model entries for any other Ollama models you wish to host remotely.
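For example, a configuration serving two models might look like the following (the second model name, `llama3.1:8b`, is only an illustration -- use whatever you have pulled with `ollama pull` on the DGX Spark):

```yaml
models:
  - model: gpt-oss:120b
    title: gpt-oss:120b
    apiBase: http://YOUR_SPARK_IP:11434/
    provider: ollama
  - model: llama3.1:8b
    title: llama3.1:8b
    apiBase: http://YOUR_SPARK_IP:11434/
    provider: ollama
```

Each entry then appears in Continue's model picker, so you can switch between remote models from the chat window.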