Single-cell RNA Sequencing

15 MIN

An end-to-end GPU-powered workflow for scRNA-seq using RAPIDS

data science
View on GitHub

Step 1
Verify your environment

Let's first verify that you have a working GPU, git, and Docker. Open a terminal, then copy and paste the commands below:

nvidia-smi
git --version
docker --version
  • nvidia-smi will output information about your GPU. If it doesn't, your GPU is not properly configured.
  • git --version will print something like git version 2.43.0. If you get an error saying that git is not installed, please reinstall it.
  • docker --version will print something like Docker version 28.3.3, build 980b856. If you get an error saying that Docker is not installed, please reinstall it. If you see a permission denied error, add your user to the docker group by running sudo usermod -aG docker $USER && newgrp docker.
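
The three checks above can also be wrapped into a single loop that reports which tools are present. This is a convenience sketch, not part of the playbook:

```shell
#!/usr/bin/env bash
# Convenience sketch (not part of the playbook): report which of the
# required tools are on PATH before running the individual commands.
for tool in nvidia-smi git docker; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "OK: $tool"
  else
    echo "MISSING: $tool"
  fi
done
```

Any `MISSING` line points you at the corresponding fix described above.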

Step 2
Installation

Open a terminal, then copy and paste the commands below:

git clone https://github.com/NVIDIA/dgx-spark-playbooks
cd dgx-spark-playbooks/nvidia/single-cell/assets
bash ./setup/start_playbook.sh

start_playbook.sh will:

  1. pull the RAPIDS 25.10 Notebooks Docker container
  2. build all the environments needed for the playbook in the container using setup_playbook.sh
  3. start JupyterLab

Please keep the Terminal window open while using the playbook.

You can access your JupyterLab server in two ways:

  1. at http://127.0.0.1:8888 if running locally on the DGX Spark.
  2. at http://<SPARK_IP>:8888 if using your DGX Spark headless over your network.
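
If you are connecting over the network and don't know `<SPARK_IP>`, standard Linux commands run on the Spark itself will print its addresses (a sketch; interface names and which address is reachable vary by network):

```shell
# Print the machine's assigned IPv4 addresses (run on the DGX Spark itself).
# Either command works on a standard Ubuntu-based system; use whichever
# address belongs to the interface your client machine can reach.
addrs="$(hostname -I 2>/dev/null || ip -4 -br addr show 2>/dev/null || true)"
echo "$addrs"
```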

Once in JupyterLab, you'll be greeted with a directory containing scRNA_analysis_preprocessing.ipynb, and the folders cuDF, cuML, cuGraph, and playbook.

  • scRNA_analysis_preprocessing.ipynb is the playbook notebook. Open it by double-clicking the file.
  • The cuDF, cuML, and cuGraph folders contain the standard RAPIDS library example notebooks to help you continue exploring.
  • playbook contains the playbook files. The contents of this folder are read-only inside the rootless Docker container.

If you want to install any of the playbook notebooks on your own system, check out the READMEs in the folder that accompanies each notebook.

Step 3
Run the notebook

Once in JupyterLab, all you have to do is run scRNA_analysis_preprocessing.ipynb. You'll have both this playbook notebook and the standard RAPIDS library example notebooks to help you get going.

You can use Shift + Enter to manually run each cell at your own pace, or Run > Run All to run all the cells.

Once you're done with exploring the scRNA_analysis_preprocessing notebook, you can explore other RAPIDS notebooks by going into the folders, selecting other notebooks, and doing the same thing.

Step 4
Download your work

Since the Docker container does not have privileged write access to the host system, use JupyterLab to download any files you want to keep before the container is shut down.

Right-click the file in the JupyterLab file browser, then click Download in the dropdown menu.
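
If you prefer the command line, `docker cp` can copy files out of the running container instead. The container name and in-container path below are placeholders, so check them against your own `docker ps` output first:

```shell
# Assumption: the playbook container is among those listed by `docker ps`.
if command -v docker >/dev/null 2>&1; then
  docker ps --format '{{.Names}}'   # list running container names
fi
# Then copy a file from the container to the current host directory,
# filling in the name and path from the listing above:
# docker cp <CONTAINER_NAME>:/path/in/container/file.ipynb ./file.ipynb
```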

Step 5
Cleanup

Once you have downloaded all your work, go back to the Terminal window where you started running the playbook.

In the Terminal window,

  1. Press Ctrl + C
  2. At the prompt, either type y and press Enter, or press Ctrl + C again
  3. The Docker container will proceed to shut down
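
To confirm the shutdown actually completed, a quick sanity check (assuming `ss` from iproute2 and Docker are on PATH, as on a standard Ubuntu system):

```shell
# After Ctrl + C, nothing should be listening on port 8888 any more.
if command -v ss >/dev/null 2>&1; then
  ss -ltn sport = :8888     # header only (no rows) means the port is free
fi
if command -v docker >/dev/null 2>&1; then
  docker ps || true         # the RAPIDS notebooks container should be absent
fi
```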

WARNING

This will delete ALL data that wasn't already downloaded from the Docker container. The browser window may still show cached files if it is still open.

Resources

  • DGX Spark Documentation
  • DGX Spark Forum
  • DGX Spark User Performance Guide