Text to Knowledge Graph

30 MIN

Transform unstructured text into interactive knowledge graphs with LLM inference and graph visualization

DGX · Graph Databases · Graph Visualization · GraphRAG · Knowledge Graphs · NLP · Ollama · Spark
View on GitHub

Step 1
Clone the repository

In a terminal, clone the txt2kg repository and navigate to the project directory.

git clone https://github.com/NVIDIA/dgx-spark-playbooks
cd dgx-spark-playbooks/nvidia/txt2kg/assets

Step 2
Start the txt2kg services

Use the provided start script to launch all required services. This will set up Ollama, ArangoDB, and the Next.js frontend:

./start.sh

The script will automatically:

  • Check for GPU availability
  • Start Docker Compose services
  • Set up ArangoDB database
  • Launch the web interface
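Once the script reports success, you can verify that the services are actually listening before opening the UI. A minimal sketch, assuming the default ports used later in this playbook (3001 for the web interface, 8529 for ArangoDB, 11434 for Ollama):

```python
import socket

def service_ready(host, port, timeout=1.0):
    """Return True if something is accepting connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Default ports for the txt2kg stack; adjust if you changed the compose file.
SERVICES = {"web": 3001, "arangodb": 8529, "ollama": 11434}
status = {name: service_ready("localhost", port) for name, port in SERVICES.items()}
```

If any entry comes back `False`, check the container logs with `docker compose logs <service>` before proceeding.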

Step 3
Pull an Ollama model (optional)

Optionally download a different language model for knowledge extraction; the default model loaded is Llama 3.1 8B:

docker exec ollama-compose ollama pull <model-name>

Browse available models at https://ollama.com/search

NOTE

The DGX Spark's unified memory architecture makes it possible to run larger models, such as 70B-parameter variants, which produce significantly more accurate knowledge triples.
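Once a model is pulled, extraction can also be driven through Ollama's HTTP API rather than the web UI. A sketch of building a request body for the `/api/generate` endpoint — the extraction prompt here is illustrative, not the prompt txt2kg uses internally:

```python
import json

def build_generate_request(model, text):
    """Build a JSON body for Ollama's /api/generate endpoint.

    The prompt wording is an illustrative extraction prompt, not the
    one used by the txt2kg service itself.
    """
    prompt = (
        "Extract subject-predicate-object triples from the following text. "
        "Return one (subject, predicate, object) per line.\n\n" + text
    )
    # stream=False asks Ollama to return a single JSON response
    # instead of a stream of partial tokens.
    return json.dumps({"model": model, "prompt": prompt, "stream": False})

body = build_generate_request("llama3.1:8b", "DGX Spark runs Ollama locally.")
```

You can POST this body to `http://localhost:11434/api/generate` with `curl -d @-` to try extraction on a snippet of your own text.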

Step 4
Access the web interface

Open your browser and navigate to:

http://localhost:3001

You can also access individual services:

  • ArangoDB Web Interface: http://localhost:8529
  • Ollama API: http://localhost:11434
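To confirm the Ollama service is reachable (and see which models it has pulled) without opening a browser, you can query its `/api/tags` endpoint. This sketch returns an empty list instead of raising when the service is down:

```python
import json
from urllib import request, error

def list_ollama_models(base_url="http://localhost:11434"):
    """Return the names of models known to Ollama, or [] if unreachable."""
    try:
        with request.urlopen(f"{base_url}/api/tags", timeout=2) as resp:
            data = json.load(resp)
        return [m["name"] for m in data.get("models", [])]
    except (error.URLError, OSError):
        # Service not up yet (or wrong port) -- report no models.
        return []
```

After Step 3, `llama3.1:8b` (or whichever model you pulled) should appear in the returned list.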

Step 5
Upload documents and build knowledge graphs

5.1. Document Upload

  • Use the web interface to upload text documents (markdown, text, CSV supported)
  • Documents are automatically chunked and processed for triple extraction

5.2. Knowledge Graph Generation

  • The system extracts subject-predicate-object triples using Ollama
  • Triples are stored in ArangoDB for relationship querying

5.3. Interactive Visualization

  • View your knowledge graph in 2D or 3D with GPU-accelerated rendering
  • Explore nodes and relationships interactively

5.4. Graph-based Queries

  • Ask questions about your documents using the query interface
  • Graph traversal enhances context with entity relationships from ArangoDB
  • LLM generates responses using the enriched graph context

Future Enhancement: GraphRAG capabilities with vector-based KNN search for entity retrieval are planned.
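The exact triple format the LLM emits depends on the extraction prompt. Assuming one parenthesized `(subject, predicate, object)` per line, as in the sketch above, post-processing the model output might look like this:

```python
def parse_triples(llm_output):
    """Parse lines like '(subject, predicate, object)' into 3-tuples.

    Assumes one parenthesized triple per line; malformed lines are
    skipped rather than raising.
    """
    triples = []
    for line in llm_output.splitlines():
        line = line.strip().strip("()")
        parts = [p.strip() for p in line.split(",")]
        if len(parts) == 3 and all(parts):
            triples.append(tuple(parts))
    return triples

sample = "(DGX Spark, runs, Ollama)\n(Ollama, serves, Llama 3.1)"
parsed = parse_triples(sample)
```

Each tuple maps naturally onto an ArangoDB edge: the subject and object become vertices and the predicate labels the edge between them.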

Step 6
Cleanup and rollback

Stop all services and optionally remove containers:

# Stop services
docker compose down

# Remove containers and volumes (optional)
docker compose down -v

# Remove downloaded models (optional)
docker exec ollama-compose ollama rm llama3.1:8b

Step 7
Next steps

  • Experiment with different Ollama models for varied extraction quality
  • Customize triple extraction prompts for domain-specific knowledge
  • Explore advanced graph querying and visualization features

Resources

  • DGX Spark Documentation
  • Ollama Documentation
  • ArangoDB Documentation
  • DGX Spark Forum
  • DGX Spark User Performance Guide

Copyright © 2026 NVIDIA Corporation