
Text to Knowledge Graph

30 MIN

Transform unstructured text into interactive knowledge graphs with LLM inference and graph visualization


Basic idea

This playbook demonstrates how to build and deploy a reference solution for knowledge graph extraction and visualization. DGX Spark's unified memory architecture enables running larger, more accurate models that produce higher-quality knowledge graphs and deliver better downstream GraphRAG performance.

This txt2kg playbook transforms unstructured text documents into structured knowledge graphs using:

  • Knowledge Triple Extraction: Using Ollama with GPU acceleration for local LLM inference to extract subject-predicate-object relationships
  • Graph Database Storage: ArangoDB for storing and querying knowledge triples with relationship traversal
  • GPU-Accelerated Visualization: Three.js WebGPU rendering for interactive 2D/3D graph exploration
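The triple-extraction step above can be sketched as follows. This is a minimal illustration, not the playbook's actual implementation: the model name, prompt wording, and the pipe-separated output convention are all assumptions; only the Ollama `/api/generate` endpoint and its `model`/`prompt`/`stream` fields are standard.

```python
import json
import urllib.request

# Hypothetical prompt: ask the model for one "subject | predicate | object" per line.
PROMPT = """Extract knowledge triples from the text below.
Return one triple per line in the form: subject | predicate | object

Text: {text}"""


def parse_triples(raw: str) -> list[tuple[str, str, str]]:
    """Parse 'subject | predicate | object' lines into triples,
    skipping lines that do not have exactly three non-empty parts."""
    triples = []
    for line in raw.splitlines():
        parts = [p.strip() for p in line.split("|")]
        if len(parts) == 3 and all(parts):
            triples.append((parts[0], parts[1], parts[2]))
    return triples


def extract_triples(text: str, model: str = "llama3.1",
                    host: str = "http://localhost:11434") -> list[tuple[str, str, str]]:
    """Call a local Ollama server (model name is an assumption) and parse its reply."""
    payload = json.dumps({
        "model": model,
        "prompt": PROMPT.format(text=text),
        "stream": False,  # return one complete JSON response
    }).encode()
    req = urllib.request.Request(f"{host}/api/generate", data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return parse_triples(json.load(resp)["response"])
```

Because the LLM output format is enforced only by the prompt, the parser deliberately drops malformed lines rather than failing on them.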

Future Enhancements: Vector embeddings and GraphRAG capabilities are planned.

What you'll accomplish

You will have a fully functional system that processes documents, generates and edits knowledge graphs, and supports querying, all through an interactive web interface. The setup includes:

  • Local LLM Inference: Ollama for GPU-accelerated LLM inference with no API keys required
  • Graph Database: ArangoDB for storing and querying triples with relationship traversal
  • Interactive Visualization: GPU-accelerated graph rendering with Three.js WebGPU
  • Modern Web Interface: Next.js frontend with document management and query interface
  • Fully Containerized: Reproducible deployment with Docker Compose and GPU support

Prerequisites

  • DGX Spark with latest NVIDIA drivers
  • Docker installed and configured with NVIDIA Container Toolkit
  • Docker Compose

Time & risk

  • Duration:

    • 2-3 minutes for initial setup and container deployment
    • 5-10 minutes for Ollama model download (depending on model size)
    • Immediate document processing and knowledge graph generation
  • Risks:

    • GPU memory requirements depend on chosen Ollama model size
    • Document processing time scales with document size and complexity
  • Rollback: Stop and remove Docker containers, delete downloaded models if needed

  • Last Updated: 01/08/2025

    • Migrated from Pinecone to Qdrant for ARM64 compatibility
    • Added vLLM support with Neo4j
    • Added Palette UI components with accessibility improvements
    • Added CPU-only mode for development (./start.sh --cpu)
    • Optimized ArangoDB with deterministic keys and BM25 search
    • Added GNN preprocessing scripts for knowledge graph training
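The BM25 search mentioned above presumably runs through an ArangoSearch view. A sketch of what such a query might look like, using standard AQL functions (`TOKENS`, `ANALYZER`, `BM25`); the view name, indexed field, and analyzer are assumptions:

```python
def bm25_search_query(view: str = "triples_view", field: str = "subject",
                      analyzer: str = "text_en") -> str:
    """Build an AQL query that ranks documents by BM25 over an
    ArangoSearch view. The view must index `field` with `analyzer`;
    all names here are illustrative assumptions."""
    return f"""
    FOR doc IN {view}
      SEARCH ANALYZER(doc.{field} IN TOKENS(@q, "{analyzer}"), "{analyzer}")
      SORT BM25(doc) DESC
      LIMIT @k
      RETURN doc
    """

# Executed with python-arango, e.g.:
#   cursor = db.aql.execute(bm25_search_query(),
#                           bind_vars={"q": "GPU acceleration", "k": 10})
```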

Resources

  • DGX Spark Documentation
  • Ollama Documentation
  • ArangoDB Documentation
  • DGX Spark Forum
  • DGX Spark User Performance Guide