Text to Knowledge Graph on DGX Station
Transform unstructured text into interactive knowledge graphs with LLM inference and graph visualization
Basic idea
This playbook demonstrates how to build and deploy a complete knowledge graph generation and visualization solution that serves as a reference implementation for knowledge graph extraction. The GB300 Ultra's large GPU memory enables running the Llama 3.1 405B model, which produces the highest-quality knowledge graphs and the strongest downstream GraphRAG performance.
This txt2kg playbook transforms unstructured text documents into structured knowledge graphs using:
- Knowledge Triple Extraction: Using Ollama with GPU acceleration for local LLM inference to extract subject-predicate-object relationships
- Graph Database Storage: ArangoDB for storing and querying knowledge triples with relationship traversal
- GPU-Accelerated Visualization: Three.js WebGPU rendering for interactive 2D/3D graph exploration
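The triple-extraction step above can be sketched in a few lines. This is a minimal illustration, not the playbook's actual extraction code: it assumes the LLM has been prompted to emit one parenthesized `(subject, predicate, object)` triple per line, and it skips malformed lines rather than failing.

```python
import re
from typing import List, Tuple

# Matches a single "(subject, predicate, object)" triple on a line.
TRIPLE_RE = re.compile(r"\(([^,]+),\s*([^,]+),\s*([^)]+)\)")

def parse_triples(llm_output: str) -> List[Tuple[str, str, str]]:
    """Parse subject-predicate-object triples from raw LLM output.

    Assumes the model was prompted to emit one parenthesized triple
    per line; lines that do not match are silently skipped.
    """
    triples = []
    for line in llm_output.splitlines():
        m = TRIPLE_RE.search(line)
        if m:
            triples.append(tuple(part.strip() for part in m.groups()))
    return triples

raw = """(NVIDIA, manufactures, DGX Station)
(DGX Station, contains, GB300 Ultra GPU)
this line is not a triple
(Ollama, runs, Llama 3.1 405B)"""

print(parse_triples(raw))
# → [('NVIDIA', 'manufactures', 'DGX Station'),
#    ('DGX Station', 'contains', 'GB300 Ultra GPU'),
#    ('Ollama', 'runs', 'Llama 3.1 405B')]
```

In the playbook itself the raw text comes back from Ollama's local inference endpoint; the parsing shape stays the same regardless of which model produced the output.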
Future enhancements: Vector embeddings and GraphRAG capabilities are planned.
What you'll accomplish
You will have a fully functional system capable of processing documents, generating and editing knowledge graphs, and querying them, all through an interactive web interface. The setup includes:
- Local LLM Inference: Ollama for GPU-accelerated LLM inference with no API keys required
- Graph Database: ArangoDB for storing and querying triples with relationship traversal
- Interactive Visualization: GPU-accelerated graph rendering with Three.js WebGPU
- Modern Web Interface: Next.js frontend with document management and query interface
- Fully Containerized: Reproducible deployment with Docker Compose and GPU support
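To make "relationship traversal" concrete, the sketch below indexes triples as an adjacency list and walks them breadth-first to a bounded depth. This is an in-memory illustration only; in the deployed system ArangoDB performs the equivalent work with an AQL graph traversal (e.g. a `1..n OUTBOUND` query), and the function and data here are hypothetical.

```python
from collections import defaultdict, deque

def build_graph(triples):
    """Index triples as an adjacency list: subject -> [(predicate, object)]."""
    graph = defaultdict(list)
    for s, p, o in triples:
        graph[s].append((p, o))
    return graph

def neighbors_within(graph, start, max_depth):
    """Breadth-first walk over object links, up to max_depth hops from start."""
    seen = {start}
    frontier = deque([(start, 0)])
    reached = []
    while frontier:
        node, depth = frontier.popleft()
        if depth == max_depth:
            continue  # do not expand past the depth bound
        for _pred, obj in graph.get(node, []):
            if obj not in seen:
                seen.add(obj)
                reached.append(obj)
                frontier.append((obj, depth + 1))
    return reached

triples = [
    ("DGX Station", "contains", "GB300 GPU"),
    ("GB300 GPU", "accelerates", "Ollama"),
    ("Ollama", "serves", "Llama 3.1"),
]
graph = build_graph(triples)
print(neighbors_within(graph, "DGX Station", 2))
# → ['GB300 GPU', 'Ollama']
```

The same two-hop question asked of ArangoDB returns the entities the web interface renders when you expand a node in the graph view.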
What to know before starting
- Basic Docker container usage
- Familiarity with command line operations
- Understanding of knowledge graphs (helpful but not required)
Prerequisites
- NVIDIA DGX Station with GB300 Ultra Blackwell GPU
- Docker installed and configured with NVIDIA Container Toolkit
- Docker Compose
- Network access for container image downloads
Ancillary files
All required assets are in the playbook directory nvidia/station-txt2kg/assets (see Instructions, Step 1). Key files:
- start.sh - Launch script for all services
- stop.sh - Stop script to shut down services
- deploy/compose/ - Docker Compose configurations
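For orientation, a Docker Compose service with GPU access typically looks like the fragment below. This is a generic sketch, not a copy of the files in deploy/compose/ (the service names and images there may differ); the `deploy.resources.reservations.devices` block is the standard Compose syntax for exposing NVIDIA GPUs via the NVIDIA Container Toolkit.

```yaml
services:
  ollama:
    image: ollama/ollama
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
```

If the containers start but inference falls back to CPU, this reservation block (and the NVIDIA Container Toolkit installation from the prerequisites) is the first thing to check.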
Time & risk
- Duration:
  - 2-3 minutes for initial setup and container deployment
  - 5-10 minutes for the Ollama model download (depending on model size)
  - Document processing and knowledge graph generation begin immediately after setup
- Risks:
  - GPU memory requirements depend on the chosen Ollama model size
  - Document processing time scales with document size and complexity
- Rollback: Stop and remove the Docker containers; delete downloaded models if needed
- Last Updated: 03/02/2026
- First Publication