Transform unstructured text into interactive knowledge graphs with LLM inference and graph visualization
In a terminal, clone the txt2kg repository and navigate to the project directory.
git clone https://github.com/NVIDIA/dgx-spark-playbooks
cd dgx-spark-playbooks/nvidia/txt2kg/assets
Use the provided start script to launch all required services. This will set up Ollama, ArangoDB, and the Next.js frontend:
./start.sh
The script automatically downloads a language model for knowledge extraction; the default is Llama 3.1 8B. To use a different model, pull it into the Ollama container:
docker exec ollama-compose ollama pull <model-name>
Browse available models at https://ollama.com/search
NOTE
The unified memory architecture enables running larger models, such as 70B-parameter variants, which produce significantly more accurate knowledge triples.
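For example, a larger model can be pulled with the same command shown above. The tag `llama3.1:70b` is one such model on the Ollama registry; the snippet below is a sketch that assumes the `ollama-compose` container name from this playbook and skips the pull when Docker is not available:

```shell
# Assumed tag for a 70B-parameter model; swap in any tag from ollama.com/search.
MODEL="llama3.1:70b"

# Guard so the snippet degrades gracefully when Docker is unavailable.
if command -v docker >/dev/null 2>&1; then
  docker exec ollama-compose ollama pull "$MODEL"
else
  echo "docker not found; skipping pull of $MODEL"
fi
```

A 70B model is a large download (tens of GB), so expect the pull to take a while on first run.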
Open your browser and navigate to:
http://localhost:3001
You can also access the individual services (Ollama and ArangoDB) directly at the ports defined in docker-compose.yml.
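A quick way to confirm the endpoints are reachable is a small curl loop. The port numbers here are assumptions: 3001 is the frontend shown above, while 11434 and 8529 are the stock Ollama and ArangoDB defaults; verify them against the project's docker-compose.yml:

```shell
# Print OK/DOWN for each HTTP endpoint of the stack.
check_services() {
  for url in "$@"; do
    if curl -fsS -o /dev/null --max-time 5 "$url" 2>/dev/null; then
      echo "OK   $url"
    else
      echo "DOWN $url"
    fi
  done
}

# Assumed ports -- confirm against docker-compose.yml.
check_services http://localhost:3001 http://localhost:11434 http://localhost:8529
```

A `DOWN` line usually means the corresponding container is still starting or the port mapping differs from these defaults.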
Future Enhancement: GraphRAG capabilities with vector-based KNN search for entity retrieval are planned.
Stop all services and optionally remove containers:
# Stop services
docker compose down
# Remove containers and volumes (optional)
docker compose down -v
# Remove downloaded models (optional)
docker exec ollama-compose ollama rm llama3.1:8b
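After teardown, you can confirm that no project containers remain (assuming Docker Compose v2, run from the same assets directory); an empty table means the cleanup succeeded:

```shell
# List all containers for this compose project, including stopped ones.
# Guarded so the snippet is a no-op where Docker is unavailable.
if command -v docker >/dev/null 2>&1; then
  docker compose ps -a
else
  echo "docker not found; nothing to check"
fi
```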