Use PyTorch to fine-tune models locally
This playbook guides you through setting up and using PyTorch for fine-tuning large language models on NVIDIA Spark devices.
You'll establish a complete fine-tuning environment for large language models (1-70B parameters) on your NVIDIA Spark device. By the end, you'll have a working installation that supports parameter-efficient fine-tuning (PEFT) and supervised fine-tuning (SFT).
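To see why parameter-efficient fine-tuning matters on a single device, a bit of back-of-the-envelope arithmetic helps. Below is a minimal sketch (pure Python, with hypothetical layer sizes) showing why a LoRA-style adapter, one common PEFT method, updates far fewer parameters than full fine-tuning of the same layer:

```python
# Illustrative only: compares trainable parameter counts for one weight
# matrix under full fine-tuning vs. a LoRA-style low-rank adapter.
# The sizes below are hypothetical, not taken from any specific model.

def full_finetune_params(d_model: int) -> int:
    # Full fine-tuning updates the entire d x d weight matrix.
    return d_model * d_model

def lora_params(d_model: int, rank: int) -> int:
    # LoRA freezes the d x d weight and trains two low-rank factors,
    # A (d x r) and B (r x d), so only 2 * d * r parameters are updated.
    return 2 * d_model * rank

d, r = 4096, 16  # hypothetical hidden size and LoRA rank
full = full_finetune_params(d)
lora = lora_params(d, r)
print(f"full: {full:,}  lora: {lora:,}  fraction: {lora / full:.4%}")
```

With these example numbers, the adapter trains well under 1% of the layer's parameters, which is what makes fine-tuning large models tractable in limited device memory.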
These recipes are written specifically for the NVIDIA DGX Spark. Make sure your OS and NVIDIA drivers are up to date before you begin.
All files required for fine-tuning are included in the folder in the GitHub repository here.