Fine-tune with PyTorch

1 HR

Use PyTorch to fine-tune models locally

Configure Docker permissions

To manage containers without sudo, your user must be in the docker group. If you skip this step, you will need to prefix every Docker command with sudo.

Open a new terminal and test Docker access:

docker ps

If you see a permission denied error (something like permission denied while trying to connect to the Docker daemon socket), add your user to the docker group so that you don't need to run commands with sudo:

sudo usermod -aG docker $USER
newgrp docker

Pull the latest PyTorch container

docker pull nvcr.io/nvidia/pytorch:25.09-py3

Launch the container

docker run --gpus all -it --rm --ipc=host \
-v $HOME/.cache/huggingface:/root/.cache/huggingface \
-v ${PWD}:/workspace -w /workspace \
nvcr.io/nvidia/pytorch:25.09-py3

Install dependencies inside the container

pip install transformers peft datasets "trl==0.19.1" "bitsandbytes==0.48"

Authenticate with Hugging Face

huggingface-cli login
# Paste your Hugging Face token when prompted.
# Enter n when asked to save the token as a git credential.

Clone the Git repo with the fine-tuning recipes

git clone https://github.com/NVIDIA/dgx-spark-playbooks
cd dgx-spark-playbooks/nvidia/pytorch-fine-tune/assets

Run the fine-tuning recipes

To run LoRA fine-tuning on Llama3-8B, use the following command:

python Llama3_8B_LoRA_finetuning.py
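
The recipe script handles the actual training; as background, LoRA freezes the pretrained weight matrix and learns a low-rank update W + (alpha/r) * B @ A, so only the two small factors A and B are trained. A minimal pure-Python sketch of that arithmetic (toy sizes and values chosen for illustration, not taken from the recipe):

```python
# Toy LoRA merge: W_eff = W + (alpha / r) * (B @ A).
# W (d_out x d_in) is frozen; A (r x d_in) and B (d_out x r) are trained.
d_in, d_out, r, alpha = 64, 64, 4, 8
scale = alpha / r

W = [[0.0] * d_in for _ in range(d_out)]   # frozen pretrained weight (zeros for the toy)
A = [[0.01] * d_in for _ in range(r)]      # trained low-rank factor
B = [[0.01] * r for _ in range(d_out)]     # trained low-rank factor

# Effective weight after merging the low-rank update into the base matrix.
W_eff = [[W[i][j] + scale * sum(B[i][k] * A[k][j] for k in range(r))
          for j in range(d_in)] for i in range(d_out)]

trainable = r * d_in + d_out * r           # parameters LoRA actually updates
total = d_out * d_in                       # parameters full fine-tuning would update
print(trainable, total)                    # 512 trainable vs 4096 at these toy sizes
```

The ratio is the point: even at this toy size, LoRA trains an eighth of the parameters, and the gap widens at transformer dimensions.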

To run QLoRA fine-tuning on Llama3-70B, use the following command:

python Llama3_70B_qLoRA_finetuning.py
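
QLoRA makes the 70B run feasible by storing the frozen base weights in 4-bit precision while training LoRA adapters in higher precision. A simplified pure-Python sketch of blockwise absmax quantization, the core of the memory saving (bitsandbytes actually uses the NF4 data type; this plain signed-integer scheme is an illustrative assumption):

```python
def quantize_block(ws, half=7):
    """Absmax-quantize a block of weights to signed 4-bit-style integers in [-7, 7]."""
    absmax = max(abs(w) for w in ws) or 1.0    # per-block scale (guard all-zero blocks)
    q = [round(w / absmax * half) for w in ws]
    return q, absmax

def dequantize_block(q, absmax, half=7):
    """Recover approximate weights from the 4-bit codes and the block scale."""
    return [v * absmax / half for v in q]

weights = [0.3, -0.07, 0.003, -0.4]
q, absmax = quantize_block(weights)
restored = dequantize_block(q, absmax)
err = max(abs(a - b) for a, b in zip(weights, restored))
print(q, round(err, 4))                        # codes [5, -1, 0, -7], worst error ~0.0143
```

Each weight now costs 4 bits plus a shared per-block scale instead of 16 or 32 bits, which is roughly how a 70B base model fits in memory while the small LoRA factors carry the gradient updates.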

To run full fine-tuning on Llama3-3B, use the following command:

python Llama3_3B_full_finetuning.py
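
Full fine-tuning updates every parameter, which is why it is paired with the smallest model here: optimizer state dominates memory. A back-of-the-envelope sketch using assumed bytes-per-parameter figures for mixed-precision Adam (fp16 weights, fp32 master copy plus two fp32 moment buffers for trainable parameters; activations and gradients ignored):

```python
def finetune_memory_gb(n_params, trainable_frac=1.0):
    """Rough memory estimate: fp16 weights for the whole model, plus 12 bytes/param
    of fp32 Adam state (master copy, m, v) for the trainable fraction only."""
    weights = n_params * 2                          # 2 bytes per fp16 weight
    optimizer = n_params * trainable_frac * 12      # 3 x 4 bytes per trainable param
    return (weights + optimizer) / 1e9

full = finetune_memory_gb(3e9)                      # 3B model, all params trainable
lora = finetune_memory_gb(8e9, trainable_frac=0.005)  # 8B model, ~0.5% trainable (LoRA)
print(round(full), round(lora))                     # ~42 GB vs ~16 GB
```

Under these rough assumptions, fully fine-tuning even a 3B model costs more memory than LoRA on an 8B model, which matches the pairing of methods and model sizes in the recipes above.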