Fine-tune with NeMo

1 HR

Use NVIDIA NeMo to fine-tune models locally

Verify system requirements

Check that your NVIDIA Spark device meets the prerequisites for installing NeMo AutoModel. This step runs on the host system to confirm CUDA toolkit availability and Python version compatibility.

# Verify CUDA installation
nvcc --version

# Check Python version (3.10+ required)
python3 --version

# Verify GPU accessibility
nvidia-smi

# Check available system memory
free -h

Configure Docker permissions

To manage containers without sudo, your user must be in the docker group. If you skip this step, you will need to prefix every Docker command with sudo.

Open a new terminal and test Docker access. In the terminal, run:

docker ps

If you see a permission denied error (something like permission denied while trying to connect to the Docker daemon socket), add your user to the docker group so that you don't need to run the command with sudo.

sudo usermod -aG docker $USER
newgrp docker

Get the container image with NeMo AutoModel

docker pull nvcr.io/nvidia/nemo-automodel:26.02
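
To confirm the pull completed, you can list the downloaded image as a quick sanity check:

docker images nvcr.io/nvidia/nemo-automodel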

Launch Docker

Launch an interactive container with GPU access. The --rm flag removes the container when you exit, and the --ulimit flags raise the locked-memory and stack limits that GPU training workloads commonly need.

docker run \
  --gpus all \
  --ulimit memlock=-1 \
  --ulimit stack=67108864 \
  -it \
  --entrypoint /usr/bin/bash \
  --rm nvcr.io/nvidia/nemo-automodel:26.02
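
Once inside the container, it is worth confirming that the GPU is visible before starting a run. A minimal sanity check, assuming PyTorch is available in the image:

# Inside the container: confirm the GPU is visible
nvidia-smi

# Confirm PyTorch can see CUDA (should print True and the device name)
python3 -c "import torch; print(torch.cuda.is_available(), torch.cuda.get_device_name(0))"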

Explore available examples

Review the pre-configured training recipes available for different model types and training scenarios. These recipes provide optimized configurations for the ARM64 and Blackwell architectures.

# Navigate to /opt/Automodel
cd /opt/Automodel

# List LLM fine-tuning examples
ls examples/llm_finetune/

# View example recipe configuration
cat examples/llm_finetune/finetune.py | head -20
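
To see the full set of hyperparameters a recipe defines, you can print one of the YAML files used later in this guide, for example:

# View the first lines of a LoRA recipe
head -30 examples/llm_finetune/llama3_2/llama3_2_1b_squad_peft.yaml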

Run sample fine-tuning

The following commands show how to perform full supervised fine-tuning (SFT) and parameter-efficient fine-tuning (PEFT) with LoRA and QLoRA.

First, export your HF_TOKEN so that gated models can be downloaded.

# Set your Hugging Face access token
export HF_TOKEN=<your_huggingface_token>

NOTE

Replace <your_huggingface_token> with your personal Hugging Face access token. A valid token is required to download any gated model.

The same steps apply for any other gated model you use: visit its model card on Hugging Face, request access, accept the license, and wait for approval.
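
Before launching a run, you can verify that the token authenticates correctly. This assumes the hf CLI is available in the container (it is the same CLI used for uploads later in this guide):

# Confirm the token is picked up and valid
hf auth whoami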

LoRA fine-tuning example:

Execute a basic fine-tuning example to validate the complete setup. This demonstrates parameter-efficient fine-tuning with LoRA. The examples below use YAML files for configuration, with parameter overrides passed as command-line arguments.

# Run basic LLM fine-tuning example
cd /opt/Automodel
python3 examples/llm_finetune/finetune.py \
-c examples/llm_finetune/llama3_2/llama3_2_1b_squad_peft.yaml \
--model.pretrained_model_name_or_path meta-llama/Llama-3.1-8B \
--packed_sequence.packed_sequence_size 1024 \
--step_scheduler.max_steps 20

These overrides ensure the Llama-3.1-8B LoRA run behaves as expected:

  • --model.pretrained_model_name_or_path: selects the Llama-3.1-8B model to fine-tune from the Hugging Face model hub (weights fetched via your Hugging Face token).
  • --packed_sequence.packed_sequence_size: sets the packed sequence size to 1024 to enable packed sequence training.
  • --step_scheduler.max_steps: sets the maximum number of training steps. We set it to 20 for demonstration purposes; adjust it based on your needs.

NOTE

The recipe YAML llama3_2_1b_squad_peft.yaml defines training hyperparameters (LoRA rank, learning rate, etc.) that are reusable across Llama model sizes. The --model.pretrained_model_name_or_path override determines which model weights are actually loaded.
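
If you want to check those hyperparameters before launching, a quick search over the recipe works. The search terms below are illustrative; the actual key names may differ between recipes:

# Look for LoRA and learning-rate settings in the recipe
grep -in "lora\|rank\|lr" examples/llm_finetune/llama3_2/llama3_2_1b_squad_peft.yaml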

QLoRA fine-tuning example:

We can use QLoRA to fine-tune large models in a memory-efficient manner.

cd /opt/Automodel
python3 examples/llm_finetune/finetune.py \
-c examples/llm_finetune/llama3_1/llama3_1_8b_squad_qlora.yaml \
--model.pretrained_model_name_or_path meta-llama/Meta-Llama-3-70B \
--loss_fn._target_ nemo_automodel.components.loss.te_parallel_ce.TEParallelCrossEntropy \
--step_scheduler.local_batch_size 1 \
--packed_sequence.packed_sequence_size 1024 \
--step_scheduler.max_steps 20

These overrides ensure the 70B QLoRA run behaves as expected:

  • --model.pretrained_model_name_or_path: selects the 70B base model to fine-tune (weights fetched via your Hugging Face token).
  • --loss_fn._target_: uses the TransformerEngine-parallel cross-entropy loss variant compatible with tensor-parallel training for large LLMs.
  • --step_scheduler.local_batch_size: sets the per-GPU micro-batch size to 1 to fit the 70B model in memory; the overall effective batch size is still driven by the gradient accumulation and data/tensor parallel settings from the recipe (see the worked example after this list).
  • --step_scheduler.max_steps: sets the maximum number of training steps. We set it to 20 for demonstration purposes; adjust it based on your needs.
  • --packed_sequence.packed_sequence_size: sets the packed sequence size to 1024 to enable packed sequence training.
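
To make the batch-size interaction concrete: the effective global batch size is local_batch_size × gradient accumulation steps × number of data-parallel ranks. As an illustrative calculation (the accumulation and parallelism values come from the recipe, so the numbers here are assumptions), a local batch size of 1 with 8 accumulation steps across 4 data-parallel ranks yields an effective batch size of 1 × 8 × 4 = 32.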

Full fine-tuning example:

Run the following command to perform full supervised fine-tuning (SFT):

cd /opt/Automodel
python3 examples/llm_finetune/finetune.py \
-c examples/llm_finetune/qwen/qwen3_8b_squad_spark.yaml \
--model.pretrained_model_name_or_path Qwen/Qwen3-8B \
--step_scheduler.local_batch_size 1 \
--step_scheduler.max_steps 20 \
--packed_sequence.packed_sequence_size 1024

These overrides ensure the Qwen3-8B SFT run behaves as expected:

  • --model.pretrained_model_name_or_path: selects the Qwen/Qwen3-8B model to fine-tune from the Hugging Face model hub (weights fetched via your Hugging Face token). Adjust this if you want to fine-tune a different model.
  • --step_scheduler.max_steps: sets the maximum number of training steps. We set it to 20 for demonstration purposes; adjust it based on your needs.
  • --step_scheduler.local_batch_size: sets the per-GPU micro-batch size to 1 to fit in memory; overall effective batch size is still driven by gradient accumulation and data/tensor parallel settings from the recipe.
  • --packed_sequence.packed_sequence_size: sets the packed sequence size to 1024 to enable packed sequence training.

Validate successful training completion

Validate the fine-tuned model by inspecting artifacts contained in the checkpoint directory.

# Inspect logs and checkpoint output.
# LATEST is a symlink pointing to the most recent checkpoint saved during training.
# Below is an example of the expected output (username and domain-users are placeholders).
ls -lah checkpoints/LATEST/

# $ ls -lah checkpoints/LATEST/
# total 32K
# drwxr-xr-x 6 username domain-users 4.0K Oct 16 22:33 .
# drwxr-xr-x 4 username domain-users 4.0K Oct 16 22:33 ..
# -rw-r--r-- 1 username domain-users 1.6K Oct 16 22:33 config.yaml
# drwxr-xr-x 2 username domain-users 4.0K Oct 16 22:33 dataloader
# drwxr-xr-x 2 username domain-users 4.0K Oct 16 22:33 model
# drwxr-xr-x 2 username domain-users 4.0K Oct 16 22:33 optim
# drwxr-xr-x 2 username domain-users 4.0K Oct 16 22:33 rng
# -rw-r--r-- 1 username domain-users 1.3K Oct 16 22:33 step_scheduler.pt
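
Beyond the LATEST symlink, you can list the checkpoint directory itself to see every checkpoint saved during the run (directory names depend on the recipe's checkpoint settings):

ls -lah checkpoints/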

Cleanup (Optional)

The container was launched with the --rm flag, so it is automatically removed when you exit. To reclaim disk space used by the Docker image, run:

WARNING

This will remove the NeMo AutoModel image. You will need to pull it again if you want to use it later.

docker rmi nvcr.io/nvidia/nemo-automodel:26.02
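
To see how much disk space Docker images, containers, and build caches are using before and after cleanup, you can run:

docker system df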

Optional: Publish your fine-tuned model checkpoint on Hugging Face Hub

If you want to share your fine-tuned model, you can upload the checkpoint to the Hugging Face Hub.

NOTE

This step is optional and is not required for using the fine-tuned model locally. It is useful if you want to share the model with others or reuse it in other projects. Uploading requires the Hugging Face CLI, which you can install with pip install huggingface_hub; for more information, refer to the Hugging Face CLI documentation.
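
A minimal setup sketch for the upload step, assuming you are installing the CLI fresh (inside the container it may already be present):

# Install the Hugging Face CLI
pip install -U huggingface_hub

# Log in with a token that has write permissions
hf auth login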

TIP

You can use the hf command to upload the fine-tuned model checkpoint to Hugging Face Hub. For more information, please refer to the Hugging Face CLI documentation.

# Publish the fine-tuned model checkpoint to Hugging Face Hub
# The checkpoint will be published as <your_huggingface_username>/my-cool-model; adjust the name as needed.
hf upload my-cool-model checkpoints/LATEST/model
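
After the upload completes, one way to verify the published files is to download them back. Replace <your_huggingface_username> with your actual namespace:

# Pull the published checkpoint into a local directory to verify the upload
hf download <your_huggingface_username>/my-cool-model --local-dir ./my-cool-model-check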

TIP

The above command can fail if the HF_TOKEN you used does not have write permissions on the Hugging Face Hub. Sample error message:

user@host:/opt/Automodel$ hf upload my-cool-model checkpoints/LATEST/model
Traceback (most recent call last):
  File "/home/user/.local/lib/python3.10/site-packages/huggingface_hub/utils/_http.py", line 409, in hf_raise_for_status
    response.raise_for_status()
  File "/home/user/.local/lib/python3.10/site-packages/requests/models.py", line 1024, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 403 Client Error: Forbidden for url: https://huggingface.co/api/repos/create

To fix this, create an access token with write permissions; see the Hugging Face documentation on access tokens for instructions.

Next steps

Begin using NeMo AutoModel for your specific fine-tuning tasks. Start with provided recipes and customize based on your model requirements and dataset.

# Copy a recipe YAML for customization
cp examples/llm_finetune/llama3_2/llama3_2_1b_squad_peft.yaml my_custom_recipe.yaml

# Edit the recipe for your specific model and data, then run:
python3 examples/llm_finetune/finetune.py -c my_custom_recipe.yaml

Explore the NeMo AutoModel GitHub repository for more recipes, documentation, and community examples. Consider setting up custom datasets, experimenting with different model architectures, and scaling to multi-node distributed training for larger models.