
MIG on DGX Station

15 MIN

Enable and configure Multi-Instance GPU (MIG) on DGX Station with GB300 Ultra (B300 GPUs)

Tags: B300, DGX, GB300, GPU Partitioning, MIG, Station, System Configuration

Step 1
Prerequisites and verify B300 GPUs

Ensure your DGX Station has B300 GPUs (GB300 Ultra), a supported NVIDIA driver (see Troubleshooting for driver requirements), and that nvidia-smi is available. You need root or sudo to enable MIG and create instances.

Before enabling MIG: All GPU processes must be stopped. Desktop environments (e.g. GNOME, Xwayland), NVIDIA services (e.g. nvsm_core, nvidia-pe, nv-hostengine), or workloads like vLLM can hold the GPU and cause "In use by another client" when you run MIG commands. Check what is using the GPUs:

sudo fuser -v /dev/nvidia*

Stop or suspend any processes that are using the GPUs before proceeding to Step 2.
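As a sketch, the usual GPU holders can be stopped with systemctl. The service names in the example are assumptions (the desktop's display manager and the NVIDIA host tools mentioned above); confirm the actual names against the fuser output on your system:

```shell
# stop_gpu_holders: stop a list of services that may hold /dev/nvidia*.
# Service names are examples only; confirm with `sudo fuser -v /dev/nvidia*`.
stop_gpu_holders() {
  local svc
  for svc in "$@"; do
    sudo systemctl stop "$svc" && echo "stopped $svc"
  done
}

# Example (uncomment to run on a DGX Station; adjust service names first):
# stop_gpu_holders gdm nvsm_core nv-hostengine
```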

Then verify the GPUs:

nvidia-smi
nvidia-smi -L

The output should list one or more NVIDIA GB300 devices. If it does, proceed to Step 2 to enable MIG.

Step 2
Enable MIG mode on the B300 GPUs

Ensure no GPU processes are running (see Step 1). Enable MIG for all GPUs or for a specific GPU. This must be done with elevated privileges.

Enable MIG on all GPUs:

sudo nvidia-smi -mig 1

Or enable MIG on a single GPU (e.g. GPU 0 only):

sudo nvidia-smi -i 0 -mig 1

Expected output: Success typically shows no error message; the command returns to the prompt. If you see "In use by another client", stop all GPU processes (e.g. desktop, services, containers) and run sudo fuser -v /dev/nvidia* to confirm nothing is using the GPUs, then retry.

If MIG mode shows Pending after enablement (e.g. in nvidia-smi -q | grep -i mig), wait a short time and run the command again, or reboot the system to allow the driver to apply the MIG state.
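That wait-and-retry can be scripted. This is a sketch; it assumes your nvidia-smi supports the mig.mode.current query field (recent drivers do):

```shell
# wait_for_mig: poll a GPU until the driver reports MIG mode Enabled.
# Assumes `--query-gpu=mig.mode.current` is supported by your nvidia-smi;
# if the state stays Pending indefinitely, reboot instead.
wait_for_mig() {
  local gpu="${1:-0}" mode
  while true; do
    mode=$(nvidia-smi -i "$gpu" --query-gpu=mig.mode.current --format=csv,noheader)
    [ "$mode" = "Enabled" ] && break
    echo "GPU $gpu MIG mode is '$mode'; waiting..."
    sleep 5
  done
}

# Example: wait_for_mig 0
```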

Enabling MIG partitions each B300 into multiple GPU Instances; you will create and assign profiles in the next steps.

Step 3
Verify MIG mode and inspect B300 profiles

Confirm that MIG mode is enabled:

nvidia-smi -q | grep -i mig
# or for a specific GPU:
nvidia-smi -i 0 -q | grep -i "MIG Mode"

Expected output should show MIG Mode: Enabled.

List the GPU Instance Profiles available on a B300 (e.g. GPU 0). These profile IDs are used when creating MIG instances:

nvidia-smi mig -lgip -i 0

On GB300 you should see profiles such as the following (exact memory sizes may vary with your driver version; the IDs are what you pass to later commands):

  • MIG 1g.35gb (ID 19)
  • MIG 1g.35gb+me (ID 20)
  • MIG 1g.70gb (ID 15)
  • MIG 2g.70gb (ID 14)
  • MIG 3g.139gb (ID 9)
  • MIG 4g.139gb (ID 5)
  • MIG 7g.278gb (ID 0)

Note the IDs; you will pass them to -cgi when creating the layout.
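If you want the name-to-ID mapping in machine-readable form, the -lgip table can be filtered with awk. A sketch: the exact table layout varies by driver, and this assumes each profile row contains the literal token MIG followed by the profile name and its ID:

```shell
# list_profiles: print "name id" pairs from `nvidia-smi mig -lgip` output.
# Assumes each profile row contains "MIG <name> <id>"; layout may vary by driver.
list_profiles() {
  awk '/MIG/ { for (i = 1; i <= NF; i++) if ($i == "MIG") print $(i+1), $(i+2) }' "$@"
}

# Example with live output:
# nvidia-smi mig -lgip -i 0 | list_profiles
```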

Step 4
Create a MIG layout (example for B300)

Create GPU and compute instances using the profile IDs from Step 3. The basic pattern is:

sudo nvidia-smi mig -cgi <profile_id,profile_id,...> -C -i <gpu_index>

This example assumes a 6-GPU DGX Station. If you have fewer GPUs (e.g. 1 or 2), run only the -cgi lines for the GPU indices that exist on your system (e.g. -i 0 and -i 1 only). Each GPU can have any combination of profiles that fits within its capacity:

# GPU 0: 7 × 1g.35gb
sudo nvidia-smi mig -cgi 19,19,19,19,19,19,19 -C -i 0

# GPU 1: 4 × 1g.70gb
sudo nvidia-smi mig -cgi 15,15,15,15 -C -i 1

# GPU 2: 3 × 2g.70gb
sudo nvidia-smi mig -cgi 14,14,14 -C -i 2

# GPU 3: 2 × 3g.139gb
sudo nvidia-smi mig -cgi 9,9 -C -i 3

# GPU 4: 1 × 4g.139gb
sudo nvidia-smi mig -cgi 5 -C -i 4

# GPU 5: 1 × 7g.278gb (full GPU as a single MIG instance)
sudo nvidia-smi mig -cgi 0 -C -i 5

You can choose any valid combination of profile IDs per GPU that fits within the GB300’s capacity; the above is a known-good example.
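A quick way to sanity-check a layout before running it: each GB300 exposes seven compute slices, and a profile's slice count is the leading number in its name (1g = 1 slice, 2g = 2, and so on up to 7g = 7). The helper below is a sketch that checks only the slice budget, not memory, and hard-codes the ID-to-slice mapping from the Step 3 profile list:

```shell
# check_layout: verify that a comma-separated profile ID list fits the
# 7-slice budget of one GPU. Slice counts are derived from the profile
# names in Step 3 (1g=1, 2g=2, 3g=3, 4g=4, 7g=7); memory is not checked.
check_layout() {
  local ids="$1" id total=0 slices
  for id in ${ids//,/ }; do
    case "$id" in
      19|20|15) slices=1 ;;   # 1g.35gb, 1g.35gb+me, 1g.70gb
      14) slices=2 ;;         # 2g.70gb
      9)  slices=3 ;;         # 3g.139gb
      5)  slices=4 ;;         # 4g.139gb
      0)  slices=7 ;;         # 7g.278gb
      *)  echo "unknown profile ID: $id" >&2; return 2 ;;
    esac
    total=$((total + slices))
  done
  if [ "$total" -le 7 ]; then
    echo "OK: $total/7 slices"
  else
    echo "Too big: $total/7 slices" >&2
    return 1
  fi
}

# Example: check_layout 19,19,19,19,19,19,19   # the GPU 0 layout above
```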

Step 5
Verify MIG instances

Check the resulting MIG device layout:

nvidia-smi -L

You should see each physical GPU (e.g. NVIDIA GB300) followed by its MIG devices, for example:

GPU 0: NVIDIA GB300 (UUID: GPU-...)
  MIG 1g.35gb Device 0: (UUID: MIG-...)
  MIG 1g.35gb Device 1: (UUID: MIG-...)
  ...
GPU 1: NVIDIA GB300 (UUID: GPU-...)
  MIG 1g.70gb Device 0: (UUID: MIG-...)
  ...

To list GPU instances and compute instances (requires sudo):

sudo nvidia-smi mig -lgi     # list GPU instances
sudo nvidia-smi mig -lci     # list compute instances
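To summarize the layout, the nvidia-smi -L listing can be reduced to a per-GPU MIG device count. A sketch, assuming the indented "MIG ... Device N" line format shown above:

```shell
# count_mig_devices: count MIG devices per physical GPU in `nvidia-smi -L`
# output. Assumes "GPU N:" header lines and indented "MIG ... Device" lines.
count_mig_devices() {
  awk '/^GPU/ { gpu = $2 } /MIG .* Device/ { n[gpu]++ }
       END { for (g in n) print "GPU " g, n[g], "MIG devices" }' "$@"
}

# Example: nvidia-smi -L | count_mig_devices
```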

Step 6
Using the MIG devices

Bare-metal CUDA applications: set CUDA_VISIBLE_DEVICES to a MIG device UUID (from nvidia-smi -L):

export CUDA_VISIBLE_DEVICES=MIG-<uuid>
./your_app

To verify that the MIG instance is visible, run nvidia-smi from the same shell where you set CUDA_VISIBLE_DEVICES. You should see only that single MIG device (e.g. one "MIG 1g.35gb" device). Example:

export CUDA_VISIBLE_DEVICES=MIG-<uuid-from-nvidia-smi-L>
nvidia-smi

Containers (Docker): Use the MIG device UUID in the --gpus option. Example:

docker run --gpus '"device=MIG-<uuid>"' nvcr.io/nvidia/cuda:13.0.1-devel-ubuntu24.04 nvidia-smi

Replace <uuid> with a full MIG UUID from nvidia-smi -L. For Kubernetes and nvidia-container-toolkit workflows, see the MIG User Guide (Getting Started with MIG and Kubernetes sections).
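To spot-check one container per MIG device, the docker run commands can be generated from the nvidia-smi -L output rather than copying UUIDs by hand. A sketch that only prints the commands, it does not execute them; <image> and <cmd> are placeholders to fill in:

```shell
# mig_docker_cmds: print one `docker run` command per MIG UUID found in
# `nvidia-smi -L` output. Commands are printed, not executed; review first.
mig_docker_cmds() {
  awk 'match($0, /MIG-[0-9a-f-]+/) {
         uuid = substr($0, RSTART, RLENGTH)
         printf "docker run --gpus '\''\"device=%s\"'\'' <image> <cmd>\n", uuid
       }' "$@"
}

# Example: nvidia-smi -L | mig_docker_cmds
```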

Step 7
Disabling MIG and restoring full GPU

When you need full NVLink P2P and a single full-GPU instance again, you must destroy all MIG instances first, then disable MIG. If you run sudo nvidia-smi -mig 0 without destroying instances, it will fail with "In use by another client."

1. Destroy compute instances and GPU instances on each GPU. For each GPU index that has MIG instances, run (replace N with the GPU index, e.g. 0, 1, … 5 for a 6-GPU system):

# Destroy all compute instances on GPU N (required before destroying GPU instances)
sudo nvidia-smi mig -dci -i N

# Destroy all GPU instances on GPU N
sudo nvidia-smi mig -dgi -i N

Repeat for every GPU that has MIG instances. Example for a 6-GPU system:

for i in 0 1 2 3 4 5; do sudo nvidia-smi mig -dci -i $i; sudo nvidia-smi mig -dgi -i $i; done

2. Disable MIG mode on all GPUs:

WARNING

This returns each GB300 to a single full-GPU instance. Any workloads using MIG UUIDs must be stopped first and will need to be reconfigured or restarted.

sudo nvidia-smi -mig 0

3. Verify MIG is fully disabled:

nvidia-smi -q | grep -A2 "MIG Mode"

Expected output should show Current: Disabled for each GPU.

On DGX/HGX B200/B300, ensure Fabric Manager is running after disabling MIG so NVLinks and NVSwitch fabric are re-initialized (see Troubleshooting).
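A small helper for that check (a sketch; nvidia-fabricmanager is the usual service name on DGX OS, but verify it on your system):

```shell
# ensure_fabricmanager: restart Fabric Manager if it is not active.
# The service name (nvidia-fabricmanager) is the typical one on DGX OS;
# confirm it with `systemctl list-units | grep -i fabric` on your system.
ensure_fabricmanager() {
  if systemctl is-active --quiet nvidia-fabricmanager; then
    echo "fabric manager active"
  else
    sudo systemctl restart nvidia-fabricmanager
  fi
}

# Example: ensure_fabricmanager
```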

Resources

  • MIG User Guide (Getting Started with MIG)