parakeet-ctc-0.6b-asr

State-of-the-art accuracy and speed for English transcriptions.

Tags: ASR, Batch, English, Fast, NVIDIA NIM, Run-on-RTX, Streaming, Speech-to-Text

Follow the steps below to download and run the NVIDIA NIM inference microservice for this model on your infrastructure of choice.

Requirements

  • NVIDIA GeForce RTX 40xx or above (see supported GPUs)
  • Install the latest NVIDIA GPU Driver on Windows (version 570 or later); the sketch below shows one way to verify the installed version
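
To confirm that the driver requirement is met, you can query the installed version with nvidia-smi. Below is a minimal Python sketch, assuming nvidia-smi is on your PATH; the 570 threshold matches the requirement above.

import subprocess

# Query the installed driver version (nvidia-smi ships with the driver)
output = subprocess.run(
    ["nvidia-smi", "--query-gpu=driver_version", "--format=csv,noheader"],
    capture_output=True, text=True, check=True,
).stdout.strip()

version = output.splitlines()[0]  # first GPU is enough for this check
major = int(version.split(".")[0])
print(f"Driver {version}: {'OK' if major >= 570 else 'too old, need 570+'}")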

Step 1
Open the Windows Subsystem for Linux 2 (WSL2) Distro

Install WSL2. For additional instructions, refer to the documentation.

Once installed, open the NVIDIA-Workbench WSL2 distro using the following command in the Windows terminal.

wsl -d NVIDIA-Workbench

Step 2
Run the Container

Log in to the NVIDIA container registry (nvcr.io) with your NGC API key, using $oauthtoken as the username and the key itself as the password.

$ podman login nvcr.io
Username: $oauthtoken
Password: <PASTE_API_KEY_HERE>

Pull and run the NVIDIA NIM with the command below.

# Make your NGC API key available and create a writable model cache directory
export NGC_API_KEY=<PASTE_API_KEY_HERE>
export LOCAL_NIM_CACHE=~/.cache/nim
mkdir -p "$LOCAL_NIM_CACHE"
chmod -R a+w "$LOCAL_NIM_CACHE"

# Pull and run the NIM; NIM_TAGS_SELECTOR picks the model profile
# (parakeet-0-6b-ctc-en-us, offline mode, batch size 1)
podman run -it --rm \
    --device nvidia.com/gpu=all \
    --shm-size=16GB \
    -e NGC_API_KEY=$NGC_API_KEY \
    -e NIM_TAGS_SELECTOR=name=parakeet-0-6b-ctc-en-us,mode=ofl,bs=1 \
    -e NIM_HTTP_API_PORT=9000 \
    -e NIM_GRPC_API_PORT=50051 \
    -e NIM_RELAX_MEM_CONSTRAINTS=1 \
    -v "$LOCAL_NIM_CACHE:/opt/nim/.cache" \
    -u $(id -u) \
    -p 9000:9000 \
    -p 50051:50051 \
    nvcr.io/nim/nvidia/parakeet-0-6b-ctc-en-us:latest

From the time the container is started, it may take up to 30 minutes, depending on your network speed, for the NIM to become ready and begin accepting requests.

Step 3
Test the NIM

Open a new WSL2 distro instance and run the following command to check whether the service is ready to handle inference requests.

curl -X 'GET' 'http://localhost:9000/v1/health/ready'

If the service is ready, you get a response similar to the following.

{"ready":true}

Install the Riva Python client package

sudo apt-get install python3-pip
pip install nvidia-riva-client

Download Riva sample clients

git clone https://github.com/nvidia-riva/python-clients.git

Run speech-to-text inference in offline (batch) mode, matching the ofl profile selected when the container was started. Riva ASR supports mono, 16-bit audio in WAV, OPUS, and FLAC formats.

python3 python-clients/scripts/asr/transcribe_file_offline.py --server 0.0.0.0:50051 --input-file <path_to_speech_file> --language-code en-US
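
You can also call the service directly from Python with the nvidia-riva-client package installed above, rather than going through the sample script. Below is a minimal sketch, assuming a mono, 16-bit WAV file at speech.wav (a placeholder path) and the gRPC port published by the container.

import wave

import riva.client

AUDIO_PATH = "speech.wav"  # placeholder: path to a mono, 16-bit WAV file

# Check the input against the format constraints noted above
with wave.open(AUDIO_PATH, "rb") as wav:
    assert wav.getnchannels() == 1, "Riva ASR expects mono audio"
    assert wav.getsampwidth() == 2, "Riva ASR expects 16-bit samples"

# Connect to the NIM's gRPC endpoint (port 50051 from the container flags)
auth = riva.client.Auth(uri="localhost:50051")
asr_service = riva.client.ASRService(auth)

config = riva.client.RecognitionConfig(
    encoding=riva.client.AudioEncoding.LINEAR_PCM,
    language_code="en-US",
    max_alternatives=1,
)

# Offline (batch) recognition: send the whole file, print the transcript(s)
with open(AUDIO_PATH, "rb") as f:
    response = asr_service.offline_recognize(f.read(), config)

for result in response.results:
    print(result.alternatives[0].transcript)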

For more details on getting started with this NIM, visit the NVIDIA NIM Docs.