
baidu / paddleocr

Run Anywhere

Model for table extraction that receives an image as input, runs OCR on the image, and returns the text within the image and its bounding boxes.

Tags: Optical Character Detection, Optical Character Recognition, Table Extraction, data ingestion, extraction, nemo retriever, run-on-rtx

Follow the steps below to download and run the NVIDIA NIM inference microservice for this model on your infrastructure of choice.

Requirements

  • NVIDIA GeForce RTX 40xx or above (see supported GPUs).
  • Install the latest NVIDIA GPU Driver on Windows (Version 570+).

Step 1
Ensure virtualization is enabled in the system BIOS

In Windows, open the Task Manager. Select the Performance tab and click on CPU. Check whether Virtualization is enabled. If it is disabled, enable it in the system BIOS/UEFI; the exact steps vary by vendor, so consult your system manufacturer's documentation.
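If you prefer the command line, one way to check from a Windows terminal is sketched below; the exact output labels can vary by Windows version, and when Hyper-V is already active, systeminfo reports that a hypervisor has been detected rather than listing the firmware flags.

systeminfo | findstr /C:"Virtualization" /C:"hypervisor"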

Step 2
Open the Windows Subsystem for Linux 2 (WSL2) Distro

Install WSL2. For additional instructions, refer to the documentation.
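If WSL2 is not installed yet, a minimal sketch from an elevated Windows terminal is:

wsl --install
wsl --update

Here wsl --install sets up WSL2 with a default distro and wsl --update refreshes the WSL kernel; the NVIDIA-Workbench distro itself comes from the installation steps in the documentation referenced above.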

Once installed, open the NVIDIA-Workbench WSL2 distro using the following command in the Windows terminal.

wsl -d NVIDIA-Workbench
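Once inside the distro, one quick sanity check is confirming the GPU is visible through WSL2. This assumes the Windows driver from the requirements above is installed; no separate Linux driver is needed inside WSL2.

nvidia-smi

The output should list your RTX GPU and the driver version.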

Step 3
Export API Key

Export your personal credentials as environment variables:

export NGC_API_KEY=<PASTE_API_KEY_HERE>
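Note that this variable only lives for the current shell session. If you want it to persist across sessions, one option is appending the export to your shell profile; this sketch assumes bash:

echo 'export NGC_API_KEY=<PASTE_API_KEY_HERE>' >> ~/.bashrc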

Step 4
Log in to NVIDIA NGC

Log in to NVIDIA NGC so that you can pull the NIM container. The username is the literal string $oauthtoken; the single quotes below keep the shell from expanding it as a variable:

echo "$NGC_API_KEY" | podman login nvcr.io --username '$oauthtoken' --password-stdin

Step 5
Pull and Run NVIDIA NIM

Pull and run the NVIDIA NIM with the command below.

# Create a local cache directory so model files persist across container runs
export LOCAL_NIM_CACHE=~/.cache/nim
mkdir -p "$LOCAL_NIM_CACHE"
chmod o+w "$LOCAL_NIM_CACHE"

# Run the NIM container with GPU access, exposing the API on port 8000
podman run -it --rm \
    --device nvidia.com/gpu=all \
    --shm-size=16GB \
    -e NGC_API_KEY="$NGC_API_KEY" \
    -v "$LOCAL_NIM_CACHE:/opt/nim/.cache" \
    -e NIM_RELAX_MEM_CONSTRAINTS=1 \
    -u $(id -u) \
    -p 8000:8000 \
    nvcr.io/nim/baidu/paddleocr:latest
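The first launch downloads the model, so startup can take several minutes. Once the logs settle, you can check readiness from a second distro instance; this assumes the standard NIM health endpoint:

curl http://localhost:8000/v1/health/ready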

Step 6
Test the NIM

You can now make a local API call. Open another instance of the NVIDIA-Workbench distro and run the following curl command, replacing each <BASE64_ENCODED_IMAGE> placeholder with the base64-encoded contents of an image:

HOSTNAME="localhost"
SERVICE_PORT=8000
curl -X "POST" \
  "http://${HOSTNAME}:${SERVICE_PORT}/v1/infer" \
  -H 'accept: application/json' \
  -H 'Content-Type: application/json' \
  -d '{
        "input": [
          {
            "type": "image_url",
            "url": "data:image/png;base64,<BASE64_ENCODED_IMAGE>"
          },
          {
            "type": "image_url",
            "url": "data:image/png;base64,<BASE64_ENCODED_IMAGE>"
          }
        ]
      }'
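To send a real image rather than the placeholder, you can encode a local file inline. A minimal sketch, assuming a PNG named table.png in the working directory (base64 -w0 keeps the encoding on a single line):

IMAGE_B64=$(base64 -w0 table.png)
curl -X "POST" \
  "http://${HOSTNAME}:${SERVICE_PORT}/v1/infer" \
  -H 'accept: application/json' \
  -H 'Content-Type: application/json' \
  -d '{
        "input": [
          {
            "type": "image_url",
            "url": "data:image/png;base64,'"$IMAGE_B64"'"
          }
        ]
      }'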

For more details on getting started with this NIM, visit the NVIDIA NIM Docs.