
nvidia / nv-yolox-page-elements-v1

Run Anywhere

Model for object detection, fine-tuned to detect charts, tables, and titles in documents.

Tags: chart detection, data ingestion, object detection, table detection, extraction, nemo retriever, run-on-rtx

Follow the steps below to download and run the NVIDIA NIM inference microservice for this model on your infrastructure of choice.

Prerequisites

  • NVIDIA GeForce RTX 4080 or above (see supported GPUs)
  • Install the latest NVIDIA GPU Driver on Windows (version 570 or later); you can verify the installed version as shown below
  • Ensure virtualization is enabled in the system BIOS. In Windows, open Task Manager, select the Performance tab, and check the Virtualization field. If it reads Disabled, refer to the documentation to enable it.
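
To confirm the driver prerequisite, you can run nvidia-smi from a Windows terminal. This is a quick sanity check rather than an official step; the Driver Version field in the header of the output should read 570 or higher.

# Run from a Windows terminal; check the "Driver Version" field in the header
nvidia-smi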

Step 1
Open the Windows Subsystem for Linux 2 (WSL2) Distro

Install WSL2. For additional instructions, refer to the documentation.
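
If WSL2 is not yet present on the system, a typical baseline installation from an elevated Windows terminal is sketched below. Note that the NVIDIA-Workbench distro itself is provided by NVIDIA's installer per the documentation above, not by this command.

# Install the WSL2 platform and default components (a reboot may be required)
wsl --install

# Confirm that WSL is installed and running
wsl --status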

Once installed, open the NVIDIA-Workbench WSL2 distro using the following command in the Windows terminal.

wsl -d NVIDIA-Workbench
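
Before pulling the container, it is worth confirming that the GPU is visible inside the distro. Assuming the distro includes the standard WSL2 GPU tooling, the same nvidia-smi check works here:

# Inside the NVIDIA-Workbench distro: the Windows GPU and driver should be listed
nvidia-smi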

Step 2
Run the Container

Log in to the NVIDIA container registry with your NGC API key. The username is the literal string $oauthtoken; the password is your API key.

$ podman login nvcr.io
Username: $oauthtoken
Password: <PASTE_API_KEY_HERE>

Pull and run the NVIDIA NIM with the command below.

export NGC_API_KEY=<PASTE_API_KEY_HERE>
export LOCAL_NIM_CACHE=~/.cache/nim
mkdir -p "$LOCAL_NIM_CACHE"
chmod o+w "$LOCAL_NIM_CACHE"

podman run -it --rm \
  --device nvidia.com/gpu=all \
  --shm-size=16GB \
  -e NGC_API_KEY=$NGC_API_KEY \
  -v "$LOCAL_NIM_CACHE:/opt/nim/.cache" \
  -e NIM_RELAX_MEM_CONSTRAINTS=1 \
  -u $(id -u) \
  -p 8000:8000 \
  nvcr.io/nim/nvidia/nv-yolox-page-elements-v1:1.1.0-rtx

The first few inference requests may take longer than subsequent ones because the model is being loaded into memory and initialized for the first time.
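
To know when the service is ready to accept requests, you can poll its readiness endpoint. NIM microservices conventionally expose a /v1/health/ready route; assuming this NIM follows that convention, a simple wait loop from another distro instance looks like this:

# Poll until the readiness endpoint returns HTTP 200
until curl -sf "http://localhost:8000/v1/health/ready" > /dev/null; do
  echo "Waiting for the NIM to become ready..."
  sleep 5
done
echo "NIM is ready to serve requests."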

Step 3
Test the NIM

You can now make a local API call by opening another instance of the distro and running the following curl command:

HOSTNAME="localhost"
SERVICE_PORT=8000

curl -X "POST" \
  "http://${HOSTNAME}:${SERVICE_PORT}/v1/infer" \
  -H 'accept: application/json' \
  -H 'Content-Type: application/json' \
  -d '{
    "input": [
      {
        "type": "image_url",
        "url": "data:image/png;base64,<BASE64_ENCODED_IMAGE>"
      },
      {
        "type": "image_url",
        "url": "data:image/png;base64,<BASE64_ENCODED_IMAGE>"
      }
    ]
  }'
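
The <BASE64_ENCODED_IMAGE> placeholders must be replaced with actual base64-encoded image data. As a minimal sketch, assuming a document page saved locally as page.png (a hypothetical filename), a single-image request can be built like this:

# Encode the image without line wrapping so it embeds cleanly in the JSON payload
IMAGE_B64=$(base64 -w 0 page.png)

curl -X "POST" \
  "http://localhost:8000/v1/infer" \
  -H 'accept: application/json' \
  -H 'Content-Type: application/json' \
  -d '{
    "input": [
      {
        "type": "image_url",
        "url": "data:image/png;base64,'"${IMAGE_B64}"'"
      }
    ]
  }'

The response should contain the detected page elements (charts, tables, and titles) with their bounding boxes; see the API reference for the exact response schema.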

For more details on getting started with this NIM, visit the NVIDIA NIM Docs.