This API will be deprecated on 05/18/2026 and will no longer be supported after that date. Please transition to another model to avoid service interruptions. For more information about available models, visit our API Reference.

nvidia

nemoretriever-page-elements-v3


Model for object detection, fine-tuned to detect charts, tables, and titles in documents.

Tags: Chart Detection, Object Detection, Table Detection, data ingestion, nemo retriever

Model Overview

Description

The NeMo Retriever Page Elements v3 model is a specialized object detection model designed to identify and extract key elements, such as charts, tables, and titles, from document pages. While the underlying technology builds upon work from Megvii Technology, we developed our own base model through complete retraining rather than using pre-trained weights. YOLOX is an anchor-free version of YOLO (You Only Look Once) that combines a simpler architecture with enhanced performance. The model is trained to detect tables, charts, infographics, titles, headers/footers, and text in documents.

This model supersedes the nemoretriever-page-elements model and is a part of the NVIDIA NeMo Retriever family of NIM microservices specifically for object detection and multimodal extraction of enterprise documents.

This model is ready for commercial/non-commercial use.

License/Terms of use

The use of this model is governed by the NVIDIA AI Foundation Models Community License Agreement.

You are responsible for ensuring that your use of NVIDIA provided models complies with all applicable laws.

Deployment Geography

Global

Use Case

The NeMo Retriever Page Elements v3 model is designed to automate the extraction of text, charts, tables, and infographics from enterprise documents. It can be used for document analysis, understanding, and processing. Key applications include:

  • Enterprise document extraction, embedding and indexing
  • Augmenting Retrieval Augmented Generation (RAG) workflows with multimodal retrieval
  • Data extraction from legacy documents and reports
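
As a sketch of how such a workflow might call the hosted model: the snippet below posts a base64-encoded page image to the build.nvidia.com endpoint. The endpoint URL, payload schema, and response shape are assumptions modeled on other NeMo Retriever NIM microservices; the API Reference linked above is the authoritative source.

```python
# Hypothetical sketch: send one page image to the hosted NIM endpoint.
# The endpoint URL and payload fields below are assumptions, not the
# documented schema; consult the API Reference for the real format.
import base64
import os

import requests

ENDPOINT = "https://ai.api.nvidia.com/v1/cv/nvidia/nemoretriever-page-elements-v3"  # assumed URL
API_KEY = os.environ["NVIDIA_API_KEY"]  # key obtained from build.nvidia.com

with open("page.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

payload = {
    "input": [
        {"type": "image_url", "url": f"data:image/png;base64,{image_b64}"}
    ]
}
headers = {
    "Authorization": f"Bearer {API_KEY}",
    "Accept": "application/json",
}

response = requests.post(ENDPOINT, headers=headers, json=payload, timeout=60)
response.raise_for_status()
print(response.json())  # bounding boxes, class labels, and confidence scores
```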

Release Date

11/15/2025 via https://build.nvidia.com/nvidia/nemoretriever-page-elements-v3

References

  • YOLOX paper: https://arxiv.org/abs/2107.08430
  • YOLOX repo: https://github.com/Megvii-BaseDetection/YOLOX
  • Previous version of the Page Element model: https://build.nvidia.com/nvidia/nemoretriever-page-elements-v2
  • Technical blog: https://developer.nvidia.com/blog/approaches-to-pdf-data-extraction-for-information-retrieval/

Model Architecture

Architecture Type: YOLOX
Network Architecture: DarkNet53 backbone + FPN with a decoupled head (one 1x1 convolution followed by two parallel 3x3 convolutions, one for classification and one for bounding box prediction). YOLOX is a single-stage object detector that improves on YOLOv3 and was developed based on the YOLO architecture.
Number of model parameters: 5.4 × 10⁷
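
To illustrate the decoupled head described above (not the shipped network), here is a minimal PyTorch sketch with a 1x1 stem and two parallel 3x3 branches: one producing class scores over the six document-element classes, the other producing box regression plus objectness.

```python
# Illustrative sketch of a YOLOX-style decoupled head, simplified from the
# description above: a 1x1 stem feeding two parallel 3x3 branches.
import torch
import torch.nn as nn


class DecoupledHead(nn.Module):
    def __init__(self, in_channels: int, num_classes: int = 6, width: int = 256):
        super().__init__()
        self.stem = nn.Conv2d(in_channels, width, kernel_size=1)
        # Classification branch: one score per document-element class.
        self.cls_branch = nn.Conv2d(width, num_classes, kernel_size=3, padding=1)
        # Regression branch: 4 box coordinates + 1 objectness score.
        self.reg_branch = nn.Conv2d(width, 4 + 1, kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor):
        x = self.stem(x)
        return self.cls_branch(x), self.reg_branch(x)


# Example: one FPN feature map of shape (batch, channels, H, W).
feats = torch.randn(1, 256, 128, 128)
cls_out, reg_out = DecoupledHead(256)(feats)
print(cls_out.shape, reg_out.shape)  # (1, 6, 128, 128), (1, 5, 128, 128)
```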

Input

Input Type(s): Image
Input Format(s): Red, Green, Blue (RGB)
Input Parameters: Two-Dimensional (2D)
Other Properties Related to Input: Image size resized to (1024, 1024)
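
A minimal preprocessing sketch matching the stated input properties (RGB, resized to 1024x1024); any additional normalization or padding applied inside the NIM is not specified here.

```python
# Prepare an input image per the spec above: 3-channel RGB, 1024x1024.
import numpy as np
from PIL import Image

img = Image.open("page.png").convert("RGB")   # ensure RGB channels
img = img.resize((1024, 1024))                # model input size
array = np.asarray(img)                       # shape (1024, 1024, 3)
print(array.shape, array.dtype)
```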

Output

Output Type(s): Array
Output Format: A nested structure of np.ndarray objects: an outer dictionary with one entry per sample (page), where each entry is a list of dictionaries, each containing a bounding box (np.ndarray), class label, and confidence score for that page.
Output Parameters: One-Dimensional (1D)
Other Properties Related to Output: The output contains bounding boxes, detection confidence scores, and object classes (chart, table, infographic, title, text, headers and footers). The thresholds used for non-maximum suppression are conf_thresh=0.01 and iou_thresh=0.5.
Output Classes:

  • Table
    • Data structured in rows and columns
  • Chart
    • Specifically bar charts, line charts, or pie charts
  • Infographic
    • Visual representations of information that is more complex than a chart, including diagrams and flowcharts
    • Maps are not considered infographics
  • Title
    • Titles can be section titles, or table/chart/infographic titles
  • Header/footer
    • Page headers and footers
  • Text
    • Texts are regions of one or more text paragraphs, or standalone text not belonging to any of the classes above
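
To make the nested output structure concrete, the sketch below shows an illustrative parsed result for one page and a simple per-class filter. The field names ("bbox", "label", "score") are assumptions for illustration, not a documented schema.

```python
# Illustrative only: a result shaped like the structure described above,
# with one list of detections per page. Field names are assumptions.
import numpy as np

detections = {
    "page_0": [
        {"bbox": np.array([0.10, 0.20, 0.55, 0.60]), "label": "table", "score": 0.93},
        {"bbox": np.array([0.12, 0.05, 0.80, 0.12]), "label": "title", "score": 0.88},
    ]
}

# Keep detections of a single class above a working confidence cut-off.
tables = [
    d for d in detections["page_0"]
    if d["label"] == "table" and d["score"] >= 0.5
]
print(len(tables), "table(s) kept")
```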

Our AI models are designed and/or optimized to run on NVIDIA GPU-accelerated systems. By leveraging NVIDIA’s hardware (e.g. GPU cores) and software frameworks (e.g., CUDA libraries), the model achieves faster training and inference times compared to CPU-only solutions.

Software Integration

Runtime Engine(s):

  • NeMo Retriever Page Elements v3 NIM

Supported Hardware Microarchitecture Compatibility:

  • NVIDIA Ampere
  • NVIDIA Hopper
  • NVIDIA Lovelace

Preferred/Supported Operating System(s):

  • Linux

The integration of foundation and fine-tuned models into AI systems requires additional testing using use-case-specific data to ensure safe and effective deployment. Following the V-model methodology, iterative testing and validation at both unit and system levels are essential to mitigate risks, meet technical and functional requirements, and ensure compliance with safety and ethical standards before deployment. This AI model can be embedded as an Application Programming Interface (API) call into the software environment described above.

Model Version(s):

  • nemoretriever-page-elements-v3

Training and Evaluation Datasets:

Training Dataset

Data Modality: Image
Image Training Data Size: Less than a Million Images
Data collection method by dataset: Automated
Labeling method by dataset: Hybrid: Automated, Human
Pretraining (by NVIDIA): 118,287 images of the COCO train2017 dataset
Finetuning (by NVIDIA): 36,093 images from the Digital Corpora dataset, with annotations from Azure AI Document Intelligence and a data annotation team
Number of bounding boxes per class: 35,328 tables, 44,178 titles, 11,313 charts, 6,500 infographics, 90,812 texts, and 10,743 headers/footers. The layout model of Document Intelligence was used with the 2024-02-29-preview API version.

Evaluation Dataset

The primary evaluation set is a held-out cut of the Azure labels and Digital Corpora images. Number of bounding boxes per class: 1,985 tables, 2,922 titles, 498 charts, 572 infographics, 4,400 texts, and 492 headers/footers. Mean Average Precision (mAP) was used as the evaluation metric; it measures the model's ability to correctly identify and localize objects across different confidence thresholds.

Data collection method by dataset: Hybrid: Automated, Human
Labeling method by dataset: Hybrid: Automated, Human
Properties: We evaluated with Azure labels from manually selected pages, as well as by manual inspection of public PDFs and PowerPoint slides.
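
The per-class AP/AR figures below rest on IoU-based matching between predicted and ground-truth boxes. As a minimal sketch of that matching criterion, using the same 0.5 IoU threshold noted in the output properties:

```python
# A prediction counts as a true positive when its IoU with a ground-truth
# box of the same class meets the threshold (0.5 here).
import numpy as np


def iou(box_a: np.ndarray, box_b: np.ndarray) -> float:
    """Boxes are [x1, y1, x2, y2]."""
    x1, y1 = np.maximum(box_a[:2], box_b[:2])
    x2, y2 = np.minimum(box_a[2:], box_b[2:])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)


pred = np.array([0.10, 0.20, 0.55, 0.60])
gt = np.array([0.12, 0.22, 0.57, 0.58])
print(iou(pred, gt) >= 0.5)  # True: counted as a match at the 0.5 threshold
```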

Per-class Performance Metrics:

Class          AP (%)    AR (%)
table          44.643    62.242
chart          54.191    77.557
title          38.529    56.315
infographic    66.863    69.306
text           45.418    73.017
header_footer  53.895    75.670

Inference:

Acceleration Engine: TensorRT
Test hardware: See the Support Matrix in the NIM documentation

Ethical Considerations

NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.

For more detailed information on ethical considerations for this model, please see the Model Card++ Explainability, Bias, Safety & Security, and Privacy Subcards.

Please report security vulnerabilities or NVIDIA AI Concerns here.
