
Object detection model, fine-tuned to detect charts, tables, titles, and other page elements in documents.
The NeMo Retriever Page Elements v3 model is a specialized object detection model designed to identify and extract key page elements such as charts, tables, and titles from documents. While the underlying technology builds upon YOLOX work from Megvii Technology, we developed our own base model through complete retraining rather than using pre-trained weights. YOLOX is an anchor-free version of YOLO (You Only Look Once) that combines a simpler architecture with enhanced performance. The model is trained to detect tables, charts, infographics, titles, headers/footers, and text in documents.
This model supersedes the nemoretriever-page-elements model and is part of the NVIDIA NeMo Retriever family of NIM microservices built for object detection and multimodal extraction from enterprise documents.
This model is ready for commercial/non-commercial use.
The use of this model is governed by the NVIDIA AI Foundation Models Community License Agreement.
You are responsible for ensuring that your use of NVIDIA provided models complies with all applicable laws.
Deployment Geography: Global
The NeMo Retriever Page Elements v3 model is designed to automate the extraction of text, charts, tables, infographics, and other elements from enterprise documents. Key applications include document analysis, understanding, and processing.
Architecture Type: YOLOX
Network Architecture: DarkNet-53 backbone + FPN with a decoupled head (one 1x1 convolution followed by two parallel 3x3 convolutions, one for classification and one for bounding box prediction). YOLOX is a single-stage object detector that improves on YOLOv3.
This model was developed based on the YOLO architecture.
Number of model parameters: 5.4 × 10⁷
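For illustration, the decoupled head described above can be sketched in PyTorch as follows. This is a minimal sketch, not the production configuration: the channel count and the objectness output are assumptions.

```python
import torch
import torch.nn as nn

class DecoupledHead(nn.Module):
    """Sketch of a YOLOX-style decoupled head: a 1x1 convolution followed by
    two parallel 3x3 branches, one for classification and one for box prediction."""

    def __init__(self, in_channels=256, num_classes=6):
        # num_classes=6 reflects the document classes: table, chart, title,
        # infographic, text, header_footer.
        super().__init__()
        self.stem = nn.Conv2d(in_channels, 256, kernel_size=1)
        # Classification branch: per-class scores at each location.
        self.cls_branch = nn.Conv2d(256, num_classes, kernel_size=3, padding=1)
        # Regression branch: box (x, y, w, h) plus an objectness score (assumed).
        self.reg_branch = nn.Conv2d(256, 4 + 1, kernel_size=3, padding=1)

    def forward(self, x):
        x = self.stem(x)
        return self.cls_branch(x), self.reg_branch(x)
```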
Input Type(s): Image
Input Format(s): Red, Green, Blue (RGB)
Input Parameters: Two-Dimensional (2D)
Other Properties Related to Input: Image size resized to (1024, 1024)
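As an illustrative sketch of preparing an input page at the expected 1024x1024 RGB resolution, the snippet below shows one possible preprocessing path. Normalization and channel ordering are assumptions here; the deployed microservice handles preprocessing internally.

```python
from PIL import Image
import numpy as np

def preprocess(path, size=(1024, 1024)):
    # Load the page image as RGB and resize to the model's 1024x1024 input.
    img = Image.open(path).convert("RGB").resize(size)
    arr = np.asarray(img, dtype=np.float32)           # HWC, RGB
    return np.transpose(arr, (2, 0, 1))[None, ...]    # 1 x 3 x 1024 x 1024
```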
Output Type(s): Array
Output Format: A dictionary of dictionaries containing np.ndarray objects. The outer dictionary has entries for each sample (page), and the inner dictionary contains a list of dictionaries, each with a bounding box (np.ndarray), class label, and confidence score for that page.
Output Parameters: One-Dimensional (1D)
Other Properties Related to Output: The output contains bounding boxes, detection confidence scores, and object classes (chart, table, infographic, title, text, headers and footers). The thresholds used for non-maximum suppression are conf_thresh=0.01 and iou_thresh=0.5.
Output Classes: table, chart, infographic, title, text, header_footer
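The following is a hypothetical example of the documented output layout; the key names are assumptions used only to illustrate the nested structure of per-page detections.

```python
import numpy as np

# Outer dictionary keyed by sample (page); the inner dictionary holds a list of
# per-detection dictionaries with a bounding box (np.ndarray), class label, and
# confidence score. Key names ("page_0", "detections", ...) are illustrative.
detections = {
    "page_0": {
        "detections": [
            {"bbox": np.array([120.0, 300.0, 550.0, 620.0]),  # box coordinates
             "label": "table",
             "confidence": 0.91},
            {"bbox": np.array([100.0, 50.0, 900.0, 120.0]),
             "label": "title",
             "confidence": 0.87},
        ]
    }
}

# Keep detections at or above the documented confidence threshold (conf_thresh=0.01).
kept = [d for d in detections["page_0"]["detections"] if d["confidence"] >= 0.01]
```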
Our AI models are designed and/or optimized to run on NVIDIA GPU-accelerated systems. By leveraging NVIDIA’s hardware (e.g. GPU cores) and software frameworks (e.g., CUDA libraries), the model achieves faster training and inference times compared to CPU-only solutions.
Runtime Engine(s):
Supported Hardware Microarchitecture Compatibility:
Preferred/Supported Operating System(s):
The integration of foundation and fine-tuned models into AI systems requires additional testing using use-case-specific data to ensure safe and effective deployment. Following the V-model methodology, iterative testing and validation at both unit and system levels are essential to mitigate risks, meet technical and functional requirements, and ensure compliance with safety and ethical standards before deployment. This AI model can be embedded as an Application Programming Interface (API) call into the software environment described above.
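As a hedged illustration of embedding the model as an API call, the snippet below assumes a locally deployed NIM endpoint. The URL, route, and payload fields are placeholders; the actual request schema is defined in the NIM documentation.

```python
import base64
import requests

# Hypothetical endpoint for a locally deployed NIM microservice (assumed).
NIM_URL = "http://localhost:8000/v1/infer"

with open("page.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

# Payload structure is illustrative only; consult the NIM API reference.
response = requests.post(
    NIM_URL,
    json={"input": [{"type": "image_url",
                     "url": f"data:image/png;base64,{image_b64}"}]},
    timeout=60,
)
response.raise_for_status()
print(response.json())  # bounding boxes, class labels, confidence scores
```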
Data Modality: Image
Image Training Data Size: Less than a Million Images
Data collection method by dataset: Automated
Labeling method by dataset: Hybrid: Automated, Human
Pretraining (by NVIDIA): 118,287 images of the COCO train2017 dataset
Finetuning (by NVIDIA): 36,093 images from the Digital Corpora dataset, with annotations from Azure AI Document Intelligence and a data annotation team
Number of bounding boxes per class: 35,328 tables, 44,178 titles, 11,313 charts, 6,500 infographics, 90,812 texts, and 10,743 headers/footers. The Azure AI Document Intelligence layout model was used with the 2024-02-29-preview API version.
The primary evaluation set is a subset of the Azure-labeled Digital Corpora images. Number of bounding boxes per class: 1,985 tables, 2,922 titles, 498 charts, 572 infographics, 4,400 texts, and 492 headers/footers. Mean Average Precision (mAP) was used as the evaluation metric; it measures the model's ability to correctly identify and localize objects across different confidence thresholds.
Data collection method by dataset: Hybrid: Automated, Human
Labeling method by dataset: Hybrid: Automated, Human
Properties: We evaluated with Azure labels from manually selected pages, as well as by manual inspection of public PDFs and PowerPoint slides.
Per-class Performance Metrics:
| Class | AP (%) | AR (%) |
|---|---|---|
| table | 44.643 | 62.242 |
| chart | 54.191 | 77.557 |
| title | 38.529 | 56.315 |
| infographic | 66.863 | 69.306 |
| text | 45.418 | 73.017 |
| header_footer | 53.895 | 75.670 |
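For reference, a simple macro-average over the per-class figures above can be computed as shown below. Note that this is only an illustration of averaging across classes; an officially reported mAP may be aggregated differently (for example, COCO-style over multiple IoU thresholds).

```python
# Per-class (AP, AR) values, in percent, copied from the table above.
per_class = {
    "table":         (44.643, 62.242),
    "chart":         (54.191, 77.557),
    "title":         (38.529, 56.315),
    "infographic":   (66.863, 69.306),
    "text":          (45.418, 73.017),
    "header_footer": (53.895, 75.670),
}

# Unweighted macro-average across the six classes.
mean_ap = sum(ap for ap, _ in per_class.values()) / len(per_class)
mean_ar = sum(ar for _, ar in per_class.values()) / len(per_class)
print(f"macro-mean AP: {mean_ap:.2f}%, macro-mean AR: {mean_ar:.2f}%")
```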
Acceleration Engine: TensorRT
Test Hardware: See the Support Matrix in the NIM documentation.
NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.
For more detailed information on ethical considerations for this model, please see the Model Card++ Explainability, Bias, Safety & Security, and Privacy Subcards.
Please report security vulnerabilities or NVIDIA AI Concerns here.
Get access to knowledge base articles and support cases or submit a ticket.