nvidia

nemoretriever-graphic-elements-v1

Run Anywhere

Model for object detection, fine-tuned to detect charts, tables, and titles in documents.

Model Overview

Description

The NeMo Retriever Graphic Elements v1 model is a specialized object detection system designed to identify and extract key elements from charts and graphs. Based on YOLOX, an anchor-free version of YOLO (You Only Look Once), this model combines a simpler architecture with enhanced performance. While the underlying technology builds upon work from Megvii Technology, we developed our own base model through complete retraining rather than using pre-trained weights.

The model excels at detecting and localizing various graphic elements within chart images, including titles, axis labels, legends, and data point annotations. This capability makes it particularly valuable for document understanding tasks and automated data extraction from visual content.

This model is ready for commercial use and is a part of the NVIDIA NeMo Retriever family of NIM microservices specifically for object detection and multimodal extraction of enterprise documents.

This model supersedes the CACHED model.

License/Terms of use

Use of this model is governed by the NVIDIA AI Foundation Models Community License Agreement.

You are responsible for ensuring that your use of NVIDIA AI Foundation Models complies with all applicable laws.

Deployment Geography: Global

Use Case:

This model is designed for automating extraction of graphic elements of charts in enterprise documents. Key applications include:

  • Enterprise document extraction, embedding and indexing
  • Augmenting Retrieval Augmented Generation (RAG) workflows with multimodal retrieval
  • Data extraction from legacy documents and reports

Release Date: 2025-03-17

Model Architecture

Architecture type: YOLOX
Network architecture: DarkNet53 backbone + FPN, with a decoupled head (one 1x1 convolution followed by two parallel 3x3 convolution branches, one for classification and one for bounding box prediction)

YOLOX is a single-stage object detector that improves on YOLOv3. The model is fine-tuned to detect 10 classes of objects in documents:

  1. Chart title
  2. X-axis title
  3. Y-axis title
  4. X-axis label(s)
  5. Y-axis label(s)
  6. Legend label(s)
  7. Legend title
  8. Mark label(s)
  9. Value label(s)
  10. Miscellaneous other text on the chart
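The detection classes correspond to the string identifiers listed under Output below. A minimal lookup sketch (the index order is an illustrative assumption, not a documented contract):

```python
# Hypothetical mapping between the 10 detection classes and the string
# identifiers the model emits. The index order shown here is an assumption
# for illustration only.
CLASS_NAMES = [
    "chart_title", "x_title", "y_title", "xlabel", "ylabel",
    "other", "legend_label", "legend_title", "mark_label", "value_label",
]

def class_id_to_name(class_id: int) -> str:
    """Translate a raw class index from the detector into its label string."""
    return CLASS_NAMES[class_id]
```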

Input

Input type(s): Image
Input format(s): Red, Green, Blue (RGB)
Input parameters: Two Dimensional (2D)
Other properties related to input: Expected input is an np.ndarray image of shape [Channel, Width, Height], or an np.ndarray batch of images of shape [Batch, Channel, Width, Height].
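Preparing an image in the layout described above can be sketched as follows; this is a minimal NumPy example assuming the image arrives in the common (Height, Width, Channel) order:

```python
import numpy as np

def preprocess(image_hwc: np.ndarray) -> np.ndarray:
    """Rearrange an RGB image from (Height, Width, Channel) to the
    [Channel, Width, Height] layout stated in the model card, then add
    a leading batch dimension."""
    if image_hwc.ndim != 3 or image_hwc.shape[-1] != 3:
        raise ValueError("expected an RGB image of shape (H, W, 3)")
    chw = np.transpose(image_hwc, (2, 1, 0))  # (H, W, C) -> (C, W, H)
    return chw[np.newaxis, ...]               # -> (Batch, C, W, H)

# Example: a dummy 480x640 RGB image becomes a batch of shape (1, 3, 640, 480)
dummy = np.zeros((480, 640, 3), dtype=np.uint8)
batch = preprocess(dummy)
```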

Output

Output type(s): Text associated with each of the following classes:

  • ["chart_title", "x_title", "y_title", "xlabel", "ylabel", "other", "legend_label", "legend_title", "mark_label", "value_label"]

Output format: Dict of String
Output parameters: 1D
Other properties related to output: None
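A sketch of consuming such an output follows. The concrete response schema is an assumption here (the exact structure depends on the NIM API); the class identifiers are the ones listed above:

```python
# Hypothetical detection output: text grouped by class identifier.
# The exact response schema is an assumption for illustration.
detections = {
    "chart_title": ["Quarterly revenue"],
    "xlabel": ["Q1", "Q2", "Q3", "Q4"],
    "ylabel": ["0", "50", "100"],
}

def flatten_axis_labels(dets: dict) -> str:
    """Join the detected x-axis labels into a single display string."""
    return ", ".join(dets.get("xlabel", []))
```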

Software Integration

Runtime Engine: NeMo Retriever Graphic Elements v1 NIM
Supported Hardware Microarchitecture Compatibility: NVIDIA Ampere, NVIDIA Hopper, NVIDIA Ada Lovelace
Supported Operating System(s): Linux

Model Version(s):

  • nemoretriever-graphic-elements-v1

Training Dataset:

  • PubMed Central (PMC) Chart Dataset

    • Link: https://chartinfo.github.io/index_2022.html
    • Data collection method: Automated, Human
    • Labeling method: Human
    • Description: A real-world dataset collected from PubMed Central Documents and manually annotated, released in the ICPR 2022 CHART-Infographic competition. There are 5,614 images for chart element detection, 4,293 images for final plot detection and data extraction, and 22,924 images for chart classification.
  • DeepRule dataset

    • Link: https://github.com/soap117/DeepRule
    • Data collection method: Automated, Human
    • Labeling method: Distillation by the CACHED model
    • Description: The original dataset consists of 386,966 chart images obtained by crawling public Excel sheets from the web, with their text overwritten to protect privacy. The CACHED model is used to pseudo-label the relevant classes. We used a subsample of 9,091 charts where a title was detected for training, alongside the 5,614 PMC training images.

Evaluation Results

Results were evaluated using the PMC Chart dataset. The Mean Average Precision (mAP) was used as the evaluation metric to measure the model's ability to correctly identify and localize objects across different confidence thresholds.
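The per-class AP values reported below are areas under precision-recall curves. A minimal NumPy sketch of the metric (a generic interpolated AP, not the exact competition scorer):

```python
import numpy as np

def average_precision(recall: np.ndarray, precision: np.ndarray) -> float:
    """Area under the precision-recall curve, using the usual interpolated
    precision (made monotonically non-increasing from right to left).
    Minimal illustrative sketch, not the exact evaluation code."""
    r = np.concatenate(([0.0], recall, [1.0]))
    p = np.concatenate(([0.0], precision, [0.0]))
    # Interpolate: each precision value becomes the max of itself and
    # everything to its right.
    for i in range(len(p) - 2, -1, -1):
        p[i] = max(p[i], p[i + 1])
    # Sum rectangle areas where recall actually changes.
    idx = np.where(r[1:] != r[:-1])[0]
    return float(np.sum((r[idx + 1] - r[idx]) * p[idx + 1]))
```

A perfect detector (precision 1.0 at every recall level) scores an AP of 1.0; weaker precision-recall curves score proportionally less.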

Data Collection & Labeling

  • Data collection method: Hybrid (Automated & Human)
  • Labeling method: Hybrid (Automated & Human)
  • Properties: The validation dataset is the same as the PMC Chart dataset.

Dataset Overview

Number of bounding boxes and images per class:

| Label | Images | Boxes |
| --- | --- | --- |
| chart_title | 38 | 38 |
| legend_label | 318 | 1,077 |
| legend_title | 17 | 19 |
| mark_label | 42 | 219 |
| other | 113 | 464 |
| value_label | 52 | 726 |
| x_title | 404 | 437 |
| xlabel | 553 | 4,091 |
| y_title | 502 | 505 |
| ylabel | 534 | 3,944 |
| Total | 560 | 11,520 |

Per-Class Performance Metrics

Average Precision (AP)

| Class | AP |
| --- | --- |
| chart_title | 82.38 |
| x_title | 88.77 |
| y_title | 89.48 |
| xlabel | 85.04 |
| ylabel | 86.22 |
| other | 55.14 |
| legend_label | 84.09 |
| legend_title | 60.61 |
| mark_label | 49.31 |
| value_label | 62.66 |

Average Recall (AR)

| Class | AR |
| --- | --- |
| chart_title | 93.16 |
| x_title | 92.31 |
| y_title | 92.32 |
| xlabel | 88.93 |
| ylabel | 89.40 |
| other | 79.48 |
| legend_label | 88.07 |
| legend_title | 68.42 |
| mark_label | 73.61 |
| value_label | 68.32 |

Inference:

Engine: TensorRT
Test hardware: Tested on all supported hardware listed in compatibility section

Ethical Considerations:

NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.

For more detailed information on ethical considerations for this model, please see the Model Card++ Explainability, Bias, Safety & Security, and Privacy Subcards.

Please report security vulnerabilities or NVIDIA AI Concerns here.