
Object detection model fine-tuned to detect and localize graphic elements, such as titles, axis labels, and legends, within charts in documents.
nemotron-graphic-elements-v1 is a specialized object detection model designed to identify and extract key elements from charts and graphs. Based on YOLOX, an anchor-free variant of YOLO, it detects and localizes graphic elements including titles, axis labels, legends, and data-point annotations. While the underlying approach builds on the YOLOX ecosystem, NVIDIA developed its own base model through complete retraining rather than starting from pre-trained weights.
This model supersedes the CACHED model.
This model is ready for commercial/non-commercial use.
GOVERNING TERMS: The trial service is governed by the NVIDIA API Trial Terms of Service. Use of this model is governed by the NVIDIA Open Model License Agreement.
You are responsible for ensuring that your use of NVIDIA provided models complies with all applicable laws.
Model Developer: NVIDIA
Deployment Geography: Global
This model is designed for automating the extraction of graphic elements in enterprise documents.
Release Date: 03/02/2026 via Build.NVIDIA.com (nemotron-graphic-elements-v1)
Architecture Type: YOLOX
Network Architecture: DarkNet53 Backbone + FPN decoupled head (one 1x1 convolution + 2 parallel 3x3 convolutions: one for classification and one for bounding box prediction)
Classes Detected: chart_title, x_title, y_title, xlabel, ylabel, other, legend_label, legend_title, mark_label, value_label
Number of Model Parameters: 5.4e7
Input Types: Image
Input Formats: RGB
Input Parameters: Two Dimensional (2D)
Other Input Properties: Expects a single image or a batch of images as a NumPy array: an np.ndarray of shape [Channel, Width, Height] for a single image, or [Batch, Channel, Width, Height] for a batch.
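Assuming images arrive in the common HxWx3 (height, width, channel) layout, a minimal NumPy sketch of converting them into the layout described above. The function names here are illustrative, not part of any NVIDIA API, and the [Channel, Width, Height] axis order is taken from the model card; verify it against your client library before relying on it.

```python
import numpy as np

def to_model_input(image_hwc: np.ndarray) -> np.ndarray:
    """Convert an HxWx3 RGB image into the [Channel, Width, Height]
    layout described by the model card."""
    assert image_hwc.ndim == 3 and image_hwc.shape[2] == 3
    # (H, W, C) -> (C, W, H)
    return np.transpose(image_hwc, (2, 1, 0))

def to_batch(images) -> np.ndarray:
    """Stack per-image arrays into a [Batch, Channel, Width, Height]
    batch. Assumes all images share a common size (resize beforehand)."""
    return np.stack([to_model_input(img) for img in images], axis=0)

# Example: two dummy 480x640 RGB images
imgs = [np.zeros((480, 640, 3), dtype=np.uint8) for _ in range(2)]
batch = to_batch(imgs)
print(batch.shape)  # (2, 3, 640, 480)
```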
Output Types: Structured detections (bounding boxes + labels + confidence)
Output Format: Dict / JSON-compatible structure
Output Parameters: One Dimensional (1D)
Other Output Properties: Outputs detections per input image.
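The exact response schema depends on the serving endpoint. As a hedged sketch, assuming each image yields a dict with "boxes", "labels", and "scores" keys (hypothetical field names, not confirmed by this card), downstream confidence filtering might look like:

```python
# Hypothetical per-image detections; field names are illustrative only.
detections = {
    "boxes": [[10.0, 5.0, 120.0, 40.0], [15.0, 50.0, 90.0, 70.0]],  # x1, y1, x2, y2
    "labels": ["chart_title", "xlabel"],
    "scores": [0.97, 0.62],
}

def filter_by_confidence(dets: dict, threshold: float = 0.8) -> dict:
    """Keep only detections whose confidence meets the threshold,
    preserving the parallel-list structure across all keys."""
    kept = [i for i, s in enumerate(dets["scores"]) if s >= threshold]
    return {key: [vals[i] for i in kept] for key, vals in dets.items()}

high_conf = filter_by_confidence(detections)
print(high_conf["labels"])  # ['chart_title']
```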
Our AI models are designed and/or optimized to run on NVIDIA GPU-accelerated systems. By leveraging NVIDIA's hardware (e.g. GPU cores) and software frameworks (e.g., CUDA libraries), the model achieves faster training and inference times compared to CPU-only solutions.
Runtime Engines: TensorRT
Supported Hardware Microarchitecture Compatibility:
NVIDIA Ampere
NVIDIA Hopper
NVIDIA Lovelace
Operating Systems: Linux
Acceleration Engine: TensorRT
Test Hardware: Tested on hardware from the microarchitectures listed under Supported Hardware Microarchitecture Compatibility above.
nemotron-graphic-elements-v1
Short Name: nemotron-graphic-elements-v1
Data Modality: Image
Training Data Collection: Hybrid (Automated + Human)
Training Labeling: Hybrid (Automated + Human)
Training Properties: Trained using a mixture of real-world chart images and pseudo-labeled charts.
Evaluation Data Collection: Hybrid (Automated + Human)
Evaluation Labeling: Hybrid (Automated + Human)
Evaluation Properties: Evaluated on the PMC Chart dataset, which also serves as the validation set, using mean Average Precision (mAP) as the evaluation metric.
Number of bounding boxes and images per class:
| Label | Images | Boxes |
|---|---|---|
| chart_title | 38 | 38 |
| legend_label | 318 | 1077 |
| legend_title | 17 | 19 |
| mark_label | 42 | 219 |
| other | 113 | 464 |
| value_label | 52 | 726 |
| x_title | 404 | 437 |
| xlabel | 553 | 4091 |
| y_title | 502 | 505 |
| ylabel | 534 | 3944 |
| Total | 560 | 11520 |
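The mAP metric above matches predicted boxes to ground-truth boxes by intersection-over-union (IoU). A minimal sketch of the standard IoU computation for [x1, y1, x2, y2] boxes (this is the generic definition, not NVIDIA's evaluation code):

```python
def iou(box_a, box_b) -> float:
    """Intersection-over-union of two [x1, y1, x2, y2] boxes,
    the overlap criterion underlying mAP matching."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle (empty if boxes do not overlap)
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((ax2 - ax1) * (ay2 - ay1)
             + (bx2 - bx1) * (by2 - by1)
             - inter)
    return inter / union if union > 0 else 0.0

print(round(iou([0, 0, 10, 10], [5, 0, 15, 10]), 3))  # 0.333
```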
Per-class Performance Metrics
Average Precision (AP)
| Class | AP | Class | AP | Class | AP |
|---|---|---|---|---|---|
| chart_title | 82.38 | x_title | 88.77 | y_title | 89.48 |
| xlabel | 85.04 | ylabel | 86.22 | other | 55.14 |
| legend_label | 84.09 | legend_title | 60.61 | mark_label | 49.31 |
| value_label | 62.66 | | | | |
Average Recall (AR)
| Class | AR | Class | AR | Class | AR |
|---|---|---|---|---|---|
| chart_title | 93.16 | x_title | 92.31 | y_title | 92.32 |
| xlabel | 88.93 | ylabel | 89.40 | other | 79.48 |
| legend_label | 88.07 | legend_title | 68.42 | mark_label | 73.61 |
| value_label | 68.32 | | | | |
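As a sanity check on the AP table above, the unweighted mean of the per-class APs can be computed directly. Note that the official mAP may be defined differently (e.g., COCO-style averaging over IoU thresholds), so this simple mean need not match a reported headline number:

```python
# Per-class AP values copied from the table above
ap = {
    "chart_title": 82.38, "x_title": 88.77, "y_title": 89.48,
    "xlabel": 85.04, "ylabel": 86.22, "other": 55.14,
    "legend_label": 84.09, "legend_title": 60.61, "mark_label": 49.31,
    "value_label": 62.66,
}
mean_ap = sum(ap.values()) / len(ap)
print(round(mean_ap, 2))  # 74.37
```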
The integration of foundation and fine-tuned models into AI systems requires additional testing using use-case-specific data to ensure safe and effective deployment. Following the V-model methodology, iterative testing and validation at both unit and system levels are essential to mitigate risks, meet technical and functional requirements, and ensure compliance with safety and ethical standards before deployment.
NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their supporting model team to ensure this model meets requirements for the relevant industry and use case, and address unforeseen product misuse.
For more detailed information on ethical considerations for this model, please see the Model Card++ Explainability, Bias, Safety & Security, and Privacy Subcards.
Please report model quality issues, risks, security vulnerabilities, or NVIDIA AI concerns here.