
nvidia

ising-calibration-1-35b-a3b

Free Endpoint

Open VLM for quantum computer calibration chart understanding across a range of qubit modalities.

Quantum | Vision Language Model | Calibration | Reasoning

Model Overview

Description:

Ising-Calibration-1-35B-A3B is a quantum calibration vision-language model developed by NVIDIA and built on Qwen3.5-35B-A3B. It analyzes quantum computing calibration experiment plots and generates structured technical text across six analysis question categories. This model is ready for commercial and non-commercial use.

License/Terms of Use:

GOVERNING TERMS: Ising-Calibration-1-35B-A3B is governed by the NVIDIA Open Model License Agreement. By continuing to use this endpoint, you consent to data processing and agree to the NVIDIA API Trial Terms of Service. ADDITIONAL INFORMATION: Qwen3.5-35B-A3B is licensed under the Apache License, Version 2.0.

Deployment Geography:

Global

Use Case:

Quantum computing researchers, calibration engineers, and developers can use this model to analyze experiment plot images and generate technical descriptions, experimental conclusions, significance assessments, fit quality evaluations, parameter extractions, and experiment success classifications. Model outputs should be validated by domain experts before acting on experimental conclusions.

Release Date:

Hugging Face: 04/14/2026 via https://huggingface.co/nvidia/Ising-Calibration-1-35B-A3B
Build.NVIDIA.com: 04/14/2026 via https://build.nvidia.com/nvidia/ising-calibration-1-35b-a3b

Reference(s):

  • Qwen3.5
  • QCalEval Benchmark

Model Architecture:

Architecture Type: Mixture-of-Experts Vision-Language Model (MoE VLM)

Network Architecture: Integrated vision encoder for experiment plot images combined with the Qwen3.5-35B-A3B MoE language model for autoregressive text generation.

This model was developed based on: Qwen3.5-35B-A3B

Number of model parameters: ~35B total parameters, ~3B active per token (256 experts, 8 active)

Input(s):

Input Type(s): Image, Text

Input Format(s):

  • Image: PNG, JPEG
  • Text: String

Input Parameters:

  • Image: Two-Dimensional (2D)
  • Text: One-Dimensional (1D)

Other Properties Related to Input: Single-image or multi-image quantum calibration experiment plots with text prompts delivered through an OpenAI-compatible API. Default inference settings use temperature=0 and max_tokens=16384.
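Since the model is served behind an OpenAI-compatible API with default temperature=0 and max_tokens=16384, a request payload can be assembled as below. This is a minimal sketch: the model identifier string and the example question are assumptions, and the field layout follows the standard OpenAI chat-completions format for base64-encoded image content.

```python
import base64

def build_request(image_bytes: bytes, question: str,
                  model: str = "nvidia/ising-calibration-1-35b-a3b") -> dict:
    """Build an OpenAI-compatible chat payload for one calibration plot.

    Sampling defaults mirror the model card: temperature=0, max_tokens=16384.
    The model ID above is an assumed endpoint name, not confirmed by the card.
    """
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "model": model,
        "temperature": 0,
        "max_tokens": 16384,
        "messages": [{
            "role": "user",
            "content": [
                # The plot image, inlined as a base64 data URL.
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{b64}"}},
                # The analysis question (e.g. one of the six categories).
                {"type": "text", "text": question},
            ],
        }],
    }
```

The resulting dictionary can be sent as the JSON body of a chat-completions request with any OpenAI-compatible client; for multi-image experiments, additional `image_url` entries can be appended to the same `content` list.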

Output(s):

Output Type(s): Text

Output Format(s):

  • Text: String

Output Parameters:

  • Text: One-Dimensional (1D)

Other Properties Related to Output: Natural language technical analysis, experimental conclusions, significance assessments, fit quality evaluations, parameter extractions, and experiment success classifications.

Our AI models are designed and/or optimized to run on NVIDIA GPU-accelerated systems. By leveraging NVIDIA hardware and software frameworks, the model achieves faster inference times compared to CPU-only solutions.

Software Integration:

Runtime Engine(s):

  • vLLM
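Given vLLM as the runtime engine and the 2x L40S test hardware listed below, a deployment could be sketched as follows. This is a configuration sketch, not a confirmed recipe: the flags are standard vLLM serve options, but the exact context length and parallelism settings should be checked against the model's configuration and the vLLM documentation.

```shell
# Sketch: serve the model with vLLM's OpenAI-compatible server.
# --tensor-parallel-size 2 mirrors the 2x NVIDIA L40S test hardware;
# --max-model-len matches the card's max_tokens default of 16384.
vllm serve nvidia/Ising-Calibration-1-35B-A3B \
  --tensor-parallel-size 2 \
  --max-model-len 16384 \
  --port 8000
```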

Supported Hardware Microarchitecture Compatibility:

  • NVIDIA Ada Lovelace
  • NVIDIA Blackwell
  • NVIDIA Hopper

Supported Operating System(s):

  • Linux (Ubuntu 22.04+)

The integration of foundation and fine-tuned models into AI systems requires additional testing using use-case-specific data to ensure safe and effective deployment. Following the V-model methodology, iterative testing and validation at both unit and system levels are essential to mitigate risks before deployment.

Model Version(s):

v1.0.0

Training, Testing, and Evaluation Datasets:

Training Dataset:

Data Modality:

  • Image
  • Text

Training Data Size:

72.5K total entries (Phase 1: 23.8K ICL-formatted entries; Phase 2: 48.7K zero-shot entries).

Data Collection Method by dataset:

  • Synthetic (LLM-augmented via Qwen3.5-397B-A17B)

Labeling Method by dataset:

  • Synthetic

Properties: Synthetically generated quantum calibration experiment plots with paired analytical text.

Testing and Evaluation Dataset:

Benchmark Score: Zero-shot scores on the QCalEval Benchmark.

Description: QCalEval is a VLM benchmark for quantum calibration plots: 243 entries across 87 scenario types from 22 experiment families, spanning superconducting qubits and neutral atoms. It evaluates six question types: technical description (Q1), experimental conclusion (Q2), experimental significance (Q3), fit quality assessment (Q4), parameter extraction (Q5), and experiment success classification (Q6).

Data Collection Method by dataset:

  • Synthetic

Labeling Method by dataset:

  • Synthetic

Properties: Curated quantum calibration experiments with ground-truth labels derived from simulation parameters.

Inference:

Acceleration Engine: vLLM
Test Hardware:

  • 2x NVIDIA L40S (48GB)

Ethical Considerations:

NVIDIA believes Trustworthy AI is a shared responsibility and developers should ensure this model meets the requirements of their use case and addresses foreseeable misuse before deployment.

For more detailed information on ethical considerations for this model, please see the Model Card++ Bias, Explainability, Safety & Security, and Privacy Subcards.

Please report model quality, risk, security vulnerabilities, or NVIDIA AI concerns here.