
An open vision-language model (VLM) for understanding quantum-computer calibration charts across a range of qubit modalities.
Ising-Calibration-1-35B-A3B analyzes quantum computing calibration experiment plots and generates structured technical text across six analysis question categories. Developed by NVIDIA as a quantum calibration vision-language model built on Qwen3.5-35B-A3B, this model is ready for commercial and non-commercial use.
GOVERNING HOSTING TERMS
Ising-Calibration-1-35B-A3B is governed by the NVIDIA Open Model License Agreement.
By continuing, you consent to data processing and agree to the NVIDIA API Trial Terms of Service. ADDITIONAL INFORMATION: the base model, Qwen3.5-35B-A3B, is licensed under the Apache License, Version 2.0.
Deployment Geography: Global
Quantum computing researchers, calibration engineers, and developers can use this model to analyze experiment plot images and generate technical descriptions, experimental conclusions, significance assessments, fit quality evaluations, parameter extractions, and experiment success classifications. Model outputs should be validated by domain experts before acting on experimental conclusions.
Hugging Face: 04/14/2026 via https://huggingface.co/nvidia/Ising-Calibration-1-35B-A3B
Build.NVIDIA.com: 04/14/2026 via https://build.nvidia.com/nvidia/ising-calibration-1-35b-a3b
Architecture Type: Mixture-of-Experts Vision-Language Model (MoE VLM)
Network Architecture: Integrated vision encoder for experiment plot images combined with the Qwen3.5-35B-A3B MoE language model for autoregressive text generation.
This model was developed based on: Qwen3.5-35B-A3B
Number of model parameters: ~35B total parameters, ~3B active per token (256 experts, 8 active)
Input Type(s): Image, Text
Input Format(s):
Input Parameters:
Other Properties Related to Input: Single-image or multi-image quantum calibration experiment plots with text prompts delivered through an OpenAI-compatible API. Default inference settings use temperature=0 and max_tokens=16384.
Output Type(s): Text
Output Format(s):
Output Parameters:
Other Properties Related to Output: Natural language technical analysis, experimental conclusions, significance assessments, fit quality evaluations, parameter extractions, and experiment success classifications.
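Since the model is served through an OpenAI-compatible API with temperature=0 and max_tokens=16384 by default, a request can be sketched as below. This is a minimal illustration, not official client code; the model identifier, image path, and question text are assumptions.

```python
import base64

def build_request(image_path: str, question: str) -> dict:
    """Build an OpenAI-compatible chat request for a calibration-plot image.

    The model identifier below is an assumed placeholder; check the actual
    deployment for the correct name.
    """
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("utf-8")
    return {
        "model": "nvidia/ising-calibration-1-35b-a3b",  # assumed identifier
        "messages": [
            {
                "role": "user",
                "content": [
                    # The plot image is passed inline as a base64 data URL
                    {"type": "image_url",
                     "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
                    {"type": "text", "text": question},
                ],
            }
        ],
        # Default inference settings stated in this model card
        "temperature": 0,
        "max_tokens": 16384,
    }
```

The payload can then be POSTed to the server's `/v1/chat/completions` endpoint with any OpenAI-compatible client.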
Our AI models are designed and/or optimized to run on NVIDIA GPU-accelerated systems. By leveraging NVIDIA hardware and software frameworks, the model achieves faster inference times compared to CPU-only solutions.
Runtime Engine(s):
Supported Hardware Microarchitecture Compatibility:
Supported Operating System(s): Ubuntu 22.04+

The integration of foundation and fine-tuned models into AI systems requires additional testing using use-case-specific data to ensure safe and effective deployment. Following the V-model methodology, iterative testing and validation at both unit and system levels are essential to mitigate risks before deployment.
v1.0.0
72.5K total entries (Phase 1: 23.8K ICL-formatted entries; Phase 2: 48.7K zero-shot entries).
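The difference between the two phases can be sketched as follows: Phase 1 entries carry in-context (ICL) examples before the target question, while Phase 2 entries pose the question directly. The message templates here are illustrative assumptions, not the released data schema.

```python
def format_zero_shot(question: str) -> list:
    """Phase 2 style (assumed): a single direct question about a plot."""
    return [{"role": "user", "content": question}]

def format_icl(examples: list, question: str) -> list:
    """Phase 1 style (assumed): (question, answer) demonstration pairs
    precede the target question, mimicking in-context learning."""
    messages = []
    for q, a in examples:
        messages.append({"role": "user", "content": q})
        messages.append({"role": "assistant", "content": a})
    messages.append({"role": "user", "content": question})
    return messages
```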
Data Collection Method by dataset:
Labeling Method by dataset:
Properties: Synthetically generated quantum calibration experiment plots with paired analytical text.
Benchmark Score: QCalEval Benchmark zero-shot scores.
Description: QCalEval is a VLM benchmark for quantum calibration plots: 243 entries across 87 scenario types from 22 experiment families, spanning superconducting qubits and neutral atoms. It evaluates six question types: technical description (Q1), experimental conclusion (Q2), experimental significance (Q3), fit quality assessment (Q4), parameter extraction (Q5), and experiment success classification (Q6).
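Reporting results on a benchmark structured this way typically means aggregating scores per question type (Q1-Q6). A minimal sketch, assuming each evaluated entry carries a question type and a numeric score; the field names and scoring scale are illustrative, not QCalEval's actual schema.

```python
from collections import defaultdict

def per_category_scores(results: list) -> dict:
    """Average score per question type.

    results: list of dicts like {"question_type": "Q1", "score": 1.0}
    (assumed structure for illustration).
    """
    totals = defaultdict(float)
    counts = defaultdict(int)
    for r in results:
        totals[r["question_type"]] += r["score"]
        counts[r["question_type"]] += 1
    # Mean score per category, sorted Q1..Q6 for stable reporting
    return {qt: totals[qt] / counts[qt] for qt in sorted(totals)}
```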
Data Collection Method by dataset:
Labeling Method by dataset:
Properties: Curated quantum calibration experiments with ground-truth labels derived from simulation parameters.
Acceleration Engine: vLLM
Test Hardware:
NVIDIA believes Trustworthy AI is a shared responsibility and developers should ensure this model meets the requirements of their use case and addresses foreseeable misuse before deployment.
For more detailed information on ethical considerations for this model, please see the Model Card++ Bias, Explainability, Safety & Security, and Privacy Subcards.
Please report model quality, risk, security vulnerabilities, or NVIDIA AI Concerns here.