Advanced transformer for multi-frame bird's-eye-view 3D perception in autonomous driving.
BEVFormer is a transformer-based model that combines multi-frame camera data into a unified bird's-eye-view (BEV) representation for 3D perception. It learns these representations with spatiotemporal transformers: predefined grid-shaped BEV queries interact with both the spatial and temporal domains, allowing the model to exploit spatial and temporal information jointly.
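The sketch below illustrates the idea of grid-shaped BEV queries in simplified PyTorch: one learnable query per BEV cell attends to the previous frame's BEV (temporal) and to flattened multi-camera image features (spatial). It substitutes standard multi-head attention for the deformable attention used by the actual model, and all module names, shapes, and sizes are illustrative assumptions rather than BEVFormer's real implementation.

```python
# Simplified, hypothetical sketch of BEVFormer-style grid-shaped BEV queries.
# Standard multi-head attention stands in for the deformable attention used
# in the real model; shapes and sizes are illustrative only.
import torch
import torch.nn as nn


class BEVQueryLayer(nn.Module):
    def __init__(self, bev_h=50, bev_w=50, embed_dim=256, num_heads=8):
        super().__init__()
        # One learnable query per BEV grid cell.
        self.bev_queries = nn.Embedding(bev_h * bev_w, embed_dim)
        # Temporal self-attention: current queries attend to the previous BEV.
        self.temporal_attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        # Spatial cross-attention: queries attend to flattened camera features.
        self.spatial_attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)

    def forward(self, img_feats, prev_bev=None):
        # img_feats: (B, N_tokens, C) multi-camera image features, flattened.
        b = img_feats.shape[0]
        q = self.bev_queries.weight.unsqueeze(0).expand(b, -1, -1)
        if prev_bev is not None:
            # Temporal interaction with the BEV of the previous frame.
            q, _ = self.temporal_attn(q, prev_bev, prev_bev)
        # Spatial interaction with the current multi-camera features.
        bev, _ = self.spatial_attn(q, img_feats, img_feats)
        return bev  # (B, bev_h * bev_w, C)


# Example: features from 2 cameras, each a 15x25 grid of 256-d tokens.
feats = torch.randn(1, 2 * 15 * 25, 256)
layer = BEVQueryLayer()
bev_t0 = layer(feats)                    # first frame: no history
bev_t1 = layer(feats, prev_bev=bev_t0)   # next frame reuses the previous BEV
```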
This NIM previews an example of deploying BEVFormer with explicit quantization using NVIDIA's ModelOpt Toolkit.
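As a rough illustration of what explicit quantization with ModelOpt looks like, the following minimal sketch applies the publicly documented `modelopt.torch.quantization` API to a tiny stand-in network. The dummy model, random calibration data, and export settings are assumptions for illustration, not the NIM's actual deployment pipeline.

```python
# A minimal sketch, assuming the nvidia-modelopt package (TensorRT Model
# Optimizer) is installed; the tiny dummy model and random calibration data
# stand in for the real BEVFormer network and nuScenes samples.
import torch
import torch.nn as nn
import modelopt.torch.quantization as mtq

# Stand-in for the full BEVFormer network (placeholder, not the real model).
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1),
    nn.ReLU(),
    nn.Conv2d(16, 8, 3, padding=1),
)
calib_data = [torch.randn(1, 3, 64, 64) for _ in range(8)]

def forward_loop(m):
    # Run calibration batches so ModelOpt can record activation ranges.
    with torch.no_grad():
        for x in calib_data:
            m(x)

# Insert explicit INT8 quantizers (Q/DQ) and calibrate.
model = mtq.quantize(model, mtq.INT8_DEFAULT_CFG, forward_loop)

# Typical next step: export an ONNX graph with explicit Q/DQ nodes for
# TensorRT to consume (file name and opset are arbitrary choices here).
torch.onnx.export(model, calib_data[0], "model_int8_qdq.onnx", opset_version=17)
```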
This model is ready for commercial/non-commercial use.
This model is not owned or developed by NVIDIA. It has been developed and built to a third party's requirements for this application and use case; see the link to BEVFormer.
GOVERNING TERMS: The trial service is governed by the NVIDIA API Trial Terms of Service. Use of this model is governed by the NVIDIA Community Model License. ADDITIONAL INFORMATION: Apache 2.0.
Deployment Geography: Global
BEVFormer is most suitable for Physical AI developers, especially ADAS and AV developers working on Perception tasks.
Release Date: 03/18/2025 via https://build.nvidia.com/nvidia/bevformer
The nuScenes dataset was used for training, testing, and evaluation (see details below).
The nuScenes dataset (pronounced /nuːsiːnz/) is a public large-scale dataset for autonomous driving developed by the team at Motional (formerly nuTonomy). Motional is making driverless vehicles a safe, reliable, and accessible reality.
The nuScenes dataset comprises approximately 15 hours of driving data collected in Boston and Singapore. Driving routes were carefully chosen to capture challenging scenarios, and the dataset aims for a diverse set of locations, times, and weather conditions. To balance the class frequency distribution, nuScenes includes more scenes with rare classes (such as bicycles). Using these criteria, the data was manually curated into 1,000 scenes of 20 s duration each, carefully annotated by human experts.
Annotation was performed by Scale, the project's annotation partner. Every object in the nuScenes dataset carries a semantic category, as well as a 3D bounding box and attributes for each frame in which it occurs. Ground-truth labels are provided for 23 object classes.
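For readers who want to inspect the dataset themselves, the short sketch below browses scenes and annotations with the nuscenes-devkit; the `v1.0-mini` split and the `/data/nuscenes` path are assumptions for illustration.

```python
# A short sketch of browsing nuScenes annotations with the nuscenes-devkit
# (pip install nuscenes-devkit); split and dataroot are placeholders.
from nuscenes.nuscenes import NuScenes

nusc = NuScenes(version="v1.0-mini", dataroot="/data/nuscenes", verbose=True)

# Each scene is ~20 s of driving; samples are keyframes within a scene.
scene = nusc.scene[0]
sample = nusc.get("sample", scene["first_sample_token"])

# Every annotation carries a semantic category and a 3D bounding box.
for ann_token in sample["anns"][:5]:
    ann = nusc.get("sample_annotation", ann_token)
    print(ann["category_name"], ann["translation"], ann["size"], ann["rotation"])
```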
NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.
Please report security vulnerabilities or NVIDIA AI Concerns here.