End-to-end autonomous driving stack integrating perception, prediction, and planning with sparse scene representations for efficiency and safety.
SparseDrive is an end-to-end autonomous driving model that performs motion prediction and planning simultaneously, outputting a safe planning trajectory. It first encodes multi-view images into feature maps, then learns sparse scene representation through symmetric sparse perception, and finally performs motion prediction and planning in a parallel manner.
This NIM previews an end-to-end example of deploying SparseDrive, as described in the paper, with explicit quantization using NVIDIA's ModelOpt toolkit.
This model is ready for commercial/non-commercial use.
This model is not owned or developed by NVIDIA. It has been developed and built to a third party's requirements for this application and use case; see the link to SparseDrive.
GOVERNING TERMS: The trial service is governed by the NVIDIA API Trial Terms of Service. Use of this model is governed by the NVIDIA Community Model License. ADDITIONAL INFORMATION: MIT License.
Global
Researchers and developers in autonomous driving and motion forecasting, specifically those working with 3D object detection and tracking, are expected to use this system for tasks such as object detection, tracking, motion prediction, and planning.
03/18/2025 via https://build.nvidia.com/nvidia/sparsedrive
The nuScenes dataset was used for training, testing, and evaluation (see details below).
The nuScenes dataset (pronounced /nuːsiːnz/) is a public large-scale dataset for autonomous driving developed by the team at Motional (formerly nuTonomy). Motional is making driverless vehicles a safe, reliable, and accessible reality.
The nuScenes dataset comprises approximately 15 hours of driving data collected in Boston and Singapore. Driving routes were carefully chosen to capture challenging scenarios, and the dataset aims for a diverse set of locations, times, and weather conditions. To balance the class frequency distribution, nuScenes includes more scenes with rare classes (such as bicycles). Using these criteria, data was manually selected to yield 1000 scenes of 20 s duration each, carefully annotated by human experts.
Annotations were provided by Scale, the dataset's annotation partner. All objects in the nuScenes dataset come with a semantic category, a 3D bounding box, and attributes for each frame in which they occur. Ground-truth labels are provided for 23 object classes.
Engine: TensorRT
Test Hardware:
Ethical considerations and guidelines. NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.
Please report security vulnerabilities or NVIDIA AI Concerns here.