
Test Multi-Robot Fleets for Industrial Automation
Simulate, test, and optimize physical AI and robotic fleets at scale in industrial digital twins before real-world deployment.
Use Case Description
Physical AI is poised to transform manufacturing, supply chain, and logistics, bringing unprecedented levels of industrial automation, intelligence, and autonomy to the world’s factories, warehouses, and industrial facilities.
In the smart factories and warehouses of today and of the future, humans work alongside fleets of robots, including AGVs/AMRs, humanoid robots, intelligent cameras, and visual AI agents. To ensure efficient operation in the real world, enterprises will rely on digital twins of their facilities to simulate the interactions and performance of these different robot types, verifying that they can work together seamlessly to accomplish their tasks.
The Mega NVIDIA Omniverse Blueprint, powered by NVIDIA Omniverse™, OpenUSD, and Isaac™ ROS, enables enterprises to combine real-time sensor simulation and synthetic data generation to simulate these complex human-robot interactions and verify the performance of physical AI systems in industrial digital twins before real-world deployment.
Experience Walkthrough
When starting the experience, users are presented with a sample warehouse populated with racks, boxes, and autonomous mobile robots (AMRs) equipped with 3D LiDAR and RGB camera sensors.
To set up the simulation, users select a location inside the warehouse and configure two AMRs. For each AMR, users can do the following (a hypothetical configuration sketch appears after this list):
- Select either a “smart” robot that can detect and avoid obstacles on its path or a “simple” robot that can only follow preprogrammed paths.
- Select one of four camera views (front, left, right, or back) and the type of render to generate.
- Interactively create a path for the AMR to navigate.
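The configuration choices above map naturally onto a small data structure. Below is a minimal Python sketch of that idea; the class and field names (AMRConfig, robot_type, camera_view, render_type, waypoints) are illustrative assumptions, not the blueprint’s actual schema or API.

```python
# Hypothetical illustration of the two-AMR setup described above.
# Field names and values are assumptions for clarity, not the
# blueprint's actual configuration interface.
from dataclasses import dataclass, field

@dataclass
class AMRConfig:
    robot_type: str   # "smart" (detects and avoids obstacles) or "simple" (fixed path)
    camera_view: str  # one of: "front", "left", "right", "back"
    render_type: str  # the render to generate, e.g. an RGB image or another AOV
    waypoints: list[tuple[float, float]] = field(default_factory=list)  # interactively drawn path

amr_1 = AMRConfig(robot_type="smart", camera_view="front", render_type="rgb",
                  waypoints=[(2.0, 5.0), (10.0, 5.0), (10.0, 12.0)])
amr_2 = AMRConfig(robot_type="simple", camera_view="back", render_type="rgb",
                  waypoints=[(4.0, 1.0), (4.0, 9.0)])
```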
Once the AMRs are configured, the user clicks the “Run Simulation” button, and the AMR brains, World Simulator, and Sensor RTX™ service are deployed. As shown in the architectural diagram (and sketched in code after this list):
- The AMR brains control the AMRs and send control signals to actuate them in the World Simulator.
- The World Simulator runs the physics-based simulation of the AMRs’ movement.
- For each AMR, multiple sensors are simulated using NVIDIA Sensor RTX APIs: one 3D LiDAR, one IMU, and one RGB camera with multiple AOVs, in addition to a top-view camera.
- Sensor data is streamed back to the AMR brains, which perceive the surroundings and determine the next set of control signals.
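Taken together, these bullets describe a closed perception-action loop. The plain-Python sketch below shows the shape of that loop under stated assumptions; every class and method name (plan_next_action, step, render, perceive) is a placeholder invented for illustration, not part of the Omniverse or Isaac APIs.

```python
# Schematic of the closed loop described above: the AMR brains issue
# control signals, the World Simulator advances physics, Sensor RTX
# renders sensor data, and that data streams back to the brains.
# All class and method names here are illustrative placeholders.

def simulation_loop(brains, world_sim, sensor_rtx, num_steps=1000):
    for _ in range(num_steps):
        # 1. Each AMR brain decides its next control signal
        #    (e.g., wheel velocities) from its latest perception.
        controls = {amr_id: brain.plan_next_action()
                    for amr_id, brain in brains.items()}

        # 2. The World Simulator advances the physics-based simulation
        #    with those controls applied to the AMRs.
        world_state = world_sim.step(controls)

        # 3. Sensor RTX simulates each AMR's sensors against the new
        #    world state: 3D LiDAR, IMU, and RGB camera AOVs, plus the
        #    top-view camera.
        sensor_data = sensor_rtx.render(world_state)

        # 4. Sensor data streams back to the brains, closing the loop.
        for amr_id, brain in brains.items():
            brain.perceive(sensor_data[amr_id])
```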
Note that simulations typically take 15–20 minutes to complete; during periods of high demand, results may take longer to generate. Users receive a simulation ID (valid for 14 days) that allows them to return to the experience and view the results. While waiting for the simulation to complete, users receive periodic progress updates and an introductory video about the reference architecture. For more in-depth information about the blueprint, read the technical blog.
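Because runs complete asynchronously, retrieving results amounts to polling by simulation ID. Here is a hedged sketch of that pattern; the endpoint URL and response fields are assumptions for illustration only, not the experience’s actual API.

```python
# Hypothetical polling pattern for an asynchronous simulation run.
# The endpoint URL and response fields are placeholders; consult the
# blueprint documentation for the real interface.
import time
import requests

def wait_for_results(simulation_id: str, poll_seconds: int = 60):
    url = f"https://example.com/v1/simulations/{simulation_id}"  # placeholder URL
    while True:
        status = requests.get(url, timeout=30).json()
        if status.get("state") == "completed":
            return status  # e.g., links to the rendered sensor output
        print(f"Simulation {simulation_id}: {status.get('state')} "
              f"({status.get('progress', '?')}%)")
        time.sleep(poll_seconds)
```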
Architecture Diagram
What’s Included in the Blueprint
NVIDIA Blueprints are comprehensive reference workflows designed to streamline AI application development across industries and accelerate deployment to production. Built with NVIDIA AI and Omniverse libraries, SDKs, and microservices, they provide a foundation for custom AI solutions. Each blueprint includes reference code for constructing workflows, tools and documentation for deployment and customization, and a reference architecture outlining API definitions and microservice interoperability. By enabling rapid prototyping and speeding time to deployment, these blueprints empower enterprises to operationalize AI-driven solutions like AI agents, digital twins, synthetic data generation, and more.
Terms of Use
GOVERNING TERMS: The trial service is governed by the NVIDIA API Trial Terms of Service; use of the model is governed by the NVIDIA AI Foundation Models Community License Agreement.