
Vision-language model that excels at understanding the physical world through structured reasoning over images and videos.

Generates physics-aware video world states for physical AI development, conditioned on text prompts and multiple spatial control inputs derived from real-world data or simulation.

Generates large volumes of synthetic motion trajectories for robot manipulation from just a few human demonstrations.

Generates future frames of a physics-aware world state from just an image or short video prompt for physical AI development.

Simulates, tests, and optimizes physical AI and robotic fleets at scale in industrial digital twins before real-world deployment.

ProteinMPNN is a deep learning model that predicts amino acid sequences for given protein backbones.
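
To make the ProteinMPNN entry concrete, below is a minimal sketch of driving its public command-line script from Python. The repository path, input PDB file, chain selection, and sampling settings are illustrative assumptions; the flag names follow the example scripts in the open-source ProteinMPNN repository and should be verified against the version you install.

```python
# Minimal sketch: run ProteinMPNN sequence design on one backbone via its CLI.
# Assumes the open-source ProteinMPNN repository is cloned locally and its
# dependencies (e.g., PyTorch) are installed. Paths and settings are hypothetical.
import subprocess
from pathlib import Path

PROTEIN_MPNN_DIR = Path("ProteinMPNN")        # local clone of the repository (assumed path)
pdb_path = Path("inputs/my_backbone.pdb")     # backbone structure to design sequences for (hypothetical file)
out_dir = Path("outputs/mpnn_designs")
out_dir.mkdir(parents=True, exist_ok=True)

subprocess.run(
    [
        "python", str(PROTEIN_MPNN_DIR / "protein_mpnn_run.py"),
        "--pdb_path", str(pdb_path),
        "--pdb_path_chains", "A",         # design chain A only (example choice)
        "--out_folder", str(out_dir),
        "--num_seq_per_target", "8",      # sequences sampled per backbone
        "--sampling_temp", "0.1",         # lower temperature -> more conservative designs
        "--seed", "37",
        "--batch_size", "1",
    ],
    check=True,
)
# Designed sequences are typically written as FASTA files under out_dir/seqs/.
```

In practice the sampling temperature and number of sequences per target are the main knobs: lower temperatures bias toward higher-confidence residues, while more samples per backbone give a broader set of candidate sequences to filter downstream.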