
A context‑aware safety model that applies reasoning to enforce domain‑specific policies.

Leading multilingual content safety model for enhancing the safety and moderation capabilities of LLMs.

End-to-end autonomous driving stack integrating perception, prediction, and planning with sparse scene representations for efficiency and safety.

Improves the safety, security, and privacy of AI systems across the build, deployment, and runtime stages.

Industry-leading jailbreak classification model for protection against adversarial prompt attempts.

Multi-modal model that classifies the safety of input prompts as well as output responses.

Leading content safety model for enhancing the safety and moderation capabilities of LLMs.

Topic control model that keeps conversations focused on approved topics and steers away from inappropriate content.

Advanced AI model that detects faces and identifies deepfake images.

Robust image classification model for detecting and managing AI-generated content.