
Mixture of Experts (MoE) reasoning LLM (text-only) designed to fit within an 80 GB GPU.

Rapidly identify and mitigate container security vulnerabilities with generative AI.

An edge computing AI model that accepts text, audio, and image input, ideal for resource-constrained environments.

Cutting-edge open multimodal model excelling in high-quality reasoning from images.

Cutting-edge text generation model for text understanding, transformation, and code generation.

A lightweight, multilingual, advanced SLM text model for edge computing and resource-constrained applications.

Advanced small language generative AI model for edge applications.

Powerful mid-size code model with a 32K context length, excelling at coding in multiple languages.

Estimate the gaze angles of a person in a video and redirect the gaze to make it frontal.