A text-only Mixture-of-Experts (MoE) reasoning LLM designed to fit on a single 80 GB GPU.
An edge-computing AI model that accepts text, audio, and image input, ideal for resource-constrained environments.
A cutting-edge open multimodal model excelling in high-quality reasoning over images.
A lightweight, advanced, multilingual small language model (SLM) for edge computing and resource-constrained applications.
Continuously extract, embed, and index multimodal data for fast, accurate semantic search. Built on world-class NeMo Retriever models, the RAG blueprint connects AI applications to multimodal enterprise data wherever it resides.
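The retrieval flow described above (embed documents, index them, rank candidates by similarity to an embedded query) can be sketched minimally as follows. The bag-of-words `embed` function and the `Index` class are illustrative stand-ins, not the NeMo Retriever or RAG blueprint APIs; a real pipeline would call an embedding model service instead.

```python
import math
from collections import Counter

def embed(text):
    """Toy unit-normalized bag-of-words embedding (stand-in for a
    real embedding model, which this sketch assumes)."""
    counts = Counter(text.lower().split())
    norm = math.sqrt(sum(v * v for v in counts.values()))
    return {w: v / norm for w, v in counts.items()}

def cosine(a, b):
    """Cosine similarity between two sparse unit vectors."""
    return sum(v * b.get(w, 0.0) for w, v in a.items())

class Index:
    """Minimal in-memory semantic index: store embeddings, search by similarity."""
    def __init__(self):
        self.entries = []

    def add(self, doc):
        # Embed once at ingest time so queries only embed the query string.
        self.entries.append((doc, embed(doc)))

    def search(self, query, k=1):
        q = embed(query)
        ranked = sorted(self.entries, key=lambda e: cosine(q, e[1]), reverse=True)
        return [doc for doc, _ in ranked[:k]]

idx = Index()
idx.add("invoice processing with generative AI")
idx.add("container security vulnerability scanning")
print(idx.search("container security vulnerability"))
# → ['container security vulnerability scanning']
```

A production system would replace the in-memory list with a vector database and refresh embeddings continuously as source data changes, which is the role the blueprint describes.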
A powerful mid-size code model with a 32K context length, excelling at coding in multiple programming languages.
Rapidly identify and mitigate container security vulnerabilities with generative AI.
An advanced small generative language model for edge applications.
Estimate a person's gaze angles in a video and redirect the gaze to be frontal.
A cutting-edge text generation model for text understanding, transformation, and code generation.