A general-purpose multimodal, multilingual mixture-of-experts (MoE) model with 128 experts and 17B active parameters.
A multimodal, multilingual mixture-of-experts (MoE) model with 16 experts and 17B active parameters.
Cutting-edge open multimodal model excelling in high-quality reasoning from images.
Cutting-edge vision-language model excelling in retrieving text and metadata from images.
Automate research and generate blogs with AI agents using LlamaIndex and the Llama3.3-70B NIM LLM.
Multimodal vision-language model that understands text, images, and video and generates informative responses.
Advanced AI model that detects faces and identifies deepfake images.
Ingest massive volumes of live or archived videos and extract insights for summarization and interactive Q&A
Create intelligent virtual assistants for customer service across every industry
State-of-the-art small language model delivering superior accuracy for chatbots, virtual assistants, and content generation.
Cutting-edge vision-language model excelling in high-quality reasoning from images.
Cutting-edge vision-language model excelling in high-quality reasoning from images.
Robust image classification model for detecting and managing AI-generated content.
Cutting-edge open multimodal model excelling in high-quality reasoning from images.
State-of-the-art small language model delivering superior accuracy for chatbots, virtual assistants, and content generation.
Grounding DINO is an open-vocabulary, zero-shot object detection model.
Expressive and engaging English voices for Q&A assistants, brand ambassadors, and service robots
Vision foundation model capable of performing diverse computer vision and vision-language tasks.
Visual ChangeNet detects pixel-level changes between two images and outputs a semantic change segmentation mask.
EfficientDet-based object detection network to detect 100 specific retail objects from an input video.
Cutting-edge open multimodal model excelling in high-quality reasoning from images.