FLUX.1 Kontext is a multimodal model that enables in-context image generation and editing.
Multimodal model that classifies the safety of input prompts as well as output responses.
Multimodal question-answering retrieval model that represents user queries as text and documents as images.
Efficient multimodal model excelling at multilingual tasks, image understanding, and fast responses.
Powerful, multimodal language model designed for enterprise applications, including software development, data analysis, and reasoning.
A general-purpose multimodal, multilingual mixture-of-experts (MoE) model with 128 experts and 17B parameters.
A multimodal, multilingual mixture-of-experts (MoE) model with 16 experts and 17B parameters.
Build AI agents that continuously process and synthesize multimodal enterprise data, using reasoning, planning, and refinement to generate comprehensive reports.
Cutting-edge open multimodal model excelling in high-quality reasoning from images.
Cutting-edge open multimodal model excelling in high-quality reasoning from image and audio inputs.
Continuously extract, embed, and index multimodal data for fast, accurate semantic search. Built on world-class NeMo Retriever models, the RAG blueprint connects AI applications to multimodal enterprise data wherever it resides.
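The entry above describes an extract, embed, index, and search flow. Below is a minimal, illustrative sketch of that flow, assuming a toy hash-based embedding function as a stand-in; the function, document strings, and variable names are hypothetical and do not reflect the NeMo Retriever API or the blueprint's actual components.

```python
import numpy as np

def embed(texts):
    # Placeholder embedding: hash tokens into a fixed-size vector and normalize.
    # In the blueprint this step would be handled by a NeMo Retriever embedding model.
    dim = 256
    vecs = np.zeros((len(texts), dim))
    for i, text in enumerate(texts):
        for tok in text.lower().split():
            vecs[i, hash(tok) % dim] += 1.0
    norms = np.linalg.norm(vecs, axis=1, keepdims=True)
    return vecs / np.maximum(norms, 1e-9)

# Index: embed extracted document chunks once and keep the matrix for lookup.
docs = ["quarterly revenue grew 12 percent",
        "the chart shows GPU utilization over time"]
index = embed(docs)

# Search: embed the query and rank documents by cosine similarity.
query_vec = embed(["how much did revenue grow"])[0]
scores = index @ query_vec
print(docs[int(np.argmax(scores))])
```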
Cutting-edge open multimodal model excelling in high-quality reasoning from images.
Vision foundation model capable of performing diverse computer vision and vision-language tasks.