A general-purpose multimodal, multilingual 128-expert MoE model with 17B active parameters.
A multimodal, multilingual 16-expert MoE model with 17B active parameters.
Natural and expressive voices in multiple languages. For voice agents and brand ambassadors.
A lightweight, advanced multilingual text SLM for edge computing and resource-constrained applications.
Lightweight multilingual LLM powering AI applications in latency-bound, memory- and compute-constrained environments.
Robust Speech Recognition via Large-Scale Weak Supervision.
Multilingual model supporting speech-to-text recognition and translation.
Multilingual model supporting speech-to-text recognition and translation.
Latency-optimized language model excelling in code, math, general knowledge, and instruction-following.
Multilingual LLM trained on NVIDIA DGX Cloud, designed for mission-critical use cases in regulated industries including financial services, government, and heavy industry.
Multilingual LLM with an emphasis on European languages, supporting regulated use cases including financial services, government, and heavy industry.
Multilingual and cross-lingual text question-answering retrieval with long context support and optimized data storage efficiency.
Fine-tuned reranking model for multilingual, cross-lingual text question-answering retrieval, with long context support.
Lightweight multilingual LLM powering AI applications in latency-bound, memory- and compute-constrained environments.
Most advanced language model for reasoning, code, and multilingual tasks; runs on a single GPU.
Multilingual text reranking model.
Multilingual text question-answering retrieval, transforming textual information into dense vector representations.
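The embedding and reranking entries above all describe the same two-stage retrieval pattern: documents and queries are mapped to dense vectors, candidates are fetched by cosine similarity, and a reranker re-scores the top hits against the query. The sketch below illustrates only that mechanic; the embed function and the random vectors are placeholders for whichever embedding model is used, not the API of any model listed here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder for a real embedding model (for example, one of the text
# question-answering retrieval models listed above); it returns random
# unit vectors so the sketch stays self-contained and runnable.
def embed(texts, dim=1024):
    vecs = rng.normal(size=(len(texts), dim))
    return vecs / np.linalg.norm(vecs, axis=1, keepdims=True)

corpus = [
    "Document about GPU inference.",
    "Document about speech recognition.",
    "Document about multilingual retrieval.",
]
query = "How does cross-lingual retrieval work?"

# Dense-vector retrieval: cosine similarity between the query embedding
# and each document embedding (all vectors are L2-normalized above).
doc_vecs = embed(corpus)          # shape (3, 1024)
query_vec = embed([query])[0]     # shape (1024,)
scores = doc_vecs @ query_vec     # cosine similarities, shape (3,)

# Keep the highest-scoring candidates; a reranking model (such as the
# rerankers listed above) would then re-score query/passage pairs jointly.
top_k = np.argsort(-scores)[:2]
for i in top_k:
    print(f"{scores[i]:+.3f}  {corpus[i]}")
```

In practice the first stage is cheap enough to run over a whole corpus, while the reranker is applied only to the short top-k list it returns.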