
Multi-modal model that classifies the safety of input prompts as well as output responses.
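Such safety models are commonly queried like any other chat model and return a short verdict. The sketch below is a hypothetical illustration only: the endpoint, model identifier, and verdict format are assumptions, not the documented contract of any specific model.

```python
# Hypothetical sketch: checking a prompt/response pair against a safety
# classification model through an OpenAI-compatible endpoint.
# Base URL, model name, and output format are assumptions.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-used")

completion = client.chat.completions.create(
    model="content-safety-model",  # placeholder model identifier
    messages=[
        {"role": "user", "content": "How do I reset my router password?"},
        {"role": "assistant", "content": "Hold the reset button for 10 seconds."},
    ],
)

# Many safety models answer with a short label such as "safe" or "unsafe"
# plus violated category codes; parse according to the model card.
print(completion.choices[0].message.content)
```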

Use the multi-LLM-compatible NIM container to deploy a broad range of LLMs from Hugging Face.
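Once the container is running, the deployed model is typically reachable through an OpenAI-compatible API. A minimal sketch, assuming the service listens on localhost:8000 and the Hugging Face model ID is reused as the model name (both assumptions to adjust for your deployment):

```python
# Minimal sketch of querying an LLM served from a NIM container.
# Assumes an OpenAI-compatible endpoint on localhost:8000 and that the
# Hugging Face model ID doubles as the served model name.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-used")

response = client.chat.completions.create(
    model="meta-llama/Llama-3.1-8B-Instruct",  # example Hugging Face model ID
    messages=[{"role": "user", "content": "Summarize what a NIM container does."}],
    max_tokens=200,
)
print(response.choices[0].message.content)
```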

Multi-modal vision-language model that understands text and images and generates informative responses.

Develop an AI-powered weather analysis and forecasting application that visualizes multi-layered geospatial data.

Simulate, test, and optimize physical AI and robotic fleets at scale in industrial digital twins before real-world deployment.

Multi-lingual model supporting speech-to-text recognition and translation.

Transform PDFs into AI podcasts for engaging on-the-go audio content.

Multi-modal vision-language model that understands text, images, and video and generates informative responses.
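Vision-language models of this kind are often called through a chat API that mixes text and image parts in one message. The following is a hedged sketch; the endpoint, model name, and image-URL support are assumptions to verify against the specific model's documentation.

```python
# Hedged sketch: asking a vision-language model about an image via an
# OpenAI-compatible chat endpoint. Model name, base URL, and image-URL
# handling are assumptions.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-used")

response = client.chat.completions.create(
    model="vision-language-model",  # placeholder model identifier
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe what is happening in this image."},
            {"type": "image_url", "image_url": {"url": "https://example.com/street.jpg"}},
        ],
    }],
    max_tokens=256,
)
print(response.choices[0].message.content)
```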

Fine-tuned Llama 3.1 70B model for code generation, summarization, and multi-language tasks.