Mistral Small
By Mistral AI x OUTSCALE

Develop your low-latency conversational use cases, including those requiring RAG (Retrieval-Augmented Generation), in a 100% sovereign and secure environment thanks to the Mistral AI x OUTSCALE partnership.

Mistral Small Features

Mistral Small (24.09) is Mistral AI’s most efficient large language model (LLM). It is suited to any language-based task that involves high data volumes and requires low latency.

Classification and Summarization

Identify, classify, and summarize essential information from large datasets.
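
As an illustration, here is a minimal sketch of asking a deployed Mistral Small instance for a summary through an OpenAI-compatible chat completions API. The endpoint URL, API key, and model identifier below are placeholders, not values from this offer: replace them with the details of your own OUTSCALE deployment.

```python
# Minimal summarization sketch against an OpenAI-compatible chat completions API.
# API_URL, API_KEY, and the model name are hypothetical placeholders.
import requests

API_URL = "https://your-instance.example.com/v1/chat/completions"  # placeholder
API_KEY = "YOUR_API_KEY"  # placeholder

def summarize(text: str) -> str:
    """Ask the model for a three-sentence summary of the given text."""
    payload = {
        "model": "mistral-small",  # placeholder model identifier
        "messages": [
            {"role": "system", "content": "Summarize the user's text in three sentences."},
            {"role": "user", "content": text},
        ],
        "temperature": 0.2,
    }
    headers = {"Authorization": f"Bearer {API_KEY}"}
    response = requests.post(API_URL, json=payload, headers=headers, timeout=60)
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(summarize("Paste the report or ticket you want condensed here."))
```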

Automation of Complex Workflows

Streamline your business processes by automating the most complex workflows.

Code Generation

Boost your efficiency by automating error detection and correction.

Retrieval-Augmented Generation

Retrieve relevant documents and generate contextual explanations or summaries.
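
The sketch below shows the RAG pattern in its simplest form: pick the most relevant document, inject it into the prompt as context, and let the model answer from it. The retrieval step here is a toy keyword-overlap score, and the endpoint, API key, and model name are placeholders; a production setup would typically use embeddings and a vector store instead.

```python
# Minimal RAG sketch: naive retrieval + context-grounded generation.
# API_URL, API_KEY, and the model name are hypothetical placeholders.
import requests

API_URL = "https://your-instance.example.com/v1/chat/completions"  # placeholder
API_KEY = "YOUR_API_KEY"  # placeholder

DOCUMENTS = [
    "SecNumCloud 3.2 qualification covers the hosting of sensitive data in France.",
    "Dedicated LLM instances are launched from the OUTSCALE Marketplace.",
]

def retrieve(question: str) -> str:
    """Return the document sharing the most words with the question (toy retriever)."""
    q_words = set(question.lower().split())
    return max(DOCUMENTS, key=lambda doc: len(q_words & set(doc.lower().split())))

def answer(question: str) -> str:
    """Answer the question using only the retrieved document as context."""
    context = retrieve(question)
    payload = {
        "model": "mistral-small",  # placeholder model identifier
        "messages": [
            {"role": "system", "content": f"Answer using only this context:\n{context}"},
            {"role": "user", "content": question},
        ],
    }
    headers = {"Authorization": f"Bearer {API_KEY}"}
    response = requests.post(API_URL, json=payload, headers=headers, timeout=60)
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(answer("Where do I launch a dedicated LLM instance?"))
```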

Ready to develop your conversational use cases?

Take advantage of the Mistral AI x OUTSCALE offer for your Generative AI needs in a trusted environment.

High-Performance Models

Leverage Mistral AI models that offer one of the best performance-cost ratios on the market.

Sovereign Compliance and Enhanced Security

OUTSCALE’s Cloud infrastructure is ISO, HDS, and TISAX certified and SecNumCloud 3.2 qualified, providing the highest level of protection for sensitive data.

Rapid Deployment

Launch your dedicated LLM instance in just ten minutes via the OUTSCALE Marketplace (versus the several weeks a self-managed open-source deployment can take).

Contact us now

Discover how our Cloud solutions can transform your projects.