Le Chat is an AI assistant and conversational chatbot developed by Mistral AI, a French artificial intelligence company headquartered in Paris. First launched as a beta product on February 26, 2024, Le Chat (French for "The Cat") provides a consumer-facing interface for interacting with Mistral's family of large language models, including Mistral Large, Mistral Medium, Mistral Small, and the multimodal Pixtral models. The platform competes directly with ChatGPT by OpenAI, Claude by Anthropic, and Gemini by Google DeepMind.
Le Chat gained significant attention in February 2025 when it relaunched with a major overhaul, mobile apps for iOS and Android, and the introduction of Flash Answers powered by Cerebras hardware, which delivered speeds of over 1,100 tokens per second. The mobile app surpassed 1 million downloads in just 13 days after launch.
Mistral AI launched Le Chat on February 26, 2024, alongside the release of the Mistral Large model and a partnership announcement with Microsoft. At launch, Le Chat was available as a free beta product accessible through the web at chat.mistral.ai. The initial version offered access to three models: Mistral Large, Mistral Small, and a prototype called Mistral Next that was designed to produce brief and concise responses.
The beta version had notable limitations. Le Chat could not access the internet, meaning it relied entirely on its training data and could return outdated information. The platform included a tunable system-level moderation mechanism with non-invasive content warnings. Mistral AI also announced Le Chat Enterprise at this time, designed for team productivity with self-deployment capabilities and fine-grained moderation.
On November 18, 2024, Mistral AI rolled out a substantial update that transformed Le Chat from a basic chat interface into a full-featured AI assistant. This update added several capabilities:

- Web search with cited sources
- Canvas, an interactive workspace for drafting and editing documents and code
- Document and image understanding powered by the Pixtral Large multimodal model
- Image generation powered by Black Forest Labs' Flux model
This update coincided with the release of Pixtral Large, a 124-billion-parameter multimodal model with a 128,000-token context window capable of processing up to 30 high-resolution images per input. Most of these new capabilities were offered free of charge during the beta period.
On January 16, 2025, Mistral AI announced a multi-year partnership with Agence France-Presse (AFP), one of the world's leading news agencies. Through this agreement, Le Chat gained access to AFP's daily production of approximately 2,300 text stories in six languages (French, English, Spanish, Portuguese, German, and Arabic), along with AFP's archives dating back to 1983. The partnership was designed to provide Le Chat users with more reliable and verified information when answering questions that require current news context. Photos and videos were not included in the agreement.
On February 6, 2025, Mistral AI launched a completely redesigned version of Le Chat. This relaunch introduced several major new features and pricing tiers:

- Native mobile apps for iOS and Android
- Flash Answers, delivering responses at over 1,100 tokens per second on Cerebras hardware
- A sandboxed code interpreter for running Python within the chat
- A paid Pro tier at $14.99 per month alongside the existing free tier
The mobile app surpassed 1 million downloads within 13 days of launch. Within the first month, Le Chat attracted 4.2 million active users.
On May 7, 2025, Mistral AI introduced Le Chat Enterprise, a business-focused version of the AI assistant. Le Chat Enterprise was powered by the Mistral Medium 3 model and included enterprise search across knowledge bases, a no-code agent builder, data connectors for Google Drive, SharePoint, OneDrive, Gmail, and Google Calendar, and hybrid deployment options across self-hosted, private cloud, or Mistral-managed infrastructure. The product was made available through the Google Cloud Marketplace, with Azure AI and AWS Marketplace availability planned for later.
In late May 2025, Mistral AI integrated an agent creation feature directly into Le Chat, replacing the older Agent Builder. The public beta rolled out between May 26 and May 31, 2025, for all free and Pro users. Early testers reported 25 to 35 percent lower first-token latency compared to OpenAI's GPT Builder or Google's Gemini Gems.
On June 10, 2025, Mistral AI released Magistral, its first dedicated reasoning model. Magistral powers "Think mode" in Le Chat, enabling chain-of-thought reasoning for complex problems in mathematics, science, coding, and logic. Two variants were released: Magistral Small (24 billion parameters, Apache 2.0 licensed) and Magistral Medium (API-only). Magistral Medium achieved 73.6 percent accuracy on the AIME 2024 benchmark (American Invitational Mathematics Examination).
In July 2025, Mistral AI added several new features to Le Chat:

- Deep Research mode, an agentic multi-step research workflow
- Voice mode, powered by the Voxtral speech recognition model
- Projects, focused workspaces for organizing related conversations and files
- Advanced image editing through follow-up prompts
On September 2, 2025, Mistral AI introduced support for the Model Context Protocol (MCP) in Le Chat, along with a formal release of the Memories feature. The update brought over 20 pre-configured MCP connectors for enterprise tools including Databricks, Snowflake, GitHub, Atlassian, Asana, Outlook, Box, Stripe, and Zapier. Users could also create custom connectors by specifying their own MCP server configurations. The Memories feature allowed Le Chat to store and recall user preferences across sessions, with a reported retrieval accuracy rate of 86 percent. Memory data consumed tokens from the 128,000-token context window at runtime.
Le Chat serves as a general-purpose AI chatbot for answering questions, writing text, brainstorming, summarizing documents, and performing a wide range of language tasks. It automatically selects the appropriate underlying Mistral model based on the task at hand, whether reasoning, code generation, image analysis, or general question-and-answer.
Le Chat can search the public web and combine results with its pre-trained knowledge. Responses include inline citations linking to source material. The web search draws on multiple sources including web pages, journalism (notably from AFP), and social media. For quick factual questions, web search provides fast, up-to-date answers; for deeper inquiries, users can invoke the Deep Research mode.
Flash Answers is Le Chat's instant-response feature, powered by a partnership between Mistral AI and Cerebras Systems. Using Cerebras' Wafer Scale Engine 3 hardware combined with speculative decoding techniques, Flash Answers delivers responses at over 1,100 tokens per second for the Mistral Large 2 model (123 billion parameters). This makes Le Chat roughly 10 times faster than comparable models from OpenAI, Anthropic, and DeepSeek on text-based queries. When Flash Answers is active, a lightning bolt icon appears in the Le Chat interface.
Le Chat generates images using Black Forest Labs' Flux Ultra model, one of the leading text-to-image models available. Users can describe desired visuals in natural language and receive photorealistic images or stylized content. An advanced image editing feature, introduced in July 2025, allows users to modify generated images through follow-up prompts (for example, removing objects or changing backgrounds) while maintaining consistency in characters and details.
The Canvas feature provides a free-form interactive workspace alongside the chat interface. Users can create and iterate on documents, code, presentations, research mockups, essays, posters, and other content types. Canvas supports in-place editing without requiring full response regeneration, version tracking of drafts, and live previews of designs.
Le Chat includes a sandboxed Python code execution environment that allows users to run code, perform scientific analysis, create data visualizations, and execute simulations directly within the chat. This is useful for validating algorithms, exploring datasets, and prototyping solutions.
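The kind of task the interpreter handles can be illustrated with a short, self-contained snippet (an illustrative example, not output from Le Chat itself): summarizing a small dataset with only the standard library.

```python
# Illustrative of the analyses Le Chat's sandboxed interpreter can run:
# compute summary statistics for a small dataset using only the stdlib.
import statistics

latencies_ms = [12.1, 15.3, 11.8, 14.2, 13.7, 12.9, 16.4, 13.1]

summary = {
    "n": len(latencies_ms),
    "mean": round(statistics.mean(latencies_ms), 2),
    "median": round(statistics.median(latencies_ms), 2),
    "stdev": round(statistics.stdev(latencies_ms), 2),
}
print(summary)
```

In Le Chat, code like this runs inside the chat and its output (or a chart built from it) is returned inline, which is what makes the feature useful for quick dataset exploration and algorithm validation.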
Le Chat processes uploaded documents (PDFs, spreadsheets, logs) and images using Mistral's multimodal models and optical character recognition (OCR) capabilities. It can analyze content containing graphs, tables, diagrams, text, mathematical formulas, and equations, then provide summaries, answers, or structured extractions.
Introduced in July 2025, Deep Research mode transforms Le Chat into a coordinated research assistant. When given a complex question, the agent breaks it down into a multi-step research plan, autonomously searches credible web sources, evaluates findings, and synthesizes everything into a structured report with numbered citations. The resulting report appears directly in the conversation with a summary at the top and detailed analysis below. Deep Research is designed for use cases including market trend analysis, business strategy, scientific literature review, and personal planning.
Le Chat supports voice interaction through the Voxtral speech recognition model. Users can speak to Le Chat using natural language and receive responses, enabling hands-free use for brainstorming, quick answers, and meeting transcription. The voice mode offers low-latency recognition in multiple languages.
Le Chat allows users to create custom AI agents tailored to specific business processes. Agents can be connected to knowledge bases and equipped with specific tools such as web search, code execution, and image generation.
The Memories feature allows Le Chat to learn and remember user preferences, names, and recurring topics across conversations. Memories are opt-in and available on all tiers, including the free plan. Stored memories are injected into the prompt at runtime, consuming part of the 128,000-token context window. Mistral reports an 86 percent retrieval accuracy rate for stored memories.
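The budget implication of runtime memory injection can be sketched as simple arithmetic (token counts below are hypothetical, chosen only to illustrate the trade-off):

```python
# Sketch of the context-budget arithmetic described above: stored
# memories are injected into the prompt at runtime, so they consume
# part of the 128,000-token window. All token counts are hypothetical.
CONTEXT_WINDOW = 128_000

def remaining_budget(memory_tokens: int, system_tokens: int, user_tokens: int) -> int:
    """Tokens left for the model's response after the fixed prompt parts."""
    used = memory_tokens + system_tokens + user_tokens
    if used >= CONTEXT_WINDOW:
        raise ValueError("prompt exceeds the context window")
    return CONTEXT_WINDOW - used

# e.g. 1,500 tokens of memories + 400 system prompt + 2,100 user input
print(remaining_budget(1_500, 400, 2_100))  # 124000
```

The practical consequence is that heavy memory users trade a small slice of the context window for cross-session personalization.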
Projects let users organize related conversations, uploaded files, and ideas into focused workspaces. Each project has its own default library and tool settings, making it easier to manage multi-topic research or team workflows.
Le Chat uses different Mistral AI models depending on the task. The platform automatically routes queries to the most appropriate model, though users can also influence model selection in some cases.
| Model | Parameters | Context Window | Description |
|---|---|---|---|
| Mistral Large 3 | Not disclosed | 128,000 tokens | Flagship general-purpose model for complex reasoning and generation |
| Mistral Medium 3 | Not disclosed | 128,000 tokens | Balanced model powering Le Chat Enterprise |
| Mistral Small 3.2 | Not disclosed | 128,000 tokens | Efficient model for fast, cost-effective responses |
| Pixtral Large | 124 billion | 128,000 tokens | Multimodal model for text and image understanding |
| Magistral Medium | Not disclosed | Not disclosed | Reasoning model powering Think mode |
| Magistral Small | 24 billion | Not disclosed | Open-weight reasoning model (Apache 2.0) |
| Codestral | 22 billion | Not disclosed | Specialized model for code generation and completion (80+ languages) |
| Voxtral | Not disclosed | Not disclosed | Speech recognition model for voice mode |
Le Chat offers multiple pricing tiers designed for individual users, students, teams, and large organizations.
| Tier | Price | Key Features |
|---|---|---|
| Free | $0 | Core features with daily limits (reset every 3 hours), access to Mistral Medium and Small models, code interpreter, document uploads, web search, AFP news integration |
| Pro | $14.99/month | Up to 6x message limits, 5x web searches, 40x image generation, 20x document processing; access to all models including Mistral Large; 150 Flash Answers/day; No Telemetry Mode |
| Student | $7.04/month | Same as Pro with 53% discount (requires student verification) |
| Team | $24.99/user/month ($19.99 annually) | 200 messages per user, 30 GB storage, Google Drive and SharePoint connectors, role-based access control, centralized billing, domain verification |
| Enterprise | Custom pricing | All Team features plus SAML SSO, audit logs, self-hosted or private cloud deployment, custom usage limits, premium support and SLAs, EU data residency options |
Free users do not have access to Mistral Large, Flash Answers, or No Telemetry Mode. Le Chat subscription plans are separate from Mistral's API pricing, which charges per token starting at $0.02 per million input tokens.
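For a sense of scale, the quoted entry-level rate works out as follows (input tokens only; output pricing varies by model and is not covered here):

```python
# Back-of-the-envelope API cost at the quoted floor price of
# $0.02 per million input tokens. Output-token pricing is separate
# and model-dependent, so it is deliberately omitted.
PRICE_PER_MILLION_INPUT = 0.02  # USD, entry-level rate from the text

def input_cost_usd(tokens: int) -> float:
    return tokens / 1_000_000 * PRICE_PER_MILLION_INPUT

# 50 million input tokens in a month:
print(f"${input_cost_usd(50_000_000):.2f}")  # $1.00
```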
Le Chat Enterprise is the business-focused tier of Le Chat, launched on May 7, 2025. It is designed for large organizations that require enhanced security, compliance, and customization.
As a European-headquartered company, Mistral AI positions Le Chat Enterprise as a privacy-first alternative to US-based competitors. The platform's infrastructure resides in ISO 27001-certified European data centers. Key privacy safeguards include opt-in logging with user prompts auto-deleted after 30 days, on-premises deployment behind firewalls for enterprise customers, and role-based access control aligned with EU banking and health-sector norms. This European foundation has made Le Chat attractive to organizations subject to strict data residency and regulatory requirements.
Le Chat Enterprise is available through the Google Cloud Marketplace, with availability on Azure AI Marketplace and AWS Marketplace planned. Enterprise customers receive hands-on support from Mistral's AI engineering team across deployment, customization, safety, and value delivery.
The speed advantage of Le Chat's Flash Answers feature comes from a partnership with Cerebras Systems, an AI hardware company. Cerebras powers the Flash Answers feature using its Wafer Scale Engine 3 (WSE-3), which uses an SRAM-based inference architecture rather than the traditional HBM (High Bandwidth Memory) approach used by GPU-based systems. Combined with speculative decoding techniques developed collaboratively between Cerebras and Mistral researchers, this architecture achieves over 1,100 tokens per second on the 123-billion-parameter Mistral Large 2 model.
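The intuition behind speculative decoding can be shown with a toy simulation (this is an illustration of the general technique, not Cerebras or Mistral's actual implementation): a cheap draft model proposes several tokens ahead, and the large target model verifies the whole batch in one pass, keeping the longest agreeing prefix plus one corrected token.

```python
# Toy speculative decoding. The "models" are stand-in functions over a
# fixed string; in a real system both would be neural networks, and one
# target forward pass would score all k draft positions at once.
TARGET_TEXT = "the cat sat on the mat"

def target_next(prefix: list[str]) -> str:
    # Stand-in for the large target model: emits the true next word.
    return TARGET_TEXT.split()[len(prefix)]

def draft_propose(prefix: list[str], k: int) -> list[str]:
    # Stand-in for the small draft model: usually right, but it
    # guesses "dog" wherever the target would say "mat".
    words = TARGET_TEXT.split()
    return ["dog" if words[i] == "mat" else words[i]
            for i in range(len(prefix), min(len(prefix) + k, len(words)))]

def speculative_decode(k: int = 4) -> tuple[list[str], int]:
    prefix: list[str] = []
    target_calls = 0
    total = len(TARGET_TEXT.split())
    while len(prefix) < total:
        proposal = draft_propose(prefix, k)
        target_calls += 1  # one verification pass covers all k drafts
        for tok in proposal:
            if tok == target_next(prefix):
                prefix.append(tok)                   # draft accepted
            else:
                prefix.append(target_next(prefix))   # corrected token
                break
        else:
            if len(prefix) < total:
                prefix.append(target_next(prefix))   # free bonus token
    return prefix, target_calls

tokens, calls = speculative_decode()
print(" ".join(tokens), calls)  # the cat sat on the mat 2
```

Here the six-token sentence needs only two target-model passes instead of six, which is the mechanism that, at scale and on fast SRAM-based hardware, yields the throughput figures quoted above.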
In practical demonstrations, Le Chat generated a Python-based snake game in 1.3 seconds, compared to 19 seconds for Claude and 46 seconds for ChatGPT.
Le Chat supports the Model Context Protocol (MCP), an open standard for connecting AI models to external tools and data sources. The platform includes a directory of over 20 pre-configured MCP connectors and also allows custom MCP server configurations for internal tools or niche services. Organization administrators configure MCP connectors before individual users can access them.
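A custom connector definition might look like the following sketch. The exact fields Le Chat expects are not specified above; the `command`/`args`/`env` shape shown here follows the convention used by common MCP clients, and the server name, module, and URL are hypothetical.

```python
# Hypothetical custom MCP server configuration (assumed shape; the
# server name "internal-wiki", the module, and the URL are invented
# for illustration).
import json

custom_connector = {
    "mcpServers": {
        "internal-wiki": {
            "command": "python",
            "args": ["-m", "wiki_mcp_server"],
            "env": {"WIKI_BASE_URL": "https://wiki.example.com"},
        }
    }
}

config_json = json.dumps(custom_connector, indent=2)
print(config_json)
```

Under this model, an administrator registers the server once, after which the tools it exposes become available to Le Chat users in the organization.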
Le Chat competes in the rapidly growing AI assistant market alongside several major products.
| Product | Developer | Key Differentiator |
|---|---|---|
| Le Chat | Mistral AI | Speed (Flash Answers), EU data sovereignty, open-weight models |
| ChatGPT | OpenAI | Largest user base, extensive plugin ecosystem, GPT-4 and GPT-4o models |
| Claude | Anthropic | Focus on safety and helpfulness, long context windows |
| Gemini | Google DeepMind | Google ecosystem integration, multimodal from the ground up |
| DeepSeek | DeepSeek | Cost-efficient open-source models, strong reasoning |
| Copilot | Microsoft | Integration with Microsoft 365 suite |
| Grok | xAI | Real-time access to X (Twitter) data |
Mistral AI differentiates Le Chat primarily through speed (claiming to be the world's fastest AI assistant), European data sovereignty and GDPR compliance, and an aggressive free tier that includes many features competitors reserve for paid plans. The company's open-weight model philosophy also appeals to developers and organizations seeking transparency and the ability to self-host.
Le Chat experienced rapid growth following its February 2025 relaunch:

- The mobile app surpassed 1 million downloads within 13 days of launch
- The platform reached 4.2 million active users within the first month
- The app briefly became the top free app on the French iOS App Store
Le Chat is available on the following platforms:

- Web, at chat.mistral.ai
- iOS, via the App Store
- Android, via Google Play
The free tier is accessible without a credit card; the Pro, Team, and Enterprise tiers require paid subscriptions.