Le Chat is an AI assistant and conversational chatbot developed by Mistral AI, a French artificial intelligence company headquartered in Paris. First launched as a beta product on February 26, 2024, Le Chat (French for "The Cat") provides a consumer-facing interface for interacting with Mistral's family of large language models, including Mistral Large, Mistral Medium, Mistral Small, and the multimodal Pixtral models. The platform competes directly with ChatGPT by OpenAI, Claude by Anthropic, and Gemini by Google DeepMind.
Le Chat gained significant attention in February 2025 when it relaunched with a major overhaul, mobile apps for iOS and Android, and the introduction of Flash Answers powered by Cerebras hardware, which delivered speeds of over 1,100 tokens per second. The mobile app surpassed 1 million downloads in just 13 days after launch. Throughout 2025 and into 2026, Le Chat became a focal point of European technology policy, championed by French president Emmanuel Macron as a sovereign alternative to American chatbots and adopted by major French enterprises including BNP Paribas, AXA, Stellantis, CMA CGM, and Orange.
Le Chat is the flagship consumer product of Mistral AI, a startup founded in April 2023 in Paris by three French researchers: Arthur Mensch, Guillaume Lample, and Timothée Lacroix. The trio met as students at École Polytechnique. Mensch, who serves as CEO, previously worked as a research scientist at Google DeepMind, while Lample and Lacroix came from Meta's FAIR research lab, where Lample was a co-creator of the LLaMA language model unveiled in February 2023. Their decision to leave well-paid roles at American tech giants and build a European AI company became a national story in France within months of incorporation.
Mistral raised seed and Series A capital in 2023 from investors including Lightspeed Venture Partners, Andreessen Horowitz, General Catalyst, and Bpifrance. A €600 million Series B in June 2024 valued the company at €5.8 billion. The Series C round closed on September 9, 2025, with €1.7 billion at an €11.7 billion (roughly $13.7 billion) post-money valuation, led by Dutch lithography equipment maker ASML, which took an approximately 11 percent fully diluted stake for €1.3 billion. ASML's CFO Roger Dassen joined Mistral's strategic committee. Following the round, the three co-founders each became paper billionaires with reported net worth around $1.1 billion, making them the first French AI billionaires according to the Bloomberg Billionaires Index. Mistral has also announced a partnership with Microsoft for distribution of its models on Azure, and in March 2026 raised $830 million in debt financing to purchase 13,800 NVIDIA GB300 GPUs for a new datacenter in Bruyères-le-Châtel, south of Paris, scheduled to come online by mid-2026.
Mistral AI launched Le Chat on February 26, 2024, alongside the release of the Mistral Large model and a partnership announcement with Microsoft. At launch, Le Chat was available as a free beta product accessible through the web at chat.mistral.ai. The initial version offered access to three models: Mistral Large, Mistral Small, and a prototype called Mistral Next that was designed to produce brief and concise responses.
The beta version had notable limitations. Le Chat could not access the internet, meaning it relied entirely on its training data and could return outdated information. The platform included a tunable system-level moderation mechanism with non-invasive content warnings. Mistral AI also announced Le Chat Enterprise at this time, designed for team productivity with self-deployment capabilities and fine-grained moderation.
On November 18, 2024, Mistral AI rolled out a substantial update that transformed Le Chat from a basic chat interface into a full-featured AI assistant, adding web search with cited sources, the Canvas workspace for iterating on documents and code, image generation powered by Black Forest Labs' Flux models, and document and image analysis.
This update coincided with the release of Pixtral Large, a 124-billion-parameter multimodal model with a 128,000-token context window capable of processing up to 30 high-resolution images per input. Most of these new capabilities were offered free of charge during the beta period.
On January 16, 2025, Mistral AI announced a multi-year partnership with Agence France-Presse (AFP), one of the world's leading news agencies. Through this agreement, Le Chat gained access to AFP's daily production of approximately 2,300 text stories in six languages (French, English, Spanish, Portuguese, German, and Arabic), along with AFP's archives dating back to 1983. The partnership was designed to provide Le Chat users with more reliable and verified information when answering questions that require current news context. Photos and videos were not included in the agreement.
On February 6, 2025, Mistral AI launched a completely redesigned version of Le Chat alongside mobile apps for iOS and Android. The relaunch introduced Flash Answers powered by Cerebras hardware, a sandboxed code interpreter, and a paid Pro tier at $14.99 per month alongside the free plan.
The mobile app surpassed 1 million downloads within 13 days of launch. Within the first month, Le Chat attracted 4.2 million active users. Days after the relaunch, French president Emmanuel Macron urged the public on national television to "download Le Chat," name-checking the product during interviews timed to the AI Action Summit in Paris. French mobile carrier Free began bundling Le Chat Pro with selected mobile subscriptions, accelerating consumer pickup outside the early-adopter base.
On May 7, 2025, Mistral AI introduced Le Chat Enterprise, a business-focused version of the AI assistant. Le Chat Enterprise was powered by the Mistral Medium 3 model and included enterprise search across knowledge bases, a no-code agent builder, data connectors for Google Drive, SharePoint, OneDrive, Gmail, and Google Calendar, and hybrid deployment options spanning self-hosted, private cloud, and Mistral-managed infrastructure. The product was made available through the Google Cloud Marketplace, with Azure AI and AWS Marketplace availability planned for later. A dedicated page, Le Chat Enterprise, covers the enterprise tier in more depth.
In late May 2025, Mistral AI integrated an agent creation feature directly into Le Chat, replacing the older Agent Builder. The public beta rolled out between May 26 and May 31, 2025, for all free and Pro users. Early testers reported 25 to 35 percent lower first-token latency compared to OpenAI's GPT Builder or Google's Gemini Gems.
On June 10, 2025, Mistral AI released Magistral, its first dedicated reasoning model. Magistral powers "Think mode" in Le Chat, enabling chain-of-thought reasoning for complex problems in mathematics, science, coding, and logic. Two variants were released: Magistral Small (24 billion parameters, Apache 2.0 licensed) and Magistral Medium (API-only). Magistral Medium achieved 73.6 percent accuracy on the AIME 2024 math olympiad benchmark.
In July 2025, Mistral AI added several new features to Le Chat, including Deep Research mode, voice interaction powered by the Voxtral speech-recognition model, advanced image editing, and Projects for organizing related conversations and files.
On September 2, 2025, Mistral AI introduced support for the Model Context Protocol (MCP) in Le Chat, along with a formal release of the Memories feature. The update brought over 20 pre-configured MCP connectors for enterprise tools including Databricks, Snowflake, GitHub, Atlassian, Asana, Outlook, Box, Stripe, and Zapier. Users could also create custom connectors by specifying their own MCP server configurations. The Memories feature allowed Le Chat to store and recall user preferences across sessions, with a reported retrieval accuracy rate of 86 percent. Memory data consumed tokens from the 128,000-token context window at runtime.
The September 9, 2025 closing of Mistral's €1.7 billion Series C round, led by ASML at an €11.7 billion valuation, gave Le Chat the runway to scale infrastructure independently of American hyperscalers. Within weeks the French sovereign cloud provider OUTSCALE (a Dassault Systèmes subsidiary) began hosting Le Chat inside its SecNumCloud-certified perimeter, the highest French security qualification for sensitive workloads. This made Le Chat one of the few mainstream AI assistants available to French and EU public-sector customers governed by strict data-localization rules.
On December 2, 2025, Mistral released the Mistral 3 model family, headlined by Mistral Large 3, a sparse mixture-of-experts model with 41 billion active parameters out of 675 billion total parameters. Mistral Large 3 was positioned as Mistral's first true open-weight frontier model and rolled into Le Chat as the default for complex tasks within days of release. The same launch introduced Ministral 3 (a lineup of 3 billion, 7 billion, and 14 billion parameter models for edge use) in Base, Instruct, and Reasoning variants. On December 10, 2025, Mistral followed with Devstral 2 and Devstral Small 2 for software engineering tasks, plus Mistral Vibe, a command-line interface for AI-assisted development that pairs with Le Chat sessions.
On March 26, 2026, Mistral released Voxtral TTS, a text-to-speech model that completed the Voxtral voice stack. Voxtral TTS is a 4-billion-parameter model supporting nine languages (English, French, German, Spanish, Dutch, Portuguese, Italian, Hindi, and Arabic) with voice cloning from as little as three seconds of reference audio and a time-to-first-audio of about 90 milliseconds for a 500-character sample. Mistral published human-evaluation results showing Voxtral TTS scoring higher on naturalness than ElevenLabs Flash v2.5 at comparable latency. The model is available inside Le Chat for two-way voice conversations, in Mistral Studio for developers, and as open weights on Hugging Face under the CC BY-NC 4.0 license. API pricing starts at $0.016 per 1,000 characters.
Through the first quarter of 2026, Mistral began previewing a Workflow builder inside Le Chat that lets users chain steps, add branching logic, and reuse the resulting flows across projects. A consolidated mode selector unifies access to Think mode, Deep Research, libraries, agents, and tools behind a single pop-up. A new Connectors area centralizes MCP and native integrations so they can be shared across agents and workflows rather than reconfigured per use case.
Le Chat serves as a general-purpose AI chatbot for answering questions, writing text, brainstorming, summarizing documents, and performing a wide range of language tasks. It automatically selects the appropriate underlying Mistral model based on the task at hand, whether reasoning, code generation, image analysis, or general question-and-answer.
Le Chat can search the public web and combine results with its pre-trained knowledge. Responses include inline citations linking to source material. The web search draws on multiple sources including web pages, journalism (notably from AFP), and social media. For quick factual questions, web search provides fast, up-to-date answers; for deeper inquiries, users can invoke the Deep Research mode.
Flash Answers is Le Chat's instant-response feature, powered by a partnership between Mistral AI and Cerebras Systems. Using Cerebras' Wafer Scale Engine 3 hardware combined with speculative decoding techniques, Flash Answers delivers responses at over 1,100 tokens per second for the Mistral Large 2 model (123 billion parameters). This makes Le Chat roughly 10 times faster than comparable models from OpenAI, Anthropic, and DeepSeek on text-based queries. When Flash Answers is active, a lightning bolt icon appears in the Le Chat interface.
Le Chat generates images using Black Forest Labs' Flux Ultra model, one of the leading text-to-image models available. Users can describe desired visuals in natural language and receive photorealistic images or stylized content. An advanced image editing feature, introduced in July 2025, allows users to modify generated images through follow-up prompts (for example, removing objects or changing backgrounds) while maintaining consistency in characters and details.
The Canvas feature provides a free-form interactive workspace alongside the chat interface. Users can create and iterate on documents, code, presentations, research mockups, essays, posters, and other content types. Canvas supports in-place editing without requiring full response regeneration, version tracking of drafts, and live previews of designs.
Le Chat includes a sandboxed Python code execution environment that allows users to run code, perform scientific analysis, create data visualizations, and execute simulations directly within the chat. This is useful for validating algorithms, exploring datasets, and prototyping solutions.
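As an illustration, the snippet below is the kind of analysis a user might paste into the chat and ask the interpreter to run; the dataset and all numbers are invented for the example.

```python
import random
import statistics

# Simulate a small dataset of daily response latencies in ms (invented data)
random.seed(42)
latencies = [random.gauss(mu=120, sigma=15) for _ in range(1000)]

# Summary statistics of the kind the sandbox can compute inline
mean = statistics.mean(latencies)
stdev = statistics.stdev(latencies)
p95 = sorted(latencies)[int(0.95 * len(latencies))]

print(f"mean={mean:.1f}ms stdev={stdev:.1f}ms p95={p95:.1f}ms")
```

The sandbox can also render plots from such data; this sketch sticks to the standard library so it runs anywhere.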
Le Chat processes uploaded documents (PDFs, spreadsheets, logs) and images using Mistral's multimodal models and optical character recognition (OCR) capabilities. It can analyze content containing graphs, tables, diagrams, text, mathematical formulas, and equations, then provide summaries, answers, or structured extractions.
Introduced in July 2025, Deep Research mode transforms Le Chat into a coordinated research assistant. When given a complex question, the agent breaks it down into a multi-step research plan, autonomously searches credible web sources, evaluates findings, and synthesizes everything into a structured report with numbered citations. The resulting report appears directly in the conversation with a summary at the top and detailed analysis below. Deep Research is designed for use cases including market trend analysis, business strategy, scientific literature review, and personal planning.
Le Chat supports two-way voice interaction through Mistral's Voxtral stack. The Voxtral speech-recognition model handles voice input with low-latency multilingual recognition, while Voxtral TTS (released March 26, 2026) handles spoken output. Voxtral TTS supports nine languages, can clone a voice from a 3-second sample, and reaches first audio within roughly 90 milliseconds, enabling near-real-time spoken conversations. Voice mode is intended for hands-free brainstorming, quick answers on mobile, meeting transcription, and accessibility use cases.
Le Chat allows users to create custom AI agents tailored to specific business processes, connected to knowledge bases, or equipped with specific tools such as web search, the code interpreter, and image generation.
The Memories feature allows Le Chat to learn and remember user preferences, names, and recurring topics across conversations. Memories are opt-in and available on all tiers, including the free plan. Stored memories are injected into the prompt at runtime, consuming part of the 128,000-token context window. Mistral reports an 86 percent retrieval accuracy rate for stored memories.
Projects let users organize related conversations, uploaded files, and ideas into focused workspaces. Each project has its own default library and tool settings, making it easier to manage multi-topic research or team workflows.
A workflow builder, in preview through the first half of 2026, lets users compose multi-step pipelines that chain agents, tools, and conditional logic. Workflows defined in Mistral's developer playground become callable from inside Le Chat, blurring the line between consumer chat and the developer-facing Mistral platform. Reusable connectors, libraries, and agents can be slotted into multiple workflows without duplication.
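Mistral has not published a public schema for these workflows, but conceptually a workflow of this kind chains steps and branches on intermediate results. A toy sketch of that idea, with entirely hypothetical step names and no relation to the actual Le Chat API:

```python
from typing import Callable

# A toy workflow engine: each step maps a state dict to a new state dict.
# Purely illustrative -- not Mistral's actual workflow builder.
Step = Callable[[dict], dict]

def run_workflow(state: dict, steps: list[Step]) -> dict:
    for step in steps:
        state = step(state)
    return state

def summarize(state: dict) -> dict:
    # Stand-in for a model call that would summarize the input
    state["summary"] = state["text"][:20]
    return state

def branch_on_length(state: dict) -> dict:
    # Branching logic: route long inputs to a heavier research step
    state["route"] = "deep_research" if len(state["text"]) > 100 else "quick_answer"
    return state

result = run_workflow({"text": "short question"}, [summarize, branch_on_length])
print(result["route"])  # quick_answer for this input
```

The reuse described above corresponds to composing the same step functions into multiple `steps` lists.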
Le Chat uses different Mistral AI models depending on the task. The platform automatically routes queries to the most appropriate model, though users can also influence model selection in some cases.
| Model | Parameters | Context Window | Description |
|---|---|---|---|
| Mistral Large 3 | 41B active / 675B total (sparse MoE) | 128,000 tokens | Open-weight frontier model released December 2, 2025; default for complex tasks |
| Mistral Medium 3 | Not disclosed | 128,000 tokens | Balanced model powering Le Chat Enterprise |
| Mistral Small 3.2 | Not disclosed | 128,000 tokens | Efficient model for fast, cost-effective responses |
| Ministral 3 | 3B / 7B / 14B (Base, Instruct, Reasoning variants) | Not disclosed | Edge-optimized small models released December 2025 |
| Pixtral Large | 124 billion | 128,000 tokens | Multimodal model for text and image understanding |
| Magistral Medium | Not disclosed | Not disclosed | Reasoning model powering Think mode |
| Magistral Small | 24 billion | Not disclosed | Open-weight reasoning model (Apache 2.0) |
| Codestral | 22 billion | Not disclosed | Specialized model for code generation and completion (80+ languages) |
| Devstral 2 / Devstral Small 2 | Not disclosed | Not disclosed | Software-engineering models released December 10, 2025 |
| Voxtral | Not disclosed | n/a | Speech recognition model for voice input |
| Voxtral TTS | 4 billion | n/a | Text-to-speech model released March 26, 2026; nine languages, 3-second voice cloning |
Le Chat offers multiple pricing tiers designed for individual users, students, teams, and large organizations.
| Tier | Price | Key Features |
|---|---|---|
| Free | $0 | Core features with daily limits (reset every 3 hours), access to Mistral Medium and Small models, code interpreter, document uploads, web search, AFP news integration |
| Pro | $14.99/month | Up to 6x message limits, 5x web searches, 40x image generation, 20x document processing; access to all models including Mistral Large; 150 Flash Answers/day; No Telemetry Mode |
| Student | $7.04/month | Same as Pro with 53% discount (requires student verification) |
| Team | $24.99/user/month ($19.99 annually) | 200 messages per user, 30 GB storage, Google Drive and SharePoint connectors, role-based access control, centralized billing, domain verification |
| Enterprise | Custom pricing | All Team features plus SAML SSO, audit logs, self-hosted or private cloud deployment, custom usage limits, premium support and SLAs, EU data residency options |
Free users do not have access to Mistral Large, Flash Answers, or No Telemetry Mode. Le Chat subscription plans are separate from Mistral's API pricing, which is metered per token with rates starting at $0.02 per million input tokens. Voxtral TTS is billed separately at $0.016 per 1,000 characters of speech generated through the API.
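For a sense of scale on the metered API rates quoted above (per-character TTS, per-token text), a quick cost estimate; the usage volumes are invented for the example.

```python
# Cost arithmetic using the published entry-level rates; usage figures invented
TTS_RATE = 0.016 / 1_000             # dollars per character of generated speech
INPUT_TOKEN_RATE = 0.02 / 1_000_000  # dollars per input token (entry price)

chars_spoken = 250_000        # hypothetical monthly TTS usage
input_tokens = 40_000_000     # hypothetical monthly text-API input

tts_cost = chars_spoken * TTS_RATE
text_cost = input_tokens * INPUT_TOKEN_RATE
print(f"TTS: ${tts_cost:.2f}, text input: ${text_cost:.2f}")  # TTS: $4.00, text input: $0.80
```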
Le Chat Enterprise is the business-focused tier of Le Chat, launched on May 7, 2025. It is designed for large organizations that require enhanced security, compliance, and customization. A standalone wiki page covers the enterprise tier in greater depth at Le Chat Enterprise.
As a European-headquartered company, Mistral AI positions Le Chat Enterprise as a privacy-first alternative to US-based competitors. The platform's infrastructure resides in ISO 27001-certified European data centers, and following a 2025 partnership with OUTSCALE, Le Chat is also available inside France's SecNumCloud-qualified sovereign cloud perimeter for sensitive public-sector workloads. Key privacy safeguards include opt-in logging with user prompts auto-deleted after 30 days, on-premises deployment behind firewalls for enterprise customers, and role-based access control aligned with EU banking and health-sector norms. This European foundation has made Le Chat attractive to organizations subject to strict data residency and regulatory requirements.
Le Chat Enterprise is available through the Google Cloud Marketplace, with availability on Azure AI Marketplace and AWS Marketplace planned. Enterprise customers receive hands-on support from Mistral's AI engineering team across deployment, customization, safety, and value delivery.
The speed advantage of Le Chat's Flash Answers feature comes from a partnership with Cerebras Systems, an AI hardware company. Cerebras powers the Flash Answers feature using its Wafer Scale Engine 3 (WSE-3), which uses an SRAM-based inference architecture rather than the traditional HBM (High Bandwidth Memory) approach used by GPU-based systems. Combined with speculative decoding techniques developed collaboratively between Cerebras and Mistral researchers, this architecture achieves over 1,100 tokens per second on the 123-billion-parameter Mistral Large 2 model.
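Speculative decoding itself is a general technique, independent of the Cerebras-specific details, which Mistral has not published. In the greedy form, a cheap draft model proposes a block of tokens and the expensive target model verifies them, with the guarantee that the output matches what the target model alone would have produced. A toy sketch with deterministic stand-in "models":

```python
# Toy greedy speculative decoding. Both "models" are deterministic
# next-token functions over integer token lists; real systems use neural LMs,
# and the target verifies a whole draft block in one batched forward pass
# (this toy verifies sequentially for clarity).
def target_next(tokens):          # expensive model (stand-in)
    return (sum(tokens) * 31 + 7) % 100

def draft_next(tokens):           # cheap model that often agrees (stand-in)
    return target_next(tokens) if len(tokens) % 3 else (tokens[-1] + 1) % 100

def speculative_decode(prompt, n_tokens, k=4):
    out = list(prompt)
    while len(out) < len(prompt) + n_tokens:
        # Draft model proposes k tokens autoregressively
        proposal = []
        for _ in range(k):
            proposal.append(draft_next(out + proposal))
        # Verify proposals against the target; keep the agreeing prefix and
        # emit the target's own token at the first mismatch
        for tok in proposal:
            if len(out) >= len(prompt) + n_tokens:
                break
            expected = target_next(out)
            out.append(expected)
            if tok != expected:
                break  # discard the rest of the draft block
    return out[len(prompt):]

def greedy_decode(prompt, n_tokens):
    out = list(prompt)
    for _ in range(n_tokens):
        out.append(target_next(out))
    return out[len(prompt):]

# Output is identical to plain greedy decoding with the target model
assert speculative_decode([1, 2, 3], 10) == greedy_decode([1, 2, 3], 10)
```

The speedup comes from the verification being much cheaper per token than autoregressive generation, so the more often the draft model agrees, the fewer expensive sequential steps are needed.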
In practical demonstrations, Le Chat generated a Python-based snake game in 1.3 seconds, compared to 19 seconds for Claude and 46 seconds for ChatGPT.
In parallel with the Cerebras relationship, Mistral has invested heavily in NVIDIA GPU capacity for training and general-purpose serving. In March 2026, the company raised $830 million in debt financing to acquire 13,800 NVIDIA GB300 GPUs for a new datacenter in Bruyères-le-Châtel, south of Paris. The facility is expected to come online by mid-2026 and will serve Le Chat's growing free and Pro user base alongside enterprise workloads. NVIDIA is also a strategic investor in Mistral.
Le Chat supports the Model Context Protocol (MCP), an open standard for connecting AI models to external tools and data sources. The platform includes a directory of over 20 pre-configured MCP connectors and also allows custom MCP server configurations for internal tools or niche services. Organization administrators configure MCP connectors before individual users can access them.
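Mistral has not published the exact schema Le Chat expects for custom connectors, but MCP server configurations across the ecosystem generally name a server and supply either a launch command or a remote URL. A hypothetical configuration along those common conventions:

```python
import json

# Hypothetical MCP connector configuration. Field names follow common
# MCP-ecosystem conventions, not a documented Le Chat schema; all server
# names, URLs, and packages are made up.
config = {
    "mcpServers": {
        "internal-wiki": {
            "url": "https://mcp.example.com/wiki",          # remote MCP server
            "headers": {"Authorization": "Bearer <token>"},
        },
        "local-files": {
            "command": "npx",                               # local process
            "args": ["-y", "@example/mcp-files-server"],    # hypothetical package
        },
    }
}

print(json.dumps(config, indent=2))
```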
Le Chat competes in the rapidly growing AI assistant market alongside several major products.
| Product | Developer | Key Differentiator |
|---|---|---|
| Le Chat | Mistral AI | Speed (Flash Answers), EU data sovereignty, open-weight models |
| ChatGPT | OpenAI | Largest user base, extensive plugin ecosystem, GPT-4 and GPT-4o models |
| Claude | Anthropic | Focus on safety and helpfulness, long context windows |
| Gemini | Google DeepMind | Google ecosystem integration, multimodal from the ground up |
| DeepSeek | DeepSeek | Cost-efficient open-source models, strong reasoning |
| Copilot | Microsoft | Integration with Microsoft 365 suite |
| Grok | xAI | Real-time access to X (Twitter) data |
Mistral AI differentiates Le Chat primarily through speed (claiming to be the world's fastest AI assistant), European data sovereignty and GDPR compliance, and an aggressive free tier that includes many features competitors reserve for paid plans. The company's open-weight model philosophy also appeals to developers and organizations seeking transparency and the ability to self-host.
Le Chat occupies an unusually political space for a chat product. French president Emmanuel Macron has repeatedly endorsed it on national television, framing the assistant as evidence that Europe can build credible alternatives to American foundation models. In February 2025, around the AI Action Summit hosted in Paris, Macron urged French viewers to download Le Chat, and the French government announced €109 billion in domestic AI infrastructure investment, the largest such commitment outside the United States and China. Mistral AI was named a flagship beneficiary of this strategy, which Macron has described as a "third way" between US-led and China-led AI ecosystems.
The sovereignty narrative goes beyond rhetoric. Le Chat's hosting inside OUTSCALE's SecNumCloud perimeter qualifies the assistant for use in regulated French public-sector environments. Enterprise customers can pin data residency to EU data centers, deploy on-premises behind their own firewalls, or use Mistral-managed private cloud configurations. The ASML investment in September 2025 reinforced the European-champion framing: the largest investor in Mistral is now another European technology firm rather than an American hyperscaler, even as Mistral retains its Microsoft distribution agreement.
Le Chat Enterprise has won marquee French and European customers in transport, banking, insurance, telecommunications, and manufacturing. Reported deployments include:
| Customer | Sector | Notable Use Cases |
|---|---|---|
| CMA CGM | Container shipping and logistics | €100 million five-year deal; Mistral team embedded in Marseille HQ; processing roughly 1 million customer-service emails per week, claims handling, logistics optimization |
| BNP Paribas | Banking | Models deployed across global markets, sales, and customer support |
| AXA | Insurance | Secure AI capabilities for 140,000+ employees, including text generation and analysis |
| Stellantis | Automotive manufacturing | Expanded enterprise-wide generative-AI deployment announced at Italian Tech Week 2025 |
| Orange | Telecommunications | Industrial AI partnership; co-development of Le Chat-based assistants |
| IBM | Enterprise IT | Hybrid cloud and consulting integrations |
| French government / public sector | Public administration | OUTSCALE-hosted Le Chat for SecNumCloud-regulated workloads |
Mistral has also reported tripling revenue within 100 days of launching Le Chat Enterprise in May 2025, and a Dataconomy study found that 38 percent of US companies tested Le Chat in January 2025, an early sign that adoption was not confined to the European market.
Le Chat experienced rapid growth following its February 2025 relaunch and continued building scale through 2025 and into 2026: the mobile app passed 1 million downloads within 13 days, the platform reached 4.2 million active users in its first month, and enterprise revenue tripled within 100 days of the May 2025 Le Chat Enterprise launch.
The February 2025 relaunch drew broadly positive coverage in international press. Reuters and Bloomberg highlighted the speed advantage delivered by Cerebras and the unusual sight of a sitting French president promoting a consumer chatbot. Le Monde and Les Echos framed Le Chat as a credible domestic answer to Silicon Valley, while TechCrunch called the relaunch a transformation "from basic chat to full-featured assistant." The Financial Times covered the September 2025 Series C as a milestone for European venture capital, and follow-up coverage of the ASML investment in CNBC and Euronews emphasized the strategic logic of two European technology champions reinforcing each other.
Reviews of the product itself have praised speed, the generosity of the free tier, multilingual quality (especially in French and other European languages), and the European data-handling story. Common criticisms include occasional gaps in factual accuracy compared to GPT-4o or Claude on niche topics, slower iteration cycles on visual features, and a smaller third-party plugin ecosystem than ChatGPT. The competitive landscape moved quickly enough through 2025 and 2026 that any specific benchmark claim tends to date within months.
Le Chat is available on the web at chat.mistral.ai and as mobile apps for iOS and Android. The free tier requires no credit card; the Pro, Team, and Enterprise tiers require paid subscriptions.
A separate page, Le Chat Enterprise, covers the May 2025 enterprise launch, connectors, the hybrid deployment model, and the organizational controls that sit above the consumer and team tiers.