Grok is a family of generative artificial intelligence chatbots and large language models (LLMs) developed by xAI, a company founded by Elon Musk. Launched on November 4, 2023, Grok is designed to provide conversational AI capabilities with real-time information access through integration with the X platform (formerly Twitter). The name "Grok" is derived from the verb coined by Robert A. Heinlein in his 1961 novel Stranger in a Strange Land, describing a profound form of understanding.[1]
The system is notable for its integration across Musk's technology ecosystem, including Tesla vehicles, Tesla's Optimus robot, and the X social media platform. Grok is characterized by its real-time capabilities, multimodal processing, and what xAI describes as a "truth-seeking" approach, though it has faced significant criticism for lacking standard safety guardrails and generating controversial content. As of March 2026, Grok models range from the original 314-billion-parameter Grok-1 to the multi-agent Grok 4.20 system, with the upcoming Grok 5 expected to feature 6 trillion parameters.
xAI was incorporated on March 9, 2023, by Elon Musk, with the public announcement following on July 12, 2023.[2] The development of Grok was positioned as Musk's response to OpenAI's ChatGPT, following his departure from OpenAI's board in 2018 due to disagreements about the company's direction toward a for-profit model.
In April 2023, Musk announced plans for "TruthGPT," a "maximum truth-seeking AI" to counter what he perceived as politically correct training in existing AI models. This concept was later renamed Grok, inspired by Heinlein's term for deep, intuitive understanding.[3] xAI's stated mission is to build AI systems that "understand the true nature of the universe."
Musk recruited eleven co-founders from leading AI research institutions, including Google DeepMind, OpenAI, Google Research, and Microsoft Research. The founding team comprised:
| Co-Founder | Previous Affiliation | Role at xAI | Status (March 2026) |
|---|---|---|---|
| Elon Musk | Tesla, SpaceX, OpenAI (board) | CEO and Founder | Active |
| Igor Babuschkin | Google DeepMind | Chief Engineer | Departed August 2025 |
| Yuhuai (Tony) Wu | Google DeepMind | Co-Founder | Departed February 2026 |
| Christian Szegedy | Google Research | Co-Founder | Departed February 2025 |
| Jimmy Ba | University of Toronto | Co-Founder | Departed February 2026 |
| Greg Yang | Microsoft Research | Co-Founder | Departed January 2026 |
| Kyle Kosic | OpenAI | Co-Founder | Departed mid-2024 |
| Toby Pohlen | Google DeepMind | Co-Founder | Departed February 2026 |
| Zihang Dai | Google Research | Co-Founder | Departed March 2026 |
| Guodong Zhang | University of Toronto | Co-Founder | Departed March 2026 |
| Manuel Kroiss | Google Research | Co-Founder | Active |
| Ross Nordeen | Tesla | Co-Founder | Active |
By March 2026, nine of the eleven recruited co-founders had departed the company, leaving only Manuel Kroiss and Ross Nordeen alongside Musk. Musk acknowledged that xAI "was not built right first time around" and said the company was "being rebuilt from the foundations up."[4]
xAI raised successive funding rounds at rapidly escalating valuations between 2023 and 2026:
| Round | Date | Amount | Valuation |
|---|---|---|---|
| Seed | December 2023 | $134.7 million | Not disclosed |
| Series B | May 2024 | $6 billion | $24 billion |
| Series C | December 2024 | $6 billion | $50 billion |
| Series D | September 2025 | $10 billion | $200 billion |
| Series E | January 2026 | $20 billion | $230 billion |
Investors across these rounds included Valor Equity Partners, Fidelity Management, Qatar Investment Authority, NVIDIA, Cisco Investments, and Tesla, which committed approximately $2 billion in the Series E.[5] In addition to equity financing, xAI secured a $5 billion debt facility arranged by Morgan Stanley.
On February 2, 2026, SpaceX announced the acquisition of xAI in an all-stock deal valuing xAI at $250 billion and the combined entity at approximately $1.25 trillion, making it the largest merger in corporate history.[6] Musk stated that the merger was driven by the goal of building space-based data centers, arguing that "global electricity demand for AI simply cannot be met with terrestrial solutions." The deal came ahead of a planned SpaceX initial public offering expected to raise up to $50 billion.
Grok-0 was an internal prototype with 33 billion parameters used for initial development and testing. It was never released publicly but served as the foundation for the Grok-1 architecture.
Grok-1 was initially released on November 4, 2023, as a beta product available to select X Premium subscribers. xAI described it as "a very early beta product, the best we could do with 2 months of training."[7] The model featured 314 billion parameters in a Mixture of Experts (MoE) configuration, with approximately 25% of weights active per token, enabling computational efficiency despite the large total parameter count.
On March 17, 2024, xAI open-sourced Grok-1 under the Apache License 2.0, releasing the base model weights and network architecture. The code was made available on GitHub and the weights on Hugging Face.[8] This made Grok-1 one of the largest open-source language models at the time, though the release was limited to the base model without fine-tuning data or RLHF weights.
Grok-1.5 was announced on March 28, 2024, featuring improved reasoning capabilities and a context length extended to 128,000 tokens from Grok-1's 8,192 tokens. It demonstrated significant performance gains over Grok-1 on key benchmarks:
| Benchmark | Grok-1 | Grok-1.5 | Improvement |
|---|---|---|---|
| MATH | 23.9% | 50.6% | +26.7 points |
| GSM8K | 81.3% | 90.0% | +8.7 points |
| HumanEval | 63.2% | 74.1% | +10.9 points |
Grok-1.5 Vision (Grok-1.5V), announced on April 12, 2024, was the first multimodal model in the series, capable of processing visual information including documents, diagrams, charts, and photographs. xAI introduced the RealWorldQA benchmark alongside the release to evaluate real-world spatial understanding capabilities.[9]
Grok-2 and Grok-2 mini were announced on August 14, 2024, with the full version released on August 20, 2024. Grok-2 featured upgraded reasoning performance, improved instruction following, and image generation capabilities powered by Flux from Black Forest Labs. The mini variant offered faster response times with a smaller model footprint, optimized for latency-sensitive applications.[10]
Grok-2 achieved 87.5% on MMLU and 88.4% on HumanEval, representing substantial improvements over Grok-1. The model was released under the Grok 2 Community License, which permitted non-commercial research use.
In December 2024, xAI introduced Aurora, its proprietary text-to-image model, replacing the third-party Flux integration. Aurora was released on December 9, 2024, and uses an autoregressive mixture-of-experts architecture trained on billions of examples from the internet.[11] Key characteristics include native multimodal input support, photorealistic rendering, and image-to-image editing.
Aurora's permissive content generation capabilities, including the ability to create images of real individuals, later became the subject of significant controversy (see Controversies section below).
Released on February 17, 2025, Grok-3 represented a major leap in scale and capability. xAI trained it with approximately ten times more computing power than Grok-2, utilizing the Colossus data center with 200,000 NVIDIA GPUs. The model is estimated to contain roughly 2.7 trillion parameters in an MoE configuration with a context window of up to 1 million tokens.[12]
Grok-3 outperformed GPT-4o on several key benchmarks, including 93.3% accuracy on the AIME 2025 math competition and strong results on GPQA for PhD-level science problems. Andrej Karpathy, former Director of AI at Tesla, stated that Grok-3 "feels somewhere around the state of the art territory of OpenAI's strongest models."[13]
Grok-3 introduced several specialized reasoning and search modes:
| Mode | Description | Availability |
|---|---|---|
| Think Mode | Step-by-step chain-of-thought reasoning for complex problems | All Grok-3 users |
| Big Brain Mode | Enhanced computational resources for exceptionally difficult tasks | Limited/internal |
| DeepSearch | Internet-scanning tool for comprehensive, multi-source research synthesis | Premium+ / SuperGrok |
| DeeperSearch | Enhanced version of DeepSearch with customizable "presets" and user-adaptive learning, launched March 19, 2025 | Premium+ / SuperGrok |
DeepSearch was described by xAI as going beyond simple information retrieval to perform sophisticated synthesis, reasoning through conflicting data while providing detailed, transparent responses. DeeperSearch built on this by introducing configurable search contexts and adaptive personalization over time.[14]
Grok-3 mini was released on February 19, 2025, as a smaller, faster, and more cost-efficient reasoning model designed for tasks that require strong STEM performance but less general world knowledge. It uses reinforcement learning at scale to refine its chain-of-thought process. Through the xAI API, Grok-3 mini is priced at $0.30 per million input tokens and $0.50 per million output tokens, making it significantly cheaper than the full Grok-3 model.[15]
Grok-4 was released on July 9, 2025, alongside Grok-4 Heavy. The model features approximately 1.7 trillion parameters in an MoE configuration, native tool use, real-time search integration, and Voice Mode for natural spoken conversations. It incorporates large-scale reinforcement learning and multi-agent systems.[16]
Grok-4 Heavy is a multi-agent reasoning system that deploys multiple Grok-4 instances working together, designed for the most complex analytical tasks. It is available through the SuperGrok Heavy subscription tier at $300 per month.
Released on August 28, 2025, this specialized model is optimized for agentic coding tasks, scoring 70.8% on the SWE-Bench Verified benchmark. It is integrated with developer tools including GitHub Copilot and Cursor.[17]
Released on September 19, 2025, Grok-4 Fast offers similar performance to Grok-4 with 40% fewer thinking tokens, a context window of up to 2 million tokens, and is reportedly 64 times cheaper than early frontier models at comparable performance levels.[18] The model became available through Microsoft Azure AI Foundry for enterprise customers in September 2025.
On November 17, 2025, xAI released Grok 4.1, following a two-week silent rollout (November 1 through 14) during which xAI conducted blind pairwise evaluations on live traffic to refine the model's behavior. Key improvements included reduced hallucination rates and enhanced emotional intelligence.[19]
Grok 4.1 Fast was released simultaneously as an optimized variant for tool-calling and agentic workflows, featuring a 2-million-token context window and a new Agent Tools API for orchestrating external tools. At $0.20 per million input tokens and $0.50 per million output tokens, Grok 4.1 Fast became one of the most cost-efficient frontier models available.[20]
Grok 4.20 launched as a public beta on February 17, 2026, with a second iteration (Beta 2) released on March 3, 2026. The model introduced architectural innovations including four-agent collaboration and rapid-learning capabilities.[21]
Elon Musk has confirmed that Grok 5 is planned for release in early 2026, featuring an estimated 6 trillion total parameters in an MoE architecture, which would make it the largest publicly announced AI model. The model is being trained on the Colossus 2 data center and is expected to feature a 1.5-million-token context window, dynamic agent spawning that scales with task complexity, persistent memory across agent sessions, and native video understanding. Musk has stated that Grok 5 carries a "10% and rising" probability of achieving artificial general intelligence.[22]
| Model | Release Date | Parameters | Context Length | Key Features | License |
|---|---|---|---|---|---|
| Grok-0 | 2023 (internal) | 33 billion | Not specified | Initial prototype | Internal only |
| Grok-1 | November 2023 | 314 billion (25% active) | 8,192 tokens | MoE architecture, real-time X access | Apache-2.0 |
| Grok-1.5 | March 2024 | Not disclosed | 128,000 tokens | Improved reasoning, math skills | Proprietary |
| Grok-1.5V | April 2024 | Not disclosed | 128,000 tokens | Multimodal (text + vision) | Proprietary |
| Grok-2 | August 2024 | Not disclosed | Not specified | Image generation (Flux), improved reasoning | Grok 2 Community License |
| Grok-2 mini | August 2024 | Smaller than Grok-2 | Not specified | Faster, lightweight version | Proprietary |
| Grok-3 | February 2025 | ~2.7 trillion (estimated) | 1 million tokens | Reasoning modes, DeepSearch | Proprietary |
| Grok-3 mini | February 2025 | Smaller than Grok-3 | Not specified | Faster, cost-efficient reasoning | Proprietary |
| Grok-4 | July 2025 | ~1.7 trillion (MoE) | 256,000 tokens (API) | Native tool use, Voice Mode | Proprietary |
| Grok-4 Heavy | July 2025 | Multiple Grok-4 agents | 256,000 tokens | Multi-agent reasoning system | Proprietary |
| Grok Code Fast 1 | August 2025 | Not disclosed | Not specified | Specialized for agentic coding | Proprietary |
| Grok-4 Fast | September 2025 | Not disclosed | 2 million tokens | Enterprise-focused, cost-efficient | Proprietary |
| Grok 4.1 | November 2025 | Not disclosed | 2 million tokens | Reduced hallucination, emotional intelligence | Proprietary |
| Grok 4.1 Fast | November 2025 | Not disclosed | 2 million tokens | Agent Tools API, cost-efficient | Proprietary |
| Grok 4.20 | February 2026 | Not disclosed | Not specified | 4-agent collaboration, rapid learning | Proprietary |
| Grok 5 (planned) | Q1 2026 | ~6 trillion (MoE) | 1.5 million tokens | Video understanding, dynamic agents | TBD |
Grok models utilize a Mixture of Experts (MoE) transformer architecture, in which only a fraction of the model's total parameters activate for any given input token. This design allows models to scale to trillions of parameters while keeping inference costs manageable. Key architectural components include sparse expert layers and a learned routing network that selects which experts process each token.[23]
The MoE design means that while Grok-3 has an estimated 2.7 trillion total parameters, only a fraction are active during any single forward pass, keeping latency and compute costs significantly lower than a dense model of equivalent size would require.
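The sparse-activation principle described above can be illustrated with a toy routing layer. The sketch below is purely illustrative: the expert count, embedding dimension, and router weights are invented for demonstration and do not reflect xAI's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

N_EXPERTS = 8      # total expert networks in the layer (toy value)
TOP_K = 2          # experts activated per token (sparse activation)
D_MODEL = 16       # token embedding dimension (toy value)

# Each "expert" is modeled here as a simple feed-forward weight matrix.
experts = [rng.standard_normal((D_MODEL, D_MODEL)) * 0.1 for _ in range(N_EXPERTS)]
# The router is a learned linear layer that scores each expert per token.
router_w = rng.standard_normal((D_MODEL, N_EXPERTS)) * 0.1

def moe_layer(token: np.ndarray) -> np.ndarray:
    """Route a token through its top-k experts and mix their outputs."""
    logits = token @ router_w                      # score every expert
    top_idx = np.argsort(logits)[-TOP_K:]          # keep only the k best
    gate = np.exp(logits[top_idx])
    gate /= gate.sum()                             # softmax over chosen experts
    # Only TOP_K of N_EXPERTS expert matrices run for this token.
    return sum(g * (token @ experts[i]) for g, i in zip(gate, top_idx))

out = moe_layer(rng.standard_normal(D_MODEL))
print(out.shape)  # (16,)
```

With 2 of 8 experts active, only 25% of the expert parameters participate in each forward pass, which is the same active-weight ratio xAI reported for Grok-1; a production MoE layer would additionally batch tokens and balance load across experts.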
Grok models from Grok-3 onward are trained on xAI's Colossus supercomputer, located at a former Electrolux manufacturing facility in South Memphis, Tennessee. Colossus is widely considered the world's largest AI supercomputer.[24]
| Specification | Details |
|---|---|
| Location | South Memphis, Tennessee (former Electrolux site) |
| Initial Deployment | September 2024 |
| Construction Time | 122 days from ground-breaking to operational |
| Initial GPU Count | 100,000 NVIDIA H100 GPUs |
| Expanded Configuration (June 2025) | 150,000 H100, 50,000 H200, 30,000 GB200 GPUs |
| Planned Expansion | Additional 110,000 GB200 GPUs at a second Memphis-area facility |
| Long-Term Goal | 1 million GPUs (Colossus 2) |
| Networking | NVIDIA Spectrum-X Ethernet platform |
| Power Supply | Memphis Light, Gas & Water (MLGW) via TVA, exceeding 100 MW |
In November 2024, the Tennessee Valley Authority (TVA) approved xAI's request for access to more than 100 megawatts of power, enough electricity to power roughly 100,000 homes. The rapid construction timeline and massive power consumption drew local scrutiny and environmental concerns.[25]
xAI has announced plans for Colossus 2, which would scale to 1 million GPUs and is intended to train Grok 5 and subsequent models.
Aurora is xAI's proprietary image generation model, introduced in December 2024 to replace the third-party Flux integration. It uses an autoregressive MoE architecture with native multimodal input support for photorealistic rendering, text-to-image generation, and image-to-image editing.[28]
Launched on July 28, 2025, Grok Imagine extends Aurora's capabilities to create six-second animated audiovisual clips from text prompts. In January 2026 alone, users generated approximately 1.245 billion videos through Grok Imagine. A major update in March 2026 added higher-resolution output and improved temporal coherence.[29]
Grok offers multiple tiers of search functionality:
| Feature | Description | Usage |
|---|---|---|
| Web Search | Standard internet search integrated into responses | Available to all users |
| X Search | Searches posts and discussions on the X platform | Available to all users |
| DeepSearch | Multi-source research synthesis with transparent reasoning | Premium+ / SuperGrok |
| DeeperSearch | Advanced search with customizable presets and adaptive personalization | Premium+ / SuperGrok |
| Benchmark | Grok-1 | Grok-2 | Grok-3 | Grok-4 | Grok 4.1 |
|---|---|---|---|---|---|
| MMLU | 73% | 87.5% | 88.2% | 89.1% | Not reported |
| HumanEval | 63.2% | 88.4% | 91.2% | 93.5% | Not reported |
| MATH | 23.9% | 76.5% | 85.3% | 87.2% | Not reported |
| GSM8K | 81.3% | Not reported | Not reported | Not reported | Not reported |
| AIME 2025 | N/A | N/A | 93.3% | 96.4% | Not reported |
| GPQA | N/A | Not reported | Superior to GPT-4o | Not reported | Not reported |
| Humanity's Last Exam | N/A | N/A | N/A | 38.6% (with tools) | Not reported |
| ARC-AGI 2 | N/A | N/A | N/A | 15.9% (SOTA at release) | Not reported |
| SWE-Bench Verified | N/A | N/A | N/A | 70.8% (Code Fast 1) | Not reported |
| LMArena Elo | N/A | N/A | N/A | N/A | 1,483 (thinking) |
Grok-3 in particular demonstrated strong performance relative to competitors at its time of release, outperforming GPT-4o on multiple benchmarks. However, subsequent releases from OpenAI (GPT-5 series), Anthropic (Claude 4 series), and Google (Gemini 3 Pro) have continued to push the state of the art, making the competitive landscape highly dynamic.
Grok is available through several subscription plans, ranging from a limited free tier to enterprise-grade offerings:
| Tier | Monthly Cost | Annual Cost | Key Features |
|---|---|---|---|
| Free | $0 | $0 | Limited Grok-3 access, strict usage caps (2 prompts per 2 hours for Grok-4) |
| X Premium | $8 | $84 | Basic Grok access via the X platform |
| X Premium+ | $40 | $420 | Full Grok access, DeepSearch, all models |
| SuperGrok | $30 | $300 | Grok-4 and Grok 4.1 access, enhanced features, higher limits |
| SuperGrok Heavy | $300 | $3,000 | Grok-4 Heavy, Grok 4.20, early access to new models, priority support |
| Grok Business | $30/seat | N/A | Team collaboration, admin controls |
xAI offers API access through docs.x.ai with the following pricing structure as of early 2026:[30]
| Model | Input (per 1M tokens) | Output (per 1M tokens) | Cached Input (per 1M tokens) |
|---|---|---|---|
| Grok-3 | $3.00 | $15.00 | $0.75 |
| Grok-3 mini | $0.30 | $0.50 | $0.08 |
| Grok-4 | $3.00 | $15.00 | $0.75 |
| Grok 4.1 Fast | $0.20 | $0.50 | $0.05 |
Additional API features include Live Search at $25 per 1,000 sources requested and server-side tools (web search, X search, code execution, document search) priced at $2.50 to $5.00 per 1,000 calls.[31]
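The per-million-token rates in the table above lend themselves to a simple cost estimate. The following sketch uses those published rates; the model identifiers are illustrative labels rather than the actual API model IDs, and real billing may differ (tiering, caching rules, tool surcharges), so treat this as a back-of-the-envelope estimator only.

```python
# USD per 1 million tokens: (input, output, cached input),
# mirroring the xAI pricing table above. Keys are illustrative,
# not necessarily the exact API model identifiers.
PRICES = {
    "grok-3":        (3.00, 15.00, 0.75),
    "grok-3-mini":   (0.30,  0.50, 0.08),
    "grok-4":        (3.00, 15.00, 0.75),
    "grok-4.1-fast": (0.20,  0.50, 0.05),
}

def estimate_cost(model: str, input_tok: int, output_tok: int,
                  cached_tok: int = 0) -> float:
    """Rough USD cost of one request at the listed rates."""
    inp, out, cached = PRICES[model]
    fresh = input_tok - cached_tok   # cached tokens bill at the lower rate
    return (fresh * inp + cached_tok * cached + output_tok * out) / 1_000_000

# Example: a 10,000-token prompt with a 2,000-token reply on Grok 4.1 Fast.
cost = estimate_cost("grok-4.1-fast", 10_000, 2_000)
print(f"${cost:.4f}")  # $0.0030
```

The same call shows why the cheaper tiers matter at scale: the identical request on Grok-4 would cost $0.06, twenty times more, driven almost entirely by the $15.00 output rate.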
Grok is accessible through multiple channels, including the X platform, standalone web and mobile applications, and the xAI API.
xAI launched Grok Enterprise on January 6, 2026, offering enterprise-grade features including a built-in RAG (Retrieval-Augmented Generation) system in the API (introduced December 30, 2025), Grok Voice for developers (December 22, 2025), and administrative controls for team deployments.[32]
Grok competes directly with several major AI assistants and language models:
| Feature | Grok (xAI) | ChatGPT (OpenAI) | Claude (Anthropic) | Gemini (Google) |
|---|---|---|---|---|
| Max Context Window | 2M tokens (Grok 4.1 Fast) | 200K tokens (GPT-5.1) | 200K tokens (Claude Opus 4) | 1M tokens (Gemini 2.5 Pro) |
| Real-time Social Media Data | Yes (X integration) | No | No | Limited |
| Image Generation | Yes (Aurora) | Yes (GPT-image-1) | No | Yes (Imagen) |
| Video Generation | Yes (Grok Imagine) | Yes (Sora) | No | Yes (Veo) |
| Open-Source Model | Grok-1 (Apache 2.0) | No | No | Limited (Gemma) |
| API Input Cost (frontier) | $0.20/1M (4.1 Fast) | $5.00/1M (GPT-4o) | $15.00/1M (Opus 4) | $1.25/1M (Gemini 2.5 Pro) |
| Unique Strength | Real-time X data, cost efficiency | Ecosystem breadth, GPT Store | Safety, long coding sessions | Multimodal research, Google integration |
Grok's key competitive advantages include its real-time access to X platform data, industry-leading context window sizes, and aggressive API pricing. Its primary weaknesses relative to competitors include less robust safety measures, a smaller developer ecosystem, and ongoing leadership instability at xAI.
Grok has faced significant and sustained criticism for lacking the standard safety guardrails employed by its competitors.
The most significant controversy surrounding Grok erupted in December 2025 when xAI expanded image generation and editing capabilities on the X platform. On December 20, 2025, Musk announced that Grok could be prompted to edit and generate images on X, and abuse of the feature spread rapidly.[36]
According to a review by The New York Times, Grok generated over 4.4 million images in nine days, of which approximately 1.8 million were sexualized depictions of women. The Center for Countering Digital Hate found that between December 29, 2025, and January 9, 2026, Grok produced an estimated 23,338 sexualized images of children, roughly one every 41 seconds.[37]
In March 2026, three Tennessee minors filed a federal class-action lawsuit against Elon Musk and xAI, alleging that Grok generated child sexual abuse material (CSAM) using their real photographs, including school portraits. The plaintiffs seek damages of at least $150,000 per violation under Masha's Law, along with disgorgement of revenues, punitive damages, and a permanent injunction. The lawsuit is one of the first to hold an AI company directly liable for the alleged production of AI-generated CSAM depicting identifiable minors.[38]
The scandal triggered simultaneous investigations across multiple jurisdictions, including the United States, European Union, United Kingdom, France, Ireland, and Australia.
X users were automatically opted into data sharing for Grok training without explicit consent. The Irish Data Protection Commission launched an investigation under GDPR in April 2025, examining whether X's sharing of publicly accessible user data (posts, profiles, interactions) with xAI for model training complied with European data protection regulations. Under GDPR, regulators may impose fines of up to 4% of global annual turnover.[42]
In September 2025, Ireland's data regulator settled proceedings after X agreed to permanently limit the use of EU users' data for AI training. However, the deepfake scandal in late 2025 triggered renewed investigations by regulators in Ireland, France, and the UK.[43]
In January 2026, Defense Secretary Pete Hegseth announced that the U.S. Department of Defense would integrate Grok into its internal networks, including both classified and unclassified systems. The Pentagon's deal with xAI requires Grok to be available for "all lawful purposes," a standard that Anthropic previously refused to meet for its own Claude model, citing concerns about autonomous weapons and mass surveillance.[44]
This prompted sharp criticism from lawmakers and advocacy groups. Senator Elizabeth Warren pressed the Pentagon over the decision, citing Grok's history of generating harmful outputs. Over 30 advocacy organizations demanded the U.S. government cease use of Grok in August 2025, calling it "unsafe, untested, and ideologically biased."[45]
In September 2025, Musk announced plans for Grokipedia, an AI-powered online encyclopedia intended to rival Wikipedia by addressing perceived biases. The site launched on October 27, 2025, hosting approximately 885,000 articles, though it crashed under load on launch day.[46]
Grokipedia articles are generated by Grok and cannot be directly edited by users. Instead, logged-in visitors can submit correction suggestions via a pop-up form, which are reviewed by the AI system. The project has faced criticism from fact-checkers: a November 2025 PolitiFact review found that content differing from Wikipedia often included unsourced claims and citations to sources that did not exist. Wikipedia co-founder Larry Sanger expressed both support for the concept and concern that the platform might reflect the same biases present in Grok itself.[47]
By December 2025, AI-generated edit suggestions had overtaken human submissions, raising concerns about transparency and editorial oversight. In early February 2026, Grokipedia's visibility in Google Search declined sharply.
Grok is integrated across Musk's technology ecosystem, including Tesla vehicles, the Tesla Optimus humanoid robot, and the X social media platform.