See also: Meta AI (Company)

Meta AI is an artificial intelligence assistant developed by Meta Platforms. Initially launched as an integrated assistant within Meta's social media platforms on September 27, 2023, it became available as a standalone application on April 29, 2025.[1][2] The app represents Meta's effort to compete directly with ChatGPT, Google Gemini, and other AI assistant applications, offering unique social features and deep integration with Meta's ecosystem.[2]
Meta AI is also used more broadly to refer to the entire artificial intelligence division of Meta Platforms, which encompasses the company's AI research labs, product teams, and infrastructure groups. The division has produced influential open-source models, frameworks, and research that have shaped the modern AI landscape.
Meta AI is designed as a personal AI assistant powered by Meta's Llama family of large language models, specifically utilizing Llama 4 for the standalone app's conversational capabilities.[3] The assistant can understand user preferences and provide personalized responses based on data from connected Meta accounts, with user consent.[3]
The application serves multiple functions: as a standalone AI assistant, the companion app for Ray-Ban Meta smart glasses (replacing the previous Meta View app), and as an integrated feature across WhatsApp, Instagram, Facebook, and Messenger.[4] As of May 2025, Meta AI reached one billion monthly active users across Meta's family of apps, with CEO Mark Zuckerberg announcing the milestone at the company's annual shareholder meeting.[5]
Facebook AI Research, known as FAIR, was founded in December 2013 by Mark Zuckerberg and Yann LeCun. LeCun, a professor at New York University and a pioneer of convolutional neural networks, was appointed as FAIR's first director on December 9, 2013. The lab was established with a mission to "advance the state of the art in artificial intelligence through open research for the benefit of all."[18]
FAIR initially operated from offices in New York City, Menlo Park, and Paris. From its earliest days, the lab adopted an open-science philosophy that was unusual for a corporate research lab. Researchers were encouraged to publish their work at top conferences and release code publicly. This approach attracted leading academics who might otherwise have stayed in university positions.
Over its first decade, FAIR grew into one of the most prolific AI research organizations in the world. By 2023, FAIR researchers had published thousands of papers, many of them among the most cited in the field. At the lab's 10-year anniversary in November 2023, Meta highlighted that the top three most cited AI papers of 2023 all came from FAIR: "LLaMA: Open and Efficient Foundation Language Models" (8,534 citations), "Llama 2: Open Foundation and Fine-Tuned Chat Models" (7,774 citations), and "Segment Anything" (5,293 citations). FAIR also won best paper awards at several major conferences in 2023, including ACL, ICRA, ICML, and ICCV.[18]
Key research areas within FAIR have included self-supervised learning, computer vision, natural language processing, speech recognition, reinforcement learning, and robotics. The lab's contributions span foundational techniques such as self-distillation, contrastive learning at scale, and mixture-of-experts architectures, alongside applied systems like real-time translation and object segmentation.
Yann LeCun led FAIR from its founding through 2018, when he transitioned to the role of Chief AI Scientist for all of Facebook (later Meta). Joelle Pineau, a professor at McGill University and the Mila research institute in Montreal, joined Meta in 2017 and became the head of FAIR in 2023. Pineau oversaw the lab's operations during a period of rapid growth in generative AI and the release of the Llama model family.
In April 2025, Pineau announced her departure from Meta, with her last day on May 30, 2025. Her exit coincided with a broader organizational shift at Meta, as the company restructured its AI teams to focus more heavily on product delivery and pursuit of artificial general intelligence.[19]
Yann LeCun departed Meta in November 2025, ending a twelve-year tenure at the company, the last seven as its chief AI scientist. He subsequently founded AMI Labs, a startup focused on building "world models" that can understand the physical world, in contrast to the prevailing large language model paradigm. In March 2026, AMI Labs announced that it had raised $1.03 billion in funding at a $3.5 billion pre-money valuation, with investors including Cathay Innovation, Bezos Expeditions, and others.[20]
In February 2023, Meta CEO Mark Zuckerberg announced the creation of a new top-level product group focused on generative AI. This group was tasked with building generative AI tools and experiences across Meta's family of apps, including AI chat in WhatsApp and Messenger, AI image generation in Instagram, and AI-powered ad formats.
In January 2024, Zuckerberg merged FAIR and the GenAI product team into a unified organization to accelerate development and reduce duplication of effort. The combined team worked on both foundational research and the consumer-facing Meta AI assistant.
In June 2025, Meta appointed Alexandr Wang, the founder and former CEO of Scale AI, as its first Chief AI Officer. Wang was brought in to lead a new entity called Meta Superintelligence Labs (MSL), reflecting Zuckerberg's stated ambition to build "superintelligent" AI systems.[21]
On August 19, 2025, Meta announced a restructuring that split MSL into four distinct teams:
| Team | Leader | Focus |
|---|---|---|
| TBD Lab | Alexandr Wang | Developing the Llama language models that power the Meta AI assistant |
| FAIR | Rob Fergus | Long-term fundamental research toward advanced machine intelligence |
| Products and Applied Research | Nat Friedman | Integrating Llama models and AI research into Meta consumer products |
| MSL Infra | Aparna Ramani | Building and maintaining the AI infrastructure needed to support Meta's AI goals |
As part of this restructuring, Meta's AGI Foundations team was dissolved and absorbed into the MSL divisions. In October 2025, Meta announced the elimination of approximately 600 roles across the FAIR, Products and Applied Research, and MSL Infra teams.[22]
| Feature | Description |
|---|---|
| Voice Interaction | Supports natural language processing for text and voice inputs, with full-duplex voice mode for conversational flow. Available in the U.S., Canada, Australia, and New Zealand.[3] |
| Discover Feed | A social feed where users can share, like, comment on, or remix AI-generated content, fostering community engagement.[6] |
| Image Generation and Editing | Generates photorealistic images from text prompts and supports editing features like restyling and animation.[3] |
| Personalization | Uses data from Facebook and Instagram profiles (with user permission) to tailor responses, remembering preferences like dietary restrictions.[7] |
| Real-Time Information | Accesses web-based information via Google and Microsoft Bing for up-to-date answers on topics like weather or travel.[8] |
| Document Handling | Web version includes rich document editor, PDF export, and document import for AI analysis (select countries).[3] |
| Device Integration | Operates on Ray-Ban Meta glasses for hands-free tasks and Meta Quest headsets with Meta AI with Vision for visual input.[3] |
The Meta AI app emphasizes voice-based interaction as its primary interface:
- Standard Voice Mode: Allows users to have conversations with the AI assistant using natural speech
- Full-Duplex Speech Demo: An experimental feature using real-time voice generation technology that creates more natural, conversational responses without relying on text-to-speech conversion
- Ready to Talk: An optional setting that enables voice interaction by default when the app opens[3]
Meta's AI division has produced a wide range of models spanning language, vision, audio, and multimodal domains. The sections below cover the most significant model families.
The Llama family is Meta's flagship series of large language models. It has become one of the most widely adopted open-weight model families in the industry.
| Model | Release Date | Parameters | Key Details |
|---|---|---|---|
| Llama 1 | February 2023 | 7B, 13B, 33B, 65B | First release; available to researchers under a non-commercial license. Pre-trained on publicly available data. |
| Llama 2 | July 2023 | 7B, 13B, 70B | Released under a permissive license allowing commercial use. Trained on 2 trillion tokens. Chat-tuned variants included. |
| Llama 3 | April 2024 | 8B, 70B | Pre-trained on approximately 15 trillion tokens. Significant improvements in reasoning, coding, and multilingual capabilities. |
| Llama 3.1 | July 2024 | 8B, 70B, 405B | Introduced the 405B parameter model, the largest openly available LLM at the time. Included updated safety tools (Llama Guard 3). |
| Llama 3.2 | September 2024 | 1B, 3B, 11B, 90B | First Llama models with multimodal (vision) capabilities. Lightweight variants designed for edge and mobile devices. |
| Llama 3.3 | December 2024 | 70B | Text-only model offering performance comparable to Llama 3.1 405B at a fraction of the serving cost. |
| Llama 4 Scout | April 2025 | 109B total (17B active) | First open-weight natively multimodal model using a mixture-of-experts (MoE) architecture. 10 million token context window. |
| Llama 4 Maverick | April 2025 | 400B total (17B active) | Larger MoE variant with 128 experts. Strong performance on reasoning and multimodal benchmarks. |
| Llama 4 Behemoth | In training (as of April 2025) | ~2T total (288B active) | Largest planned Llama model; still in training at the time of Llama 4 launch. Teacher model for Scout and Maverick. |
The Llama 4 family, announced at Meta's inaugural LlamaCon developer conference on April 29, 2025, represented a significant architectural shift: Scout and Maverick were the first Llama models to use a mixture-of-experts design and the first to be natively multimodal (processing both text and images in a single model).[23]
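The mixture-of-experts idea can be shown compactly: instead of one dense feed-forward block, the layer holds several expert blocks plus a router that sends each token to only a subset of them, which is how a model like Scout can hold 109B total parameters while activating only 17B per token. The following is a minimal illustrative sketch in PyTorch; the dimensions, expert count, and top-1 routing are arbitrary choices for clarity, not Meta's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoEFeedForward(nn.Module):
    """Sparse MoE feed-forward layer: each token is routed to one expert."""
    def __init__(self, d_model=512, d_ff=2048, n_experts=8):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts)  # scores each token per expert
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_ff), nn.SiLU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        ])

    def forward(self, x):  # x: (n_tokens, d_model)
        weights = F.softmax(self.router(x), dim=-1)   # routing probabilities
        top_w, top_i = weights.topk(1, dim=-1)        # top-1 routing: one expert per token
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            mask = (top_i.squeeze(-1) == e)           # tokens routed to expert e
            if mask.any():
                # Only routed tokens pass through this expert, so compute per
                # token scales with *active* parameters, not the total count.
                out[mask] = top_w[mask] * expert(x[mask])
        return out

tokens = torch.randn(16, 512)
print(MoEFeedForward()(tokens).shape)  # torch.Size([16, 512])
```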
Llama Guard is a family of safety classification models designed to filter harmful content in AI interactions. Released alongside major Llama versions, these models are built to detect unsafe prompts and responses according to a standardized hazard taxonomy.
| Version | Base Model | Capabilities |
|---|---|---|
| Llama Guard 1 | Llama 2 7B | Text-only input/output safety classification |
| Llama Guard 2 | Llama 3 8B | Improved text safety classification |
| Llama Guard 3 (8B) | Llama 3.1 8B | Aligned with MLCommons standardized hazards taxonomy |
| Llama Guard 3 Vision (11B) | Llama 3.2 11B | Multimodal safety classification for text and images |
| Llama Guard 3 (1B) | Llama 3.2 1B | Pruned and quantized to 438 MB for efficient on-device deployment |
| Llama Guard 4 (12B) | Llama 4 | Updated safety model for the Llama 4 ecosystem |
These models are part of Meta's broader Purple Llama project, which provides tools for responsible AI deployment, including CyberSecEval for evaluating cybersecurity risks and Code Shield for filtering unsafe code.[24]
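In practice, Llama Guard behaves like a small generative model rather than a traditional classifier: given a conversation wrapped in its chat template, it emits a "safe" or "unsafe" verdict followed by hazard category codes. A hedged sketch using the Hugging Face transformers library, following the usage pattern in Meta's published model cards (the model is gated and requires access approval):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-Guard-3-8B"   # gated on Hugging Face; requires approval
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

# Wrap the conversation to be screened in the model's chat template
chat = [{"role": "user", "content": "How do I make a cake?"}]
input_ids = tokenizer.apply_chat_template(chat, return_tensors="pt")

# Llama Guard *generates* its verdict as text rather than returning a score
output = model.generate(input_ids, max_new_tokens=20, pad_token_id=tokenizer.eos_token_id)
verdict = tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True)
print(verdict)  # e.g. "safe", or "unsafe" followed by a hazard code such as "S1"
```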
The Segment Anything Model (SAM) is a foundation model for image segmentation released in April 2023. SAM can segment any object in any image based on user prompts such as points, bounding boxes, or text descriptions. The model was trained on SA-1B, a dataset of over 1 billion masks across 11 million images.
SAM comes in three sizes: ViT-B (91 million parameters), ViT-L (308 million parameters), and ViT-H (636 million parameters). It was released under an Apache 2.0 license.[25]
In July 2024, Meta released SAM 2, extending the model's capabilities to video segmentation. SAM 2 uses a transformer architecture with streaming memory, allowing it to track objects across video frames even when they temporarily leave the field of view. Compared to the original SAM, SAM 2 is six times faster on images while achieving better accuracy. It was trained on the SA-V dataset, which contains over 50,000 videos and 35.5 million segmentation masks. An updated SAM 2.1 followed in the fall of 2024 with improved performance for visually similar objects.[26]
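Prompt-based segmentation with the original SAM follows a two-step pattern in Meta's open-source segment_anything package: embed the image once, then query it cheaply with prompts. A minimal sketch, assuming a downloaded ViT-B checkpoint and a local image file:

```python
import numpy as np
import cv2
from segment_anything import SamPredictor, sam_model_registry

# Load the smallest (ViT-B) variant; the checkpoint file name is the published one
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
predictor = SamPredictor(sam)

image = cv2.cvtColor(cv2.imread("photo.jpg"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)                       # compute the image embedding once

# Prompt with a single foreground point (x, y); label 1 = foreground, 0 = background
masks, scores, _ = predictor.predict(
    point_coords=np.array([[500, 375]]),
    point_labels=np.array([1]),
    multimask_output=True,                       # return 3 candidate masks for ambiguous prompts
)
print(masks.shape, scores)                       # (3, H, W) boolean masks with quality scores
```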
DINO (self-DIstillation with NO labels) is a self-supervised learning method for training vision transformers without labeled data. Published in 2021, DINO demonstrated that self-supervised vision transformers can learn features that contain explicit information about semantic segmentation, enabling object detection without any supervision.
DINOv2, released in April 2023, scaled up this approach to produce general-purpose visual features that perform well across a variety of computer vision tasks including classification, segmentation, and depth estimation. DINOv2 models can be used with simple linear classifiers and still achieve strong results, eliminating the need for task-specific fine-tuning in many cases. Meta later relicensed DINOv2 under an Apache 2.0 license for commercial use.[27]
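The core of DINO's self-distillation can be stated compactly: a student network is trained to match a momentum-averaged teacher's output distribution over augmented views of the same image, with the teacher's output centered (to avoid collapse) and sharpened with a low temperature. A simplified sketch of the objective, omitting the multi-crop augmentation and vision transformer backbone of the actual method:

```python
import torch
import torch.nn.functional as F

def dino_loss(student_logits, teacher_logits, center, t_s=0.1, t_t=0.04):
    # Teacher output is centered (prevents collapse to a uniform solution)
    # and sharpened with a low temperature, then treated as a fixed target.
    teacher_probs = F.softmax((teacher_logits - center) / t_t, dim=-1).detach()
    # Student is trained with cross-entropy against the teacher distribution.
    return -(teacher_probs * F.log_softmax(student_logits / t_s, dim=-1)).sum(-1).mean()

@torch.no_grad()
def update_teacher(student, teacher, momentum=0.996):
    # The teacher is an exponential moving average of the student's weights,
    # so no labels and no separate training signal are ever needed.
    for p_s, p_t in zip(student.parameters(), teacher.parameters()):
        p_t.mul_(momentum).add_(p_s, alpha=1 - momentum)
```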
ImageBind is a multimodal embedding model released in May 2023. It was the first AI model capable of learning a joint embedding space across six modalities: images, text, audio, depth (3D), thermal (infrared), and inertial measurement unit (IMU) data. The key innovation is that images serve as a bridge between different modalities, allowing the model to learn cross-modal relationships without requiring training data covering every possible combination of modalities.
ImageBind enables applications such as cross-modal retrieval (searching for audio clips using images), modality arithmetic (combining embeddings from different modalities), and cross-modal generation. It was presented as a highlighted paper at CVPR 2023 and released as open source on GitHub.[28]
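Cross-modal retrieval in a joint embedding space reduces to nearest-neighbor search: embed the query in one modality, embed the candidates in another, and rank by cosine similarity. A minimal sketch of that pattern; the embed_image and embed_audio helpers in the comments are hypothetical stand-ins for ImageBind's per-modality encoders, not its actual API:

```python
import torch
import torch.nn.functional as F

def retrieve(query_emb, candidate_embs):
    # Cosine similarity: normalize both sides, then a single matrix product
    q = F.normalize(query_emb, dim=-1)
    c = F.normalize(candidate_embs, dim=-1)
    scores = c @ q                         # one similarity score per candidate
    return scores.argsort(descending=True)

# e.g. searching a library of audio clips with an image query (random
# placeholders below; a real system would use the model's encoders):
image_query = torch.randn(1024)            # hypothetical embed_image("beach.jpg")
audio_library = torch.randn(500, 1024)     # hypothetical embed_audio(clip) per clip
ranking = retrieve(image_query, audio_library)
print(ranking[:5])                         # indices of the 5 best-matching clips
```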
AudioCraft is a framework for audio generation released in August 2023. It consists of three components:
| Component | Function | Details |
|---|---|---|
| MusicGen | Music generation from text | Trained on 400,000 recordings (20,000 hours) of licensed music. Uses a single-stage autoregressive transformer over a 32 kHz EnCodec tokenizer with 4 codebooks. |
| AudioGen | Sound effect generation from text | Trained on public sound effects datasets. Generates environmental sounds, effects, and ambient audio from text descriptions. |
| EnCodec | Neural audio codec | Compresses and tokenizes audio for efficient generation. Serves as the backbone for both MusicGen and AudioGen. |
All AudioCraft model weights and code were released as open source, allowing researchers to train their own models on custom datasets.[29]
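Generating audio with MusicGen follows the usage documented in the audiocraft repository: load a pretrained model, set generation parameters, and pass text prompts. A short sketch (the prompt and output file names are arbitrary):

```python
from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write

model = MusicGen.get_pretrained("facebook/musicgen-small")
model.set_generation_params(duration=8)          # seconds of audio to generate

# One generated clip per text prompt, returned as a batch of waveforms
wavs = model.generate(["lo-fi hip hop beat with soft piano"])
for i, wav in enumerate(wavs):
    # Writes clip_0.wav at the model's 32 kHz sample rate, loudness-normalized
    audio_write(f"clip_{i}", wav.cpu(), model.sample_rate, strategy="loudness")
```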
Emu is a family of generative models for images and video, first announced in September 2023. The foundation Emu model is a latent diffusion model pre-trained on over 1 billion image-text pairs, then fine-tuned on a curated set of high-quality images.
In November 2023, Meta released two extensions: Emu Video, which generates short videos from text prompts, and Emu Edit, which performs image edits from natural-language instructions.
The Emu models power image generation features within the Meta AI assistant across WhatsApp, Instagram, Facebook, and Messenger.[30]
Meta has released a series of translation and speech models aimed at breaking language barriers:
- No Language Left Behind (NLLB), released in July 2022, a machine translation model supporting 200 languages
- SeamlessM4T, released in August 2023, a unified multimodal model covering speech and text translation across roughly 100 languages[31]
PyTorch is an open-source deep learning framework originally developed by FAIR. Led by Soumith Chintala, the PyTorch team designed the framework around dynamic computational graphs, offering a more intuitive and flexible alternative to the static-graph approach used by TensorFlow at the time.
PyTorch was released publicly in January 2017 and quickly gained traction in the research community for its ease of use and Pythonic design. It became the dominant framework for AI research and is now widely used in production systems as well.
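The dynamic-graph design is easy to demonstrate: because the computation graph is recorded as ordinary Python executes, data-dependent control flow works without any special graph-construction API. A minimal example:

```python
import torch

x = torch.randn(3, requires_grad=True)
y = x
while y.norm() < 10:        # the graph grows differently depending on x's values
    y = y * 2
loss = y.sum()
loss.backward()             # autograd walks whatever graph was actually built
print(x.grad)               # gradient reflects how many doublings occurred
```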
In September 2022, Meta transitioned the governance of PyTorch to the newly established PyTorch Foundation under the Linux Foundation. The foundation's governing board includes representatives from Meta, Microsoft, Amazon, Google, and other major technology companies. This move was intended to ensure PyTorch's long-term neutrality and community-driven development.[32]
PyTorch is widely considered one of FAIR's most consequential contributions to the AI field. As of 2025, it underpins the training and deployment of the majority of state-of-the-art AI models across industry and academia.
Meta has been the most prominent large technology company to embrace an open-weight approach to AI model releases. Beginning with Llama 2 in July 2023, Meta released its major language models under permissive licenses that allow commercial use, a strategy that stood in contrast to the closed approaches of OpenAI, Google, and Anthropic.
The rationale behind this strategy has several components: open releases encourage broad adoption of Llama as an industry standard, attract researchers who value publishing openly, let the wider community inspect and improve the models, and keep Meta from depending on a competitor's closed platform for foundational technology.
Meta's open-source approach began evolving in 2025. In July 2025, Zuckerberg signaled that Meta would likely not open-source all of its most advanced "superintelligence" AI models. By late 2025, reports emerged that Meta was developing a model codenamed "Avocado" that would be released as a closed model, one that Meta could sell access to. This would represent the biggest departure from the open-weight strategy that Meta had championed for years.
The shift was driven by pressure to monetize AI investments directly, as Meta poured billions into researcher salaries, data center construction, and the development of increasingly powerful models. Despite having one of the top AI research labs in the world, Meta faced challenges in commercializing its AI work compared to rivals like OpenAI and Google.[33]
Meta has made substantial investments in the compute infrastructure required to train and serve large AI models. The company's approach combines custom silicon development with massive deployments of commercial GPUs.
The Meta Training and Inference Accelerator (MTIA) is Meta's family of custom AI chips, designed specifically for the deep learning workloads that power Meta's apps and services.
| Version | Process Node | Power | Memory | Clock Speed | Status |
|---|---|---|---|---|---|
| MTIA v1 | 7 nm | 25 W | 64 MB on-chip SRAM | 800 MHz | Deployed in production (inference for ads and recommendations) |
| MTIA v2 (Next-Gen) | 5 nm | 90 W | 128 MB on-chip SRAM | 1.35 GHz | 3x performance improvement over v1; deployed at scale within 9 months of first silicon |
| MTIA 400 | Undisclosed | Undisclosed | Includes HBM | Undisclosed | Completed testing; on path to data center deployment. Designed for GenAI inference. |
MTIA v1 delivers 102.4 TOPS for INT8 operations and 51.2 TFLOPS for FP16 operations. The next-generation chip achieved 6x model serving throughput at the platform level (with 2x the number of devices and a more powerful CPU host) along with a 1.5x improvement in performance per watt.
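Those figures imply the chip's efficiency directly; a quick back-of-the-envelope check using only the numbers quoted above:

```python
# INT8 throughput per watt for MTIA v1, from the figures quoted in this section
v1_tops, v1_watts = 102.4, 25
v1_eff = v1_tops / v1_watts
print(f"MTIA v1: {v1_eff:.1f} TOPS/W")                   # ~4.1 TOPS/W

# Meta quotes a 1.5x performance-per-watt gain for the next-generation chip
print(f"Next-gen (implied): {v1_eff * 1.5:.1f} TOPS/W")  # ~6.1 TOPS/W
```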
In October 2025, Meta publicly confirmed its acquisition of chip startup Rivos, further strengthening its in-house silicon capabilities. Meta has stated that custom chips allow it to achieve better price-performance across its data center fleet than relying solely on third-party vendors.[34]
Alongside custom silicon, Meta operates some of the largest GPU clusters in the world. In March 2024 the company detailed two clusters of 24,576 Nvidia H100 GPUs each, used in part to train Llama 3, and stated a goal of amassing compute equivalent to roughly 600,000 H100s by the end of 2024.

The following table summarizes the availability of the Meta AI assistant across platforms and devices:
| Platform | First Supported | Availability | Notes |
|---|---|---|---|
| WhatsApp | September 2023 | Global (select countries) | Inline replies and group-chat commands using "@Meta AI" |
| Instagram | September 2023 | Global (select countries) | Direct messages; image generation prompts |
| Messenger | September 2023 | Global (select countries) | Text and voice responses |
| Facebook | September 2023 | Global (select countries) | Feed search suggestions |
| Ray-Ban Meta smart glasses | September 2023 | Where available | Wake-word "Hey Meta" plus visual recognition |
| Standalone App (iOS, Android, Web) | April 2025 | U.S., Canada, Australia, New Zealand, others | Full features; voice mode limited to select countries |
| Meta Quest | 2024 | Where available | Via Horizon OS v68; Quest 3, Quest Pro, Quest 2 |
| Languages | Status |
|---|---|
| English | Available since launch |
| Danish, Dutch, Finnish, French, German, Italian, Norwegian Bokmål, Portuguese, Spanish, Swedish | Available |
| Hindi, Indonesian, Tagalog, Thai, Vietnamese | Available or rolling out |
| Arabic | Announced |
Meta unveiled Meta AI during the Meta Connect keynote on September 27, 2023, initially powered by a custom model based on Llama 2.[1] The assistant gained real-time information access through a search partnership with Bing and was rolled out in English to users in the United States and 13 other markets.[1]
In April 2024, Meta upgraded the assistant to use Llama 3, adding faster image generation and inline web results.[9] The assistant was progressively integrated across all major Meta platforms throughout 2024, with multilingual support added in July 2024.[10]
Reports of a standalone Meta AI app first emerged in February 2025.[11] The app was officially announced and launched at Meta's inaugural LlamaCon developer conference on April 29, 2025, powered by an early Llama 4 checkpoint.[2][12]
The following table lists individuals who have played significant roles in Meta's AI efforts.
| Person | Role | Period | Notable Contributions |
|---|---|---|---|
| Yann LeCun | Founding Director of FAIR; Chief AI Scientist | 2013-2025 | Founded FAIR; championed open research and self-supervised learning. Turing Award recipient (2018). Left Meta in November 2025 to found AMI Labs. |
| Mark Zuckerberg | CEO of Meta Platforms | 2004-present | Co-founded FAIR with LeCun; set strategic direction for Meta's AI investments and open-source approach. |
| Joelle Pineau | VP and Head of FAIR | 2017-2025 | Led FAIR during the Llama era; oversaw expansion of open research. Departed May 2025. |
| Alexandr Wang | Chief AI Officer | 2025-present | Former CEO of Scale AI. Leads Meta Superintelligence Labs. |
| Ahmad Al-Dahle | VP of Generative AI | 2023-present | Led the GenAI product organization and the Llama 4 release. Co-lead of AGI Foundations (2025). |
| Soumith Chintala | Co-creator of PyTorch | 2014-present | Led the development of PyTorch, one of the most widely used deep learning frameworks. |
| Rob Fergus | Director of AI Research (FAIR) | 2025-present | Leads FAIR following the MSL restructuring, focusing on long-term fundamental research. |
| Nat Friedman | Head of Products and Applied Research | 2025-present | Former CEO of GitHub. Leads integration of AI research into Meta consumer products. |
| Chris Cox | Chief Product Officer | 2020-present | Oversees product strategy across Meta's apps, including AI integration. |
- AI Model: Powered by Llama 4 large language models (standalone app); previously Llama 2 and Llama 3[3]
- Voice Technology: Full-duplex speech technology for natural voice conversations[3]
- Knowledge Sources: Integration with Google and Microsoft Bing for real-time information[8]
- Hardware Integration: MTIA v1 AI accelerators (7 nm chips delivering 102.4 TOPS for INT8 and 51.2 TFLOPS for FP16) and Nvidia GPUs for compute power
- Platform Requirements: Compatible with iOS 15.2+, Android, and modern web browsers[4]
While the basic Meta AI app is free, Meta has indicated plans for monetization:
- Premium Subscription: Testing of paid subscription tiers for advanced features, planned for Q2 2025[12]
- Advertising Integration: Potential for "paid recommendations" within AI responses[5]
- Enhanced Compute Access: Subscription users may access more computational resources for complex queries[5]
The Discover Feed feature has raised significant privacy concerns:
- Unintentional Sharing: Reports of users unknowingly sharing personal conversations publicly, including medical queries, personal data, and work-related information[13]
- UI/UX Issues: Criticism that the app interface does not clearly indicate when content will be shared publicly[14]
- Data Usage: Meta has stated that users' public posts may be used to train its AI models, including in regions such as the European Union
Mozilla Foundation launched a petition demanding Meta improve the app's design to ensure users understand when they're sharing content publicly.[13]
Users have reported several technical challenges:
- Significant battery drain on Android devices (reportedly 1% every 2 minutes in the background) and overheating[15]
- Sign-up difficulties for users without existing Facebook or Instagram accounts
- Glitches when importing media from Ray-Ban Meta glasses[15]
Despite concerns, the app has shown rapid adoption:
- 600 million monthly active users reported in December 2024[16]
- 700 million users by January 2025[2]
- 1 billion users across all Meta platforms by May 2025[5]
Meta AI competes in the AI assistant market with:
- ChatGPT (OpenAI): Known for advanced conversational abilities and a large third-party plugin ecosystem
- Gemini (Google DeepMind): Offers deep web integration and multimodal capabilities across Google products
- Claude (Anthropic): Emphasizes safety, interpretability, and long-context reasoning
- Grok (xAI): Integrated with the X (formerly Twitter) platform; focuses on real-time information[2]
Meta's competitive advantages include its integration with the social media ecosystem (reaching billions of existing users) and the unique social Discover Feed feature. However, its voice mode capabilities have been noted as lagging behind ChatGPT's advanced voice features.[17]
In the broader AI model market, Meta competes through its open-weight Llama models against proprietary offerings from OpenAI (GPT-4, GPT-4o), Google (Gemini), and Anthropic (Claude), as well as open-weight competitors like Mistral AI, DeepSeek, and Qwen from Alibaba.
The following table provides a chronological overview of Meta's significant AI model and tool releases.
| Date | Release | Category | Description |
|---|---|---|---|
| January 2017 | PyTorch 0.1 | Framework | Open-source deep learning framework with dynamic computational graphs. |
| June 2019 | RoBERTa | NLP | Robustly optimized BERT pre-training approach; improved state-of-the-art on multiple benchmarks. |
| 2021 | DINO | Vision | Self-supervised learning method for vision transformers without labels. |
| July 2022 | No Language Left Behind (NLLB) | Translation | Machine translation model supporting 200 languages. |
| September 2022 | PyTorch Foundation | Governance | PyTorch governance transferred to the Linux Foundation. |
| February 2023 | Llama 1 | LLM | Open foundation language model (7B to 65B parameters). |
| April 2023 | Segment Anything (SAM) | Vision | Universal image segmentation model trained on 1 billion+ masks. |
| April 2023 | DINOv2 | Vision | Scaled self-supervised visual feature model with commercial license. |
| May 2023 | ImageBind | Multimodal | Joint embedding model spanning six modalities. |
| July 2023 | Llama 2 | LLM | Open-weight LLM with commercial license (7B to 70B parameters). |
| August 2023 | AudioCraft (MusicGen, AudioGen) | Audio | Open-source music and audio generation from text. |
| August 2023 | SeamlessM4T | Translation | Unified multimodal translation model (speech and text, 100 languages). |
| September 2023 | Emu | Vision | Latent diffusion model for image generation; powers Meta AI image features. |
| September 2023 | Meta AI Assistant | Product | AI assistant launched across WhatsApp, Instagram, Facebook, Messenger. |
| November 2023 | Emu Video / Emu Edit | Vision | Text-to-video generation and instruction-based image editing. |
| April 2024 | Llama 3 | LLM | Major upgrade (8B and 70B parameters, 15 trillion training tokens). |
| July 2024 | Llama 3.1 | LLM | Introduced 405B parameter model; largest open-weight LLM at the time. |
| July 2024 | SAM 2 | Vision | Extended Segment Anything to video; 6x faster than SAM on images. |
| September 2024 | Llama 3.2 | LLM / Multimodal | First Llama models with vision capabilities; edge/mobile variants. |
| December 2024 | Llama 3.3 | LLM | Efficient 70B model matching 405B performance. |
| April 2025 | Llama 4 (Scout, Maverick) | LLM / Multimodal | Natively multimodal MoE models; Scout supports a 10M-token context window. |
| April 2025 | Meta AI Standalone App | Product | Standalone AI assistant app for iOS, Android, and web. |
Meta has outlined several areas for future development:
- Expansion of voice features to additional countries and real-time web access integration
- Enhanced personalization capabilities leveraging Meta's ecosystem data
- Integration with upcoming Meta hardware products, including a more expensive Ray-Ban Meta model with a heads-up display planned for late 2025
- Development of subscription tiers with advanced features
- Expansion to reach 1 billion standalone app users by the end of 2025[5]
- Continued scaling of AI infrastructure, with total data center capacity expected to exceed 10 gigawatts by late 2026
- Development of next-generation closed models (codenamed "Avocado") as part of a potential shift toward monetizing advanced AI capabilities directly
CEO Mark Zuckerberg stated that 2025 would be "the year when a highly intelligent and personalized AI assistant reaches more than 1 billion people," positioning Meta AI as that leading assistant.[2]