| inclusionAI | |
|---|---|
| Type | AGI research initiative |
| Industry | Artificial intelligence |
| Parent | Ant Group |
| Products | Large language models, Reinforcement learning systems, AGI frameworks, Multimodal models |
| Website | inclusionai.github.io |
**inclusionAI** is an artificial general intelligence (AGI) research initiative established by Ant Group, focused on developing and open-sourcing advanced artificial intelligence systems.[1] The initiative represents Ant Group's dedicated effort to work towards AGI through the development of Large Language Models (LLMs), Reinforcement Learning (RL) systems, multimodal models, and other AI-related frameworks and applications.[2] inclusionAI describes itself as a hub for open projects from Ant Group's research teams working toward reproducible and community-driven AI systems, with a stated mission to develop a fully open-sourced AI ecosystem.[3]
inclusionAI operates as the primary vehicle for Ant Group's artificial general intelligence ambitions, maintaining a strong commitment to open-source principles and collaborative development.[4] The organization develops and releases various AI models and tools designed to advance the field of AGI while ensuring accessibility and inclusivity in AI development.[5]
The initiative is guided by principles of fairness, transparency, and collaboration, with a focus on tools for training and evaluating reasoning-oriented LLMs via RL, agent frameworks, and the release of trained model checkpoints when feasible.[1] This aligns with Ant Group's broader "AI First" corporate strategy announced in 2024.[6] Public materials indicate that inclusionAI maintains repositories on GitHub and model artifacts on Hugging Face, and has presented work at venues such as ICLR 2025 Expo.[3]
inclusionAI emerged as part of Ant Group's increased focus on artificial intelligence research and development. The initiative became prominently active in 2024–2025 with the release of multiple open-source models and frameworks.[1] This aligned with Ant Group's "Plan A" recruitment initiative, launched in April 2025, which aimed to attract top AI talent and accelerate innovation.[6][7]
By May 2025, Ant Group had publicly showcased its AI researchers, including He Zhengyu, a PhD graduate of the Georgia Institute of Technology known for developing advanced algorithms.[8] Public references to inclusionAI as a named project appear in 2025, in connection with an ICLR 2025 Expo session highlighting its open RL training stack and agent work.[3]
In March 2025, Ant Group announced the open-sourcing of the Ling Mixture of Experts (MoE) Large Language Models under the inclusionAI umbrella, marking a significant milestone in the initiative's development.[9] This was followed by the release of Ling-Plus and Ling-Lite models, which demonstrated the ability to train large-scale models on domestically produced Chinese chips from Alibaba and Huawei.[10]
inclusionAI's projects began appearing on platforms like GitHub and Hugging Face in mid-to-late 2025, with releases such as the Inclusion Arena leaderboard in August 2025.[11] In September 2025, the organization began open-sourcing Ling 2.0, a series of MoE architecture LLMs, with Ling-mini-2.0 as the first released version.[12] On September 30, 2025, the organization released Ring-1T-preview, a trillion-parameter reasoning model.[13]
inclusionAI has developed multiple families of LLMs with a focus on efficiency, reasoning capabilities, and multimodal processing:
| Model Family | Description | Key Features |
|---|---|---|
| Ling Series | Foundation LLMs with MoE architecture | Ling-Lite: 16.8B parameters (2.75B activated)[14]<br>Ling-Plus: 290B parameters (28.8B activated)[14]<br>Ling-mini-2.0: 16B parameters (1.4B activated)[12]<br>Ling-V2: enhanced version with improved capabilities[1] |
| Ring Series | Reasoning-focused LLMs | Ring-V2: reasoning MoE LLM[1]<br>Ring-lite-2507: 16.8B MoE model with 2.75B activated parameters[4]<br>Ring-1T-preview: trillion-parameter model (preview checkpoint)[13] |
| Ming Series | Multimodal LLMs | Ming-lite-omni: multimodal understanding and generation[15]<br>Ming-lite-omni 1.5: enhanced multimodal capabilities[4]<br>Ming-Omni: advanced multimodal model[16] |
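The "activated" counts above reflect MoE sparsity: each token is routed to only a few experts, so the parameters exercised per token are a fraction of the total. As a back-of-envelope illustration (with hypothetical layer shapes, not the published Ling or Ring configurations), the activated share of a top-k MoE feed-forward stack can be estimated as follows:

```python
# Back-of-envelope estimate of "activated" parameters in a top-k MoE layer.
# The expert count and dimensions below are hypothetical, chosen only to
# show why activated parameters are a small fraction of the total; they are
# not the published Ling/Ring configurations.

def moe_ffn_params(d_model: int, d_ff: int, n_experts: int) -> int:
    """Total FFN parameters across all experts (two weight matrices each)."""
    return n_experts * 2 * d_model * d_ff

def activated_ffn_params(d_model: int, d_ff: int, top_k: int) -> int:
    """FFN parameters actually used per token when routing to top_k experts."""
    return top_k * 2 * d_model * d_ff

d_model, d_ff, n_experts, top_k = 2048, 5632, 64, 4  # hypothetical values

total = moe_ffn_params(d_model, d_ff, n_experts)
active = activated_ffn_params(d_model, d_ff, top_k)
print(f"total FFN params:     {total / 1e9:.2f}B")
print(f"activated per token:  {active / 1e9:.2f}B ({100 * active / total:.1f}%)")
```

Published activated-parameter figures such as Ling-Lite's 2.75B of 16.8B also count attention and embedding weights, which are always active, so the published ratios differ from a pure FFN estimate.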
Ring-1T-preview is a preview checkpoint of a trillion-parameter "thinking" model released in late September 2025 on Hugging Face.[17] The model uses a MoE architecture and was released to enable early community exploration. Trained on 20 trillion tokens, it is strong at natural language reasoning, scoring 92.6% on the AIME 2025 (American Invitational Mathematics Examination) math benchmark.[13] The model is optimized for tasks requiring deep reasoning and long-horizon planning, such as code generation and complex problem solving.[13]
The model was fine-tuned using inclusionAI's custom RLVR framework with the icepop method.[13] FP8 variants and community quantizations appeared shortly after on Hugging Face.[18][19] Third-party coverage reported Ring-1T-preview as the first open-source trillion-parameter model.[20]
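Details of the custom RLVR framework and the icepop method are not reproduced here, but the general idea of RLVR (reinforcement learning from verifiable rewards) is to score completions with a programmatic verifier rather than a learned reward model. A minimal sketch, assuming a `\boxed{}` answer convention and exact-match checking; both are illustrative choices, not inclusionAI's actual pipeline:

```python
import re

# Minimal illustration of a verifiable reward in the RLVR style: score a
# completion by programmatically checking its final answer against a known
# ground truth, instead of querying a learned reward model. The \boxed{}
# convention and exact-match comparison are illustrative assumptions.

def extract_final_answer(completion: str) -> str | None:
    """Pull the last \\boxed{...} expression out of a model completion."""
    matches = re.findall(r"\\boxed\{([^{}]*)\}", completion)
    return matches[-1].strip() if matches else None

def verifiable_reward(completion: str, ground_truth: str) -> float:
    """1.0 if the extracted answer matches the reference exactly, else 0.0."""
    answer = extract_final_answer(completion)
    if answer is None:
        return 0.0
    return 1.0 if answer == ground_truth.strip() else 0.0

print(verifiable_reward(r"... so the result is \boxed{42}", "42"))  # 1.0
print(verifiable_reward(r"... hence \boxed{41}", "42"))             # 0.0
```

Because the reward is computed rather than learned, it is cheap to scale and hard to reward-hack, which is one reason RLVR-style training is popular for math and code domains.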
Ming-Omni is an advanced open-source multimodal model capable of processing images, text, audio, and video, released in 2025.[16] The model features a comprehensive multimodal processing architecture with MoE design and modality-specific routers.[16] It supports speech and image generation, dialect understanding, voice cloning, context-aware dialogues, text-to-speech, and image editing.[16]
Ming-Omni represents a breakthrough in multimodal AI, integrating dedicated encoders for different modalities. It supports a wide range of tasks without additional fine-tuning, including generating natural speech, high-quality images, and handling dialect-specific interactions.[16] The model has been described as the first open-source model matching GPT-4o's modality support, with all code and weights publicly available.[21]
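The published description mentions a MoE design with modality-specific routers. A minimal sketch of that general idea follows, in which tokens from each modality are dispatched to a shared expert pool by a per-modality gating network; the dimensions, expert count, and hard top-1 routing here are illustrative assumptions, not Ming-Omni's actual architecture:

```python
import torch
import torch.nn as nn

# Illustrative sketch of "modality-specific routers" in a MoE layer: all
# modalities share one expert pool, but each modality's tokens are dispatched
# by its own router. Dimensions, expert count, and top-1 routing are
# assumptions for illustration, not Ming-Omni's published design.

class ModalityRoutedMoE(nn.Module):
    def __init__(self, d_model=256, d_ff=512, n_experts=8,
                 modalities=("text", "image", "audio", "video")):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(),
                          nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )
        # One gating network per modality over the shared expert pool.
        self.routers = nn.ModuleDict(
            {m: nn.Linear(d_model, n_experts) for m in modalities}
        )

    def forward(self, x: torch.Tensor, modality: str) -> torch.Tensor:
        logits = self.routers[modality](x)   # (tokens, n_experts)
        top1 = logits.argmax(dim=-1)         # hard top-1 expert choice
        out = torch.zeros_like(x)
        for i, expert in enumerate(self.experts):
            mask = top1 == i
            if mask.any():
                out[mask] = expert(x[mask])
        return out

layer = ModalityRoutedMoE()
text_tokens = torch.randn(10, 256)
print(layer(text_tokens, "text").shape)  # torch.Size([10, 256])
```

Per-modality routers let the gating specialize to each input type while the expert weights remain shared, a common middle ground between fully separate towers and a single undifferentiated router.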
inclusionAI has developed several frameworks to support AGI research and development:
AReaL is an open-source, fully asynchronous reinforcement learning training system designed for large reasoning and agentic models.[22] It decouples trajectory generation from training to improve GPU utilization and training stability, and releases the data, infrastructure details, and models needed for full reproducibility.[22][23] The system targets fast, efficient training of large-scale models and was developed by the AReaL Team at Ant Group in collaboration with Tsinghua University's Institute for Interdisciplinary Information Sciences.[22]
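A schematic sketch of the asynchronous pattern described above, in which rollout workers and the trainer run concurrently and communicate only through a trajectory buffer; the names and the toy "policy" below are illustrative stand-ins, not AReaL's actual interfaces:

```python
import queue
import random
import threading
import time

# Schematic sketch of asynchronous RL training: rollout generation and
# parameter updates run concurrently, coupled only through a trajectory
# buffer. All names and the toy policy are stand-ins, not AReaL's API.

trajectory_buffer: "queue.Queue[dict]" = queue.Queue(maxsize=64)
policy_version = 0  # rollouts from slightly older policies are tolerated

def rollout_worker(worker_id: int, n_episodes: int) -> None:
    """Generate trajectories with the current policy and enqueue them."""
    for _ in range(n_episodes):
        time.sleep(random.uniform(0.01, 0.05))  # simulate generation latency
        trajectory_buffer.put({"worker": worker_id,
                               "version": policy_version,
                               "reward": random.random()})

def trainer(n_updates: int, batch_size: int = 4) -> None:
    """Consume trajectory batches and update the policy asynchronously."""
    global policy_version
    for step in range(n_updates):
        batch = [trajectory_buffer.get() for _ in range(batch_size)]
        staleness = policy_version - min(t["version"] for t in batch)
        policy_version += 1  # simulate a parameter update
        print(f"update {step}: {len(batch)} trajectories, max staleness {staleness}")

workers = [threading.Thread(target=rollout_worker, args=(i, 8)) for i in range(4)]
for w in workers:
    w.start()
trainer(n_updates=8)
for w in workers:
    w.join()
```

Because generation and updates overlap, some trajectories in a batch come from slightly older policy versions; asynchronous systems like AReaL bound this staleness rather than eliminating it, trading a little off-policy error for much higher hardware utilization.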
ASearcher is an open-source framework for large-scale online RL training of search agents, aiming to advance "Search Intelligence" to expert-level performance.[24] The framework offers guidance to build customized agents, including integration with AReaL.[24]
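As a rough illustration of what a "search agent" episode looks like, the toy loop below alternates between issuing a search call and producing an answer; the search backend and the hard-coded policy are hypothetical stand-ins, not ASearcher components:

```python
# Toy search-agent episode of the kind a framework like ASearcher trains
# with RL: the agent alternates between querying a search tool and deciding
# to answer. The search_tool and policy here are illustrative stand-ins.

def search_tool(query: str) -> str:
    """Stand-in retrieval call; a real agent would hit a search backend."""
    corpus = {"capital of France": "Paris is the capital of France."}
    return corpus.get(query, "no results")

def run_episode(question: str, max_turns: int = 3) -> str:
    observations: list[str] = []
    for turn in range(max_turns):
        # Toy policy: search once, then answer from what was retrieved.
        if not observations:
            observations.append(search_tool("capital of France"))
        else:
            return f"Answer (after {turn} search turn(s)): {observations[-1]}"
    return "Answer: unknown"

print(run_episode("What is the capital of France?"))
# In RL training, the episode's final answer would be scored (e.g., exact
# match against a reference) to produce the reward signal.
```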
AWorld is a runtime system for building, evaluating, and training general-purpose multi-agent assistants.[25] It provides infrastructure for developing collaborative agent systems and testing their performance across scenarios.[25]
Inclusion Arena is a live leaderboard and open platform for evaluating large foundation models based on real-world, in-production applications, launched in August 2025.[11] The platform bridges AI-powered apps with state-of-the-art LLMs and multimodal LLMs (MLLMs).[11]
Unlike traditional lab-based benchmarks, Inclusion Arena derives its evaluations from production environments, so that rankings better reflect practical utility and address gaps in conventional evaluation methods.[26] Proposed by researchers from inclusionAI and Ant Group, the platform shifts model evaluation from synthetic lab benchmarks to performance metrics derived from real production applications.[26] It is live and open, inviting contributions from the AI community.[27]
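One common way to turn head-to-head outcomes from production traffic into a leaderboard is a Bradley-Terry (Elo-style) fit over pairwise comparisons. Whether Inclusion Arena uses exactly this aggregation is an assumption; the sketch below shows the generic technique on toy data:

```python
import math

# Generic Bradley-Terry fit over pairwise comparison outcomes, a standard
# way to turn head-to-head preferences into a leaderboard. Whether Inclusion
# Arena uses exactly this aggregation is an assumption; the data are toys.

# (winner, loser) pairs, e.g. collected from in-app model comparisons.
comparisons = [("A", "B"), ("A", "B"), ("B", "C"), ("A", "C"), ("C", "B")]

models = sorted({m for pair in comparisons for m in pair})
strength = {m: 0.0 for m in models}  # log-strengths, fit by gradient ascent

def win_prob(a: str, b: str) -> float:
    """P(a beats b) under the Bradley-Terry model."""
    return 1.0 / (1.0 + math.exp(strength[b] - strength[a]))

lr = 0.1
for _ in range(500):
    for winner, loser in comparisons:
        p = win_prob(winner, loser)
        # Gradient of the log-likelihood of the observed outcome.
        strength[winner] += lr * (1.0 - p)
        strength[loser] -= lr * (1.0 - p)

for m in sorted(models, key=strength.get, reverse=True):
    print(f"{m}: {strength[m]:+.2f}")
```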
ABench is a benchmark suite, developed by inclusionAI, for evaluating AI models.[1]
| Project | Type | First Public Reference/Release | Primary Link |
|---|---|---|---|
| AReaL | RL training system for LLM reasoning | 2025 (paper + repo updates) | GitHub[22] |
| ASearcher | RL system for search agents | 2025 | GitHub[24] |
| AWorld | Multi-agent assistance runtime | 2025 | GitHub[25] |
| Ring-1T-preview | Trillion-parameter model (preview checkpoint) | September 2025 | Hugging Face[17] |
| Ming-Omni | Advanced multimodal model | 2025 | Project Page[21] |
| Inclusion Arena | Live evaluation leaderboard | August 2025 | arXiv[26] |
inclusionAI's work emphasizes (i) open, reproducible RL training pipelines for reasoning-centric LLMs; (ii) asynchronous system designs that reduce training latency bottlenecks by decoupling rollout generation from parameter updates; and (iii) releasing code, data notes, and, when feasible, model weights for community use and inspection.[22][23][1]
inclusionAI has pioneered methods for training large-scale models on resource-constrained hardware. The organization reported training costs of approximately $880,000 for their Ling models, representing a 20% cost reduction compared to traditional approaches.[10] This was achieved through:
- Use of domestically produced Chinese chips from Alibaba and Huawei[9]
- Implementation of the EDiT (Elastic Distributed Training) method[14]
- FP8 mixed-precision training throughout the entire process (see the sketch after this list)[12]
- Novel optimization techniques for heterogeneous computing environments[14]
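The rounding step behind FP8 (E4M3) training can be illustrated by scaling a tensor into FP8's representable range and truncating its significand to the format's precision. This sketch simulates the number format only; actual FP8 training kernels, and the Ling recipe in particular, are considerably more involved:

```python
import numpy as np

# Simulate the rounding behind FP8 (E4M3) mixed precision: scale values into
# the FP8 range, round to the nearest E4M3-representable number, and rescale.
# This models the number format only (ignoring subnormals and special
# encodings); real FP8 training pipelines are far more involved.

def quantize_e4m3(x: np.ndarray) -> np.ndarray:
    """Round values to (approximately) the nearest FP8 E4M3 number."""
    finfo_max = 448.0                         # largest finite E4M3 value
    scale = finfo_max / np.max(np.abs(x))     # per-tensor scaling factor
    xs = np.clip(x * scale, -finfo_max, finfo_max)
    m, e = np.frexp(xs)                       # xs = m * 2**e, 0.5 <= |m| < 1
    m = np.round(m * 16) / 16                 # keep 4 significand bits:
    return np.ldexp(m, e) / scale             # implicit leading 1 + 3 mantissa

w = np.random.randn(4).astype(np.float32)
wq = quantize_e4m3(w)
print("original: ", w)
print("fp8-ish:  ", wq)
print("max error:", np.max(np.abs(w - wq)))
```

The appeal of FP8 is that weights, activations, and gradients take half the memory and bandwidth of FP16/BF16 at the cost of the coarser rounding shown above, which training recipes must compensate for with careful scaling.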
All major models and frameworks developed by inclusionAI are released as open-source software, available through platforms including:
- GitHub (primary code repositories)[1]
- Hugging Face (model distribution)[2]
- ModelScope (Chinese model-hosting platform)[4]
inclusionAI's research spans multiple domains critical to AGI development:
- Natural language processing and understanding
- Reinforcement learning for reasoning and agent systems
- Multimodal AI combining vision, language, and speech
- Efficient training methods for resource-constrained environments
- Agent-based systems and multi-agent coordination
- Real-world evaluation and benchmarking
The initiative actively encourages collaboration from researchers, developers, and AI enthusiasts worldwide.[1] inclusionAI maintains:
- Open-source repositories with over 2,000 projects[5]
- Active presence on developer platforms
- Integration with Ant Group's broader AI ecosystem
- Partnerships with academic institutions such as Tsinghua University[22]
- Collaborations with industry researchers
inclusionAI sits within the wider Ant Group technology and open-source ecosystem, which spans databases, privacy computing, and AI infrastructure. As part of Ant Group, inclusionAI's work supports the parent company's broader AI initiatives, including:
- Healthcare AI applications through the AQ app[5]
- Financial services AI solutions[9]
- Integration with Alipay and other Ant Group services[12]
The models developed by inclusionAI are planned for use in industrial AI solutions across healthcare, finance, and other sectors served by Ant Group.[9] Ant Group communicates its open-source and research activities through corporate channels and events such as the INCLUSION·Conference on the Bund in Shanghai, where it shares AI initiatives and related reports.[28][29]
In September 2025, at the INCLUSION·Conference on the Bund, Ant Group highlighted its AI advancements, including open-source contributions from inclusionAI, underscoring the initiative's role in promoting trustworthy AI across industries.[30]
- Ant Group
- Open-source artificial intelligence
- Multimodal learning
- GitHub