| Qinglong V3.0 | |
|---|---|
| **General information** | |
| Developer | Humanoid Robot (Shanghai) Co., Ltd. |
| Country of origin | China |
| Year introduced | 2025 |
| Status | |
| Height | 185 cm |
| Weight | 85 kg |
| Degrees of freedom | 60 |
| Max joint peak torque | 400 N·m |
| Computational power | 400 TOPS |
| Walking speed | Up to 1 m/s |
| Payload capacity | 40 kg |
| Open-source community | OpenLoong |
| GitHub | github.com/loongOpen |
Qinglong V3.0 (Chinese: 青龙, meaning "Azure Dragon" or "Green Dragon") is the third major iteration of the Qinglong open-source humanoid robot platform developed by Humanoid Robot (Shanghai) Co., Ltd. and the National Local Joint Humanoid Robot Innovation Center. Part of the broader OpenLoong open-source project, Qinglong V3.0 is a full-sized, general-purpose humanoid robot standing 185 cm tall with 60 degrees of freedom, designed to serve as a shared research and development platform for embodied AI and robotics applications.
The Qinglong project holds a distinctive position in the global humanoid robotics landscape as one of the most ambitious government-backed open-source robotics initiatives. Unlike proprietary humanoid robots developed by private companies such as Tesla Optimus or Figure AI's Figure series, Qinglong is designed from the ground up as an open platform. Its hardware designs, software control frameworks, and simulation models are publicly available, enabling researchers, developers, and companies worldwide to build upon the technology. The OpenLoong community, operated jointly with the OpenAtom Foundation, serves as the hub for this collaborative ecosystem.
Qinglong V3.0 was unveiled in May 2025, featuring significant upgrades over its predecessors, including an increased number of degrees of freedom (from 43 to 60), a modular architecture supporting flexible combinations of walking, manipulation, and perception modules, and integration with the "Gewu Zhizhi" collaborative AI training platform.
The Qinglong project emerged from a broader Chinese national strategy to establish leadership in humanoid robotics. In October 2023, China's Ministry of Industry and Information Technology (MIIT) released the "Guiding Opinions on the Innovative Development of Humanoid Robots," a policy document that set out ambitious goals for the industry. The guidelines called for establishing a humanoid robot innovation system, achieving breakthroughs in key technologies, and ensuring a reliable domestic supply chain for core components by 2025. By 2027, the ministry stated that humanoid robots should become "an important new engine of economic growth" in China.[1]
The MIIT guidelines specifically proposed the establishment of an open-source community for humanoid robots, encouraging collaboration among developers around the world. This policy directive laid the groundwork for the creation of the National Local Joint Humanoid Robot Innovation Center and the OpenLoong open-source platform.[2]
In December 2025, the MIIT took further steps by establishing a dedicated Humanoid Robot and Embodied Intelligence Standardization Technical Committee, and by March 2026, China released its first national standard system covering the humanoid robot industry's entire lifecycle.[3]
Humanoid Robot (Shanghai) Co., Ltd. is a research and development institution established by leading Chinese industry enterprises with a registered capital of one billion RMB (approximately $140 million USD). In May 2024, it was designated by the MIIT as the National Local Joint Humanoid Robot Innovation Center, giving it a central role in China's humanoid robotics development strategy.[4]
The center is situated in the Zhangjiang Robot Valley in Shanghai's Pudong New Area, a district that has become a major hub for robotics research and development in China. Jiang Lei serves as the chief scientist of the Innovation Center and has been the driving force behind the Qinglong project and the OpenLoong open-source initiative.[5]
The Innovation Center collaborates with more than 300 robotics-related companies, including Agibot, Fourier Intelligence, Shanghai Electric, Huawei, Baidu, and Alibaba. This broad collaboration network reflects the project's goal of building an integrated open-source ecosystem rather than a single company's proprietary product.[6]
The name "Qinglong" (青龙) translates to "Azure Dragon" or "Green Dragon" in Chinese and refers to one of the Four Symbols of Chinese mythology, celestial creatures representing the four cardinal directions. The Azure Dragon represents the east and the season of spring. The robot was named Qinglong because it was unveiled in 2024, the Year of the Dragon in the Chinese zodiac.[7]
Chief Scientist Jiang Lei announced a long-term naming strategy: the Innovation Center plans to introduce a new humanoid robot model annually, with each named after one of the Chinese zodiac animals. This approach is intended to build a humanoid robot innovation community with distinctly Chinese cultural identity while maintaining a consistent annual development cadence.[8]
The original Qinglong humanoid robot was unveiled at the World Artificial Intelligence Conference (WAIC) 2024 in Shanghai on July 4, 2024, marking the debut of what was described as China's first full-sized, general-purpose humanoid robot. The robot was presented alongside the launch of the OpenLoong open-source platform and the release of its related open-source technologies.[9]
The first-generation Qinglong stood 185 cm tall and weighed 80 kg. It featured 43 active degrees of freedom, a maximum joint peak torque of 400 N·m, and computational power of 400 TOPS. At WAIC 2024, the robot demonstrated its dexterous hands by performing tasks such as picking up soft objects like bread without damaging them, showcasing the precision of its manipulation capabilities.[10]
The robot's platform technology was organized around three critical modules. The original Qinglong supported multimodal mobility, perception, interaction, and manipulation, with capabilities including rapid walking, agile obstacle avoidance, and stable movement on inclines and declines.[11]
Shortly after the initial unveiling, the Innovation Center released the Qinglong V2.1 model. A key milestone of this version was that it was assembled entirely from 100 percent Chinese-made components, demonstrating the maturity of China's domestic robotics supply chain. The V2.1 proved that a full-sized humanoid robot could be built without reliance on foreign-sourced parts for its mechanical structure, actuators, sensors, and computing hardware.[12]
The Qinglong V2.5 was unveiled in January 2026. This version incorporated improvements informed by the growing volume of training data collected through the Innovation Center's expanding network of training facilities. By this point, the center had accumulated 5.26 million training data entries across various humanoid robot models, significantly boosting development efficiency and enabling more rapid iteration between versions.[13]
Qinglong V3.0 represents the most significant upgrade in the platform's history. Unveiled alongside the smaller Qinglong Lite variant, V3.0 introduced several major improvements over its predecessors, including 60 degrees of freedom (up from 43), a fully modular architecture, and deep integration with the Gewu Zhizhi AI training platform.
Alongside V3.0, the Innovation Center also introduced Qinglong Lite, a smaller-form-factor variant of the Qinglong platform. While detailed specifications for the Lite version have not been fully published as of early 2026, its introduction reflects the center's strategy of offering multiple form factors to address different use cases and research requirements.[14]
Qinglong V3.0 is a full-sized humanoid robot standing 185 cm (6 ft 1 in) tall and weighing 85 kg (187 lb), slightly heavier than the original 80 kg model due to the addition of more joints and actuators. The robot features a highly biomimetic torso designed to replicate human body proportions and movement patterns.
| Specification | Qinglong V1 (2024) | Qinglong V3.0 (2025) |
|---|---|---|
| Height | 185 cm | 185 cm |
| Weight | 80 kg | 85 kg |
| Degrees of freedom | 43 | 60 |
| Max joint peak torque | 400 N·m | 400 N·m |
| Computational power | 400 TOPS | 400 TOPS |
| Walking speed | Up to 1 m/s | Up to 1 m/s |
| Payload capacity | 40 kg | 40 kg |
| Modular architecture | No | Yes |
| Dexterous hands | 6 DoF per hand | Upgraded multi-DoF |
The V3.0's increase from 43 to 60 degrees of freedom represents a roughly 40% improvement in articulation, enabling more nuanced and human-like movements across the entire body. The additional degrees of freedom are distributed across the torso, arms, and hands, providing greater dexterity for manipulation tasks.[15]
One of the defining features of Qinglong V3.0 is its modular design philosophy. The robot supports flexible combinations of three primary module types: walking (locomotion) modules, working (manipulation) modules, and perception modules.
This modularity allows researchers and developers to customize the robot's configuration for specific applications. For example, a research team focused on manipulation tasks could upgrade the working modules while keeping the standard walking and perception modules, reducing both cost and complexity.[16]
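The module-combination idea can be sketched as a simple configuration data structure. The module names, kinds, and degree-of-freedom counts below are illustrative assumptions, not published OpenLoong interfaces:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of V3.0-style module composition.
# Module names and DoF counts are illustrative, not official specs.

@dataclass(frozen=True)
class Module:
    name: str
    kind: str        # "walking", "working", or "perception"
    dof: int = 0     # degrees of freedom contributed by this module

@dataclass
class RobotConfig:
    modules: list = field(default_factory=list)

    def add(self, module: Module) -> "RobotConfig":
        self.modules.append(module)
        return self

    @property
    def total_dof(self) -> int:
        return sum(m.dof for m in self.modules)

# Example: upgrade the working module while keeping the stock
# walking and perception modules.
config = (RobotConfig()
          .add(Module("biped_legs", "walking", dof=12))
          .add(Module("dexterous_arms", "working", dof=14))
          .add(Module("lidar_camera_head", "perception", dof=2)))

print(config.total_dof)  # 28 in this illustrative configuration
```

Swapping one `Module` entry for another leaves the rest of the configuration untouched, which is the cost-and-complexity benefit the modular architecture targets.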
Qinglong's dexterous hands are developed in collaboration with Inspire Robotics and LinkerBot. Each hand features multiple degrees of freedom, enabling the robot to perform precise grasping operations. At WAIC 2024, the robot demonstrated its ability to pick up delicate objects such as bread without crushing them, highlighting the force control capabilities of the hand system.[17]
The hands combine with the robot's 400 N·m peak joint torque to provide both delicate manipulation and powerful gripping when needed, making the platform suitable for tasks ranging from assembly line work to household object handling.
Qinglong V3.0 integrates a comprehensive multi-sensor perception system for environmental awareness and autonomous navigation.
| Sensor | Type | Specifications |
|---|---|---|
| RoboSense E1R | Solid-state digital LiDAR | 120° × 90° field of view |
| Depth cameras | Stereo depth sensing | 3D environmental modeling |
| Surround cameras | Wide-angle RGB cameras | 360-degree visual coverage |
| 3D vision sensors | Orbbec 3D vision | Depth perception and object recognition |
The RoboSense E1R solid-state digital LiDAR is a particularly notable component. Built on RoboSense's automotive-grade E platform, the E1R provides a 120-degree by 90-degree field of view using a proprietary solid-state architecture. In February 2025, RoboSense delivered its one millionth LiDAR unit, with the milestone unit being an E1R destined for integration into the Qinglong robot.[18]
The combination of LiDAR, depth cameras, and surround cameras supports centimeter-level environmental modeling. In simulated nuclear power plant inspection training, Qinglong demonstrated the ability to autonomously navigate complex pipe systems and avoid obstacles using this sensor suite.[19]
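To make the 120° × 90° field-of-view figure concrete, the following geometric check tests whether a point in the sensor frame falls inside such a horizontal-by-vertical FOV. This is a pure-geometry illustration, not RoboSense's API:

```python
import math

def in_fov(x: float, y: float, z: float,
           h_fov_deg: float = 120.0, v_fov_deg: float = 90.0) -> bool:
    """Check whether a point (sensor frame: x forward, y left, z up)
    lies inside a horizontal-by-vertical field of view such as the
    E1R's 120 deg x 90 deg coverage."""
    azimuth = math.degrees(math.atan2(y, x))                    # left/right
    elevation = math.degrees(math.atan2(z, math.hypot(x, y)))   # up/down
    return (abs(azimuth) <= h_fov_deg / 2.0 and
            abs(elevation) <= v_fov_deg / 2.0)

print(in_fov(5.0, 0.0, 0.0))   # straight ahead -> True
print(in_fov(5.0, 0.0, 6.0))   # ~50 deg above horizon -> False
```

Points behind the sensor (azimuth beyond ±60°) or too far above/below the horizon (elevation beyond ±45°) fall outside coverage, which is why the robot fuses the LiDAR with surround cameras for 360-degree awareness.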
Qinglong uses the EtherCAT (Ethernet for Control Automation Technology) bus system for joint-level communication and control. EtherCAT provides high real-time performance and reliability, enabling the robot's central controller to efficiently manage the movement of all 60 joints with minimal latency. This industrial-grade communication protocol ensures precise synchronization across the robot's actuators during complex whole-body movements.[20]
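EtherCAT control is typically organized as a fixed-period cyclic exchange in which every drive on the segment is serviced once per frame. The sketch below mimics only that cadence in pure Python; no real fieldbus I/O is performed, and the 1 kHz cycle time is an assumption (a common rate for servo loops, not a published Qinglong figure):

```python
import time

# Timing-only sketch of a fixed-cycle joint command loop, mimicking the
# cadence of an EtherCAT process-data exchange. No real bus I/O occurs;
# the 1 kHz cycle time is an assumption.

NUM_JOINTS = 60          # Qinglong V3.0 joint count
CYCLE_S = 0.001          # 1 ms cycle (1 kHz)

def run_cycles(n_cycles: int) -> list:
    targets = [0.0] * NUM_JOINTS
    feedback_log = []
    for _ in range(n_cycles):
        start = time.perf_counter()
        # "Transmit" targets and "receive" feedback in one cycle, as all
        # slaves on an EtherCAT segment are serviced in a single frame.
        targets = [t + 0.001 for t in targets]   # ramp all joint targets
        feedback_log.append(list(targets))       # stand-in for drive echo
        # Sleep out the remainder of the cycle to hold the fixed period.
        elapsed = time.perf_counter() - start
        if elapsed < CYCLE_S:
            time.sleep(CYCLE_S - elapsed)
    return feedback_log

log = run_cycles(5)
print(len(log), len(log[0]))  # 5 cycles, 60 joints each
```

The fixed period is the point: because every joint is updated in the same deterministic frame, whole-body motions stay synchronized, which is what the article means by "minimal latency" and "precise synchronization."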
The robot's onboard computing system delivers 400 TOPS (Tera Operations Per Second) of processing power, enabling real-time artificial intelligence processing, sensor fusion, and autonomous decision-making. This level of computational power supports running large language models and vision-language models for task understanding and planning, while simultaneously processing data from the robot's LiDAR, cameras, and joint encoders.[21]
Qinglong V3.0 runs on the Gewu Zhizhi (格物致知) system, a collaborative training platform for large AI models and robots. The name "Gewu Zhizhi" is a classical Chinese philosophical concept meaning "investigating things to acquire knowledge," drawn from the Confucian text Great Learning (大学). The platform has been fully open-sourced through the OpenLoong community.[22]
The Gewu embodied intelligent simulation platform is built on the Unity RL Playground reinforcement learning framework. It supports full-process automation from simulation training to real hardware deployment and integrates advanced reinforcement learning frameworks with multimodal motion control technology.[23]
The Gewu platform's key capability is serving as a low-cost, high-efficiency proving ground for intelligent algorithms, accelerating the loop from algorithmic innovation to industrial application.[24]
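The simulation-train-then-deploy loop that the platform automates can be sketched with a toy example. The environment, "training" procedure, and deployment step below are stand-ins, not the actual Unity RL Playground interfaces:

```python
import random

# Toy sketch of a simulation-train / hardware-deploy loop of the kind
# the Gewu platform automates. Environment, policy, and reward are
# illustrative stand-ins.

random.seed(0)

def simulate(policy_gain: float) -> float:
    """Noisy reward, highest when the gain is near a hidden optimum (0.7)."""
    noise = random.uniform(-0.05, 0.05)
    return -(policy_gain - 0.7) ** 2 + noise

def train_in_sim(iterations: int = 200) -> float:
    """Random-search 'training': keep the best-scoring gain seen so far."""
    best_gain, best_reward = 0.0, float("-inf")
    for _ in range(iterations):
        gain = random.uniform(0.0, 1.0)
        reward = simulate(gain)
        if reward > best_reward:
            best_gain, best_reward = gain, reward
    return best_gain

def deploy(gain: float) -> str:
    """Stand-in for pushing the trained parameters to real hardware."""
    return f"deployed controller with gain={gain:.2f}"

gain = train_in_sim()
print(deploy(gain))
```

Real pipelines replace the random search with reinforcement learning and the toy reward with physics simulation, but the shape of the loop, iterate cheaply in simulation, then push the result to hardware, is the same.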
By early 2026, the Qinglong ecosystem had accumulated over 7.19 million data points through various simulation exercises and real-world robot operations. This data infrastructure is one of the key competitive advantages of the OpenLoong project, as the sheer volume of humanoid robot training data enables more rapid iteration and improvement of control algorithms.[25]
The OpenLoong open-source community was launched in June 2024 as what was described as the world's first open-source community platform dedicated to humanoid robots. The project is operated jointly by Humanoid Robot (Shanghai) Co., Ltd., the Shanghai Humanoid Robotics Manufacturing Innovation Center, and the OpenAtom Foundation.[26]
The community's official website at openloong.org.cn serves as the central hub for documentation, source code, discussion forums, and developer resources. OpenLoong positions itself as the leading humanoid robot open-source community in China, dedicated to promoting the development of embodied intelligence technology and fostering communication and cooperation among developers.[27]
The OpenLoong-Hardware repository (github.com/loongOpen/OpenLoong-Hardware) provides the open-source hardware content for the Qinglong humanoid robot.
All design files and drawings are released under open-source licenses, allowing free use, modification, and optimization. This level of hardware openness is unusual in the humanoid robotics industry, where most manufacturers keep their mechanical designs proprietary.[28]
The OpenLoong-Dyn-Control repository (github.com/loongOpen/OpenLoong-Dyn-Control) provides the whole-body dynamics control software package for the Qinglong robot. The software framework is based on two primary control methods: Model Predictive Control (MPC) and Whole-Body Control (WBC).
The control framework runs on the MuJoCo physics simulation platform and includes three built-in motion demonstrations: walking, jumping, and blind obstacle stepping (navigating obstacles without visual guidance). The repository bundles several key dependencies, including the MuJoCo simulation engine, the Pinocchio dynamics library, Eigen for linear algebra, the Quill logging tool, the GLFW graphics library, and the JsonCpp parsing library.[29]
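The core idea behind whole-body control can be illustrated with a tiny example: find joint velocities that realize a desired end-effector velocity through the Jacobian relation J·q̇ = v. The 2×2 toy below is not the OpenLoong solver, which handles the full 60-DoF robot with task priorities and constraints:

```python
# Minimal illustration of the whole-body-control (WBC) idea: solve
# J @ qdot = v for joint velocities qdot given a desired end-effector
# velocity v. For a square, invertible 2x2 Jacobian this reduces to
# qdot = J^{-1} v. The Jacobian values below are illustrative.

def solve_2x2(J, v):
    """Solve J qdot = v for a 2x2 J by explicit inversion."""
    (a, b), (c, d) = J
    det = a * d - b * c
    if abs(det) < 1e-12:
        raise ValueError("singular Jacobian")
    return [( d * v[0] - b * v[1]) / det,
            (-c * v[0] + a * v[1]) / det]

# Two-link arm Jacobian at some configuration (illustrative numbers).
J = [[1.0, 0.5],
     [0.0, 1.0]]
v_desired = [0.2, 0.1]        # desired end-effector velocity (m/s)

qdot = solve_2x2(J, v_desired)
print(qdot)                   # joint velocities achieving v_desired

# Verify: mapping qdot back through J reproduces the desired velocity.
v_check = [J[0][0]*qdot[0] + J[0][1]*qdot[1],
           J[1][0]*qdot[0] + J[1][1]*qdot[1]]
print(v_check)
```

MPC sits one level above this: it optimizes a short horizon of future states (e.g., center-of-mass trajectory), and WBC then resolves each instant's commands into joint-space torques or velocities.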
Additional repositories under the loongOpen GitHub organization include:
| Repository | Description |
|---|---|
| OpenLoong-Dyn-Control | MPC/WBC whole-body dynamics control framework |
| OpenLoong-Hardware | Hardware system designs and specifications |
| OpenLoong-Framework | Core framework and infrastructure components |
| OpenLoong | Main project repository and documentation |
MuJoCo (Multi-Joint dynamics with Contact) serves as the primary simulation platform for the OpenLoong project. MuJoCo is a physics engine originally developed by Emo Todorov; it was acquired by Google DeepMind in October 2021 and subsequently released as open-source software under the Apache 2.0 license.[30]
The OpenLoong-Dyn-Control repository provides MuJoCo-compatible robot models of Qinglong (referred to as "AzureLoong" in the simulation environment), allowing researchers to test and develop control algorithms in simulation before deploying them on physical hardware. The simulation environment ships with built-in demonstration modes, including walking, jumping, and blind obstacle stepping.
Third-party developers have also created secondary development projects based on the OpenLoong simulation models, using them for embodied intelligence training experiments.[31]
The Shanghai Innovation Center has established a 3 million yuan open fund to support community development, providing an average of 300,000 to 500,000 yuan (approximately $42,000 to $70,000 USD) per project. This financial support is intended to incentivize community innovation and attract developers to contribute to the OpenLoong ecosystem.[32]
The Qinglong platform has generated peer-reviewed academic publications. A notable paper, "Loong: An Open-Source Platform for Full-Size Universal Humanoid Robot Toward Better Practicality," was published in the MDPI journal Biomimetics in November 2025. The research, jointly supported by the National Key Research and Development Program of China (Grant 2024YFB4711100) and the National Natural Science Foundation of China (Grant 52205035), detailed the robot's biomimetic joint design, multi-level control architecture, and multi-master high-speed real-time communication mechanism.[33]
Experimental results in the paper demonstrated Qinglong's ability to traverse complex terrains including 13 cm steps and 20-degree slopes, as well as its competence in object manipulation and transportation tasks such as identifying switch positions, performing cleaning operations on heat sinks, and handling workpieces on industrial manufacturing production lines.[34]
A companion paper, "DT-Loong: A Digital Twin Simulation Framework for Scalable Data Collection and Training of Humanoid Robots," described a digital twin system combining a high-fidelity simulation environment with a full-scale virtual replica of the Qinglong robot, using optical motion capture and human-to-humanoid motion re-targeting technologies for training embodied AI models.[35]
Starting in October 2025, the Innovation Center expanded beyond its Shanghai headquarters by establishing subsidiary training facilities at eight locations across China. The purpose of these regional facilities is to collect training data specialized for each region's core industries. By early 2026, the network consisted of nine facilities total:[36]
| Location | Status (early 2026) |
|---|---|
| Shanghai (headquarters) | Operational |
| Beijing | Operational |
| Henan Province | Operational |
| Jiangsu Province | Operational |
| Hubei Province | Under construction |
| Zhejiang Province | Under construction |
| Guangdong Province | Under construction |
| Chongqing | Under construction |
| Shandong Province | Under construction |
The Shanghai Innovation Center serves as the network's leader, collecting the most data and handling the most experimental industrial applications. As of April 2026, a total of 395 humanoid robots (spanning 13 different models) had been deployed across the four operational training grounds, collecting 7.19 million data points through various simulation exercises.[37]
The Innovation Center's training facilities house not only Qinglong robots but also humanoid robots from other manufacturers, creating a heterogeneous training environment. The center currently houses 133 humanoid robots across 13 models at its Shanghai facility alone. This approach reflects the open-source philosophy of the project: by training multiple robot platforms in the same environment and sharing data across the OpenLoong community, the ecosystem benefits all participating companies and researchers.[38]
Qinglong is designed as a collaborative platform integrating technologies from many of China's leading technology companies. Rather than being built entirely in-house, the robot integrates general-purpose and open-source technologies from across the Chinese tech ecosystem.
| Partner | Contribution |
|---|---|
| Huawei | AI foundation models ("brain") |
| Baidu | AI foundation models ("brain") |
| Alibaba | AI foundation models ("brain") |
| RoboSense | E1R solid-state LiDAR perception |
| Orbbec | 3D vision sensors |
| Inspire Robotics | Dexterous hand technology |
| LinkerBot | Dexterous hand technology |
| JAKA Robotics | Integrated joint technology |
| Agibot | Collaborative development partner |
| Fourier Intelligence | Collaborative development partner |
| Shanghai Electric | Collaborative development partner |
This multi-company collaboration model is central to the Qinglong project's identity. By drawing AI capabilities from Huawei, Baidu, and Alibaba, and sensor and actuator technologies from specialized robotics companies, Qinglong serves as an integration platform for China's broader robotics supply chain.[39]
Qinglong occupies a unique niche as a government-backed, open-source humanoid robot platform. Most comparable robots are proprietary systems developed by private companies.
| Robot | Developer | Height | Weight | DoF | Open source | Primary purpose |
|---|---|---|---|---|---|---|
| Qinglong V3.0 | Humanoid Robot (Shanghai) | 185 cm | 85 kg | 60 | Yes (full stack) | Research platform / industrial |
| Tesla Optimus | Tesla | 173 cm | 57 kg | 28+ | No | Industrial / consumer |
| Figure 03 | Figure AI | 170 cm | 70 kg | 40+ | No | Industrial / enterprise |
| Atlas | Boston Dynamics | 150 cm | 89 kg | 28 | No | Research |
| Unitree H1 | Unitree Robotics | 180 cm | 47 kg | 26 | Partially | Research / commercial |
| Tiangong | Beijing Innovation Center | 163 cm | 43 kg | 42 | Yes | Research platform |
Qinglong's 60 degrees of freedom in V3.0 place it among the most articulated humanoid robots in the world. Its full-stack open-source approach (covering hardware, software, and simulation) distinguishes it from most competitors. The closest comparable open-source project in China is Tiangong, developed by the Beijing Humanoid Robot Innovation Center, which open-sourced its design in late 2024.[40]
Qinglong has been demonstrated in several application scenarios, including simulated nuclear power plant inspection, industrial workpiece handling, and household object manipulation.
The Innovation Center has outlined several directions for future development of the Qinglong platform and the broader OpenLoong ecosystem, including the annual release of a new zodiac-named humanoid model and continued expansion of the regional training-facility network.
The rapid development cadence of the Qinglong project, with new models appearing roughly every four months, reflects both the strength of the collaborative development model and the substantial government and institutional backing behind the initiative.