| Attribute | Value |
|---|---|
| Developer | GigaAI |
| Type | Wheeled humanoid robot |
| Country of origin | China |
| Unveiled | November 2025 |
| Status | Mass production (deliveries began February 2026) |
| Height | 160 cm (5 ft 3 in) |
| Weight | 64 kg (141 lb) |
| Degrees of freedom | 28 (excluding end-effectors) |
| Arm DOF | 7 per arm |
| Arm payload | 5 kg (11 lb) per arm |
| Cameras | 5 RGB + 4 RGBD |
| LiDAR | 360-degree |
| Max speed | 2.2 m/s (7.9 km/h) |
| Battery life | ~4 hours |
| AI system | GigaBrain |
| Price | ~US$160,000 |
| Website | giga-ai.com |
The GigaAI Maker H01 is a wheeled humanoid robot developed by GigaAI, a Chinese artificial intelligence and robotics startup headquartered in Beijing. Officially unveiled in November 2025, the Maker H01 is designed as what the company calls a "Physical AGI Native Platform," combining a wheeled mobility base with a dual-arm humanoid upper body to perform tasks across industrial, commercial, and domestic environments. The robot is built around GigaAI's proprietary full-stack technology ecosystem, which includes the GigaBrain embodied intelligence foundation model and the GigaWorld world model platform.
The Maker H01 stands 160 cm tall, weighs 64 kg, and features 28 degrees of freedom with dual 7-DOF bionic arms capable of handling 5 kg payloads. Its perception system includes five RGB cameras, four RGBD cameras, and a 360-degree LiDAR sensor, enabling comprehensive environmental awareness. GigaAI began large-scale deliveries of the Maker H01 in February 2026, just two months after the robot's official unveiling, with the Hubei Humanoid Robot Innovation Center receiving the first unit.[1][2]
GigaAI (also known as Giga Vision) was founded in June 2023 by Huang Guan, a doctoral graduate of Tsinghua University's Department of Automation.[3] The company describes itself as China's first startup dedicated to world model research in physical AI. Despite starting with a team of roughly a dozen people, GigaAI has grown rapidly and positions itself as the "OpenAI of the physical world," aiming to bridge the gap between advanced AI foundation models and real-world physical systems.[4]
The company's core mission centers on achieving what it calls "model-to-body convergence," the idea that large-scale AI models trained on both virtual and real-world data can be translated into precise physical actions through purpose-built robot hardware. GigaAI operates across two primary domains: autonomous driving and embodied intelligence.
Huang Guan's career spans several major milestones in Chinese AI and computer vision research. He completed his undergraduate studies at the Department of Automation at Huazhong University of Science and Technology in 2009, then earned a master's degree at the Institute of Automation of the Chinese Academy of Sciences before completing his doctorate at Tsinghua University. During his doctoral studies, he interned at Microsoft Research Asia, where he worked alongside prominent researchers including He Kaiming and Sun Jian.[3]
In 2016, Huang Guan joined Horizon Robotics as a visual perception technology lead, where he led the creation of WebFace260M, which was at the time the world's largest face recognition dataset. He later co-founded PhiGent Robotics (also known as Jianzhi Robotics), where he helped develop the BEV (Bird's Eye View) series of models for autonomous driving perception.[3]
GigaAI's leadership team also includes several other notable figures:
| Name | Role | Background |
|---|---|---|
| Huang Guan | Founder and CEO | PhD, Tsinghua University; former Horizon Robotics visual perception lead; PhiGent Robotics co-founder |
| Zhu Zheng | Co-founder, Chief Scientist | PhD, Chinese Academy of Sciences Institute of Automation (2019); postdoctoral researcher at Tsinghua University |
| Sun Shaoyan | Co-founder | Former Alibaba Cloud director; former Horizon Robotics data product general manager |
| Mao Jiming | Partner, VP of Engineering | Former Baidu and Yingche architect; led Baidu Apollo simulation technology development |
The engineering team includes model architects from major Chinese internet firms and recipients of Huawei's "Genius Youth" program, as well as hardware R&D specialists from leading domestic robotics companies with experience deploying thousands of humanoid robots.[4]
GigaAI has raised approximately 1 billion yuan (roughly US$140 million) across multiple funding rounds on a remarkably compressed timeline. The company completed four consecutive funding rounds between August and December 2025, with Series A financing alone totaling approximately 500 million yuan.[4]
| Round | Date | Amount | Lead Investors |
|---|---|---|---|
| Seed | ~2023 | Tens of millions of yuan | Chentao Capital |
| Angel / Angel+ | September 2024 | ~50 million yuan | BAIC Capital, MiraclePlus, Huamin Investment |
| Pre-A | August 2025 | Hundreds of millions of yuan | Guozhong Capital |
| Pre-A+ | August 2025 | Hundreds of millions of yuan | CICC Capital, Guangzhou Venture Capital |
| Series A1 | November 2025 | Hundreds of millions of yuan | Huawei Habo Investment, Huakong Fund |
| Series A2 | December 2025 | 200 million yuan | Fortune Capital, Huakong Fund |
| Pre-B | March 2026 | ~1 billion yuan | SMIC Juyuan, CICC Capital, Huaqiang Capital |
The Series A1 round in November 2025, co-led by Huawei Habo Investment (Huawei's investment arm) and Huakong Fund, was widely reported as a signal of Huawei's strategic commitment to the physical AI sector.[5][6] Additional notable investors across rounds include Shanghai Pudong Science and Technology Investment, Linxin Capital, Xingyuan Capital, Wanlin International, Changjiang Capital, Optics Valley Industrial Investment, and several state-backed investment platforms.[4]
The Maker H01 was designed from the ground up as what GigaAI calls a "physical AGI native body." Rather than adapting existing robot platforms for AI integration, GigaAI engineered the Maker H01's hardware architecture specifically to serve as an optimal platform for embodied intelligence models. The company describes its approach as a "Foundation Model, Body, Scenario" triad, where the robot body is purpose-built to execute the outputs of large-scale AI models in real-world settings.[2][4]
The decision to use a wheeled base rather than bipedal legs reflects a pragmatic engineering choice. Wheeled platforms offer greater stability, faster locomotion speeds, and lower energy consumption compared to legged robots, making them better suited to structured indoor environments such as factories, warehouses, and commercial spaces. The trade-off is reduced terrain versatility; the Maker H01 is optimized for flat surfaces rather than stairs or uneven outdoor terrain.
The Maker H01 stands 160 cm (5 feet 3 inches) tall and weighs 64 kg (141 pounds). Its body is constructed from a combination of aluminum alloy and ABS composite materials, balancing structural strength with weight reduction. The robot's overall form factor resembles a humanoid upper body mounted on an omnidirectional wheeled chassis, with the torso, head, and arms providing human-like manipulation capabilities while the wheeled base enables rapid, smooth navigation.[1][7]
The omnidirectional all-wheel-drive chassis allows the robot to move in any direction without needing to rotate first, a significant advantage in tight or crowded spaces. The Maker H01 achieves a maximum speed of 2.2 m/s (approximately 7.9 km/h or 4.9 mph), making it one of the faster wheeled humanoid platforms available.[7]
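Omnidirectional bases of this kind are commonly built with mecanum or similar wheels, whose inverse kinematics map a desired body velocity directly to per-wheel speeds without any pre-rotation. The sketch below illustrates the standard four-mecanum-wheel formulation; the wheel radius and chassis geometry are illustrative assumptions, not published Maker H01 dimensions, and GigaAI has not disclosed its specific wheel arrangement.

```python
# Inverse kinematics for a generic four-mecanum-wheel omnidirectional base:
# given a desired body twist (vx, vy, wz), compute per-wheel angular speeds.
# Geometry values are illustrative placeholders, not Maker H01 specs.

def mecanum_wheel_speeds(vx, vy, wz, r=0.075, half_length=0.25, half_width=0.20):
    """Return (front_left, front_right, rear_left, rear_right) in rad/s.

    vx: forward velocity (m/s), vy: leftward velocity (m/s),
    wz: yaw rate (rad/s), r: wheel radius (m).
    """
    k = half_length + half_width  # lever arm for rotation
    fl = (vx - vy - k * wz) / r
    fr = (vx + vy + k * wz) / r
    rl = (vx + vy - k * wz) / r
    rr = (vx - vy + k * wz) / r
    return fl, fr, rl, rr

# Pure sideways motion (vy only): the wheels counter-rotate in diagonal pairs,
# so the base translates laterally without turning first.
speeds = mecanum_wheel_speeds(0.0, 0.5, 0.0)
```

Because any (vx, vy, wz) combination is reachable, the controller can blend translation and rotation in one command, which is what allows such a base to maneuver in tight or crowded spaces.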
The Maker H01's upper body features dual bionic arms, each with 7 degrees of freedom, providing a wide range of motion that approximates human arm movement. The arms have a vertical reach of 0 to 2 meters and can handle payloads of up to 5 kg per arm (with some sources citing up to 7.5 kg for certain configurations). The arms are adaptable and can be fitted with either standard grippers or five-fingered dexterous hands, depending on the application requirements.[1][7]
The 28 total degrees of freedom (excluding end-effectors) are distributed across the arms, torso, and chassis, enabling coordinated whole-body movement. The GigaBrain control system manages the arms, torso, and chassis in an end-to-end manner, allowing the robot to simultaneously navigate, reach, and manipulate objects in a coordinated fashion.
| Category | Parameter | Value |
|---|---|---|
| Physical | Height | 160 cm (5 ft 3 in) |
| Physical | Weight | 64 kg (141 lb) |
| Physical | Materials | Aluminum alloy + ABS composite |
| Physical | IP rating | IP20 |
| Mobility | Total degrees of freedom | 28 (excluding end-effectors) |
| Mobility | DOF per arm | 7 |
| Mobility | Chassis type | Omnidirectional all-wheel-drive |
| Mobility | Maximum speed | 2.2 m/s (7.9 km/h / 4.9 mph) |
| Mobility | Vertical reach | 0 to 2 meters |
| Manipulation | Arm payload | 5 kg (11 lb) per arm |
| Manipulation | Fingers per hand | 5 (with dexterous hand option) |
| Manipulation | End-effector options | Grippers or dexterous hands |
| Sensors | RGB cameras | 5 (head, chest, hands) |
| Sensors | RGBD cameras | 4 (head, chest, hands) |
| Sensors | LiDAR | 360-degree |
| Power | Battery life | ~4 hours |
| Computing | Control latency | 350 ms |
| Connectivity | Interfaces | Bluetooth, Ethernet, WiFi |
The Maker H01 features one of the more comprehensive perception systems among wheeled humanoid robots. The nine-camera system (five RGB cameras and four RGBD depth cameras) is distributed across the robot's head, chest, and hands, providing overlapping visual coverage for both navigation and manipulation tasks. The RGB cameras handle color-based object recognition and scene understanding, while the RGBD cameras add depth information critical for spatial reasoning and grasping.[1][7]
The 360-degree LiDAR sensor mounted on the chassis provides continuous distance measurements in all directions, enabling robust obstacle detection and avoidance even in cluttered or dynamic environments. The combination of vision and LiDAR data gives the robot a rich multi-modal perception capability that supports both autonomous navigation and fine-grained manipulation tasks.
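The depth channel of an RGBD camera supports grasping by letting each pixel be lifted into a metric 3D point. A minimal sketch of the standard pinhole deprojection used for this is shown below; the intrinsic parameters are illustrative assumptions, not published Maker H01 camera specifications.

```python
# Deproject an RGBD pixel (u, v, depth) into a 3D point in the camera frame
# using the standard pinhole model. The intrinsics (fx, fy, cx, cy) below are
# illustrative placeholders, not Maker H01 camera parameters.

def deproject(u, v, depth_m, fx, fy, cx, cy):
    """Return (x, y, z) in metres in the camera frame."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return (x, y, depth_m)

# Example: a pixel at the principal point maps straight down the optical axis.
point = deproject(u=320, v=240, depth_m=1.5,
                  fx=600.0, fy=600.0, cx=320.0, cy=240.0)
# point == (0.0, 0.0, 1.5)
```

Points recovered this way from the hand and chest cameras can then be fused with LiDAR returns in a common body frame, which is the basis of the multi-modal perception described above.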
The Maker H01's battery provides approximately four hours of continuous operation, sufficient for a typical shift in commercial or service applications. The robot supports multiple connectivity options including Bluetooth, Ethernet, and WiFi, allowing it to communicate with fleet management systems, cloud services, and other networked devices.
The Maker H01 is controlled by GigaBrain, GigaAI's proprietary vision-language-action model (VLA) designed specifically for embodied intelligence. GigaBrain integrates visual perception, natural language processing, and action generation into a unified model, enabling the robot to understand spoken or written instructions, perceive its environment through cameras and sensors, and generate appropriate physical actions.[8]
GigaBrain-0, the initial version released as open source, was trained on approximately 1,000 hours of real-world robot data combined with synthetic data generated by the GigaWorld world model platform. The model processes both RGB and depth information (RGBD input modeling) and employs an "Embodied Chain-of-Thought" reasoning framework that enables the system to reason about spatial geometry, object states, and long-horizon task dependencies during execution.[8]
A key innovation of GigaBrain is its significant reduction of reliance on expensive real-world robot training data. By leveraging world model-generated synthetic data, the system can learn to perform tasks with far less real-world demonstration data than comparable VLA models. GigaBrain-0 supports multiple robot platforms (including the AgileX Cobot Magic and Agibot G1) through embodiment-specific parameters, indicating that the model is designed to generalize beyond the Maker H01 hardware.[8]
GigaBrain-0.1, an enhanced version trained on 10,000 hours of data (ten times the original), achieved first place in February 2026 on the RoboChallenge leaderboard, the world's largest real-machine evaluation competition for embodied intelligence, surpassing competing models including Pi0.5.[4][9]
GigaBrain-0.5M represents another milestone as the world's first embodied foundation model to incorporate world-model-based reinforcement learning. According to GigaAI, this approach yields a task success rate roughly 30 percentage points higher than mainstream competitors (approximately 85% on average versus a typical 50 to 55%), along with a 10x improvement in inference speed.[4]
GigaWorld is GigaAI's world model platform, functioning as a data engine and simulator for training embodied AI systems. The platform was described in the research paper "GigaWorld-0: World Models as Data Engine to Empower Embodied AI," published on arXiv in November 2025.[10]
GigaWorld-0 integrates two synergistic components:
- **GigaWorld-0-Video**: A large-scale video generation system that produces diverse, texture-rich, and temporally coherent embodied sequences. It provides fine-grained control over appearance, camera viewpoint, and action semantics, generating realistic training data without requiring physical robot operation.
- **GigaWorld-0-3D**: A component that combines 3D generative modeling, 3D Gaussian Splatting reconstruction, physically differentiable system identification, and executable motion planning to ensure geometric consistency and physical realism in the generated data.
Together, these components enable the scalable synthesis of embodied interaction data that is visually realistic, spatially coherent, physically plausible, and aligned with natural language instructions. Training at scale is supported by the GigaTrain framework, which uses FP8 precision and sparse attention mechanisms to reduce memory and compute requirements.[10]
The practical impact of GigaWorld is that VLA models like GigaBrain, when trained on GigaWorld-generated data, achieve strong real-world performance on physical robots with significantly reduced need for costly real-world data collection. The model demonstrates superior generalization across variations in object appearances (textures, colors), placements, and camera viewpoints.[8][10]
GigaWorld-Policy represents an evolution of the platform's architecture, introducing an action-centered paradigm that replaces the traditional WA (World-Action) architecture. According to GigaAI, this approach delivers a 10x inference speed improvement and a 30% higher task success rate compared to conventional methods, while reducing overall training time.[4]
Beyond robotics, GigaAI's technology platform extends into autonomous driving through the DriveDreamer family of world models. DriveDreamer is a pioneering system for controllable driving video generation, derived entirely from real-world driving scenarios. DriveDreamer4D, presented at CVPR 2025, advances this work further. These autonomous driving solutions have been deployed commercially with partners including Li Auto and ECARX, demonstrating the cross-domain applicability of GigaAI's world model technology.[4][11]
The Maker H01 targets five primary market verticals:
| Sector | Application Examples |
|---|---|
| Automotive manufacturing | Assembly assistance, parts handling, quality inspection |
| 3C electronics | Component manipulation, precision assembly tasks |
| Warehousing and logistics | Inventory management, order picking, material transport |
| Hospitality and high-end guidance | Guest reception, concierge services, navigation assistance |
| Home and consumer | Household tasks, personal assistance, education |
The robot's combination of wheeled mobility and dual-arm manipulation makes it particularly well suited for structured indoor environments where tasks involve picking, placing, organizing, and transporting objects. Demonstrated capabilities include coffee preparation, clothes folding, desktop cleaning, item organization, and box moving.[4]
GigaAI's 2026 production target is to deliver thousands of Maker H01 units across these application scenarios, with additional native body models adapted to different requirements planned for release during the year.[4]
In October 2025, GigaAI partnered with the Hubei Humanoid Robot Innovation Center to establish what both organizations describe as the "world's first world-model-driven virtual-real embodied intelligence data factory." This facility serves as a training ground for embodied AI systems, establishing a closed-loop system that covers robot body control, data collection, data processing and enhancement, model training, and functional iteration.[2]
The Hubei center received the first production unit of the Maker H01 in February 2026, marking the beginning of GigaAI's large-scale delivery phase. The rapid two-month turnaround from unveiling to delivery highlighted the company's engineering and supply chain integration capabilities.[2]
The collaboration has already produced research results. GigaBrain-0.1, developed through the joint efforts of the Hubei center and GigaAI, achieved a top-ranking position on the RoboChallenge evaluation platform, validating the effectiveness of the data factory approach for training embodied AI models.[9]
Huawei's investment through Habo Investment in GigaAI's Series A1 round represents more than financial backing. Huawei has significant strategic interests in the physical AI space, and the investment signals the telecommunications giant's intention to build an ecosystem around embodied intelligence and world model technologies. GigaAI's team includes recipients of Huawei's competitive "Genius Youth" program, indicating a talent pipeline between the two organizations.[5][6]
The Maker H01 enters a crowded and rapidly evolving Chinese humanoid robot market. China has become one of the world's most active regions for humanoid robot development, with dozens of companies pursuing both bipedal and wheeled form factors.
| Robot | Developer | Key Differentiator |
|---|---|---|
| Maker H01 | GigaAI | World model-powered AI, full-stack integration |
| Walker S2 | UBTECH | Bipedal, mass-produced, industrial deployments |
| A2 | Agibot | Backed by SAIC Motor, factory-focused |
| D9 | Pudu Robotics | Service and hospitality focus |
| CLOi | LG Electronics | Height-adjustable torso, consumer-oriented |
| HMND 01 Alpha | Humanoid Inc. | Warehouse and retail logistics |
| GR-2 | Fourier Intelligence | Rehabilitation and healthcare focus |
Within the Chinese market specifically, GigaAI differentiates itself through its emphasis on world models and the integrated software-hardware stack. While companies like Unitree Robotics, UBTECH, and Agibot compete primarily on hardware capabilities and manufacturing scale, GigaAI's approach centers on the AI foundation model layer, positioning the Maker H01 as a platform for deploying increasingly capable AI models rather than a standalone hardware product.[4]
The Maker H01's wheeled design places it in a different category from fully bipedal humanoids like the Tesla Optimus, Figure 02, and Unitree H1. Bipedal robots offer greater terrain versatility and a more human-like physical presence, but at the cost of higher complexity, slower movement, and greater energy consumption. For the structured indoor environments that constitute the Maker H01's target market, the wheeled platform's advantages in speed (2.2 m/s versus typical bipedal walking speeds of 1.0 to 1.5 m/s), stability, and battery efficiency provide practical benefits that outweigh the mobility limitations.
GigaAI's approach of combining a wheeled base with a sophisticated AI stack reflects a broader trend in the Chinese robotics industry, where several companies have concluded that wheeled or hybrid platforms may reach commercial viability faster than fully bipedal systems for indoor service and industrial applications.
GigaAI has contributed several open-source projects and peer-reviewed publications to the robotics and AI research community:
| Project | Description | Venue / Platform |
|---|---|---|
| GigaBrain-0 | World model-powered VLA foundation model | Open source (Apache 2.0), GitHub, Hugging Face |
| GigaBrain-0.1 | Enhanced VLA model (10,000 hours training data) | RoboChallenge #1 ranking (February 2026) |
| GigaBrain-0.5M | First embodied model using world-model-based RL | Research publication |
| GigaWorld-0 | World model data engine for embodied AI | arXiv (November 2025) |
| DriveDreamer | Real-world-driven world model for autonomous driving | ECCV 2024 |
| DriveDreamer4D | 4D world model for driving scenarios | CVPR 2025 |
The open-source release of GigaBrain-0 under the Apache 2.0 license, with model checkpoints available on Hugging Face, allows external researchers and developers to build upon GigaAI's VLA architecture. The model supports deployment through a server-client architecture, with example implementations provided for AgileX robot platforms.[8]
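In server-client VLA deployments of this general kind, the robot-side client packages each observation (camera frames, proprioception, and a language instruction) into a request, and the model server returns a short chunk of actions to execute. The sketch below illustrates that loop; the payload fields and function names are assumptions for illustration, not the published GigaBrain-0 API, and the server here is a local stand-in rather than a real model.

```python
import json

# Hypothetical sketch of the observation -> action loop used in
# server-client VLA deployments. Field names and the stub server are
# illustrative assumptions, not the published GigaBrain-0 interface.

def build_request(instruction, rgb_frames, proprio):
    """Serialize one control-step observation for the policy server."""
    return json.dumps({
        "instruction": instruction,
        "images": rgb_frames,        # e.g. base64-encoded JPEG strings
        "proprioception": proprio,   # joint positions, gripper state
    })

def fake_policy_server(request_json):
    """Local stand-in for the remote model: returns a short action chunk."""
    req = json.loads(request_json)
    n_joints = len(req["proprioception"])
    # A real VLA server would return a sequence of target joint deltas;
    # the stub returns a zero-motion chunk of 4 steps.
    return [[0.0] * n_joints for _ in range(4)]

request = build_request("pick up the cup", ["<jpeg>"], [0.1, -0.2, 0.3])
actions = fake_policy_server(request)
# actions is a chunk of 4 steps, one delta per joint
```

Returning a multi-step action chunk per round trip is a common way such architectures amortize network and inference latency while the robot executes at a higher control rate.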
GigaAI has outlined several goals for 2026 and beyond. The company plans to deliver thousands of Maker H01 units across its five target market verticals during 2026, while also releasing additional robot body designs adapted to different application scenarios. The Pre-B funding round of approximately 1 billion yuan closed in March 2026, providing capital for scaling production and continued research.[4]
The company's longer-term vision extends beyond individual robot products to building a comprehensive "Physical AGI" ecosystem where world models, foundation models, and purpose-built robot bodies work together in a self-improving cycle. Data collected by deployed Maker H01 robots feeds back into the GigaWorld training pipeline, improving the GigaBrain models, which in turn make deployed robots more capable. This virtuous cycle of data collection, model improvement, and deployment is central to GigaAI's strategy for achieving what it calls the physical world's "ChatGPT moment."[2][4]