The AgiBot X2 Ultra is a compact bipedal humanoid robot developed by AgiBot, a Chinese robotics company headquartered in Shanghai. It is the flagship variant of the AgiBot X2 series within AgiBot's Lingxi (灵犀) product line, offering enhanced computing power, a significantly richer sensor suite, autonomous charging capabilities, and full support for secondary development. Standing 1.31 meters tall and weighing approximately 39 kilograms, the X2 Ultra targets researchers, system integrators, and commercial operators who need a capable humanoid platform for embodied AI research, autonomous navigation, human-robot interaction, and always-on service deployments such as exhibition guiding and reception.
First unveiled alongside the standard X2 on March 11, 2025, the X2 Ultra shares the same kinematic skeleton and industrial design as the base model but adds an NVIDIA Jetson Orin NX AI accelerator (157 TOPS), 3D LiDAR, an RGB-D depth camera, front binocular and rear RGB cameras, 4G/5G cellular connectivity, an optional autonomous charging dock, and compatibility with AgiBot's OmniHand dexterous manipulation system. These additions transform the X2 from an expressive interaction-focused humanoid into a full-stack research and deployment platform capable of SLAM-based autonomous navigation, on-device deep learning inference, and extended unattended operation.[1][2]
AgiBot (Chinese: 智元机器人) was co-founded in February 2023 by Deng Taihua (CEO) and Peng Zhihui (CTO), both former Huawei engineers. The company grew rapidly, delivering over 5,100 humanoid robots in 2025 to become the world's largest humanoid robot shipper by volume, surpassing Western competitors such as Tesla (Optimus), Figure AI, and Agility Robotics by a wide margin. AgiBot's investors include BYD, Tencent, HongShan Capital, Hillhouse Investment, and LG Electronics. The company is planning a Hong Kong IPO for 2026 with a target valuation of up to US$6.4 billion.[3][4]
AgiBot organizes its product portfolio into several series. The Lingxi (灵犀) X-series consists of compact bipedal humanoids roughly half the height of an adult human, designed for education, entertainment, research, and light service. The Yuanzheng (远征, "Expedition") A-series includes full-sized industrial humanoids standing 169 to 175 cm tall. The Genie G-series covers wheeled industrial robots, and the D-series includes quadruped inspection platforms.[5]
The first Lingxi robot was the AgiBot X1, released in 2024 as a fully open-source research platform with 34 degrees of freedom. The X2 series, announced in March 2025, shifted from pure open-source research toward commercial viability and AI-driven interaction while retaining developer accessibility in the Ultra variant.[6]
The X2 is offered in multiple configurations to serve different market segments:
| Variant | Key Differentiators | Target Market |
|---|---|---|
| X2 Lite | Reduced DOF (~27), cost-optimized | Education, basic demonstrations |
| X2 (Standard) | 25 DOF, interactive RGB camera, basic interaction | Entertainment, hospitality, education |
| X2 Pro | ~31 DOF with 7-DOF arms, configurable sensors | Academic R&D, advanced demonstrations |
| X2 Ultra | 30 DOF, Orin NX compute, LiDAR, RGB-D, 4G/5G, auto-charging dock, OmniHand support | Research, autonomous service, commercial deployment |
| X2-N | Hybrid bipedal/wheeled locomotion, sensor-less navigation | Logistics, search-and-rescue, industrial |
The Ultra sits at the top of the lineup as the most capable and most expensive variant, designed for customers who need the full perception and computing stack rather than just the interaction and motion capabilities of the standard model.[2][7]
The X2 Ultra maintains the same 1,310 mm height and 460 mm x 210 mm footprint as the standard X2 but weighs approximately 4 kg more (39 kg vs. 35 kg) due to the additional sensors, compute hardware, and associated cabling. It features 30 actuated degrees of freedom, compared to 25 on the standard model, with the additional joints distributed across the neck (1 DOF, enabling head tracking) and the arms (7 DOF per arm instead of 5, providing greater dexterity and workspace coverage).[1][2]
| Category | Specification | Value |
|---|---|---|
| Physical | Height | 1,310 mm (4 ft 4 in) |
| Physical | Width | 460 mm |
| Physical | Depth | 210 mm |
| Physical | Weight | ~39 kg (86 lb) |
| Mobility | Total degrees of freedom | 30 |
| Mobility | Neck DOF | 1 |
| Mobility | Arm DOF (per arm) | 7 |
| Mobility | Waist DOF | 3 |
| Mobility | Leg DOF (per leg) | 6 |
| Mobility | Arm reach (excl. end effector) | 558 mm |
| Performance | Max walking speed | 1.8 m/s (6.5 km/h) |
| Performance | Typical walking speed | 0.8 m/s or less |
| Performance | Peak joint torque | 120 N·m |
| Performance | Payload (specific postures) | Up to 3 kg |
| Performance | Payload (full range of motion) | 1 kg or less |
| Power | Battery capacity | ~500 Wh |
| Power | Runtime (at 0.5 m/s walking) | ~2 hours |
| Power | Charging time | 1.5 hours or less |
| Power | Charger output | 54.6 V / 10 A |
| Power | Input voltage | 100–220 V AC |
| Power | Battery type | Interchangeable |
| Computing | Main processors | RK3588 x 2 |
| Computing | AI accelerator | NVIDIA Orin NX (157 TOPS) |
| Sensors | 3D LiDAR | Yes |
| Sensors | RGB-D camera | Yes |
| Sensors | Front binocular RGB cameras | Yes |
| Sensors | Rear RGB camera | Yes |
| Sensors | Interactive head RGB camera | Yes |
| Sensors | Head touch sensor | Yes |
| Sensors | Microphone array | Yes |
| Sensors | Speaker | Yes |
| Connectivity | Wi-Fi | Yes |
| Connectivity | Bluetooth | Yes |
| Connectivity | 4G/5G module | Yes |
| Connectivity | USB Type-A ports | 2 |
| Connectivity | USB Type-C ports | 2 |
| Connectivity | RJ45 LAN ports | 2 |
| Connectivity | Mini DisplayPort | 1 |
| Connectivity | Power outputs | 12V/3A, 48V/5A |
| Software | Secondary development | Supported (AimDK_X2) |
| Software | OTA updates | Yes |
| Software | Mobile app | Yes |
| Software | ROS 2 compatible | Yes |
| Accessories | Auto-charging dock | Optional |
| Accessories | OmniHand dexterous hand | Compatible (sold separately) |
| Accessories | OmniPicker adaptive gripper | Compatible (sold separately) |
| Accessories | Remote control | Included |
| Environment | Operating temperature | -10 to 40 °C |
| Warranty | Maintenance service | 18 months |
The X2 Ultra integrates several proprietary hardware subsystems developed in-house by AgiBot and shared with the standard X2 and other robots in the AgiBot lineup.
The X2 Ultra represents a substantial upgrade over the standard X2 in computing, perception, connectivity, and developer accessibility. The following table summarizes the key differences:
| Feature | AgiBot X2 (Standard) | AgiBot X2 Ultra |
|---|---|---|
| Weight | ~35 kg (77 lb) | ~39 kg (86 lb) |
| Total degrees of freedom | 25 | 30 |
| Arm DOF (per arm) | 5 | 7 |
| Neck DOF | 0 | 1 |
| Main compute | RK3588 x 2 | RK3588 x 2 |
| AI accelerator | None | NVIDIA Orin NX (157 TOPS) |
| 3D LiDAR | No | Yes |
| RGB-D camera | No | Yes |
| Front binocular RGB cameras | No | Yes |
| Rear RGB camera | No | Yes |
| 4G/5G connectivity | No | Yes |
| RJ45 LAN ports | No | 2 |
| Autonomous charging dock | Not available | Optional |
| OmniHand compatibility | No | Yes |
| OmniPicker compatibility | No | Yes |
| Secondary development (AimDK) | Not supported | Supported |
| OTA updates | No | Yes |
| Mobile app control | No | Yes |
| Autonomous navigation | Basic | Advanced (LiDAR + RGB-D SLAM) |
| Warranty/maintenance | 8 months | 18 months |
| Price (official store, EUR) | ~20,000+ | ~37,900 |
The standard X2 is designed primarily for interactive entertainment, hospitality reception, and educational demonstrations where its expressive motion and voice interaction capabilities are sufficient. The Ultra, by contrast, is built for scenarios that demand autonomous spatial awareness, on-device AI inference, extended unattended operation, and the ability to deploy custom software. The additional 5 degrees of freedom (the neck joint and the extra 2 DOF per arm) also give the Ultra more expressive head tracking and improved arm dexterity for manipulation tasks when paired with the OmniHand.[2][7]
Both X2 variants use dual Rockchip RK3588 processors as their primary compute units. The RK3588 is an octa-core ARM SoC (4x Cortex-A76 + 4x Cortex-A55) with an integrated 6 TOPS neural processing unit (NPU), commonly used in edge AI and robotics applications. On the X2, the two RK3588 boards handle the robot's core operating system, interaction logic (voice processing, facial expression control, screen rendering), and communication protocols.[1]
The X2 Ultra supplements the dual RK3588 setup with an NVIDIA Jetson Orin NX module, which delivers 157 TOPS (trillion operations per second) of AI computing performance. This GPU-accelerated compute module enables the Ultra to run sophisticated deep learning models on-device for tasks including real-time SLAM, path planning, object detection and tracking, and facial and gesture recognition.
The Orin NX is the critical hardware differentiator that makes the Ultra suitable for research and autonomous deployment scenarios. Without it, the standard X2 relies solely on the RK3588 processors, which lack the GPU throughput needed for real-time spatial AI workloads.[2][9]
The X2 Ultra features one of the most comprehensive sensor suites available on a compact humanoid robot in its size class. The perception stack is designed to support both expressive human interaction and autonomous spatial awareness.
The Ultra includes a 3D LiDAR unit that generates point-cloud data for Simultaneous Localization and Mapping (SLAM). This enables the robot to build and maintain a real-time 3D map of its environment, plan navigation paths, and detect obstacles at distances and angles that cameras alone cannot reliably cover. LiDAR is particularly important for autonomous patrol, waypoint navigation, and operation in varying lighting conditions.[2]
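AgiBot has not published the internals of the Ultra's SLAM pipeline, but the first step of any such pipeline, turning range returns into an occupancy map, can be illustrated with a simplified 2D sketch (the grid size and resolution here are arbitrary illustrative values):

```python
def points_to_grid(points, size=20, resolution=0.5):
    """Mark occupied cells in a 2D grid from (x, y) LiDAR returns.

    A toy stand-in for one step of a SLAM pipeline: real systems also
    trace free space along each beam, fuse scans over time, and
    simultaneously estimate the robot's pose.
    """
    grid = [[0] * size for _ in range(size)]
    origin = size // 2  # robot at the grid center
    for x, y in points:
        col = origin + int(x / resolution)
        row = origin + int(y / resolution)
        if 0 <= row < size and 0 <= col < size:
            grid[row][col] = 1  # occupied
    return grid

# Simulated returns: a short wall segment 2 m ahead of the robot
wall = [(2.0, y / 10.0) for y in range(-10, 11)]
grid = points_to_grid(wall)
occupied = sum(map(sum, grid))
```

A full SLAM system layers loop closure, scan matching, and pose-graph optimization on top of this kind of map update, which is where the Orin NX's compute headroom matters.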
The RGB-D camera provides synchronized color and depth information, enabling 3D spatial perception at close to medium range. This is used for object manipulation planning (when paired with end effectors), precise distance estimation to people and objects, and supplementary depth data for the SLAM pipeline.[2]
The Ultra carries multiple RGB cameras: front-facing binocular (stereo) cameras for depth estimation and wide field-of-view coverage, a rear-facing RGB camera for backward awareness, and an interactive head-mounted RGB camera used for facial recognition, object detection, gesture recognition, and expressive eye-contact behavior during conversations. The standard X2 has only the interactive head camera.[1][2]
Both X2 variants include a head-mounted touch sensor (enabling pat and tap interactions), a microphone array for voice input with directional sound localization, a speaker for voice output, and an interactive display screen with programmable LED lighting effects for expressive facial animations. These sensors support the X2's multimodal interaction capabilities, allowing it to detect human emotional states through facial expression analysis and voice tone recognition.[1][10]
The X2 Ultra supports secondary development through AimDK_X2 (AgiBot Intelligent Machine Development Kit for X2), a task-level programming and extension framework that provides open interfaces for developers to build custom applications on the robot. AimDK is built on ROS 2 and supports both Python and C++ programming languages.[11]
The framework includes the following modules:
| Module | Capabilities |
|---|---|
| Control Module | Motion mode switching, locomotion control, preset motion playback, end effector/gripper control, joint-level motor commands |
| Interaction Module | Voice control, text-to-speech, screen/emoji control, LED strip control |
| Hardware Abstraction | Sensor interfaces (LiDAR, cameras, IMU, touch sensors), power management access |
| Media Module | Camera streaming from multiple sensors, audio input/output, video playback |
| Custom Input | Registration of custom input sources for application-specific sensing |
Developers can connect to the X2 Ultra's development computing unit either through a direct wired Ethernet connection (via the rear RJ45 port, accessing the SDK at IP 10.0.1.41) or wirelessly through the robot's built-in Wi-Fi hotspot (via a jump host at 192.168.88.88). The framework supports cross-device ROS 2 networking and SSH login for remote development.[11]
AgiBot's documentation notes that the Motion Control Computing Unit (PC1, at 10.0.1.40) must not be used as a build or runtime environment for secondary development to avoid safety risks. All custom code runs on the separate development computing unit.[11]
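The two access paths can be captured in a small helper that assembles the corresponding SSH invocation; the `developer` username below is a placeholder, as AgiBot's documentation specifies the actual credentials:

```python
def ssh_command(wired: bool, user: str = "developer") -> list[str]:
    """Build an SSH command for the X2 Ultra's development computing unit.

    Wired: connect directly to 10.0.1.41 over the rear RJ45 port.
    Wireless: hop through the Wi-Fi hotspot's jump host at 192.168.88.88
    using OpenSSH's -J (ProxyJump) option.
    The username is a placeholder; consult AgiBot's docs for credentials.
    """
    target = f"{user}@10.0.1.41"
    if wired:
        return ["ssh", target]
    return ["ssh", "-J", f"{user}@192.168.88.88", target]

print(" ".join(ssh_command(wired=False)))
```

Either path reaches only the development computing unit; per the note above, the Motion Control Computing Unit at 10.0.1.40 is off-limits for custom code.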
The X2 Ultra is powered by AgiBot's Genie Operator-1 (GO-1) foundation model, a generalist embodied AI system released on March 10, 2025. GO-1 introduces the Vision-Language-Latent-Action (ViLLA) framework, which represents an evolution beyond the standard Vision-Language-Action (VLA) paradigm used in prior robot learning models.[12]
The ViLLA architecture uses a Mixture of Experts (MoE) design with two key components: a Latent Planner, which predicts discrete latent action tokens from vision-language inputs to provide high-level planning, and an Action Expert, which decodes those tokens into continuous, executable motor commands.
In benchmark evaluations, GO-1 increased task success rates by 32 percentage points over prior state-of-the-art models (from 46% to 78%), and achieves over 60% success rate on complex, long-horizon dexterous manipulation tasks.[12][13]
The X2 Ultra runs Lingqu OS, which AgiBot describes as the world's first embodied intelligent operating system, released in July 2025. Lingqu OS provides a layered, open-source robotics software stack built on top of AimRT, AgiBot's proprietary real-time middleware that serves as an alternative to ROS 2 DDS implementations with up to 30% lower latency. The operating system supports over-the-air (OTA) updates, enabling operators to push new capabilities and behaviors to deployed robots without physical access.[14]
Genie Sim 3.0 is AgiBot's simulation platform for training and validating robot policies in virtual environments before deploying them to physical hardware. It enables sim-to-real transfer of learned behaviors and was recognized with a Best of Show 2026 award from Ubergizmo at CES 2026.[5]
Announced in October 2025, LinkCraft is a zero-code platform that converts human motion videos into executable robot action sequences. This allows non-technical users to teach the X2 Ultra new behaviors by simply demonstrating the desired motion on video, significantly lowering the barrier to programming new tasks and making the robot more accessible for entertainment and demonstration applications.[15]
One of the X2 Ultra's most significant practical advantages over the standard X2 is its optional autonomous charging dock. When battery power drops below a configured threshold, the Ultra uses its LiDAR and camera systems to autonomously locate the charging station, navigate to it, and dock for recharging without human intervention. This eliminates the need for manual battery swaps or cable connections, enabling continuous "always-on" operation cycles.[2][7]
This capability is particularly valuable for exhibition, museum, and retail deployments where the robot needs to operate throughout business hours with minimal staff oversight. The interchangeable battery design also allows operators to hot-swap batteries for immediate return to service when a charging dock is not available or when downtime must be minimized.
With a charging time of 1.5 hours or less and a runtime of approximately 2 hours at 0.5 m/s walking speed, the X2 Ultra can maintain roughly 57% uptime in an autonomous cycling pattern. For lighter usage patterns with more stationary interaction and less continuous walking, effective uptime can be considerably higher.
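The uptime figure follows directly from the runtime-to-cycle ratio; a quick check, assuming full discharge-charge cycles and no battery degradation:

```python
def duty_cycle(runtime_h: float, charge_h: float) -> float:
    """Fraction of wall-clock time spent operating in a repeating
    discharge-then-charge loop (no overlap, no battery swap)."""
    return runtime_h / (runtime_h + charge_h)

# X2 Ultra: ~2 h runtime at 0.5 m/s, charging in 1.5 h or less
uptime = duty_cycle(2.0, 1.5)  # 2 / 3.5 ≈ 0.571, i.e. roughly 57%
```

Hot-swapping the interchangeable battery instead of docking collapses the charge term to the swap time, pushing effective uptime toward 100%.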
The X2 Ultra is compatible with AgiBot's OmniHand series of dexterous robotic hands, which are sold as separate accessories. The standard X2 does not support OmniHand or any end effectors. Two main OmniHand variants are available:[16]
| Feature | OmniHand 2025 (Agile) | OmniHand Pro 2025 |
|---|---|---|
| Total DOF | 16 (10 active + 6 passive) | 19 |
| Length | 180 mm | 180 mm |
| Weight | 500 g | ~500 g |
| Tactile sensors | 400+ force control points | Multi-cell pressure taxels per fingertip |
| Force resolution | 0.1 N array resolution | 0.1 N multi-modal sensing |
| Max fingertip force | 5 N | Higher (industrial tasks) |
| Slip detection | Basic | Micro-vibration incipient-slip detection |
| Target use | Interactive services, gestures | Precise industrial manipulation |
| Price (EUR) | ~3,905 | ~12,907 |
The Ultra is also compatible with the OmniPicker adaptive gripper, a simpler end effector suited for pick-and-place tasks that do not require the full dexterity of the OmniHand.
When equipped with the OmniHand Pro and running the GO-1 foundation model, the X2 Ultra can perform dexterous manipulation tasks including grasping irregularly shaped objects, tool use, and delicate assembly operations, making it a capable platform for robot learning research.[16]
AgiBot positions the X2 Ultra across several commercial and research application domains. Its enhanced sensor suite, computing power, and developer tools make it suitable for more demanding scenarios than the standard X2.
The X2 Ultra serves as a full-stack research platform for universities and labs working on embodied AI, autonomous navigation, human-robot interaction, manipulation learning, and multi-robot coordination. The AimDK framework, ROS 2 compatibility, Orin NX compute module, and comprehensive sensor suite provide researchers with the tools needed to develop and test custom algorithms. AgiBot's open-source AgiBot World dataset (containing over 1 million robot manipulation trajectories) and GO-1 foundation model provide pre-trained baselines for transfer learning experiments.[11][17]
With its LiDAR-based navigation, autonomous charging dock, and expressive interaction capabilities, the X2 Ultra can operate as an autonomous reception guide, museum docent, or retail assistant. It can navigate predetermined waypoints, greet visitors, answer questions through voice interaction, and return to its charging station when needed, all without human supervision. The 4G/5G connectivity enables remote monitoring and fleet management across multiple locations.[2][7]
The X2 series gained viral attention in September 2025 when the X2 became the first humanoid robot to perform a Webster flip, a complex acrobatic forward somersault from a single-leg back-facing takeoff. The X2 supports more than 20 preset motion skills, including dance routines, martial arts sequences, and expressive gestures. At MWC 2026 in Barcelona, an X2 performed martial arts, hip hop choreography, and a full split on the marble floor of the Palacio Real de Pedralbes.[5][18][19]
The Ultra variant's autonomous navigation adds the ability to roam exhibition spaces independently, approach visitors, perform demonstrations, and return to charge, making it ideal for trade shows, corporate events, and brand activations.
The Ultra's LiDAR, multi-camera system, and autonomous navigation enable it to perform security patrol routes with real-time video feeds and anomaly detection. The 4G/5G module allows it to transmit alerts to a central monitoring station. The GO-1 model's support for multi-robot collaboration allows multiple X2 Ultra units to coordinate patrol patterns across large facilities.[10]
The X2 Ultra's rich sensor suite makes it a useful platform for collecting real-world interaction data to improve AI models. Its cameras, LiDAR, depth sensors, microphones, and touch sensors capture multimodal data from human interactions and navigation scenarios that can be used to train and refine robot learning models.[10]
The X2 Ultra has been showcased at several major technology events as part of AgiBot's product portfolio:
| Event | Date | Location | Notable Demonstrations |
|---|---|---|---|
| AgiBot Launch Event | March 11, 2025 | Shanghai, China | Official unveiling of the X2 series alongside GO-1 foundation model |
| CES 2026 | January 2026 | Las Vegas, USA | U.S. market debut; coordinated multi-robot demonstrations; Best of CES awards |
| MWC 2026 | March 2026 | Barcelona, Spain | Martial arts, hip hop dance, full split demonstration at Palacio Real de Pedralbes |
At CES 2026, AgiBot showcased its full product portfolio at LVCC North Hall Booth 10715, with multiple humanoid robots performing coordinated live demonstrations of collective movement, interaction, and task execution. AgiBot won multiple Best of CES 2026 awards, including Best of Show from Ubergizmo.[5]
The AgiBot X2 Ultra is available for purchase through AgiBot's official online store (store.agibot.com), which launched in March 2026, as well as through third-party distributors in Europe and other regions.
| Channel | Price | Notes |
|---|---|---|
| AgiBot Official Store | ~37,900 EUR | Base unit; end effectors sold separately |
| European distributors | ~43,000-47,000 EUR | May include VAT and shipping |
| Robot-as-a-Service (BotShare) | From 899 EUR/day | Available in 50 cities in China and 17 countries |
The X2 Ultra's pricing positions it significantly above the standard X2 (starting from ~20,000 EUR) and the entry-level X2 Lite, reflecting the substantial additional hardware (Orin NX, LiDAR, RGB-D camera, 4G/5G module) and the extended 18-month warranty. End effectors such as the OmniHand 2025 (~3,905 EUR) and OmniHand Pro 2025 (~12,907 EUR) are sold separately. The autonomous charging dock is also an optional accessory.[7][20]
AgiBot requires buyers to read and confirm a "Technical Capability Confirmation Agreement" before placing an order, indicating that the Ultra is intended for technically capable buyers who can manage their own software development and integration rather than turnkey consumer deployments.[20]
The X2 Ultra competes in the growing market for compact humanoid research and service platforms. The broader humanoid robot industry saw Chinese companies account for approximately 90% of global shipments in 2025, with AgiBot and Unitree Robotics together shipping more than 10,000 units.[4]
| Robot | Manufacturer | Height | DOF | AI Compute | Approx. Price | Primary Use |
|---|---|---|---|---|---|---|
| X2 Ultra | AgiBot | 131 cm | 30 | Orin NX (157 TOPS) | ~37,900 EUR | Research, service, exhibition |
| Unitree G1 | Unitree Robotics | 127 cm | 23-43 | Jetson Orin (275 TOPS, EDU variant) | From ~US$16,000 | Education, research |
| Unitree H1 | Unitree Robotics | 180 cm | 19 | Jetson Orin | ~US$90,000 | Research, industrial |
| AgiBot A2 | AgiBot | 169-175 cm | 40-49+ | 200 TOPS | US$100,000-190,000 | Industrial, commercial |
| Tesla Optimus | Tesla | 173 cm | 28+ | Custom | US$20,000-30,000 (target) | General purpose |
AgiBot differentiates the X2 Ultra from competitors through its deep AI integration via the GO-1 foundation model, its expressive multimodal interaction capabilities, its broad software ecosystem (AimDK, Lingqu OS, Genie Sim, AgiBot World dataset), and its autonomous charging capability for extended unattended deployments. Unitree's G1, while cheaper at entry-level pricing, lacks the integrated foundation model and the autonomous charging infrastructure that the X2 Ultra offers.[4][21]