| TORA-ONE | |
|---|---|
| General information | |
| Manufacturer | PaXini Technology |
| Country of origin | China |
| Year revealed | 2024 |
| Status | Prototype / limited availability |
| Price | $50,000 to $150,000 USD (configuration-dependent) |
| Availability | Available for enterprise customers |
| Website | paxini.com |
TORA-ONE (also written as ToraOne or Tora One) is a wheeled humanoid robot developed by PaXini Technology, a Shenzhen-based Chinese robotics company that specializes in tactile sensing and embodied intelligence. The robot is built around PaXini's proprietary multi-dimensional tactile sensing platform and is designed for autonomous operation in industrial, healthcare, logistics, and service environments. TORA-ONE features up to 53 degrees of freedom, an adjustable height range from 1.46 to 1.86 meters, nearly 2,000 high-precision tactile sensors across more than 7,800 channels, and an 8-hour battery life.[1][2]
The robot was first publicly unveiled in its second-generation form at the 2024 World Robot Conference in Beijing in August 2024.[3] It subsequently gained widespread international attention at iREX 2025 in Tokyo, where it demonstrated autonomous ice cream preparation, and at CES 2026 in Las Vegas, where it performed the same task live for attendees without any human intervention.[4][5] TORA-ONE is one of two humanoid robot platforms produced by PaXini, alongside the TORA DoubleOne, which emphasizes wheeled mobility and obstacle navigation rather than fine manipulation.
PaXini Technology (Shenzhen) Co., Ltd. (Chinese: 帕西尼) was founded in 2021 by Xu Jincheng (also romanized as Hsu Jincheng), who studied under the roboticist Shigeki Sugano at Waseda University in Tokyo. The Sugano Laboratory at Waseda is widely recognized as the birthplace of the world's first humanoid robot.[6] After struggling to raise sufficient capital in Japan, Xu came to the attention of Lu Qi, the founder of the Chinese venture capital firm MiraclePlus and a former COO of Baidu. Lu became Xu's angel investor; in April 2021 Xu relocated to Shenzhen, and within two months he had established PaXini.[7]
The company has grown into China's largest tactile sensor manufacturer and the only domestic firm that produces multi-tactile dexterous robotic hands at commercial scale. Its product line covers the entire chain from core sensor components to complete humanoid robot systems. Thousands of companies across advanced manufacturing, high-end equipment, healthcare, and consumer electronics use PaXini's tactile sensing products. Notably, the company has entered Apple's supply chain for tactile sensor components.[8][9]
PaXini has raised substantial funding in rapid succession. In August 2025, JD.com led the company's Series A funding round, bringing total capital raised to approximately CNY 1 billion (about $139 million USD). Earlier backers include BYD, SAIC Motor, BAIC Group, TCL, and Addor Capital.[10] In early 2026, PaXini completed a Series B round of over $145 million at a valuation exceeding $1.45 billion. Investors in that round included Huangpu River Capital, Kaitai Capital, CIM International Group, Xin'an Capital, and affiliates of Meta.[11][12] This places PaXini among a small group of Chinese embodied intelligence firms valued above $1 billion.
TORA-ONE was designed as PaXini's flagship platform for demonstrating the capabilities of its tactile sensing and dexterous manipulation technologies. While the TORA DoubleOne was optimized for general-purpose deployment with an emphasis on mobile stability and obstacle navigation, TORA-ONE prioritizes precision manipulation and touch-based interaction. The two robots share the same core tactile sensing architecture but are configured for different operational profiles.
Like the TORA DoubleOne, the TORA-ONE uses a wheeled base rather than bipedal locomotion. This design choice reflects PaXini's focus on practical deployability over research novelty. The wheeled platform provides stable, energy-efficient movement across flat indoor surfaces such as factory floors, hospital corridors, and retail spaces without the computational overhead or fall risk associated with bipedal walking. The robot uses Laser-SLAM navigation to autonomously map and move through indoor environments at speeds up to 1 meter per second (3.6 km/h).[1][13]
The chassis supports a modular design, allowing customization based on the deployment scenario. PaXini has indicated that the platform can accommodate different wheeled configurations to suit various terrain and mobility requirements.[14]
TORA-ONE features a dynamically adjustable body that can extend from 1.46 meters to 1.86 meters in height. This range allows the robot to interact with objects at different vertical levels, from picking items off low shelves to working at standard countertop height. The adjustment mechanism is integrated into the torso section, enabling the robot to adapt its posture during operation without stopping or reconfiguring.[1][3]
The robot's modular construction supports interchangeable end-effectors, expandable sensor arrays, and integration into various workflows. Operators can configure the platform with different hand modules, attach additional sensing equipment, or modify the chassis depending on whether the robot is performing assembly tasks in a factory, assisting patients in a healthcare facility, or serving customers in a retail environment.[14]
Tactile perception is the defining feature of TORA-ONE and the core competency of PaXini as a company. The robot's sensing system is built on proprietary technology that goes well beyond the force/torque sensors found in most competing humanoid platforms.
At the heart of the TORA-ONE's tactile system are PaXini's Intelligent Tactile Processing Units (ITPUs), custom-designed sensor modules that capture and process multi-dimensional touch data. The robot integrates approximately 1,956 ITPU sensors distributed across its hands and body, generating more than 7,824 tactile channels. Each ITPU sensor can simultaneously measure 15 tactile dimensions, including six-axis force (three translational and three rotational), surface texture, material elasticity, friction, softness, sliding motion, and temperature.[1][3][5]
Key performance characteristics of the ITPU sensors include:
| Parameter | Value |
|---|---|
| Force sensing resolution | 0.01 N across full measurement range |
| Repeatability | Less than 0.5% of full scale |
| Sampling rate | 1,000 Hz |
| Sensing dimensions | 15 (six-axis force, texture, elasticity, friction, temperature, etc.) |
| Durability | Over 3 million measurement cycles |
| Environmental rating | Waterproof and dustproof |
| Material | Semi-flexible polymer with advanced encapsulation |
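The parameters above can be made concrete with a small sketch. The field names and the quantization helper below are illustrative assumptions (PaXini's actual data format is not public); they model one 15-dimension ITPU reading and the stated 0.01 N force resolution:

```python
from dataclasses import dataclass

@dataclass
class ITPUSample:
    """Hypothetical container for one ITPU reading. Six-axis force plus
    auxiliary channels approximate the 15 tactile dimensions listed above."""
    fx: float = 0.0   # translational force, x axis (N)
    fy: float = 0.0
    fz: float = 0.0
    tx: float = 0.0   # rotational torque, x axis (N*m)
    ty: float = 0.0
    tz: float = 0.0
    texture: float = 0.0
    elasticity: float = 0.0
    friction: float = 0.0
    softness: float = 0.0
    slip: float = 0.0
    temperature_c: float = 25.0

FORCE_RESOLUTION_N = 0.01  # stated force sensing resolution
SAMPLE_RATE_HZ = 1000      # stated sampling rate

def quantize_force(raw_n: float) -> float:
    """Snap a raw force reading to the 0.01 N resolution grid."""
    return round(raw_n / FORCE_RESOLUTION_N) * FORCE_RESOLUTION_N

sample = ITPUSample(fz=quantize_force(1.2345))  # fz rounds to 1.23 N
```

At the stated 1,000 Hz sampling rate, each sensor emits one such sample per millisecond, which is what makes the local preprocessing described below necessary.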
The ITPU sensors use PaXini's proprietary 6D Hall array technology, which employs multilayer nested arrays of Hall-effect sensors to capture multi-axis pressure and force vectors with high spatial density. Unlike traditional strain-gauge force sensors that measure along limited axes, the Hall-effect approach allows full six-dimensional force capture in a compact, cost-effective package. PaXini has stated that its sensors are priced starting at $49 per unit, significantly below competing six-axis force/torque sensors on the market.[5][9]
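As a rough illustration of how a multi-channel array yields a six-axis wrench, the sketch below applies the generic calibration-matrix step common to such sensors (the function name and matrix values are assumptions; a real sensor's matrix comes from factory calibration, not from this toy identity matrix):

```python
# A 6xN calibration matrix maps N raw Hall-channel readings to a
# six-axis wrench [Fx, Fy, Fz, Tx, Ty, Tz].
def raw_to_wrench(calib, raw):
    """Multiply a 6xN calibration matrix by an N-vector of raw readings."""
    return [sum(c * r for c, r in zip(row, raw)) for row in calib]

# Toy example: 6 channels with an identity calibration, so the wrench
# simply equals the raw readings.
identity = [[1.0 if i == j else 0.0 for j in range(6)] for i in range(6)]
wrench = raw_to_wrench(identity, [0.5, 0.0, 9.81, 0.0, 0.0, 0.1])
```

In practice the matrix is dense rather than diagonal, since each Hall channel responds to several force components at once.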
The third-generation PX-6AX-GEN3 is PaXini's flagship commercial tactile sensor product, used both within the TORA-ONE and sold separately to third-party robotics manufacturers. It outputs 15 types of tactile information at 1,000 Hz, achieves force resolution of 0.01 N, and maintains repeatability below 0.5% of full scale across the full measurement range. The sensor uses multilayer nested magnetic-field arrays for its Hall-effect measurements.[4][5]
PaXini also developed the PX6D and PXTS series, described as the world's first commercial Hall-effect six-dimensional force/torque sensors designed specifically for embodied AI applications. These sensors use advanced polymer construction instead of traditional steel, reducing weight while improving resistance to aging and creep. They are designed for integration into robot limbs and joints to provide whole-body force perception.[4]
Rather than routing all raw sensor data to the central processor, the ITPU sensors perform local preprocessing to generate dynamic force maps and trigger adaptive control responses. This distributed architecture reduces latency and allows the robot to react to physical contact in real time, even without cloud connectivity. The approach is critical for manipulation tasks where millisecond-level response times determine success, such as catching a slipping object or adjusting grip on a fragile item.[9]
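The kind of local reflex this architecture enables can be sketched as a friction-cone check: when measured shear approaches the slip limit, the commanded normal force rises immediately, with no round trip to a central processor or the cloud. The friction model and names below are illustrative, not PaXini's implementation:

```python
def grip_reflex(normal_n: float, shear_n: float,
                mu: float = 0.5, margin: float = 1.2) -> float:
    """Return the target normal (grip) force needed to hold `shear_n` of
    shear load without slipping, given friction coefficient `mu` and a
    safety margin; never relax below the current grip force."""
    required = margin * shear_n / mu
    return max(normal_n, required)

# A shear spike (e.g. an object starting to slip) raises the grip target;
# a firm grip that already exceeds the requirement is left unchanged.
slip_response = grip_reflex(normal_n=1.0, shear_n=1.0)   # 2.4 N
firm_response = grip_reflex(normal_n=5.0, shear_n=1.0)   # stays 5.0 N
```

Running such a rule per-sensor at the 1,000 Hz sampling rate is what gives the millisecond-level reaction times described above.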
The TORA-ONE's manipulation system centers on PaXini's DexH13 GEN2 dexterous hand, which the company describes as the industry's first robotic end-effector to integrate multi-dimensional tactile sensing with AI vision in a single unit.
The second-generation DexH13 hand features a four-fingered bionic design with 13 degrees of freedom per hand (26 total for both hands). Each hand integrates approximately 1,140 ITPU multi-dimensional tactile processing units, with a concentration of nearly 1,000 sensors in the fingertips alone. The hand also incorporates an 8-megapixel high-definition AI hand-eye camera that uses a zero-sample pose estimation vision algorithm for object recognition and grasp planning.[4][15]
| Parameter | Value |
|---|---|
| Fingers | 4 per hand (bionic design) |
| DOF per hand | 13 |
| ITPU sensors per hand | ~1,140 |
| Tactile signals generated | 3,420 multi-dimensional signals per hand |
| Load capacity | 5 kg per hand |
| Vision | 8 MP AI hand-eye camera |
| Pose estimation | Zero-sample vision algorithm |
| Durability | Over 100,000 operational cycles |
At CES 2026, the DexH13 demonstrated the ability to accurately mirror a wide range of human hand gestures, stably grasp irregular objects including test tubes, cubes, and delicate items, and perform fine manipulation tasks such as turning knobs and handling fragile materials.[5]
The DexH13 GEN2 integrates tactile and visual data through PaXini's proprietary VTLA-Model (Visual-Tactile-Language-Action model). This system combines tactile feedback from the fingertip sensors with visual input from the hand-eye camera to create a fused perception of objects being handled. For example, the hand can simultaneously feel the softness and texture of an object through its tactile sensors while visually identifying the object's shape and orientation through the camera. This dual-modal approach enables more robust manipulation than either sensing modality alone.[3][16]
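A toy late-fusion rule can illustrate the idea; it stands in for, and is far simpler than, the proprietary VTLA-Model, and all class names and force values are made up. Vision proposes a grip force from the recognized object class, and the tactile softness estimate (0 = rigid, 1 = very soft) scales it down:

```python
# Illustrative base grip forces per visually recognized class (N).
BASE_GRIP_N = {"test_tube": 2.0, "cube": 5.0}

def fused_grip_force(visual_class: str, softness: float,
                     default_n: float = 3.0) -> float:
    """Combine a visual object classification with a tactile softness
    estimate to choose a grip force."""
    base = BASE_GRIP_N.get(visual_class, default_n)
    softness = max(0.0, min(1.0, softness))  # clamp the tactile estimate
    return base * (1.0 - 0.5 * softness)     # soften the grip for soft objects
```

The point of the dual-modal design is visible even in this sketch: a visually identical object that feels softer than expected gets a gentler grip than vision alone would have chosen.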
| Category | Specification | Value |
|---|---|---|
| Physical | Height (adjustable) | 146 to 186 cm |
| Physical | Weight | ~70 kg (standard); up to 80 kg (heavy-duty configuration) |
| Physical | Footprint | 50 x 40 cm |
| Physical | Locomotion type | Wheeled (modular chassis) |
| Degrees of freedom | Body DOF | 21 |
| Degrees of freedom | Hand DOF | 26 (13 per hand) |
| Degrees of freedom | Total DOF | 47 (up to 53 in some configurations) |
| Manipulation | Payload capacity per arm | 6 to 8 kg |
| Manipulation | Positioning accuracy | +/- 0.05 mm |
| Manipulation | Force control precision | 0.01 N |
| Mobility | Maximum speed | 1 m/s (3.6 km/h) |
| Mobility | Navigation | Laser-SLAM with LiDAR |
| Power | Battery type | 48V / 40 Ah lithium-ion |
| Power | Operating time | Up to 8 hours continuous |
| Power | Charging time | ~4 hours |
| Power | Average power draw | 500 W (1,000 W peak) |
| Power | Battery lifespan | 3 to 5 years |
| Sensors | Tactile sensors | ~1,956 ITPU sensors, 7,824+ channels |
| Sensors | Visual cameras | 5 HD monocular + 2 depth cameras |
| Sensors | Navigation sensors | 3D LiDAR, fisheye camera |
| Sensors | Audio | Circular microphone arrays |
| Computing | AI platform | NVIDIA Jetson AGX Orin |
| Computing | Software compatibility | ROS2, Python APIs |
| Computing | AI model | OmniVTLA (Visual-Tactile-Language-Action) |
| Connectivity | Interfaces | Wi-Fi (dual-band), Bluetooth 5.0, USB, Ethernet |
| Safety | Features | Force limiting, collision detection, emergency stop |
| Safety | Environmental rating | IP54 |
| Safety | Operating temperature | 0 to 40 degrees C |
TORA-ONE is powered by the NVIDIA Jetson AGX Orin computing platform, which delivers up to 275 TOPS of AI performance. This onboard processing handles real-time multimodal sensor fusion, autonomous navigation, and AI-driven decision-making without requiring constant cloud connectivity. The robot supports ROS2 (Robot Operating System 2) and Python APIs for software development and integration.[13][14]
The robot's intelligence layer runs PaXini's OmniVTLA model, a multimodal AI system that fuses visual, tactile, linguistic, and action data into a unified perception and planning framework. OmniVTLA extends the standard Vision-Language-Action (VLA) paradigm by incorporating tactile data as a first-class input modality. This means the robot can not only see and understand spoken instructions but also "feel" objects and surfaces to inform its manipulation strategy.[6][16]
The OmniVTLA model was trained using data generated at PaXini's Super EID Factory, a 12,000-square-meter embodied intelligence data facility located in Tianjin. This facility produces approximately 200 million omni-modal data entries annually using more than 150 standardized data acquisition units. Human operators wearing sensor-equipped gloves and motion capture equipment generate training data through natural movements, creating a dataset that captures the full range of human manipulation behaviors.[16][17]
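Assuming year-round operation, the stated figures imply a rough per-unit throughput:

```python
# Back-of-envelope arithmetic from the published figures: 200 million
# omni-modal entries per year spread across 150 acquisition units.
ENTRIES_PER_YEAR = 200_000_000
UNITS = 150

per_unit_per_year = ENTRIES_PER_YEAR / UNITS  # ~1.33 million entries/unit
per_unit_per_day = per_unit_per_year / 365    # ~3,650 entries/unit/day
```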
PaXini unveiled the second-generation TORA-ONE at the 2024 World Robot Conference in Beijing on August 21, 2024. This event marked the public debut of the updated platform with its expanded 47-DOF body, upgraded ITPU sensor system, and the new VTLA-Model visual-tactile multimodal perception capability. The unveiling emphasized the integration of nearly 2,000 self-developed ITPU tactile sensing units into the robot's hands.[3]
The TORA-ONE made its international debut at the International Robot Exhibition (iREX) 2025, held December 3 to 6, 2025, in Tokyo, Japan. The robot performed a complete ice cream preparation workflow as a live demonstration, autonomously handling lever manipulation, ingredient dispensing, and cup handover. This was described as a food-service scenario demonstration and the robot's first public task execution in a real-world context. The demonstration confirmed the robot's ability to combine precise tactile perception with dexterous operation for practical applications.[4]
At CES 2026, held January 6 to 9, 2026, PaXini showcased the TORA-ONE at Booth 9153 in the North Hall of the Las Vegas Convention Center. The robot repeated its ice cream-making demonstration, this time preparing and serving ice cream continuously for show attendees. The demonstration attracted significant media attention, with outlets such as Interesting Engineering, TechCrunch, and Global Times covering the event. The CES showcase was notable because the robot completed the entire workflow (lever operation, ingredient handling, and cup delivery) without any human intervention, relying entirely on its tactile sensors and AI vision system to manage the task.[5][2]
PaXini's CES 2026 booth also featured the TORA DoubleOne, the DexH13 dexterous hand, PX-6AX-GEN3 tactile sensors, and a replica of the company's Omni-Modality Embodied AI Data Acquisition System.[5]
PaXini positions the TORA-ONE for deployment across multiple professional sectors where tactile precision and dexterous manipulation are priorities.
Industrial manufacturing: The robot's high positioning accuracy (+/- 0.05 mm) and force control precision (0.01 N) make it suitable for assembly tasks, quality inspection, and handling of fragile or irregularly shaped components. PaXini has demonstrated the robot operating in automotive manufacturing facilities and performing pallet jack operations in logistics warehouses.[4][6]
Healthcare: TORA-ONE's sensitive manipulation capabilities and safe interaction features (force limiting, collision detection) allow it to assist in medical logistics, patient care support, and laboratory tasks. The robot's ability to perceive material softness and texture through touch is particularly relevant for handling medical supplies and instruments.[14]
Logistics and warehousing: Autonomous navigation via Laser-SLAM, combined with the ability to handle packages and sort items with tactile feedback, positions the robot for parcel sorting, shelf management, and inventory operations.[13]
Food service and hospitality: The ice cream-making demonstrations at iREX 2025 and CES 2026 showcased the robot's ability to perform food preparation tasks autonomously, suggesting applications in commercial kitchens, restaurants, and event catering.[2][5]
Construction: PaXini has identified construction as a target sector where the robot's payload capacity (6 to 8 kg per arm) and tactile sensing could support material handling and assembly tasks.[14]
TORA-ONE and the TORA DoubleOne represent two distinct approaches within PaXini's humanoid robot product line. Both share the same underlying tactile sensing platform and ITPU sensor technology, but they are optimized for different operational priorities.
| Feature | TORA-ONE | TORA DoubleOne |
|---|---|---|
| Total degrees of freedom | 47 to 53 | 47 |
| Height range | 146 to 186 cm | 146 to 186 cm |
| Weight | ~70 kg | ~70 kg |
| Locomotion | Wheeled (modular chassis) | Wheeled (dual-steer AGV or 4WD folding chassis) |
| Maximum speed | 1 m/s (3.6 km/h) | 2 km/h (1.24 mph) |
| Obstacle clearance | Not specified | 21.5 cm |
| Maximum slope | Not specified | 8.5 degrees |
| Payload per arm | 6 to 8 kg | 5 kg |
| Positioning accuracy | +/- 0.05 mm | Not specified |
| Hand configuration | DexH13 GEN2 (4-finger, 13 DOF per hand) | Modular (compatible with DexH13) |
| Tactile sensors | ~1,956 ITPUs, 7,824+ channels | ~1,956 ITPUs, 7,800+ channels |
| Computing platform | NVIDIA Jetson AGX Orin | NVIDIA Jetson AGX Orin + x86 controller |
| AI model | OmniVTLA | OmniVTLA |
| Battery life | 8 hours | 8 hours |
| Exterior | Standard housing | Replaceable fabric skin with embedded sensors |
| Design emphasis | Fine manipulation and tactile dexterity | Mobility, obstacle navigation, dynamic interaction |
| Awards | None announced | iF Design Award 2025 |
| Status | Prototype / limited availability | In production / preorder |
| Price | $50,000 to $150,000 USD | ~$45,000 USD |
| Year revealed | 2024 | 2025 |
The TORA-ONE focuses on demonstrating the limits of PaXini's tactile sensing and manipulation capabilities, with its higher DOF count, upgraded DexH13 GEN2 hands, and emphasis on precision tasks. The TORA DoubleOne, by contrast, was designed for commercial deployment at a lower price point, with features like a foldable body, replaceable fabric skin, and interchangeable chassis options that prioritize practical usability in professional environments.[4][6][14]
Both robots were showcased together at iREX 2025 and CES 2026, where they demonstrated complementary capabilities: TORA-ONE performing the precision ice cream-making task while the TORA DoubleOne demonstrated obstacle-crossing and visitor interaction.[4][5]
TORA-ONE competes in the broader market for wheeled and mobile humanoid robots, which has grown rapidly since 2024. Its primary differentiator is PaXini's deep vertical integration in tactile sensing. While most humanoid robot manufacturers rely on third-party sensors and focus on visual perception as their primary sensing modality, PaXini designs and manufactures its own tactile sensors, dexterous hands, AI models, and robot bodies as an integrated system.[6]
Key competitors in the wheeled humanoid segment include the Agibot A2 series, the UBTECH Walker line, and the Pudu Robotics D7 and D9 platforms. In the broader humanoid robot market, TORA-ONE also competes with bipedal platforms such as the Tesla Optimus, Unitree H1 and H2, the Figure 02, and the Agility Robotics Digit.[6]
China's humanoid robot industry has produced nearly 100 embodied AI robotic products since 2024 and is estimated to hold approximately 70% of the global market. PaXini's strategy of positioning itself as a "full-stack embodied intelligence infrastructure" provider, supplying sensors, data, models, and robot bodies as an integrated offering, distinguishes it from companies that manufacture only the robot hardware or only the AI software.[6][8]
PaXini founder Xu Jincheng has outlined a "hardware first, then data and models" strategy for international expansion, with plans to penetrate the United States, European, Japanese, and South Korean markets. Xu has predicted that "a large number of robots will enter the real production process in two to three years," reflecting the company's ambition to move beyond demonstrations and prototypes into large-scale commercial deployment.[16]
The company's Super EID Factory in Tianjin, which spans approximately 12,000 square meters and includes 150 standardized data collection units, is designed to produce the training data needed to scale robot intelligence. PaXini makes this data available to third parties through its OmniSharing DB platform, described as the world's first embodied intelligence data cloud marketplace.[16][17]
PaXini has also indicated that it plans to continue iterating on the TORA-ONE platform, with improvements to sensor density, hand dexterity, and AI capabilities expected in future generations.[3]