| Figure 02 | |
|---|---|
| Developer | Figure AI |
| Type | Humanoid robot |
| Generation | 2nd |
| Unveiled | August 6, 2024 |
| Height | 168 cm (5 ft 6 in) |
| Weight | 70 kg (154 lb) |
| Degrees of Freedom | 41 (total), 16 per hand |
| Battery | 2,250 Wh; ~5 hours runtime |
| Walking Speed | 1.2 m/s (2.7 mph) |
| Payload | 20 kg (44 lb) total; 7 kg per arm |
| Cameras | 6 RGB cameras |
| AI System | Helix VLA (in-house) |
| Actuators | Electric (custom) |
| Price | ~$100,000 (estimated) |
Figure 02 (also written as F.02) is the second-generation humanoid robot developed by Figure AI, a robotics company based in San Jose, California. Unveiled on August 6, 2024, Figure 02 succeeded the company's prototype Figure 01 and was designed from the outset for commercial deployment in manufacturing and logistics environments. The robot became notable for completing an 11-month pilot program at BMW's Spartanburg, South Carolina plant, where it contributed to the production of over 30,000 BMW X3 vehicles.
Figure 02 features 41 degrees of freedom, fourth-generation dexterous hands with 16 degrees of freedom per hand, a 2,250 Wh battery providing approximately five hours of continuous operation, and six RGB cameras for perception. It runs the Helix vision-language-action (VLA) model entirely onboard, requiring no cloud connection for inference. The robot marked Figure AI's transition from research prototype to commercially deployed product.
Figure AI was founded in 2022 by Brett Adcock, who previously founded Archer Aviation (an electric air taxi company) and Vettery (a recruiting platform). The company set out to build a commercially viable general-purpose humanoid robot capable of performing tasks in unstructured environments. Figure AI is headquartered in San Jose, California, where it occupies a 98,700-square-foot facility at the Assembly at North First tech campus, having relocated from a smaller 27,900-square-foot space in Sunnyvale in early 2025.
Figure AI has raised approximately $1.9 billion across multiple funding rounds, reaching a valuation of $39 billion by late 2025.
| Round | Date | Amount | Valuation | Notable Investors |
|---|---|---|---|---|
| Seed | 2022 | ~$100 million | Undisclosed | Brett Adcock (personal investment) |
| Series A | May 2023 | $70 million | Undisclosed | Parkway Venture Capital, Brett Adcock ($20M) |
| Series B | February 2024 | $675 million | $2.6 billion | Jeff Bezos, Microsoft, NVIDIA, Intel, OpenAI, ARK Invest |
| Series C | September 2025 | $1+ billion | $39 billion | Brookfield, Intel, Macquarie Capital, NVIDIA, Qualcomm, Salesforce, T-Mobile |
The Series B round drew particular attention for its roster of high-profile investors. Jeff Bezos invested through Bezos Expeditions, while OpenAI participated through its startup fund. The Series C round in September 2025 represented a roughly 15-fold increase in valuation in just 19 months.
Figure AI's first robot, Figure 01, was introduced in March 2023. Standing 168 cm tall and weighing 60 kg, it demonstrated dynamic bipedal walking by October 2023 and completed autonomous tasks such as coffee preparation (after 10 hours of training) by early 2024. Figure 01 served primarily as a research and development platform and relied on external cabling and limited onboard computing. It was eventually retired in favor of Figure 02 and later models.
Figure 02 was unveiled on August 6, 2024, during a company event where Figure AI described it as a "ground-up hardware and software redesign" of the Figure 01 platform. The development was informed by data collected during Figure 01's pilot deployments, particularly at BMW's Spartanburg facility where Figure 01 had been used for training and data collection purposes.
Key design goals for Figure 02 included increasing onboard computing power, extending battery life, improving hand dexterity, and enabling fully autonomous operation without human teleoperation. The robot was positioned as Figure AI's first product intended for sustained commercial use rather than demonstration purposes.
At launch, Figure 02 incorporated OpenAI technology for speech-to-speech conversation, allowing the robot to understand spoken commands and respond verbally while performing tasks. The collaboration with OpenAI was later discontinued (see AI Software section below).
Figure 02 stands 168 cm (5 feet 6 inches) tall and weighs 70 kg (154 pounds), roughly matching the dimensions of an average adult human. The robot's frame is constructed from high-strength composites, and its joints use miniature ball bearings to combine structural rigidity with smooth articulation. Unlike Figure 01, which used external cabling, Figure 02 integrates all cabling within the limbs and houses the battery in the torso, improving both balance and durability.
The robot has 41 total degrees of freedom distributed across its body, allowing movement patterns that approximate human range of motion. The five-fingered hands account for 16 DOF each (32 DOF combined), with the remaining degrees of freedom distributed across the legs, torso, and head. This configuration enables the robot to walk, bend, reach, and manipulate objects across a wide workspace.
| Parameter | Value |
|---|---|
| Height | 168 cm (5 ft 6 in) |
| Weight | 70 kg (154 lb) |
| Total degrees of freedom | 41 |
| Hand degrees of freedom | 16 per hand |
| Fingers per hand | 5 |
| Total payload capacity | 20 kg (44 lb) |
| Arm payload | 7 kg per arm |
| Hand carry capacity | Up to 25 kg (55 lb) |
| Maximum walking speed | 1.2 m/s (2.7 mph) |
| Battery capacity | 2,250 Wh |
| Battery life | ~5 hours |
| Cameras | 6 RGB cameras |
| Onboard compute | 3 NVIDIA RTX AI chips |
| Actuator type | Electric (custom) |
| Stair climbing | Yes |
| Connectivity | WiFi, 5G |
| Depth sensors | Yes |
| Force/torque sensors | Yes |
| IMU | Yes |
The hands on Figure 02 represent the fourth generation of Figure AI's hand design. Each hand has five fingers with 16 degrees of freedom and human-equivalent grip strength. The joint system uses a combination of revolute and spherical joints with miniature ball bearings, providing the range of motion and precision needed for fine manipulation.
The hands can carry up to 25 kg (55 lb), adapt grip strength and finger positioning in real time at 200 Hz, and handle objects they have never encountered before. During testing, the hands successfully manipulated delicate glassware, crumpled clothing, scattered small items, and standard industrial fixtures without requiring pre-programmed configurations for each object type.
Figure 02 carries a 2,250 Wh battery integrated into the torso (compared to a backpack-mounted battery on Figure 01). This torso placement lowers the robot's center of gravity, improving balance and agility during locomotion. The battery provides approximately five hours of continuous operation, a 50% increase over Figure 01's capacity. The battery represents one of the largest single components of the robot's mass.
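The stated capacity and runtime imply an average power draw, which can be sanity-checked directly. This is a derived estimate using only the figures above, not a published specification:

```python
# Derive the implied average power draw from the stated battery specs.
# Inputs are the figures quoted in the article; the ~450 W result is an
# implied estimate, not a number published by Figure AI.
battery_capacity_wh = 2250   # battery capacity in watt-hours
runtime_hours = 5            # approximate continuous runtime

avg_power_w = battery_capacity_wh / runtime_hours
print(f"Implied average power draw: {avg_power_w:.0f} W")  # 450 W
```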
Six RGB cameras provide visual coverage around the robot. The sensor suite also includes depth cameras, force/torque sensors at key joints, and an inertial measurement unit (IMU). This multi-sensor configuration feeds data into the robot's onboard AI system for real-time environmental understanding and task planning.
In February 2024, alongside its Series B funding announcement, Figure AI entered a collaboration agreement with OpenAI to develop AI models for humanoid robots. The partnership aimed to combine OpenAI's large language model research with Figure's robotics hardware and software expertise. Under this collaboration, Figure 02 gained speech-to-speech conversation capabilities, allowing operators and bystanders to communicate with the robot through natural language. The robot used onboard microphones and speakers to process spoken instructions and respond verbally.
However, the partnership lasted less than a year. In February 2025, CEO Brett Adcock announced that Figure AI was ending the collaboration. Adcock stated that Figure had achieved a "major breakthrough on fully end-to-end robot AI, built entirely in-house," making the external partnership unnecessary. He told TechCrunch: "We found that to solve embodied AI at scale in the real world, you have to vertically integrate robot AI." Adcock also cited practical difficulties, including challenges getting the OpenAI team into the office for demos and concerns when OpenAI indicated it wanted to develop humanoid capabilities internally.
Following the OpenAI breakup, Figure AI introduced Helix, a proprietary vision-language-action model that serves as the primary AI system for Figure 02 and subsequent robots. Helix uses a dual-system architecture inspired by human cognition:
System 2 (Slow Thinking): An onboard vision-language model (VLM) with 7 billion parameters operates at 7 to 9 Hz. It handles high-level scene understanding, language comprehension, and task planning. The model processes monocular robot images and state data (including wrist pose and finger positions) and outputs a continuous latent vector that conditions lower-level actions.
System 1 (Fast Acting): A reactive visuomotor policy with 80 million parameters translates System 2's semantic understanding into precise motor actions at 200 Hz. It uses a fully convolutional, multi-scale vision backbone initialized from simulation pretraining and a cross-attention encoder-decoder transformer architecture. This subsystem controls the robot's full action space for upper body tasks.
System 2 operates asynchronously as a background process while System 1 maintains a critical 200 Hz real-time control loop. This split allows the robot to think and react at different timescales simultaneously.
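The split described above can be sketched as two loops running at different rates: a slow planner that periodically refreshes a shared latent vector, and a fast controller that reads the most recent latent on every tick. This is an illustrative sketch only; the names and data shapes are hypothetical, the real Helix implementation is not public, and only the 7-9 Hz and 200 Hz rates come from the description above:

```python
import threading
import time

# Hypothetical sketch of the System 2 / System 1 split: a slow "planner"
# thread refreshes a shared conditioning value, while a fast control loop
# reads the newest value on each tick. All names here are illustrative.
latent_lock = threading.Lock()
latest_latent = [0.0]        # shared conditioning vector (placeholder scalar)
stop = threading.Event()

def system2_planner(rate_hz=8):
    """Slow loop: stands in for scene understanding -> latent vector."""
    step = 0
    while not stop.is_set():
        step += 1
        with latent_lock:
            latest_latent[0] = float(step)  # pretend this encodes the plan
        time.sleep(1.0 / rate_hz)

def system1_controller(rate_hz=200, ticks=50):
    """Fast loop: reads the newest latent and emits a command each tick."""
    commands = []
    for _ in range(ticks):
        with latent_lock:
            conditioning = latest_latent[0]
        commands.append(conditioning)       # stand-in for a motor action
        time.sleep(1.0 / rate_hz)
    return commands

planner = threading.Thread(target=system2_planner, daemon=True)
planner.start()
cmds = system1_controller()
stop.set()
print(f"emitted {len(cmds)} commands; last latent seen = {cmds[-1]}")
```

The key design point the sketch mirrors is that the fast loop never blocks on the slow one: it always acts on the latest available plan, so the 200 Hz control rate is maintained regardless of how long high-level inference takes.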
Helix was trained on approximately 500 hours of diverse teleoperated behavior data, with auto-labeling VLMs generating natural language instruction pairs for each behavior segment. The model uses standard regression loss and requires no task-specific fine-tuning. A single set of neural network weights handles picking, placing, drawer and refrigerator operation, and cross-robot interaction.
Figure AI has claimed several firsts for Helix: it was the first VLA to output high-rate continuous control of the entire humanoid upper body (including wrists, torso, head, and individual fingers); the first VLA to operate simultaneously on two robots solving a shared long-horizon manipulation task with unseen items; and the first VLA to run entirely onboard embedded low-power GPUs without requiring a cloud connection.
Figure 02 carries three NVIDIA RTX AI chips onboard, providing approximately three times the computing and AI inference capability of Figure 01. Helix's processing is split across two GPUs: one handles the slower System 2 VLM inference, while the other manages the high-frequency System 1 motor control. All inference runs locally on the robot, with no dependency on external servers or cloud infrastructure.
The most significant real-world deployment of Figure 02 took place at the BMW Group Plant Spartanburg in South Carolina, one of BMW's largest manufacturing facilities globally. The partnership between Figure AI and BMW was announced in January 2024, initially involving Figure 01 for data collection and feasibility assessment. Figure 02 was subsequently deployed on an active production line, where it operated for 11 months.
Figure 02 worked 10-hour shifts Monday through Friday on the Spartanburg assembly line. The robot's primary task was sheet-metal loading: lifting parts from bins and placing them onto welding fixtures with a 5-millimeter tolerance in approximately 2 seconds per placement. The deployment had strict key performance indicators (KPIs):
| Metric | Target / Result |
|---|---|
| Total deployment duration | 11 months |
| Shift length | 10 hours (Monday through Friday) |
| Total runtime | 1,250+ hours |
| Parts handled | 90,000+ sheet-metal parts |
| Vehicles contributed to | 30,000+ BMW X3 |
| Cycle time | 84 seconds total (37 seconds for loading) |
| Placement accuracy target | >99% per shift |
| Human intervention target | Zero per shift |
| Distance traveled | ~200+ miles (~1.2 million robot steps) |
The deployment generated valuable data for Figure AI's engineering team. Accuracy stayed above 99%, and the robot completed its cycles within the required timeframes. However, the 1,250+ hours of operation also revealed hardware weaknesses. The forearm emerged as the top hardware failure point, due to its tight packaging, dexterity requirements (three degrees of freedom), and thermal constraints. These findings directly informed the design of Figure 03, where engineers redesigned the wrist electronics to eliminate distribution boards and dynamic cabling, enabling direct motor controller communication with the main computer.
Following the Spartanburg pilot, BMW announced in February 2026 that it would expand humanoid robot deployment to its Leipzig, Germany plant, marking the first use of humanoid robots in European automotive production. However, the Leipzig pilot uses AEON robots from Hexagon Robotics rather than Figure units, suggesting BMW is evaluating multiple humanoid platforms.
In late December 2024, Figure AI announced that it had shipped Figure 02 robots to its first paying commercial customer, making it a revenue-generating company. The milestone came 31 months after the company's incorporation. The identity of the customer was not publicly disclosed.
Figure AI has adopted a Robot-as-a-Service (RaaS) business model for commercial deployments, with industry estimates placing the subscription price at approximately $1,000 per robot per month (roughly $12,000 per year). This pricing covers hardware deployment, software updates, maintenance, and support services. The subscription approach lowers capital expenditure for customers and provides Figure AI with predictable recurring revenue.
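For context, the estimated subscription can be set against the estimated hardware price. All inputs below are estimates quoted in this article, and the comparison ignores the maintenance and support bundled into the subscription; it is a rough break-even illustration, not Figure AI's actual pricing model:

```python
# Rough break-even comparison: RaaS subscription vs. estimated unit price.
# Both inputs are industry estimates quoted in the article, not confirmed
# figures from Figure AI.
monthly_subscription = 1_000     # USD per robot per month (estimate)
estimated_unit_price = 100_000   # USD, estimated hardware price

annual_subscription = monthly_subscription * 12
years_to_match_unit_price = estimated_unit_price / annual_subscription

print(f"Annual subscription: ${annual_subscription:,}")                  # $12,000
print(f"Years of subscription to equal unit price: "
      f"{years_to_match_unit_price:.1f}")                                # ~8.3
```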
In March 2025, Figure AI unveiled BotQ, a vertically integrated manufacturing facility in California designed for high-volume humanoid robot production. The first-generation manufacturing line is capable of producing up to 12,000 humanoid robots per year, with a stated goal of manufacturing 100,000 robots over a four-year period.
BotQ represents a shift from prototyping methods to production-oriented manufacturing. Components that previously required over a week of CNC machining now take under 20 seconds using steel molds, injection molding, die-casting, metal injection molding, and stamping. The facility features automated grease dispensing for motor gearboxes and robotic cell testing for battery components.
One distinctive aspect of BotQ is the integration of Figure's own humanoid robots into the assembly process. Using the Helix AI system, Figure robots assist with assembling key production line components and handle material movement between stations, replacing conventional conveyor systems. The company built custom Manufacturing Execution Software (MES) along with PLM, ERP, and WMS systems over a six-month development period to support scalable manufacturing.
Figure 02 represented a substantial upgrade over the Figure 01 prototype across nearly every dimension.
| Feature | Figure 01 | Figure 02 |
|---|---|---|
| Unveiled | March 2023 | August 2024 |
| Weight | 60 kg | 70 kg |
| Degrees of freedom | 41 | 41 |
| Hand DOF | Basic grippers | 16 per hand (4th gen) |
| Battery capacity | ~1,500 Wh | 2,250 Wh (+50%) |
| Battery location | Backpack-mounted | Torso-integrated |
| Onboard compute | Limited | 3 NVIDIA RTX AI chips (3x increase) |
| AI system | OpenAI integration | Helix VLA (in-house) |
| Cabling | External | Integrated within limbs |
| Cameras | Limited | 6 RGB cameras (360-degree coverage) |
| Autonomy | Heavy reliance on teleoperation | End-to-end neural network autonomy |
| Commercial status | Research prototype | Commercially deployed |
The most significant improvements were the three-fold increase in onboard computing power, the 50% larger battery with torso integration, the fourth-generation dexterous hands, and the shift from reliance on external AI partners to a fully in-house AI stack.
In October 2025, Figure AI introduced Figure 03, the third-generation humanoid designed for both home and factory environments. Figure 03 incorporates lessons from the Figure 02 BMW deployment and is the first Figure robot engineered from the ground up for high-volume manufacturing at BotQ.
Key differences from Figure 02 include: 9% less mass and significantly less volume; soft textile coverings instead of hard machined parts; tactile fingertip sensors that detect forces as small as 3 grams; a camera system with double the frame rate, one-quarter the latency, and a 60% wider per-camera field of view; hand-mounted cameras; 2 kW wireless inductive charging through coils in the feet; 10 Gbps mmWave data offload; and actuators with twice the speed and improved torque density.
Figure 02 entered a rapidly growing humanoid robotics market with several well-funded competitors.
| Company | Robot | Key Features | Status (as of early 2026) |
|---|---|---|---|
| Figure AI | Figure 02 / 03 | Helix VLA, BMW deployment, BotQ factory | Commercial (limited deployment) |
| Tesla | Optimus | Target price $20,000-$30,000, Tesla factory use | Prototyping; no useful factory work reported as of Q4 2025 |
| Boston Dynamics | Atlas (electric) | 56 DOF, 50 kg lift, Hyundai backing | Commercial launch at CES 2026 |
| Agility Robotics | Digit | Bipedal, logistics-focused | $2.1B valuation, Amazon pilot |
| 1X Technologies | NEO | Lightweight, home-oriented | Development |
| Unitree Robotics | H1 | Low cost, open ecosystem | Commercial |
Tesla's Optimus is often cited as Figure's most direct competitor due to similar ambitions for factory automation and eventual consumer use. However, during Tesla's Q4 2025 earnings call, CEO Elon Musk acknowledged that no Optimus robots were performing useful factory work despite earlier claims of over 1,000 deployed units. Boston Dynamics' all-electric Atlas, showcased at CES 2025 and CES 2026, is widely considered the most technically advanced humanoid demonstration to date, with planned deployments at Hyundai and Google DeepMind facilities.
Figure AI's $39 billion valuation, achieved with only a limited number of commercial units deployed, reflects investor confidence in the long-term potential of humanoid robotics rather than current revenue. The broader humanoid robotics sector saw significant capital inflows in 2024 and 2025, with multiple companies raising hundreds of millions or billions of dollars.
In November 2025, Robert Gruendel, Figure AI's former principal robotic safety engineer and head of product safety, filed a lawsuit against the company in federal court in the Northern District of California. Gruendel alleged that he was wrongfully terminated after warning top executives, including CEO Brett Adcock and chief engineer Kyle Edelberg, that Figure's robots "were powerful enough to fracture a human skull."
According to the complaint, internal impact testing on the Figure 02 model generated forces reportedly more than double those required to break an adult skull. Gruendel also alleged that one robot "had already carved a quarter-inch gash into a steel refrigerator door during a malfunction." The lawsuit further claimed that a product safety plan presented to prospective investors was significantly reduced after the investment round closed, a move Gruendel characterized as potentially fraudulent.
Figure AI has not publicly commented in detail on the allegations. The case was pending as of early 2026.