| MenteeBot | |
|---|---|
| General information | |
| Manufacturer | Mentee Robotics |
| Country of origin | Israel |
| Year unveiled | 2024 |
| Status | Succeeded by MenteeBot V3 |
| Type | Humanoid robot |
| Website | menteebot.com |
MenteeBot is a general-purpose humanoid robot developed by Mentee Robotics, an Israeli artificial intelligence and robotics startup headquartered in Herzliya, near Tel Aviv. The robot was publicly unveiled on April 17, 2024, after approximately two years of stealth development, and represented one of the first humanoid platforms designed from the ground up with an "AI-first" architecture. Standing 175 cm tall and weighing 70 kg, the original MenteeBot integrated large language models for task planning, neural radiance field (NeRF) algorithms for real-time 3D mapping, and simulator-to-reality (Sim2Real) reinforcement learning for locomotion and manipulation.
Mentee Robotics was co-founded in late 2022 by Professor Amnon Shashua (chairman of Mentee and president/CEO of Mobileye), Professor Lior Wolf (CEO of Mentee and former director at Facebook AI Research), and Professor Shai Shalev-Shwartz (chief scientist of Mentee and CTO of Mobileye). The original MenteeBot served as the company's proof-of-concept platform through 2024, demonstrating end-to-end autonomous task completion in household and warehouse scenarios. It was succeeded by the significantly upgraded MenteeBot V3, unveiled in February 2025, which featured 360-degree vision, a hot-swappable battery system, and redesigned hands.[1][2][3]
Mentee Robotics was established in late 2022 by three prominent figures from Israel's AI research community, each bringing decades of experience in computer vision, machine learning, and autonomous driving technology.
Professor Amnon Shashua, the company's chairman, is one of Israel's most influential technologists. He co-founded Mobileye in 1999, building it into the world's leading provider of camera-based advanced driver-assistance systems (ADAS). Intel acquired Mobileye for $15.3 billion in 2017, and the company was re-listed on the NASDAQ stock exchange in October 2022 with Shashua continuing as president and CEO. Beyond Mobileye, Shashua co-founded OrCam in 2010 (an assistive vision technology company for the visually impaired) and co-founded AI21 Labs in 2017 (a developer of enterprise-grade large language models). He holds the Sachs Professorship in Computer Science at the Hebrew University of Jerusalem and was elected to the U.S. National Academy of Engineering in 2026.[4][5]
Professor Lior Wolf, the company's CEO, is a full professor at Tel Aviv University's School of Computer Science who previously served as a research scientist and director at Facebook AI Research (FAIR). His academic work spans computer vision, deep learning, and natural language processing, providing Mentee Robotics with deep expertise in the perception and language understanding systems critical for humanoid robot intelligence.[1][6]
Professor Shai Shalev-Shwartz, the company's chief scientist, is a professor at the Hebrew University of Jerusalem and serves as CTO of Mobileye, where he leads development of the Responsibility-Sensitive Safety (RSS) framework and Road Experience Management technologies. A world-renowned machine learning researcher, he co-authored a widely cited textbook on the field. His dual role at Mobileye and Mentee Robotics helped facilitate direct technology transfer between autonomous driving and humanoid robotics.[4][7]
The founding of Mentee Robotics came at a moment of surging interest in humanoid robotics. Tesla had announced its Optimus humanoid robot program in 2021 and demonstrated an early prototype in 2022, sparking renewed attention from investors and the media. Mentee Robotics attracted brief press coverage at the tail end of 2022 in connection with this broader trend, but the company had little to show publicly at that early stage and quickly entered a period of quiet development.[3]
For approximately 18 months following its founding, Mentee Robotics operated in a near-stealth mode. During this period, the company assembled a team that grew to roughly 70 employees at its Herzliya offices and developed its humanoid robot platform from scratch. Rather than relying on off-the-shelf components, the team designed proprietary actuators, motor drivers, and robotic hands in-house, reflecting a vertically integrated approach that would become a defining characteristic of the MenteeBot platform.[1][3][8]
The company raised initial funding from Ahren Innovation Capital (lead investor) and additional investors. The total amount raised before the Mobileye acquisition was reported as $17 million by Tracxn, while other sources, including Calcalist, cited figures as high as $38 million to $41 million, suggesting additional undisclosed funding rounds.[9][10][11]
Shashua later described the technological convergence that motivated the venture: "We are on the cusp of a convergence of computer vision, natural language understanding, strong and detailed simulators, and methodologies on and for transferring from simulation to the real world." This convergence, he argued, made the development of truly capable humanoid robots feasible for the first time.[1]
Mentee Robotics emerged from stealth on April 17, 2024, with the public unveiling of its first-generation MenteeBot prototype. The announcement was distributed via BusinessWire under the headline "Mentee Robotics Unveils MenteeBot: A Humanoid Robot That Integrates AI Across All Operational Layers." The company did not hold a traditional press conference or provide hands-on demonstrations to journalists; instead, it released a series of short demonstration videos alongside the press release.[1][12]
The unveiling positioned the MenteeBot as a fundamentally different kind of humanoid robot. While many competitors had started with mechanical engineering and added AI capabilities later, Mentee Robotics described its approach as "AI-first," meaning that artificial intelligence was integrated across every operational layer from the very beginning of the design process, from high-level task planning down to low-level motor control.[1][3]
The flagship demonstration video showed the MenteeBot performing a multi-step task in a single continuous, unedited take. In the video, the robot received a verbal command ("put the fruit in the box and place it on the shelf"), navigated to a table, identified and picked up two persimmons, placed them in a box, and carried the box to a nearby shelf. The entire sequence was completed in approximately 90 seconds without human intervention.[1][12][13]
This unedited format was notable in the humanoid robotics industry, where demonstration videos are frequently cut and edited to hide failures or delays. Calcalist's competitive analysis at the time observed that "MenteeBot's demonstration emphasizes a variety of different actions in one continuous and unedited video, in response to a voice command and in a completely autonomous manner," distinguishing it from the edited clips released by competitors such as Figure AI and Tesla.[13]
Additional demonstration clips showed the MenteeBot walking forward and backward, carrying objects (including a six-pack of beverages), performing dynamic balancing while loaded, and engaging in basic verbal interactions. The videos illustrated the robot's ability to chain together locomotion, navigation, perception, object manipulation, and language understanding into coherent end-to-end behaviors.[1][12]
The original MenteeBot prototype had the following specifications, as disclosed at the April 2024 unveiling and through subsequent company communications.
| Category | Specification | Value |
|---|---|---|
| Physical | Height | 175 cm (5 ft 9 in) |
| Physical | Weight | 70 kg (154 lb) |
| Mobility | Degrees of freedom | 40 |
| Mobility | Maximum walking speed | 1.5 m/s (5.4 km/h; 3.4 mph) |
| Manipulation | Payload capacity | 25 kg (55 lb) |
| Power | Battery life | Up to 5 hours (prototype), 3 hours (later reported) |
| Sensing | Primary sensing | Camera-only (no LiDAR) |
| Computing | Compute platform | Dual NVIDIA Jetson AGX Orin |
| Computing | LLM integration | Yes (transformer-based) |
| Software | 3D mapping | NeRF-based real-time cognitive maps |
| Software | Locomotion training | Sim2Real reinforcement learning |
| Hardware | Actuators | Custom proprietary design |
| Hardware | Hands | Gripper-style (clamp design) |
Battery life estimates varied across sources. Early reports from Interesting Engineering and other outlets cited a 5-hour continuous runtime, while later company materials and the V3 specification sheet indicated approximately 3 hours per charge, suggesting that the 5-hour figure may have been an aspirational target rather than a tested result.[14][15]
At the time of unveiling, Mentee Robotics described two planned variants of the MenteeBot: a residential model designed for household tasks such as table setting, laundry management, and cleaning; and a commercial model optimized for manual labor in warehouses and industrial settings, including item retrieval, transportation, and organization. Both variants would share the same core platform, with modular actuators and camera-only sensing tailored to each environment. In practice, the company's focus shifted increasingly toward industrial and logistics applications as the platform matured toward the V3 generation.[1][14]
The MenteeBot's AI system operated across three interconnected layers, each responsible for a different aspect of the robot's behavior. This architecture drew heavily on the founders' experience at Mobileye, where a similar layered approach was used for autonomous vehicle perception and decision-making.
At the highest level, the MenteeBot used transformer-based large language models to interpret natural language commands, engage in conversation, and decompose complex instructions into sequences of executable subtasks. When a human operator gave a verbal command (for example, "pick up the boxes and move them to the shelf"), the LLM broke this down into discrete steps: navigate to the boxes, identify the correct objects, grasp them, transport them, and place them at the designated location. This capability enabled workers without technical training to direct the robot using ordinary spoken language.[1][16]
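The decomposition step can be sketched as follows. This is an illustrative stand-in only: the `Subtask` structure and action names are hypothetical, and a real system would query an LLM rather than hard-code the plan as done here.

```python
from dataclasses import dataclass

@dataclass
class Subtask:
    action: str   # e.g. "navigate", "grasp", "place" (hypothetical vocabulary)
    target: str

def decompose(command: str) -> list[Subtask]:
    """Toy stand-in for the LLM planner: maps a verbal command to the
    kind of discrete subtask sequence described in the text. A real
    system would prompt an LLM; the plan is hard-coded for illustration."""
    if "boxes" in command and "shelf" in command:
        return [
            Subtask("navigate", "boxes"),
            Subtask("identify", "boxes"),
            Subtask("grasp", "boxes"),
            Subtask("transport", "shelf"),
            Subtask("place", "shelf"),
        ]
    return []

plan = decompose("pick up the boxes and move them to the shelf")
```

Each subtask would then be handed to the lower layers (navigation, perception, manipulation) for execution.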
The LLM integration also supported the concept of a "mentorable" robot, one that could learn new tasks through verbal instructions and visual demonstrations provided by human operators, similar to onboarding a new employee. The company described this as working toward "few-shot generalization," where the robot could acquire new skills from only a small number of human demonstrations rather than requiring extensive programming.[1][17]
The robot built a detailed three-dimensional map of its environment in real time using neural radiance field algorithms. Originally developed in the computer vision research community for representing 3D scenes from 2D images, NeRF models were adapted in the MenteeBot to create cognitive maps that stored both geometric and semantic information. The robot could identify the locations of specific objects, understand spatial relationships between items and surfaces, and dynamically plan paths that avoided obstacles. The maps updated continuously as the robot moved, allowing it to handle environmental changes such as moved inventory, new obstacles, or rearranged furniture.[1][16]
This cognitive mapping system enabled the robot to query its internal representation of the world. For instance, when asked to "find the Coke cans," the robot's NeRF-based system could localize matching objects in its 3D map and plan a route to reach them, without requiring predefined waypoints or hard-coded location data.[1]
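A schematic of this kind of semantic query, assuming a drastically simplified map representation: the entries and the `locate` helper below are hypothetical, standing in for the geometric and semantic data a NeRF-based cognitive map would actually store.

```python
import math

# Hypothetical semantic map: each entry pairs a label with a 3D position,
# standing in for the combined geometric + semantic data of the real map.
cognitive_map = [
    {"label": "coke can", "pos": (2.0, 0.5, 1.1)},
    {"label": "coke can", "pos": (2.1, 0.5, 1.1)},
    {"label": "box",      "pos": (4.0, 1.0, 0.0)},
]

def locate(label, robot_pos=(0.0, 0.0, 0.0)):
    """Return matching objects, nearest first, so a planner can pick a
    goal and plan a route to it (path planning itself omitted)."""
    hits = [e for e in cognitive_map if e["label"] == label]
    return sorted(hits, key=lambda e: math.dist(e["pos"], robot_pos))

targets = locate("coke can")
```

The point of the sketch is the query pattern: no predefined waypoints, only a label lookup against a continuously updated world model.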
Locomotion and manipulation skills were trained using a simulator-to-reality (Sim2Real) approach. The robot first learned behaviors in a simulated environment, where reinforcement learning could draw on effectively unlimited synthetic training data. Domain randomization techniques (randomizing object positions, textures, lighting conditions, and physical properties) ensured that policies learned in simulation were robust enough to transfer to the real world with minimal fine-tuning. This dramatically reduced the need for expensive and time-consuming real-world data collection.[1][16]
The Sim2Real approach was central to the company's strategy for scaling robot capabilities efficiently. Rather than requiring extensive teleoperation sessions for every new skill, engineers could train policies in simulation and deploy them on the physical robot with relatively little adaptation, a process the company described as requiring "very little data" for real-world transfer.[1]
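Domain randomization can be sketched as sampling fresh environment parameters for every simulated episode. The specific parameters and ranges below are illustrative assumptions, not Mentee's actual training configuration.

```python
import random

def randomized_episode_params(rng):
    """Sample one simulated episode's conditions. Randomizing these per
    episode (domain randomization) prevents the policy from overfitting
    to any single simulated world, aiding Sim2Real transfer."""
    return {
        "object_xy": (rng.uniform(-0.3, 0.3), rng.uniform(-0.3, 0.3)),
        "friction": rng.uniform(0.5, 1.2),       # surface friction scale
        "mass_scale": rng.uniform(0.8, 1.2),     # payload mass variation
        "light_intensity": rng.uniform(0.4, 1.0) # rendering variation
    }

rng = random.Random(0)
batch = [randomized_episode_params(rng) for _ in range(1000)]
```

A policy trained across such a batch must succeed under all sampled conditions, which is what makes it robust enough to survive the simulation-to-reality gap.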
A notable design choice was the MenteeBot's reliance on camera-only sensing, without LiDAR or other active depth sensors. This vision-centric approach directly mirrored the philosophy that Mobileye had pioneered in autonomous driving, where camera-based perception proved sufficient for understanding complex road environments at a fraction of the cost of LiDAR-equipped systems. Applied to humanoid robotics, this approach kept the sensor suite simpler and more cost-effective while leveraging the advances in computer vision and neural network-based depth estimation that the founders had helped develop over their careers.[1][13][18]
Mentee Robotics built its development and training pipeline around NVIDIA's robotics ecosystem, one of the deepest such integrations among humanoid robot developers. The partnership followed NVIDIA's three-computer architecture for robotics: one system for training, one for simulation, and one onboard the robot.
| Component | NVIDIA technology | Role |
|---|---|---|
| Training | NVIDIA DGX platform | Model training and refinement for perception, movement, and manipulation |
| Simulation | NVIDIA Isaac Sim | Synthetic data generation, physics-accurate simulation, and model validation |
| Simulation | NVIDIA Isaac Lab | Reinforcement and imitation learning for motion control and dexterity |
| Simulation | NVIDIA GR00T-Mimic | Scaling expert teleoperation demonstrations with domain randomization |
| Simulation | NVIDIA Cosmos | Augmenting synthetic training data with generative AI capabilities |
| Edge deployment | Dual NVIDIA Jetson AGX Orin | Onboard AI inference, real-time control, and autonomous operation |
| Future upgrade | NVIDIA Jetson Thor | Planned next-generation onboard compute for greater scalability |
| Validation | NVIDIA RTX 6000 Ada GPUs | Private cloud infrastructure for model validation before physical deployment |
The training pipeline worked as follows: expert demonstrations were collected via teleoperation using motion planners, then scaled in simulation using NVIDIA GR00T-Mimic with domain randomization (randomized object positions, backgrounds, textures, and more). Reinforcement learning policies for motion control were trained in NVIDIA Isaac Lab. The resulting models were validated in Isaac Sim before being deployed to the dual Jetson AGX Orin processors on the physical robot for real-time autonomous operation.[19]
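The demonstration-scaling step of this pipeline can be sketched as follows. The toy `augment_demos` helper is a hypothetical stand-in for GR00T-Mimic's far richer augmentation, which perturbs full scenes rather than applying a single jitter value.

```python
import random

def augment_demos(demos, variants_per_demo, rng):
    """Schematic of scaling expert demonstrations: each teleoperated
    trajectory is replayed under randomized conditions (here reduced to
    a single positional jitter) to multiply the training set size."""
    augmented = []
    for demo in demos:
        for _ in range(variants_per_demo):
            jitter = rng.uniform(-0.05, 0.05)
            augmented.append([p + jitter for p in demo])
    return augmented

expert = [[0.0, 0.1, 0.2], [0.5, 0.6, 0.7]]  # two toy 1-D trajectories
dataset = augment_demos(expert, 100, random.Random(0))
```

The design motivation is leverage: a handful of expensive teleoperated demonstrations yields orders of magnitude more varied training data in simulation.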
Professor Lior Wolf, Mentee Robotics' CEO, stated: "NVIDIA's hardware and AI stack provided the compute efficiency, scalability, and real-time AI processing" necessary for the company's approach to robot learning and humanoid autonomy.[19]
In August 2024, Mentee Robotics released a demonstration video showing the MenteeBot functioning as a shopping assistant. In the video, the robot accompanied a wheelchair user through a retail environment, pushing a shopping cart and adjusting its position each time the shopper stopped to add items. The demonstration highlighted the robot's ability to follow a human companion at a controlled pace, avoid collisions in a dynamic setting, and respond to implicit behavioral cues (such as stopping when the user stops) rather than requiring explicit verbal commands for every action.[20]
This video broadened the perceived application space beyond the household and warehouse scenarios shown in April, suggesting potential service and assistive robotics roles for the platform.
Later demonstration videos featured two MenteeBot V3 units working together in a warehouse environment, autonomously transferring 32 boxes from eight piles of varying heights into four flow racks. The 18-minute continuous, unedited demonstration showcased coordinated multi-robot operation, stable movement with loaded payloads, precise object handling across different rack heights, and fleet coordination (two robots operating in close proximity without collisions). This demonstration, released in November 2025, used the upgraded V3 platform but built on the same AI architecture originally developed for the first-generation MenteeBot.[21][22]
In January 2025, a design closely resembling the forthcoming MenteeBot V3.0 appeared on stage during NVIDIA founder Jensen Huang's keynote address at CES 2025 in Las Vegas. Huang assembled 14 humanoid robots on stage alongside his declaration that "the ChatGPT moment for general robotics is just around the corner." The lineup included robots from Boston Dynamics, Agility Robotics, Figure AI, Apptronik, Unitree Robotics, 1X Technologies, Sanctuary AI, XPeng Robotics, Fourier Intelligence, and others. MenteeBot was the sole representative from Israel in the lineup.[23][24]
The CES appearance served as an important validation of Mentee Robotics' standing within the global humanoid robotics ecosystem and underscored its close technical relationship with NVIDIA. It also provided the first public glimpse of the V3 design before its official unveiling the following month.
The original MenteeBot was succeeded by the MenteeBot V3, officially unveiled on February 11, 2025. While the company designated this as the "third version," suggesting at least one intermediate iteration (sometimes referred to informally as V2), detailed public information about the V2 stage is limited. The Ynet News report on the V3 unveiling referred to it as an improvement over "the previous version from last April" (i.e., the April 2024 prototype), and Interesting Engineering described the V3 as featuring "a series of significant improvements compared to previous generations," but neither publication provided detailed specifications for a distinct V2 model.[15][25]
The progression from the original MenteeBot to V3 brought several notable improvements.
| Feature | Original MenteeBot (2024) | MenteeBot V3 (2025) |
|---|---|---|
| Vision system | Forward-facing cameras | 360-degree (fisheye side + rear + head-mounted sensors) |
| Battery system | Fixed battery, 3 to 5 hours | Hot-swappable, 3 to 4+ hours per module, 24/7 operation via battery cycling |
| Hand design | Gripper-style clamps | Redesigned with 30 N finger pinch force, impact-resistant |
| Tactile sensing | Not disclosed | Motor-based tactile sensing in hands |
| Appearance | Functional prototype aesthetic | More humanoid, refined industrial design |
| Market focus | Residential and commercial variants | Primarily industrial and logistics |
| Actuators | Custom proprietary | Custom proprietary (3x power density, further refined) |
| Height | 175 cm | 175 cm |
| Weight | 70 kg | 70 kg |
| Degrees of freedom | 40 | 40 |
| Payload | 25 kg | 25 kg |
| Walking speed | 1.5 m/s | 1.5 m/s |
The V3 maintained the same fundamental physical dimensions and performance envelope as the original, but introduced substantial upgrades to the sensor suite, battery architecture, and manipulation hardware. The shift from a dual residential/commercial positioning to a primarily industrial focus reflected a strategic recalibration as the company moved closer to real-world deployment scenarios.[2][15][25]
A further incremental update, the MenteeBot V3.1, was announced in 2025 with refinements to the tactile sensing, reinforcement learning workflows for specific industrial tasks, and further improvements to the Sim2Real transfer pipeline.[26]
On January 6, 2026, during CES 2026 in Las Vegas, Mobileye announced a definitive agreement to acquire Mentee Robotics for approximately $900 million. The transaction comprised roughly $612 million in cash and up to 26.2 million shares of Mobileye Class A common stock, subject to adjustment based on vesting of Mentee stock options prior to closing. The deal was approved by Mobileye's board of directors and Intel (Mobileye's majority shareholder), with closing expected in the first quarter of 2026.[5][27]
Amnon Shashua, who held a 37.8% stake in Mentee Robotics, described the acquisition as "the beginning of Mobileye 3.0," combining Mentee's humanoid robotics breakthroughs with Mobileye's two decades of automotive autonomy expertise. Mentee CEO Lior Wolf highlighted the platform's four-year development achieving "cost-efficient humanoid" solutions. Mentee Robotics was to continue as an independent unit within Mobileye following the acquisition.[5][27][28]
Mobileye identified three primary technology synergies driving the acquisition: vision-centric, camera-based perception; the Responsibility-Sensitive Safety framework; and simulation-driven development and validation.
The acquisition attracted scrutiny because Shashua sat on both sides of the transaction: as both the CEO of Mobileye (the acquirer) and the chairman and largest shareholder of Mentee Robotics (the target), he faced evident conflict-of-interest questions. Mobileye's board established a strategy committee and engaged McKinsey as external advisers, with Shashua formally recused from the board's consideration and approval process. Analysts at Calcalist noted that Mentee had generated no revenue at the time of acquisition and that the $900 million valuation represented a significant premium over the approximately $38 million in total funding the company had raised.[29]
Following the acquisition, Mobileye outlined a phased go-to-market plan: first autonomous proof-of-concept deployments with customers were expected in 2026, with series production and initial commercial sales targeted for 2028. These deployments would focus on industrial applications such as logistics centers and manufacturing lines, with autonomous on-site operation rather than teleoperation.[5][27]
The relationship between Mentee Robotics and Mobileye extended beyond shared founders, representing a deliberate transfer of autonomous driving technology to humanoid robotics.
Mobileye, founded in 1999, pioneered the use of camera-based computer vision for ADAS and fully autonomous driving. The company's technology relies on interpreting visual data from cameras rather than expensive LiDAR arrays to perceive the driving environment, make decisions, and control vehicle behavior. This camera-first, vision-centric philosophy directly influenced the MenteeBot's design, which similarly relied exclusively on cameras for perception. The approach kept hardware costs lower and leveraged the advances in neural network-based depth estimation and scene understanding that the Mobileye team had spent decades developing.[5][18]
Mobileye's RSS framework provides a formal mathematical model that defines safe behavior for autonomous systems operating in dynamic environments alongside humans. Originally developed for self-driving cars, RSS encodes five core safety rules covering safe following distances, lateral spacing, right-of-way priority, visibility handling, and crash avoidance obligations. Adapting RSS for humanoid robotics would provide a rigorous, mathematically verifiable safety foundation for robots working in close proximity to human workers in factories and warehouses, building the regulatory readiness and trust required for large-scale commercial deployment.[5][7][27]
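The safe-following-distance rule of RSS has a closed-form expression. The sketch below is a simplified same-direction case with illustrative parameter values; applying it to humanoid robots (rather than vehicles) is an assumption of this example, not a published Mentee or Mobileye specification.

```python
def rss_safe_longitudinal_distance(v_rear, v_front, rho,
                                   a_accel_max, b_brake_min, b_brake_max):
    """Minimum safe following distance per the RSS model (simplified,
    same-direction case): the rear agent may accelerate at up to
    a_accel_max during its response time rho, then must be able to
    stop by braking at only b_brake_min, while the front agent brakes
    at up to b_brake_max."""
    v_rear_after = v_rear + rho * a_accel_max
    d = (v_rear * rho
         + 0.5 * a_accel_max * rho ** 2
         + v_rear_after ** 2 / (2 * b_brake_min)
         - v_front ** 2 / (2 * b_brake_max))
    return max(d, 0.0)

# Illustrative values: robot at 2 m/s approaching a stationary person.
d = rss_safe_longitudinal_distance(v_rear=2.0, v_front=0.0, rho=0.5,
                                   a_accel_max=1.0, b_brake_min=2.0,
                                   b_brake_max=4.0)
```

If the actual gap ever falls below `d`, the agent is obliged to brake; the clamp at zero covers cases where the front agent is fast enough that no minimum gap is needed.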
Both Mobileye and Mentee Robotics relied heavily on simulation for training and validation. Mobileye uses simulation extensively to test autonomous driving software across billions of miles of virtual driving scenarios. Mentee Robotics applied the same principle to humanoid robotics, training locomotion and manipulation policies in NVIDIA Isaac Sim before deploying them on physical hardware. The shared expertise in bridging the gap between simulated and real-world performance was a key technical synergy identified in the acquisition rationale.[5][19]
The original MenteeBot entered a rapidly growing and intensely competitive humanoid robotics landscape during 2024. Several competitors had their own unveilings and demonstrations during the same period.
| Company | Robot | Notable features at the time of MenteeBot's unveiling |
|---|---|---|
| Tesla | Optimus Gen 2 | Multi-jointed fingers, second-generation prototype, Autopilot AI |
| Figure AI | Figure 01 / Figure 02 | OpenAI partnership for reasoning, 16 DOF hands |
| Boston Dynamics | Atlas (electric) | Fully electric redesign of iconic platform |
| Agility Robotics | Digit | Already deployed in Amazon warehouses |
| Apptronik | Apollo | Mercedes-Benz partnership, modular design |
| Unitree Robotics | H1 | Aggressive pricing, open ecosystem |
| Sanctuary AI | Phoenix | Carbon AI control system, dexterous hands |
| 1X Technologies | NEO | Consumer-focused, lightweight design |
Calcalist's comparison at the time noted that while competitors like Figure 01 and Tesla Optimus featured five-jointed fingers capable of fine manipulation, the original MenteeBot used simpler gripper-style clamps. However, MenteeBot distinguished itself through its continuous, unedited demonstration videos, its AI-first architecture, and its founders' unmatched track record in bringing autonomous perception technology to commercial scale through Mobileye.[13]
The broader humanoid robotics market was projected by Goldman Sachs to reach $38 billion by 2035, with logistics, manufacturing, and hazardous-environment applications expected to drive early adoption. Despite the intense activity, no company had achieved large-scale commercial sales of general-purpose humanoid robots as of the MenteeBot's unveiling, and the timeline for doing so remained uncertain.[13][29]
Mentee Robotics is a product of Israel's broader technology ecosystem, which has earned the country its "Startup Nation" reputation. Israel is home to approximately 170 robotics companies spanning industrial technology, agricultural technology, health technology, and defense. The country's innovation culture fosters close collaboration between universities, research institutions, and startups, supported by a deep talent pool in AI, computer vision, and machine learning.[30]
The nation's autonomous driving sector is particularly strong, anchored by Mobileye and supported by companies like OrCam (co-founded by Shashua for assistive vision devices) and numerous defense-related autonomy firms. Israeli tech startups raised approximately $9.58 billion in 2024 (a 38% increase over 2023), with robotics representing a growing share of investment. The Mobileye-Mentee acquisition at $900 million was one of the largest Israeli startup acquisitions of early 2026.[10][30]
The original MenteeBot was designed for both domestic and commercial applications, though the emphasis shifted increasingly toward industrial use cases as the platform matured.
At unveiling, the residential variant was described as capable of household chores including table setting, cleaning, dishwashing, laundry management, and learning new domestic tasks through verbal instructions and visual imitation. The April 2024 persimmon-sorting demonstration, while set in a controlled environment, illustrated the type of household task the robot was designed to perform.[1][14]
The commercial variant targeted warehouse automation (locating, retrieving, and transporting items up to 25 kg), manufacturing support (assembly tasks, loading and unloading, material handling), logistics operations (sorting, organizing, and palletizing goods), and work in hazardous environments (settings with chemical exposure or extreme temperatures that pose risks to human workers). The August 2024 shopping assistant demo and the later multi-robot warehouse demonstrations further validated the platform's capabilities in service and logistics roles.[1][14][20]