| NAO | |
|---|---|
| General information | |
| Developer | Aldebaran Robotics (now Maxtronics) |
| Country of origin | France |
| Year unveiled | 2008 |
| Status | In production |
| Current version | NAO6 (2018) |
| Height | 58 cm |
| Weight | 5.48 kg |
| Degrees of freedom | 25 |
| Battery | 62.5 Wh lithium-ion |
| Operating time | 60 to 90 minutes |
| Processor | Intel Atom E3845, 1.91 GHz |
| Units deployed | 13,000+ (as of 2024) |
| Price | US$9,000 to $16,000 |
| Website | |
NAO is an autonomous, programmable humanoid robot originally developed by Aldebaran Robotics, a French company founded in Paris in 2005 by Bruno Maisonnier. Standing 58 cm tall and weighing approximately 5.4 kg, NAO is one of the most widely used humanoid robots in the world, with over 13,000 units deployed across more than 70 countries as of 2024 [1]. The robot serves as a standard platform in academic research, STEM education, special education (particularly autism therapy), and competitive robot soccer through the RoboCup Standard Platform League.
Since its commercial debut in 2008, NAO has gone through six major hardware revisions, with the current NAO6 model released in 2018. It features 25 degrees of freedom, two HD cameras, four directional microphones, speech recognition in over 20 languages, and support for programming in Python, C++, Java, and MATLAB through the NAOqi software framework. The robot's combination of accessible pricing (relative to full-scale humanoids), open programming interfaces, and expressive movement capabilities has made it the dominant platform for human-robot interaction research worldwide [2].
NAO's corporate ownership has changed hands multiple times. Aldebaran Robotics was acquired by Japan's SoftBank Group in 2012 for approximately $100 million and subsequently rebranded as SoftBank Robotics Europe. In July 2022, SoftBank sold the company to United Robotics Group, which restored the Aldebaran name. Following Aldebaran's entry into receivership in June 2025, Chinese firm Maxvision Technologies purchased its core assets and established Maxtronics, a subsidiary based in France, to continue development and sales of NAO and Pepper [3].
Bruno Maisonnier, a graduate of École Polytechnique and Télécom Paris, launched what he called "Project Nao" in 2004 with the ambition of creating an affordable, programmable humanoid robot [4]. In 2005, he formally established Aldebaran Robotics in Paris, making it the first French company dedicated to humanoid robotics. Between 2005 and 2007, the engineering team designed and built six successive prototypes, iterating on the robot's mechanical design, sensor suite, and software architecture [1].
The project received a major boost in August 2007, when NAO was selected to replace Sony's discontinued AIBO robot dog as the official platform for the RoboCup Standard Platform League (SPL). Sony had announced the end of AIBO production in 2006, and the RoboCup organizing committee evaluated several candidates before choosing NAO as the league's new standard robot [5]. This selection gave Aldebaran both international visibility and a concrete technical benchmark to meet.
In March 2008, Aldebaran delivered the first production units, the Nao RoboCup Edition, to competing teams in time for the 2008 RoboCup competition in Suzhou, China. Later that year, the company released the Nao Academics Edition, making the robot available to universities, educational institutions, and research laboratories worldwide [1].
In March 2012, SoftBank Mobile (later SoftBank Group) acquired a majority stake in Aldebaran Robotics for approximately US$100 million, with plans to invest an additional $40 to $50 million to accelerate development [6]. The acquisition was driven by SoftBank CEO Masayoshi Son's vision of addressing societal challenges, particularly caring for aging populations in Japan, through robotics. At the time of the acquisition, approximately 900 NAO units were in use across 30 countries [6].
The SoftBank era saw rapid expansion of NAO's global footprint. By the end of 2014, over 5,000 units were deployed in more than 70 countries [1]. The company leveraged SoftBank's resources to establish distribution networks in Asia, North America, and the Middle East. In 2014, the two companies collaborated to develop Pepper, a larger social robot designed for retail and hospitality environments, which complemented NAO's focus on education and research.
In 2015, SoftBank formally rebranded Aldebaran as SoftBank Robotics Europe, integrating it more closely into the parent company's robotics division. During this period, NAO continued to serve as the company's flagship education and research platform while Pepper targeted commercial deployments.
In July 2022, SoftBank sold its European robotics subsidiary to United Robotics Group (URG), a German company. URG restored the original Aldebaran name and continued manufacturing and supporting NAO and Pepper. However, the company faced financial pressures. Despite the large installed base, Aldebaran struggled to achieve sustained commercial growth, partly because NAO6 had not received a major hardware refresh since 2018 [7].
In February 2025, Aldebaran filed for bankruptcy protection in France. By June 2025, a French court placed the company in receivership, leading to the layoff of approximately 106 employees [7]. On July 10, 2025, Maxvision Technology Corp., a Shenzhen-listed Chinese technology company (stock code 002990 on the Shenzhen Stock Exchange), acquired Aldebaran's core assets, including all intellectual property rights related to NAO and Pepper [3].
On August 28, 2025, Maxvision established Maxtronics, a subsidiary based in France, to carry on development, sales, and customer support. The original engineering teams, product lines, and client support operations were retained. Maxvision committed to investing in NAO's continued development, with a particular focus on education and healthcare applications [3]. Maxtronics has since showcased NAO at the 2025 IROS conference, signaling ongoing commitment to the platform.
NAO has undergone six major hardware revisions since its initial prototype. Each generation introduced improvements to processing power, sensor capability, battery life, and build quality.
| Version | Year | Processor | RAM | Storage | Battery | Autonomy | Camera | Weight | Key changes |
|---|---|---|---|---|---|---|---|---|---|
| V3 (RoboCup/Academics) | 2008 | AMD Geode 500 MHz | 256 MB | 2 GB flash | 27.6 Wh | ~60 min | OV7670 VGA | 4.84 kg | First commercial release |
| V4 (Next Gen) | 2011 | Intel Atom Z530 1.6 GHz | 1 GB | 2 GB + 8 GB microSD | 48.6 Wh | ~90 min | MT9M114 960p | 5.0 kg | HD cameras, anti-collision, faster walking |
| V5 (Evolution) | 2014 | Intel Atom E3845 1.91 GHz | 4 GB DDR3 | 32 GB SSD | 62.5 Wh | ~90 min | OV5640 5 MP | 5.48 kg | Improved speech synthesis, facial detection |
| V6 (NAO6 / Power 6) | 2018 | Intel Atom E3845 1.91 GHz | 4 GB DDR3 | 32 GB SSD | 62.5 Wh | ~90 min | OV5640 5 MP | 5.48 kg | NAOqi 2.8 OS, improved AI frameworks |
The first commercially available NAO featured an AMD Geode processor running at 500 MHz with 256 MB of RAM. Its battery provided approximately 60 minutes of operation. While modest by later standards, the V3 established NAO's core form factor: a 58 cm bipedal humanoid with 25 degrees of freedom, two cameras, four microphones, and a full complement of inertial and tactile sensors. The V3 was subdivided into minor revisions (V3.2 in 2009, V3.3 in 2010) that refined the mechanical design [1][8].
Released in December 2011, the V4 represented the first substantial hardware upgrade. The processor jumped to an Intel Atom Z530 at 1.6 GHz with 1 GB of RAM, providing a roughly threefold increase in computing power. New features included high-definition cameras (960p resolution), improved mechanical robustness, an anti-collision system, and a faster walking speed. The battery capacity nearly doubled from 27.6 Wh to 48.6 Wh, extending operating time to approximately 90 minutes [1].
Introduced in June 2014, the NAO Evolution upgraded to an Intel Atom E3845 quad-core processor at 1.91 GHz with 4 GB of DDR3 RAM and 32 GB SSD storage. The two cameras were upgraded to 5-megapixel OmniVision OV5640 sensors. Software improvements included enhanced multilingual speech synthesis, improved shape and facial detection using new algorithms, and better sound source localization via the four directional microphones. The V5 also featured improved durability for classroom and research environments [1].
The current model, released in June 2018, retained the same processor, RAM, and camera hardware as the V5 but shipped with NAOqi OS 2.8, incorporating more advanced artificial intelligence frameworks and improved behavioral scripting capabilities. Later software updates introduced generative AI activities, including chatbot capabilities and AI-driven conversational modes powered by external large language models [9]. The NAO6 remains the most recent hardware revision as of 2026.
The following specifications apply to the current NAO6 model unless otherwise noted.
| Specification | Value |
|---|---|
| Height | 574 mm (58 cm) |
| Weight | 5.48 kg (12.1 lb) |
| Degrees of freedom | 25 |
| DOF per arm | 5 |
| DOF per hand | 1 (3-fingered gripper) |
| DOF per leg | 5 |
| DOF head | 2 |
| DOF pelvis | 1 |
| Walking speed | Up to 0.4 m/s (1.44 km/h) |
| Finger count | 3 per hand |
| Materials | ABS-PC plastic body |
| Specification | Value |
|---|---|
| Processor | Intel Atom E3845 quad-core, 1.91 GHz |
| RAM | 4 GB DDR3 |
| Storage | 32 GB SSD |
| Operating system | NAOqi 2.8 (Linux-based) |
| Battery | 62.5 Wh lithium-ion |
| Operating time | 60 to 90 minutes (usage-dependent) |
| Connectivity | Wi-Fi (802.11 a/b/g/n), Ethernet |
| Sensor type | Details |
|---|---|
| Cameras | 2x OmniVision OV5640, 5 megapixel (top and bottom) |
| Microphones | 4 omnidirectional |
| Sonar | 2 emitters + 2 receivers (40 kHz, 0.2 to 3 m range) |
| Inertial measurement | 3-axis accelerometer + 3-axis gyroscope |
| Tactile sensors | 9 (3 on head, 3 on each hand) |
| Foot pressure sensors | 8 force-sensing resistors (4 per foot) |
| Foot bumpers | 2 (1 per foot) |
| Infrared | 2 emitters + 2 receivers |
| Joint encoders | 36 magnetic rotary encoders (12-bit precision) |
| Feature | Details |
|---|---|
| Speakers | 2 (located in ears) |
| Speech synthesis | Text-to-speech in 20+ languages |
| Speech recognition | Available in English, French, Spanish, German, Italian, Arabic, Dutch, Portuguese, Czech, Finnish, Russian, Swedish, Turkish, and others |
| Sound localization | Directional sound source detection via 4-microphone array |
NAO runs on NAOqi, a proprietary Linux-based operating system and middleware framework developed specifically for Aldebaran's robots. NAOqi provides a broker-based architecture that enables communication between software modules responsible for motion, audio, video, and sensor processing [10]. Each module advertises its available methods to the broker, and any other module (whether running locally on the robot or remotely on a connected computer) can discover and call those methods.
Key characteristics of the NAOqi framework include:

- A broker-based architecture in which software modules advertise their methods and discover one another at runtime
- Location transparency, allowing the same module code to run onboard the robot or remotely on a connected computer
- Cross-language bindings, with APIs available for Python, C++, Java, C# (.NET), and MATLAB
- A shared memory store (ALMemory) through which modules publish sensor values and subscribe to events
The NAOqi operating system has progressed through several major versions, from OpenNAO 1.6 through NAOqi 2.8, with each release adding improved sensor drivers, behavioral capabilities, and API features [1].
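The broker pattern at the heart of NAOqi can be sketched in plain Python. This is an illustrative toy, not the real NAOqi library: a broker keeps a registry of named modules, and any client can look a module up and invoke its advertised methods. The `Broker` and `TextToSpeech` classes here are stand-ins, though the module name `ALTextToSpeech` and its `say` method mirror the real NAOqi service of that name.

```python
# Toy sketch of NAOqi's broker pattern (not the real NAOqi library):
# modules register themselves with a broker under a name, and any other
# module can discover them and call their methods.

class Broker:
    def __init__(self):
        self._modules = {}

    def register(self, name, module):
        # A module advertises itself (and thereby its methods) to the broker.
        self._modules[name] = module

    def get_proxy(self, name):
        # The real framework returns a proxy that forwards calls, possibly
        # over the network; here we simply hand back the module object.
        return self._modules[name]

class TextToSpeech:
    """Stand-in for the ALTextToSpeech module."""
    def __init__(self):
        self.spoken = []

    def say(self, text):
        self.spoken.append(text)

broker = Broker()
broker.register("ALTextToSpeech", TextToSpeech())

# Any client module (local or remote, in the real framework) can now call it:
tts = broker.get_proxy("ALTextToSpeech")
tts.say("Hello, world")
print(tts.spoken)  # -> ['Hello, world']
```

In the real framework the proxy hides whether the target module runs in the same process, elsewhere on the robot, or on a remote workstation, which is what lets identical client code drive either a physical or a simulated NAO.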
Choregraphe is a graphical programming environment that allows users to create behaviors for NAO without writing code. Developed by Aldebaran, it provides a drag-and-drop interface where pre-built behavior "boxes" (such as "Stand Up," "Walk To," "Say Text," or "Dance") can be placed on a flow diagram and connected to define sequences of actions [11]. Each box can be configured with parameters such as speed, distance, text content, and timing.
For more advanced users, Choregraphe also supports embedded Python scripting within custom boxes, enabling fine-grained control over robot behavior. The software includes a 3D simulator that allows developers to test and debug behaviors on a virtual NAO before deploying to physical hardware [11].
Choregraphe has been widely praised for lowering the barrier to entry for robotics programming, making it accessible to educators, students, and researchers who may not have extensive software development backgrounds.
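The embedded scripting model follows a fixed skeleton: each custom box contains a Python class derived from a base class that Choregraphe generates, with lifecycle callbacks (`onLoad`, `onInput_onStart`, and so on) that the runtime invokes, and outputs fired by calling generated methods such as `onStopped`. The sketch below includes a minimal stub standing in for the auto-generated `GeneratedClass` so it runs outside the tool; in Choregraphe itself, the base class and output methods are provided for you.

```python
# Minimal stand-in for Choregraphe's auto-generated base class, so this
# skeleton runs outside the Choregraphe environment. In a real box, the
# tool generates this class and wires onStopped() to the box's output.
class GeneratedClass(object):
    def __init__(self):
        self.log = []

    def onStopped(self, *args):
        self.log.append("onStopped")

class MyClass(GeneratedClass):
    """Skeleton of a custom Choregraphe box script."""
    def __init__(self):
        GeneratedClass.__init__(self)

    def onLoad(self):
        # Called once when the behavior loads; allocate resources here.
        pass

    def onUnload(self):
        # Called when the box stops; release resources here.
        pass

    def onInput_onStart(self):
        # Main entry point, triggered by the box's onStart input.
        self.log.append("working")
        self.onStopped()  # fire the onStopped output to continue the flow

    def onInput_onStop(self):
        self.onUnload()
        self.onStopped()

box = MyClass()
box.onInput_onStart()
print(box.log)  # -> ['working', 'onStopped']
```

Because each box exposes only inputs and outputs, boxes scripted this way compose on the flow diagram exactly like the pre-built ones.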
Beyond Choregraphe, advanced developers can program NAO directly using the NAOqi SDK, which provides full access to all robot functions through Python, C++, Java, C# (.NET), and MATLAB APIs [12]. The SDK enables developers to:

- Control individual joints, joint stiffness, and whole-body motions such as walking and posture changes
- Access raw data from the cameras, microphones, sonar, and tactile sensors
- Use speech synthesis, speech recognition, and sound source localization services
- Write custom modules that run onboard the robot or remotely on a connected computer
NAO is compatible with the Robot Operating System (ROS) through community-maintained driver packages. This allows researchers to leverage the extensive ROS ecosystem of tools, libraries, and algorithms (including SLAM, path planning, and perception) when working with NAO. Integration with the Cyberbotics Webots simulator also enables large-scale simulation experiments [1].
Several educational platforms and curricula have been built around NAO to make it more accessible in classroom settings, typically pairing the robot with structured lesson plans and teacher-facing materials.
NAO's role in the RoboCup Standard Platform League (SPL) has been one of its most visible and enduring applications. In the SPL, all teams must use identical robot hardware, competing solely on the basis of their software for autonomous robot soccer. This format makes the SPL a rigorous benchmark for advances in computer vision, locomotion, reinforcement learning, multi-agent coordination, and real-time decision-making.
After Sony discontinued AIBO production in 2006, the RoboCup Federation evaluated several robots as potential replacements. NAO was selected in August 2007 [5]. The 2008 competition in Suzhou served as a transition year, with both AIBO and NAO matches running concurrently. From 2009 onward, the SPL has used NAO exclusively [5].
| Year | Champion | Location |
|---|---|---|
| 2008 | NUManoids (Australia/Ireland) | Suzhou, China |
| 2009 | B-Human (Germany) | Graz, Austria |
| 2010 | B-Human (Germany) | Singapore |
| 2011 | B-Human (Germany) | Istanbul, Turkey |
| 2012 | UT Austin Villa (USA) | Mexico City, Mexico |
| 2013 | B-Human (Germany) | Eindhoven, Netherlands |
| 2014 | rUNSWift / UNSW Sydney (Australia) | João Pessoa, Brazil |
| 2015 | UNSW Sydney (Australia) | Hefei, China |
| 2016 | B-Human (Germany) | Leipzig, Germany |
| 2017 | B-Human (Germany) | Nagoya, Japan |
| 2018 | Nao-Team HTWK (Germany) | Montreal, Canada |
| 2019 | B-Human (Germany) | Sydney, Australia |
B-Human, a team from the University of Bremen and the German Research Center for Artificial Intelligence (DFKI), has dominated the SPL, winning the championship in seven of the twelve seasons listed above [5]. Their success has been attributed to advances in vision processing, fast bipedal walking algorithms, and coordinated team play.
The SPL has driven significant improvements in NAO's software capabilities. Competing teams publish their research openly, contributing to the broader robotics community's understanding of autonomous behavior in humanoid robots.
NAO is the most widely used humanoid robot in education worldwide. By 2024, more than 13,000 units were deployed in universities, schools, and research institutions across over 70 countries [1]. The robot has been adopted by more than 200 academic institutions, including the University of Tokyo (which acquired 30 units in 2010), the University of Hertfordshire, the Indian Institute of Information Technology at Allahabad, the Indian Institute of Technology Kanpur, King Fahd University of Petroleum and Minerals in Saudi Arabia, the University of South Wales, and Montana State University [1].
In the United Kingdom, donated NAO robots have been used in numerous schools to introduce children to robotics and programming concepts. The robot's small size, non-threatening appearance, and expressive movement make it well-suited for classroom environments where students can interact with it at tabletop level.
A scoping review of NAO research published in 2021 identified approximately 300 research papers focusing on human-NAO interaction from 2010 to 2020 alone [2]. The largest numbers of publications came from the United States (33 papers), China (30), and France (25). Research applications span a broad range of fields:
| Application area | Examples |
|---|---|
| Human-robot interaction | Social cues, gesture recognition, conversational agents |
| Education | STEM teaching, programming education, language learning |
| Healthcare | Autism therapy, elderly care, rehabilitation exercises |
| Computer vision | Object recognition, face detection, SLAM |
| Locomotion | Bipedal walking algorithms, fall recovery, dynamic balance |
| Cognitive science | Theory of mind experiments, self-awareness studies |
| Sign language | Colombian Sign Language tutoring [13] |
| Artificial intelligence | LLM-powered conversational interactions [14] |
NAO continues to be an active subject of academic research. Studies published in 2024 and 2025 cover topics ranging from shoulder rehabilitation assistance to LLM-enhanced educational interactions, demonstrating the platform's ongoing relevance despite its aging hardware [13][14].
One of NAO's most impactful applications has been in therapy and education for children with autism spectrum disorder (ASD). The robot's predictable behavior, consistent emotional expression, simplified facial features, and patient interaction style make it an effective tool for engaging children who may struggle with the unpredictability and complexity of human social interaction [15].
Multiple studies have documented positive outcomes from NAO-assisted autism therapy, such as increased engagement and attention during robot-mediated sessions [15].
RobotLAB, one of NAO's primary distributors in North America, offers a dedicated NAO Autism Pack that combines the robot with curriculum materials aligned to Applied Behavior Analysis (ABA) principles. The pack provides structured activities designed to help students on the autism spectrum develop social, communication, and academic skills through predictable, multi-sensory interactions [18].
NAO has also been deployed in medical settings. A 2025 study published in Frontiers in Psychiatry examined how NAO could facilitate interactions between healthcare professionals and patients with ASD, finding that the robot served as an effective mediator in clinical environments [19].
NAO has been featured in several high-profile public demonstrations and scientific experiments that have contributed to its international recognition.
At the 2010 Shanghai World Expo, Aldebaran staged a performance featuring 20 NAO robots executing a synchronized, choreographed dance routine on France Pavilion Day (June 21, 2010, which coincided with Music Day in France). The eight-minute performance, set to a three-part music compilation that included Maurice Ravel's Bolero, showcased the robots' range of movement and their ability to coordinate via Wi-Fi. The performance attracted widespread media coverage and became one of the most viewed robotics videos of the year [20].
In July 2015, Professor Selmer Bringsjord at Rensselaer Polytechnic Institute in Troy, New York, conducted a notable experiment in which three NAO robots were presented with a variation of the classic "King's Wise Men" logic puzzle [21]. Two of the three robots were programmed to believe they had been given a "dumbing pill" (simulated by pressing a button on their heads that prevented speech). When asked which pill they had received, only the third robot could speak. It initially replied "I don't know," then, after hearing its own voice and reasoning that it must not have received the dumbing pill, said: "Sorry, I know now. I was able to prove that I was not given a dumbing pill" [21].
The experiment was widely covered in international media as the first time a robot had passed a test of self-awareness, though researchers noted it demonstrated logical reasoning about one's own state rather than consciousness in any philosophical sense.
In December 2010, a NAO robot was programmed to deliver a stand-up comedy routine, demonstrating the platform's speech synthesis and gesture capabilities in an entertainment context [1].
In 2015, Mitsubishi UFJ Financial Group in Japan piloted NAO robots in bank branches, where they served as information assistants greeting customers and providing basic service guidance [1].
NAO was used in training exercises for International Space Station crew members and in prototyping elderly care assistance scenarios, demonstrating the platform's versatility across diverse application domains [1].
Pepper is NAO's larger sibling, also developed by Aldebaran/SoftBank Robotics. While both robots share the same corporate lineage and programming framework, they are designed for fundamentally different use cases.
| Feature | NAO | Pepper |
|---|---|---|
| Height | 58 cm | 120 cm |
| Weight | 5.48 kg | 28 kg |
| Locomotion | Bipedal walking | Wheeled base (omnidirectional) |
| Degrees of freedom | 25 | 20 |
| Display | None | 10.1-inch tablet on chest |
| Hands | 3-fingered grippers | 5-fingered (non-grasping) |
| Primary market | Education, research | Retail, hospitality, reception |
| Programming | Choregraphe, NAOqi SDK | Choregraphe, NAOqi SDK |
| Portability | Easily transportable | Large, difficult to transport |
| Price range | $9,000 to $16,000 | $20,000 to $30,000 |
NAO's smaller size and bipedal design make it well-suited for scenarios requiring body movements, gestures, and physical demonstrations, such as teaching dance, demonstrating exercises, or conducting research on bipedal locomotion. Pepper's larger frame, wheeled mobility, and integrated tablet make it better suited for public-facing roles where it needs to display content and navigate through open spaces like retail floors and hotel lobbies [22].
Both robots use the same NAOqi middleware and Choregraphe programming environment, so skills developed on one platform are largely transferable to the other. Speech recognition performance is comparable between the two, though Pepper's tablet screen provides an additional channel for displaying spoken words and visual content [22].
NAO occupies a unique position in the educational robotics market as a full humanoid platform at a price point accessible to institutions (though still expensive by consumer robot standards). Its competitors vary depending on the application segment.
In the education and research humanoid segment, NAO's closest competitors are other small programmable humanoid robots aimed at universities and schools.
At lower price points, NAO competes indirectly with smaller educational robots that teach programming concepts without full humanoid capabilities, such as wheeled and kit-based classroom robots.
NAO's advantages over these lower-cost alternatives include its humanoid form (which enables research in social robotics, gesture recognition, and bipedal locomotion), its extensive sensor suite, and its support for professional-grade programming languages. However, its significantly higher price means it is typically purchased by institutions rather than individual consumers.
Alongside NAO, Aldebaran also developed Romeo, a full-scale humanoid research robot standing 1.4 m tall and weighing approximately 40 kg. Romeo was designed as a "big brother" to NAO, intended for research into assistive robotics for elderly and disabled individuals [23]. Featuring 37 degrees of freedom, a four-vertebra backbone, articulated feet, and back-drivable actuators, Romeo was developed in collaboration with more than a dozen academic and industrial partners, including INRIA and LAAS-CNRS. Unlike NAO, Romeo was never commercialized at scale and remained primarily a research platform [23].
NAO's pricing has varied across its generations and depends on the configuration, included software packages, and the vendor. As of 2024 to 2025, a new NAO V6 robot typically costs between $9,000 and $16,000 USD [24][25]. Some distributors offer additional packages that bundle the robot with software licenses and curriculum materials.
Financing options such as leasing and trade-in programs for older models are available through authorized distributors. The robot is sold through a network of resellers including RobotLAB (North America), Generation Robots (Europe), and Proven Robotics (Middle East) [24].
Following Maxtronics' acquisition of Aldebaran's assets in 2025, NAO remains available for purchase, with the new ownership committing to continued production, development, and customer support [3].
Aldebaran Robotics raised significant external capital during its early years. In June 2011, the company closed a funding round led by Intel Capital that raised US$13 million, supporting the development of the NAO V4 "Next Gen" model and expansion of the company's commercial operations [1]. The subsequent acquisition by SoftBank in 2012 for approximately $100 million provided the resources needed for global scaling, though the company's path to profitability remained challenging throughout its various ownership changes [6].