Robot safety is the discipline concerned with minimizing the risk of physical harm, property damage, and other hazards arising from the operation of robotic systems. As robots move out of caged factory cells and into shared workspaces, public roads, hospitals, and homes, the field has grown from a narrow branch of industrial engineering into a broad, interdisciplinary challenge spanning mechanical engineering, computer science, artificial intelligence, ethics, and law.
The stakes are high. Between 1992 and 2017, the U.S. Bureau of Labor Statistics recorded 41 robot-related workplace fatalities, the majority involving stationary industrial robots striking workers during maintenance. In South Korea, government records show 369 robot-related accidents between 2009 and 2019, with 95% occurring in manufacturing. These figures, while small relative to overall workplace injuries, have shaped decades of standards development. They also underscore a counterintuitive finding: a 2025 study of 15 manufacturing industries across 18 European nations found that a 10% increase in robot density actually reduced workplace injuries by roughly 2%, largely because robots took over the most dangerous tasks.
This article covers the history of robot safety incidents, the international standards framework, physical safeguards for industrial and collaborative robots, safety in autonomous vehicles and mobile robots, software-level safety techniques including safe reinforcement learning, cybersecurity threats to robotic systems, ethical considerations, and the evolving regulatory landscape.
The first known human fatality caused by a robot occurred on January 25, 1979, at a Ford Motor Company casting plant in Flat Rock, Michigan. Robert Williams, a 25-year-old assembly line worker, was struck and killed by the arm of a one-ton robotic parts-retrieval vehicle built by Litton Industries. Williams had climbed into a storage rack to retrieve parts manually after the robot produced incorrect inventory readings. The robot's arm, unaware of his presence, struck him from behind. In 1983, a jury awarded his estate $10 million (later raised to $15 million), concluding that the system lacked adequate safety measures to detect human presence.
A similar incident occurred in Japan in 1981, when 37-year-old maintenance worker Kenji Urada was killed by a robotic arm at a Kawasaki Heavy Industries plant. These early fatalities exposed a fundamental problem: robots at the time had no ability to sense or respond to the presence of humans in their workspace.
Through the 1980s and 1990s, the standard response was physical separation. Robots operated inside fenced enclosures, with interlocked gates that cut power when opened. This approach, sometimes called the "cage paradigm," worked well for repetitive manufacturing but limited the usefulness of robots in tasks requiring human-robot interaction. The push toward collaborative robots (cobots) in the 2000s and 2010s forced a rethinking of safety from the ground up.
Robot safety standards form a layered system. At the top sit general functional safety standards like IEC 61508, which defines Safety Integrity Levels (SILs). Below those sit robot-specific standards that translate general principles into concrete requirements for robot manufacturers and integrators.
ISO 10218 is the primary international standard for industrial robot safety, published in two parts:
| Standard | Scope | Current edition |
|---|---|---|
| ISO 10218-1:2025 | Safety requirements for the industrial robot itself (design, controls, verification) | 3rd edition, 2025 |
| ISO 10218-2:2025 | Safety requirements for robot systems and integration (cell design, safeguarding, commissioning) | 3rd edition, 2025 |
| ISO/TS 15066:2016 | Supplementary guidance for collaborative robot applications (now folded into ISO 10218-2:2025) | Withdrawn, integrated |
The 2025 revision is a major overhaul. It replaces the 2011 editions and brings several changes, including the integration of ISO/TS 15066's collaborative-application requirements into ISO 10218-2 and the addition of cybersecurity requirements.
In North America, ISO 10218 has been adopted as ANSI/RIA R15.06 in the United States and CSA Z434 in Canada. Updated versions of both national standards incorporating the 2025 ISO revisions are expected in 2025 or 2026.
Before its integration into ISO 10218-2:2025, ISO/TS 15066 (published in 2016) was the key document defining safety requirements for human-robot collaboration. Its most referenced contribution is a table of maximum permissible contact forces and pressures for different body regions, based on pain-onset research. These thresholds distinguish between two types of contact: transient contact, a brief dynamic impact from which the affected body part can recoil, and quasi-static contact, in which a body part is clamped between the robot and another surface.
The following table shows representative transient contact limits from ISO/TS 15066 Annex A:
| Body region | Maximum transient force (N) | Maximum transient pressure (N/cm²) |
|---|---|---|
| Skull and forehead | 130 | 110 |
| Face | 65 | 110 |
| Chest | 140 | 110 |
| Upper arm and elbow | 150 | 130 |
| Hand and fingers | 140 | 200 |
| Thigh and knee | 220 | 160 |
Quasi-static (clamping) thresholds are typically 40-65% of the transient values. The face has the lowest force threshold at 65 N for transient contact, while the thigh and knee tolerate the highest at 220 N.
These limits apply specifically to blunt-impact and crushing hazards. As the standard notes, force limitation on the robot does not make the end-effector safe; a sharp tool or hot workpiece can cause injury at forces well below these thresholds, requiring separate risk assessment.
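These limits lend themselves to automated checking. The Python sketch below encodes the transient values from the table above and, purely for illustration, approximates quasi-static limits at 50% of the transient values (the midpoint of the 40-65% range; the standard tabulates exact per-region quasi-static values):

```python
# Representative transient contact limits from ISO/TS 15066 Annex A (N),
# as listed in the table above.
TRANSIENT_FORCE_LIMIT_N = {
    "skull_forehead": 130,
    "face": 65,
    "chest": 140,
    "upper_arm_elbow": 150,
    "hand_fingers": 140,
    "thigh_knee": 220,
}

def contact_within_limit(region: str, force_n: float, quasi_static: bool) -> bool:
    """Return True if a measured contact force is below the applicable limit."""
    limit = TRANSIENT_FORCE_LIMIT_N[region]
    if quasi_static:
        # Illustrative midpoint of the 40-65% range cited above; the
        # standard gives exact per-region quasi-static values.
        limit *= 0.5
    return force_n <= limit
```

For example, a 100 N transient contact with the chest is within the 140 N limit, but the same force applied as a sustained clamp is not.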
The ISO standards define four modes for collaborative applications, each with different safeguarding strategies:
| Mode | How it works | Typical sensors and mechanisms |
|---|---|---|
| Safety-rated monitored stop | The robot stops before a human enters the collaborative workspace and remains stationary while the human is present. Motion resumes only after the human leaves. | Safety-rated position, speed, and torque monitoring per ISO 13849 |
| Hand guiding | A human operator physically moves the robot by grasping a hand-guiding device. The robot's own drives provide compliant motion. | Force/torque sensors on the guiding device, enabling switch, emergency stop |
| Speed and separation monitoring | The robot dynamically adjusts its speed based on the measured distance to the nearest human. If the distance drops below a safety threshold, the robot slows or stops. | Laser scanners, 3D cameras, LiDAR, safety-rated speed monitoring |
| Power and force limiting | The robot is designed so that contact forces between it and a human never exceed the biomechanical limits defined in ISO/TS 15066. The robot may operate in close proximity to humans or even make incidental contact. | Joint torque sensors, compliant actuators, padded surfaces, current limiting |
Most commercial cobots from manufacturers like Universal Robots, FANUC, ABB, and KUKA use power and force limiting (PFL) as their primary safety mechanism. Speed and separation monitoring is increasingly used in conjunction with PFL, especially in applications involving AI-powered 3D vision systems that can differentiate between people, objects, and background environments.
Autonomous mobile robots (AMRs) and automated guided vehicles (AGVs) used in warehouses and factories are covered by ISO 3691-4, "Industrial trucks – Safety requirements and verification – Part 4: Driverless industrial trucks and their systems". The standard specifies requirements covering areas such as braking and speed control, personnel detection, operation within defined zones, and warning devices.
In the United States, the parallel standard is ANSI/RIA R15.08, which defines three types of Industrial Mobile Robots (IMRs):
| Type | Description |
|---|---|
| IMR Type A | Mobile robot without attachments |
| IMR Type B | Mobile robot with passive or active attachments, excluding manipulators |
| IMR Type C | Mobile robot platform with a manipulator arm |
Part 1 of R15.08 (published 2020) covers manufacturer requirements. Part 2 (published 2023) covers system integration. Part 3, covering user requirements, is in development.
For autonomous vehicles, ISO 21448 (Safety of the Intended Functionality, or SOTIF) addresses a category of hazards that traditional functional safety standards like ISO 26262 do not cover. While ISO 26262 deals with faults in electrical and electronic systems (a sensor fails, a chip malfunctions), SOTIF addresses hazards that arise when the system is working as designed but the design itself is insufficient for certain conditions.
For example, a self-driving car's perception system might correctly follow its programming yet misclassify a white truck against a bright sky as open road. There is no hardware fault; the limitation is in the intended functionality itself. SOTIF provides a framework for identifying, evaluating, and reducing these kinds of risks.
The standard introduces the concept of an Operational Design Domain (ODD), which defines the specific conditions under which an autonomous system is designed to operate safely (highway driving in clear weather, for instance, but not snowy mountain roads at night). Public acceptance sets an additional bar: research suggests people expect autonomous vehicles to reduce death or injury risk by 75-80% compared to human-driven cars before considering them acceptably safe.
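An ODD can be enforced in software as an explicit gate on autonomy. The sketch below is hypothetical and deliberately minimal; real ODD definitions enumerate many more attributes (road geometry, speed range, traffic conditions, sensor health, and so on):

```python
from dataclasses import dataclass

@dataclass
class DrivingConditions:
    road_type: str   # e.g. "highway", "mountain_road"
    weather: str     # e.g. "clear", "snow"
    is_daytime: bool

def within_odd(c: DrivingConditions) -> bool:
    """Illustrative ODD gate: highway driving in clear daytime weather.
    Outside the ODD the system must request handover or reach a safe stop."""
    return c.road_type == "highway" and c.weather == "clear" and c.is_daytime
```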
IEC 61508 is the umbrella standard for functional safety of electrical, electronic, and programmable electronic safety-related systems. It defines four Safety Integrity Levels (SILs):
| SIL | Probability of dangerous failure per hour (continuous mode) | Example applications |
|---|---|---|
| SIL 1 | >= 10^-6 to < 10^-5 | Light curtains, simple interlocks |
| SIL 2 | >= 10^-7 to < 10^-6 | Robot safety controllers, food processing safety systems |
| SIL 3 | >= 10^-8 to < 10^-7 | Emergency shutdown systems, railway signaling |
| SIL 4 | >= 10^-9 to < 10^-8 | Nuclear reactor protection systems |
Most industrial robot safety functions target SIL 2, which requires redundancy and diagnostic feedback. The robot-specific standards (ISO 10218, ISO 13849) build on IEC 61508's principles while tailoring them to robotic applications.
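The SIL bands above can be checked mechanically. A small Python sketch mapping a measured probability of dangerous failure per hour to its SIL band:

```python
def sil_for_pfh(pfh):
    """Map a probability of dangerous failure per hour (continuous mode) to
    an IEC 61508 Safety Integrity Level, per the bands in the table above.
    Returns None if the failure rate is too high to qualify for any SIL."""
    bands = [(1e-9, 1e-8, 4), (1e-8, 1e-7, 3), (1e-7, 1e-6, 2), (1e-6, 1e-5, 1)]
    for lower, upper, sil in bands:
        if lower <= pfh < upper:
            return sil
    return None
```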
Modern robots employ multiple strategies to avoid collisions with humans:
Pre-collision detection uses sensors to detect humans before contact occurs. Laser scanners create a 2D protective zone around the robot. 3D depth cameras and LiDAR systems build volumetric maps of the workspace. Some systems, like those developed by OMRON, use AI to predict human walking trajectories several seconds into the future, rerouting the robot preemptively when a person is likely to enter its path.
Speed and separation monitoring dynamically adjusts robot velocity based on human proximity. As a person approaches, the robot slows; if the person enters a critical zone, it stops entirely. This approach is formalized in the ISO standards and relies on safety-rated sensors with guaranteed response times.
Workspace monitoring uses safety-rated zone systems. An outer warning zone triggers reduced speed, while an inner protective zone triggers a stop. These zones can be dynamically reconfigured based on the task being performed.
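The zone logic above reduces to a simple speed command. The thresholds below are illustrative; a real implementation derives the protective separation distance from human and robot speeds, stopping performance, and sensor uncertainty, and runs on safety-rated hardware:

```python
def commanded_speed(distance_m, full_speed=1.0):
    """Simplified speed and separation monitoring with two static zones:
    a protective stop inside 0.5 m, reduced speed inside 1.5 m, and full
    speed beyond. Thresholds are illustrative, not normative."""
    PROTECTIVE_M = 0.5
    WARNING_M = 1.5
    if distance_m <= PROTECTIVE_M:
        return 0.0
    if distance_m <= WARNING_M:
        # Scale linearly from zero at the protective boundary to full speed
        return full_speed * (distance_m - PROTECTIVE_M) / (WARNING_M - PROTECTIVE_M)
    return full_speed
```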
When collision avoidance cannot guarantee contact-free operation (as in many collaborative applications), robots must limit the forces they can exert. Techniques include joint torque sensing, compliant or series-elastic actuators, motor current limiting, and padded surfaces that spread contact forces over a larger area.
Collaborative robots are designed with rounded surfaces, minimal pinch points, and smooth contours to reduce injury potential. Gaps between moving parts are either eliminated or made large enough that fingers cannot become trapped. Emergency stop buttons are prominently placed and accessible from multiple positions around the robot.
Reinforcement learning (RL) allows robots to learn behaviors through trial and error, but unconstrained exploration can produce dangerous actions, especially when transferring policies from simulation to the real world. Safe RL addresses this through several approaches:
Constrained optimization formulates the RL problem with explicit safety constraints. Instead of only maximizing a reward function, the agent must also satisfy constraints on the expected cumulative cost (such as collision frequency or joint torque violations). Constrained Markov Decision Processes (CMDPs) and algorithms like Constrained Policy Optimization (CPO) and Lagrangian-based methods are commonly used.
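A minimal sketch of the Lagrangian approach follows; episode costs and the budget are toy values standing in for measured quantities such as collision counts:

```python
def dual_update(lam, episode_cost, cost_budget, lr=0.1):
    """Dual gradient ascent on the constraint E[cost] <= budget:
    lambda rises while the constraint is violated and decays (toward
    zero, never below) when there is slack."""
    return max(0.0, lam + lr * (episode_cost - cost_budget))

def penalized_reward(reward, cost, lam):
    """The reward signal the policy actually optimizes."""
    return reward - lam * cost

# Costs exceed the budget early, so lambda grows, then decays as the
# policy becomes safer.
lam = 0.0
for episode_cost in [3.0, 2.5, 0.5, 0.2]:  # budget = 1.0
    lam = dual_update(lam, episode_cost, cost_budget=1.0)
```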
Control barrier functions (CBFs) define a safe set in the state space and modify the robot's actions in real time to keep the system within that set. A CBF acts as a safety filter: the learned policy proposes an action, and the CBF adjusts it minimally to maintain safety guarantees. This approach can provide formal safety certificates while still allowing the RL agent flexibility in how it accomplishes tasks.
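For simple dynamics the safety filter has a closed form. The sketch below assumes a 1-D integrator x' = u with safe set h(x) = x - x_min >= 0; the CBF condition h' >= -alpha * h reduces to a lower bound on the action, so the minimal correction is a clamp (the general case solves a small quadratic program at each step):

```python
def cbf_filter(x, u_nominal, x_min=0.5, alpha=2.0):
    """CBF safety filter for x' = u with h(x) = x - x_min.
    Enforces u >= -alpha * (x - x_min), the minimal change to the
    policy's proposed action that keeps the safe set invariant."""
    u_lower = -alpha * (x - x_min)
    return max(u_nominal, u_lower)
```

Far from the boundary the policy's action passes through unchanged; near it, aggressive approach commands are clipped.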
Shielding uses a safety monitor (or "shield") that runs alongside the RL policy. The shield has access to a verified model of safety constraints and can override the policy's actions when they would lead to unsafe states. This lets the RL agent explore freely in safe regions while hard-blocking dangerous behavior.
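Operationally, a shield is a runtime override wrapped around the policy. In the sketch below, is_safe and fallback are placeholders for a formally verified safety model and a known-safe action:

```python
def shielded_action(state, proposed_action, is_safe, fallback):
    """Execute the policy's proposal only if the verified safety model
    approves it; otherwise substitute the known-safe fallback action."""
    return proposed_action if is_safe(state, proposed_action) else fallback(state)

# Toy instance: block speed commands above a state-dependent cap
is_safe = lambda s, a: abs(a) <= s["speed_cap"]
fallback = lambda s: 0.0
```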
Formal verification applies mathematical proof techniques to guarantee that a robot's control policy satisfies specified safety properties. A 2026 survey in the field identifies two main pillars:
Policy learning with formal specifications. Instead of learning from hand-crafted reward functions, robots learn from formal specifications written in temporal logics such as Linear Temporal Logic (LTL) or Signal Temporal Logic (STL). LTL can express requirements like "always eventually visit the charging station" or "never enter the restricted zone while a human is present." STL extends this to continuous systems with real-valued signals and robustness metrics. These specifications can be automatically translated into reward functions that guide RL training.
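STL's robustness semantics can be computed directly on sampled traces. A minimal sketch for the formula "globally, the signal stays at or above a threshold" (G(x >= c)), whose robustness is the worst-case margin over the trace:

```python
def robustness_always_ge(signal, threshold):
    """Quantitative robustness of G(x >= threshold) over a sampled trace:
    the worst-case margin min over t of (x[t] - threshold). Positive means
    the property holds with that much slack; negative means it is violated."""
    return min(x - threshold for x in signal)

# A separation trace (metres) that dips to 0.4 violates "always >= 0.5"
trace = [1.2, 0.9, 0.4, 0.8]
```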
Policy verification after learning. Once a policy is trained, verification methods check whether it satisfies safety properties across all possible states:
| Verification approach | Technique | Strengths | Limitations |
|---|---|---|---|
| Reachability analysis | Computes sets of states the robot can reach; checks if unsafe states are reachable | Provides rigorous guarantees for continuous systems | Computationally expensive for high-dimensional systems |
| Certificate functions | Uses Lyapunov functions, barrier certificates, or contraction metrics to prove stability and safety | Can work without a full system model (model-free verification) | Finding valid certificate functions is itself a hard problem |
| Runtime monitoring | Monitors the robot during execution against formal specifications; triggers fallback behavior if violations are detected | Low computational overhead; works with black-box policies | Reactive rather than proactive; violations may be detected too late |
| Model checking | Exhaustively explores all possible state transitions | Complete coverage for finite-state systems | Does not scale to continuous or very large state spaces |
A persistent challenge is scalability. Formal verification of neural network policies remains computationally expensive, and most practical deployments rely on runtime monitoring combined with conservative fallback controllers rather than full offline verification.
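The flavor of reachability analysis can be seen in one dimension, where interval propagation is exact. The dynamics and the unsafe threshold below are illustrative:

```python
def step_interval(lo, hi, a=0.9, u=(-0.1, 0.1)):
    """One-step reachable interval for x+ = a*x + u, with a > 0,
    x in [lo, hi], and bounded disturbance u in [u[0], u[1]]."""
    return a * lo + u[0], a * hi + u[1]

def reaches_unsafe(lo, hi, unsafe_above, steps=20):
    """Check whether any state >= unsafe_above is reachable within
    `steps` steps from the initial interval [lo, hi]."""
    for _ in range(steps):
        lo, hi = step_interval(lo, hi)
        if hi >= unsafe_above:
            return True
    return False
```

Because the dynamics here are contracting, the reachable set settles toward a bounded band; states far above it are provably unreachable.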
Many robot policies are trained in simulation before being deployed on physical hardware. The "reality gap" between simulated and real environments creates safety risks: a policy that behaves safely in simulation may exhibit unsafe behavior in the real world due to differences in physics, sensor noise, or environmental conditions.
Recent approaches to bridging this gap include domain randomization (varying the simulator's physics during training), system identification to calibrate the simulator against the real robot, and conservative real-world fine-tuning under human supervision.
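One widely used technique, domain randomization, resamples the simulator's physical parameters each episode so the policy cannot overfit a single setting. A minimal sketch, with parameter names and ranges that are purely illustrative:

```python
import random

def sample_sim_params(rng):
    """Draw one episode's simulator parameters from ranges believed to
    bracket the real robot's physics, sensing, and actuation."""
    return {
        "mass_kg": rng.uniform(0.8, 1.2),          # +/- 20% payload mass
        "friction": rng.uniform(0.5, 1.5),
        "sensor_noise_std": rng.uniform(0.0, 0.02),
        "actuation_delay_s": rng.uniform(0.0, 0.05),
    }

rng = random.Random(0)  # seeded for reproducible training runs
params = sample_sim_params(rng)
```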
Autonomous driving is one of the most safety-critical applications of AI in robotics. Unlike industrial robots that operate in controlled factory environments, autonomous vehicles must handle an essentially infinite variety of scenarios on public roads: unpredictable human drivers, pedestrians, cyclists, animals, construction zones, weather events, and sensor degradation.
The safety framework for autonomous vehicles rests on two pillars: functional safety (ISO 26262), covering faults in the vehicle's electrical and electronic systems, and safety of the intended functionality (ISO 21448/SOTIF), covering hazards that arise without any fault.
Validating autonomous vehicle safety is an open problem. RAND Corporation research has estimated that an autonomous vehicle fleet would need to drive hundreds of millions of miles, and potentially hundreds of billions of miles, to statistically demonstrate with confidence that it is safer than human driving. This makes real-world testing alone insufficient, driving the industry toward simulation-based testing, scenario-based validation, and formal methods.
Different jurisdictions have adopted different regulatory strategies:
| Jurisdiction | Approach | Status |
|---|---|---|
| United States (federal) | NHTSA voluntary guidance; no federal self-certification requirement specific to autonomy | Ongoing rulemaking |
| United States (state level) | Varies widely; California, Arizona, and Texas have permitting frameworks for autonomous vehicle testing and deployment | Active deployments (e.g., Waymo, Zoox) |
| European Union | UN Regulation No. 157 for Automated Lane Keeping Systems (ALKS); EU AI Act classification of autonomous systems as high-risk | ALKS in effect; AI Act fully applicable August 2026 |
| China | Provisional regulations in Beijing, Shanghai, Shenzhen, and other cities; national-level regulations under development | Testing and limited commercial deployment |
| Japan | Amended Road Traffic Act (2023) permits Level 4 autonomous driving in designated areas | Limited deployment |
Unmanned aerial vehicles (UAVs or drones) operate under their own safety frameworks. In the United States, FAA Part 107 governs commercial drone operations for aircraft under 55 pounds. Key safety requirements include Remote Pilot Certification, visual line of sight operation (with limited waivers for beyond-visual-line-of-sight, or BVLOS), anti-collision lighting for night flights visible at three statute miles, and Remote ID broadcasting requirements (mandatory since September 2023) that allow authorities to identify and locate drones in flight.
As drone delivery and urban air mobility expand, new safety standards are being developed for operations over people, operations beyond the pilot's line of sight, and integration with manned aviation.
Surgical robots represent a domain where robot safety directly affects patient welfare. The da Vinci Surgical System, manufactured by Intuitive Surgical, is the most widely deployed surgical robot platform, with an estimated 15.9 million procedures performed between January 2015 and June 2025. A 2025 analysis of FDA MAUDE (Manufacturer and User Facility Device Experience) data identified 66,651 reports over that period, though recent reliability analyses indicate a 99% technical reliability rate.
Reported adverse events include instrument malfunctions, unintended tissue contact, electrical arcing, and system errors requiring conversion to open surgery. The FDA has issued warning letters to Intuitive Surgical regarding safety reporting practices and has conducted recalls, including a March 2025 recall of the da Vinci 5 system related to a foot pedal spring failure.
Surgical robot safety depends on redundant mechanical stops, force feedback to the surgeon, real-time system monitoring with automatic shutdown on fault detection, and rigorous maintenance and sterilization protocols. Unlike autonomous robots, most surgical robots (including the da Vinci system) are teleoperated: a surgeon controls the instruments directly, with the robot providing enhanced precision and dexterity rather than autonomous decision-making.
As robots become networked, cybersecurity has become a safety issue, not just an IT concern. A compromised industrial robot could alter its movements to damage products or equipment, or to injure workers. A hacked autonomous vehicle could endanger lives.
Research and real-world incidents have revealed several categories of robot cybersecurity risk:
The integration of cybersecurity requirements into ISO 10218:2025 represents a first step. The IEC 62443 standard series addresses cybersecurity for industrial automation and control systems more broadly, defining Security Levels (SL 1 through SL 4) analogous to Safety Integrity Levels. The EU Cyber Resilience Act (CRA), which becomes mandatory in 2027, will impose cybersecurity requirements on products with digital elements, including robots.
Isaac Asimov introduced his Three Laws of Robotics in the 1942 short story "Runaround":

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Asimov later added a Zeroth Law: "A robot may not harm humanity, or, by inaction, allow humanity to come to harm."
These laws have influenced popular thinking about robot ethics and have inspired real-world safety guidelines. Google DeepMind's "Robot Constitution," for example, includes safety-focused prompts instructing robots to avoid tasks involving humans, animals, sharp objects, and electrical appliances, echoing Asimov's First Law.
However, Asimov himself wrote his stories to explore how the laws would fail in practice. No current technology can implement the laws as stated, because they require a robot to understand concepts like "harm" and "inaction" with human-level judgment. Modern AI safety researchers generally view Asimov's laws as a useful conversation starter rather than an actionable engineering framework. Real-world robot safety depends on specific, testable requirements (force limits, stopping distances, failure probabilities) rather than broad ethical commandments.
The trolley problem, a thought experiment about choosing between two harmful outcomes, has become closely associated with autonomous driving ethics. The scenario asks: if an autonomous vehicle must choose between hitting one group of people or another, how should it decide?
Most autonomous vehicle researchers and ethicists consider the trolley problem a misleading framing. Real autonomous vehicle accidents involve sensor uncertainty, reaction time constraints, and probabilistic outcomes, not clean binary choices. The German Ethics Commission for Automated and Connected Driving explicitly stated that autonomous vehicles should not be programmed to make such choices; systems should detect whether human life is present and take all available action to avoid harm, but should not attempt to rank or trade lives.
A more practical ethical framework focuses on the overall risk profile: does the autonomous system reduce total harm compared to human driving? Can it distribute risk fairly across all road users? Is the development process transparent and subject to independent audit?
When a robot causes harm, the question of liability is complex. Is the manufacturer responsible? The system integrator? The operator? The developer of the AI software? Different jurisdictions handle this differently:
The EU is building a layered regulatory framework for robot safety:
| Regulation | Scope | Timeline |
|---|---|---|
| Machinery Regulation (EU) 2023/1230 | Replaces the Machinery Directive; covers physical safety of robots as machines; includes AI-specific categories in its high-risk list | Mandatory from January 2027 |
| EU AI Act (Regulation (EU) 2024/1689) | Classifies AI systems by risk level; autonomous robots are generally "high-risk," requiring conformity assessment, risk management, and human oversight | Prohibited practices from February 2025; full application from August 2026 |
| Cyber Resilience Act | Cybersecurity requirements for products with digital elements, including connected robots | Mandatory from 2027 |
| General Product Safety Regulation (GPSR) | Baseline product safety for consumer products including consumer robots | In effect from December 2024 |
The U.S. approach relies more heavily on voluntary industry consensus standards (ANSI/RIA R15.06, R15.08) enforced through OSHA workplace safety requirements. OSHA does not have a robot-specific standard but applies general duty clause requirements and references ANSI/RIA standards in its enforcement guidance. NHTSA provides guidance for autonomous vehicles, and the FAA regulates drones under Part 107.
Efforts are underway through ISO, IEC, and the International Telecommunication Union (ITU) to harmonize robot safety standards globally. The goal is to prevent a fragmented regulatory landscape where robots certified in one country must be re-certified for each new market.
The emergence of humanoid robots such as Tesla Optimus, Boston Dynamics Atlas, Agility Robotics Digit, and Figure AI's Figure 02 presents new safety challenges. Unlike traditional industrial robots with fixed bases and well-defined workspaces, humanoid robots are mobile, can access unpredictable environments, and may work in close proximity to untrained members of the public.
As of early 2026, there are no publicly available independent safety certifications for any commercial humanoid robot. Human-supervision protocols, teleoperation mechanisms, and mean-time-between-failure data have not been disclosed by most manufacturers. Regulators in the EU, United States, and Asia are working on frameworks that humanoid robots will fall under once they leave controlled factory settings, but the standards lag behind the technology.
Safety requirements for humanoid robots will likely need to address dynamic balance and controlled falling, safe physical contact with untrained bystanders, force and speed limits during mobile manipulation, battery and energy-storage hazards, and fail-safe behavior when power, balance, or communication is lost.