Human-Robot Interaction Architecture Design
Human-robot interaction (HRI) architecture defines the structural frameworks, communication protocols, and decision logic that govern how robotic systems perceive, interpret, and respond to human presence, intent, and commands. This page surveys the design landscape of HRI architecture: the interaction modalities involved, the standards that structure the field, and the functional layers that distinguish one architectural approach from another. The domain spans industrial automation, surgical robotics, service robots, and autonomous vehicles, making architectural decisions foundational to both performance and regulatory compliance.
Definition and scope
HRI architecture is a subfield of robotics architecture that specifically addresses the bidirectional communication channel between a human operator or co-worker and a robotic agent. The International Organization for Standardization (ISO) defines requirements for collaborative robot operation under ISO/TS 15066:2016, which establishes power and force limiting thresholds and workspace-sharing constraints — specifications that directly shape architectural choices around sensor placement, control loop timing, and safety arbitration.
The scope of HRI architecture encompasses four principal interaction modalities:
- Physical interaction — direct force contact, hand-guiding, and collaborative manipulation
- Proxemic interaction — workspace sharing without physical contact, governed by proximity sensing and speed-separation monitoring
- Symbolic interaction — gesture recognition, speech commands, and graphical interface inputs
- Cognitive interaction — intent prediction, shared autonomy, and adaptive task allocation based on inferred human state
Each modality imposes distinct latency, sensing, and arbitration requirements on the underlying architecture. Physical interaction demands sub-10-millisecond control loop response times to satisfy ISO/TS 15066 safety limits; symbolic interaction tolerates latencies measured in hundreds of milliseconds. These differences are not cosmetic — they determine whether a deliberative planner or a reactive safety layer holds priority during a given interaction phase, a distinction elaborated in reactive vs. deliberative architecture.
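The routing consequence described above can be made concrete in a minimal sketch. The sub-10-millisecond physical budget follows the text; the proxemic, symbolic, and cognitive budgets and the 50 ms routing threshold are illustrative assumptions, not values from any standard.

```python
from enum import Enum

class Modality(Enum):
    PHYSICAL = "physical"
    PROXEMIC = "proxemic"
    SYMBOLIC = "symbolic"
    COGNITIVE = "cognitive"

# Illustrative latency budgets in milliseconds. Only the physical figure
# tracks the sub-10 ms requirement discussed in the text; the rest are
# assumed for this sketch.
LATENCY_BUDGET_MS = {
    Modality.PHYSICAL: 10,
    Modality.PROXEMIC: 50,
    Modality.SYMBOLIC: 300,
    Modality.COGNITIVE: 1000,
}

def controlling_layer(modality: Modality) -> str:
    """Route tight-deadline modalities to the reactive safety layer and
    slower ones to the deliberative planner (50 ms cutoff is assumed)."""
    return "reactive" if LATENCY_BUDGET_MS[modality] <= 50 else "deliberative"
```

A dispatcher like this is one way an arbitration layer could decide, per interaction phase, whether the reactive safety layer or the deliberative planner holds priority.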
How it works
HRI architecture typically operates as a layered control structure in which a safety monitor arbitrates between human input channels and the robot's autonomous planning stack. The layered control model, described in IEEE Robotics and Automation Society conference literature, organizes these responsibilities into a hierarchy:
- Perception layer — multimodal sensor fusion integrating depth cameras, force-torque sensors, and audio processing to detect human presence, posture, and intent signals
- Interpretation layer — semantic processing converts raw sensor data into labeled human states (e.g., "reaching toward robot workspace," "issuing verbal stop command")
- Arbitration layer — a safety monitor continuously evaluates whether human proximity or command state warrants override of the autonomous planning stack
- Execution layer — compliant motion controllers implement the arbitrated command, with force limits enforced in hardware or low-level firmware
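The four layers can be sketched as a pass-through pipeline. This is a toy illustration under assumed thresholds (a 0.5 m proximity trigger, keyword-based speech interpretation); a real perception layer would perform genuine sensor fusion rather than relabeling fields.

```python
from dataclasses import dataclass

@dataclass
class HumanState:
    label: str         # e.g. "reaching_toward_workspace"
    distance_m: float  # estimated human-robot separation

def perceive(raw: dict) -> dict:
    """Perception layer: stand-in for multimodal fusion of depth and audio."""
    return {"distance_m": raw["depth_m"], "speech": raw.get("speech", "")}

def interpret(fused: dict) -> HumanState:
    """Interpretation layer: map fused data to a labeled human state."""
    if "stop" in fused["speech"].lower():
        return HumanState("verbal_stop", fused["distance_m"])
    if fused["distance_m"] < 0.5:  # assumed proximity trigger
        return HumanState("reaching_toward_workspace", fused["distance_m"])
    return HumanState("clear", fused["distance_m"])

def arbitrate(state: HumanState, planned_cmd: str) -> str:
    """Arbitration layer: the safety monitor may override the planner."""
    if state.label in ("verbal_stop", "reaching_toward_workspace"):
        return "protective_stop"
    return planned_cmd

def execute(cmd: str) -> str:
    """Execution layer stand-in: a real system would hand cmd to a
    compliant motion controller with hardware-enforced force limits."""
    return cmd
```

For example, `execute(arbitrate(interpret(perceive({"depth_m": 0.3})), "continue_path"))` yields a protective stop, while a clear workspace lets the planner's command through unchanged.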
The National Institute of Standards and Technology (NIST) has published performance metrics for HRI under its Performance of Human-Robot Teams program, identifying response latency, situation awareness transfer, and task completion rate as the primary quantifiable benchmarks for architectural evaluation. The sensor fusion architecture page details how multimodal inputs are combined before reaching the interpretation layer.
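Two of the three benchmarks named above reduce to simple aggregates over trial logs; situation awareness transfer is omitted here because it typically requires observer scoring rather than automatic logging. The trial record shape is an assumption of this sketch.

```python
def hri_benchmarks(trials):
    """Aggregate response latency and task completion rate from trial
    records of the assumed form {'latency_ms': float, 'completed': bool}."""
    latencies = [t["latency_ms"] for t in trials]
    return {
        "mean_response_latency_ms": sum(latencies) / len(latencies),
        "task_completion_rate": sum(t["completed"] for t in trials) / len(trials),
    }
```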
Common scenarios
HRI architecture manifests differently across deployment contexts. Three structurally distinct scenarios illustrate the range of design constraints:
Industrial collaborative assembly — A cobot operating under ISO 10218-1 (robot safety) and ISO/TS 15066 shares a workspace with a human assembler. The architecture prioritizes speed-separation monitoring and power-and-force-limiting modes. The safety arbitration layer must complete its evaluation cycle within 8 milliseconds to maintain compliance with category-3 safety function requirements as defined under ISO 13849-1.
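Speed-separation monitoring rests on a protective separation distance check. The sketch below follows the spirit of the ISO/TS 15066 formulation (human travel during robot reaction and stopping, plus robot travel, plus intrusion and uncertainty margins) but simplifies it to constant speeds and linear deceleration; all numeric defaults are assumed, not values from the standard.

```python
def protective_separation_m(v_human, v_robot, t_react, t_stop,
                            c_intrusion=0.2, z_unc=0.1):
    """Simplified protective separation distance (meters), in the spirit of
    S_p = S_h + S_r + S_s + C + Z from ISO/TS 15066 speed-and-separation
    monitoring. Constant speeds and linear braking are assumed."""
    s_h = v_human * (t_react + t_stop)  # human travel while robot reacts and stops
    s_r = v_robot * t_react             # robot travel during reaction time
    s_s = 0.5 * v_robot * t_stop        # robot travel while braking linearly
    return s_h + s_r + s_s + c_intrusion + z_unc

def ssm_ok(measured_distance_m, **kw) -> bool:
    """True when the measured separation satisfies the protective distance."""
    return measured_distance_m >= protective_separation_m(**kw)
```

With an assumed 1.6 m/s human walking speed, a 1.0 m/s robot, 0.1 s reaction time, and 0.3 s stopping time, the required separation works out to 1.19 m; a measured distance below that would trigger a protective stop or speed reduction.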
Surgical robotics teleoperation — The surgeon operates a master console at a spatial remove from the patient-side robot. The HRI architecture manages haptic feedback, motion scaling (typically 3:1 to 10:1), tremor filtering, and stereo video latency, which the U.S. Food and Drug Administration's 510(k) evaluation pathway assesses as part of human factors engineering submissions.
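Motion scaling and tremor filtering can be combined in one master-to-slave mapping. The 5:1 scale sits inside the 3:1 to 10:1 range quoted above; the first-order low-pass filter and its smoothing factor are an assumed, crude stand-in for the notch or band-stop tremor filters used in practice.

```python
def scale_and_filter(master_deltas, scale=5.0, alpha=0.2):
    """Map master-console position increments to patient-side increments:
    divide by the motion scale (e.g. 5:1, assumed) and smooth with a
    first-order low-pass filter (alpha assumed) to suppress tremor."""
    filtered, y = [], 0.0
    for dx in master_deltas:
        y = alpha * (dx / scale) + (1 - alpha) * y  # exponential smoothing
        filtered.append(y)
    return filtered
```

The design trade-off is visible in the parameters: a higher scale improves precision at the cost of workspace reach, and a smaller alpha suppresses more tremor at the cost of added effective latency, which the architecture must keep within the overall teleoperation latency budget.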
Service robot navigation in public spaces — A mobile service robot must interpret pedestrian intent and social proxemics. The architecture integrates pedestrian trajectory prediction models with social force modeling, a domain where NIST's test methods for autonomous robot navigation provide standardized obstacle-negotiation benchmarks. The broader mobile robot architecture framework describes the navigation stack that underlies this scenario.
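The repulsive term of a social force model is a short computation. The exponential form below (magnitude A·exp(−d/B) along the separation direction) follows the classic Helbing-style formulation; the constants A and B are assumed tuning parameters, and a full model would add goal attraction and obstacle terms.

```python
import math

def social_force(robot_xy, ped_xy, a=2.0, b=0.5):
    """Repulsive social force exerted by one pedestrian on the robot,
    magnitude a * exp(-d / b) directed away from the pedestrian.
    a and b are assumed tuning constants."""
    dx = robot_xy[0] - ped_xy[0]
    dy = robot_xy[1] - ped_xy[1]
    d = math.hypot(dx, dy) or 1e-9  # avoid division by zero at d == 0
    mag = a * math.exp(-d / b)
    return (mag * dx / d, mag * dy / d)
```

Summing this term over nearby pedestrians, together with predicted trajectories, gives the navigation stack a velocity correction that keeps the robot outside socially uncomfortable proxemic zones.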
Decision boundaries
The principal architectural decision in HRI design is the allocation of authority between human control and robot autonomy — a spectrum that runs from full teleoperation (0% robot autonomy) to supervised autonomy (human approves high-level goals only) to full autonomy (no runtime human input). NIST's taxonomy of robot autonomy levels, derived from the work of Thomas Sheridan at MIT, defines 10 levels of automation ranging from complete human control to full computer control.
A secondary decision boundary distinguishes tight coupling from loose coupling of the human-robot interface:
- Tight coupling — the human is in the robot's control loop at every cycle; latency requirements are stringent and errors propagate immediately
- Loose coupling — the human issues task-level goals; the robot autonomously selects methods, reducing latency requirements but increasing demands on autonomous decision-making architecture
Safety architecture choices follow from this boundary. Tightly coupled systems embed safety enforcement in hardware and firmware. Loosely coupled systems require software-level fault tolerance and formal verification of the autonomous planning stack. The functional safety standards for robotics, specifically IEC 61508 and ISO 13849, define the required safety integrity and performance levels for each coupling regime.
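The two coupling regimes imply different interfaces, which a minimal sketch can contrast. The 10 ms staleness bound and the stop-on-stale behavior in the tight case are assumptions for illustration, echoing the point that errors propagate immediately when the human sits inside the control loop.

```python
class TightCoupling:
    """Human command consumed every control cycle; a stale command is
    treated as a fault and triggers a protective stop."""
    def __init__(self, max_age_s=0.01):  # 10 ms staleness bound (assumed)
        self.max_age_s = max_age_s

    def step(self, human_cmd, cmd_timestamp_s, now_s):
        if now_s - cmd_timestamp_s > self.max_age_s:
            return "protective_stop"  # error propagates immediately
        return human_cmd

class LooseCoupling:
    """Human supplies task-level goals; the robot's planner autonomously
    selects the method, so command staleness matters far less."""
    def __init__(self, planner):
        self.planner = planner  # assumed callable: goal -> motion primitives

    def step(self, goal):
        return self.planner(goal)
```

In the tight case the safety check is inseparable from the command path itself, which is why such systems push enforcement into hardware and firmware; in the loose case correctness shifts to the planner, motivating the software-level verification demands noted above.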
The Robotics Architecture Authority index structures the broader landscape of architectural domains from which HRI design draws its component frameworks.