Mobile Robot Architecture: Navigation and System Design
Mobile robot architecture encompasses the integrated hardware, software, and algorithmic structures that enable autonomous ground vehicles, aerial drones, and aquatic platforms to perceive their environment, plan paths, and execute movement without continuous human intervention. This page describes the structural composition of mobile robotic systems, the functional layers that govern navigation and control, the operational scenarios that define deployment boundaries, and the architectural decision points that distinguish one system design from another. These considerations are foundational to procurement, integration, and certification work across logistics, defense, agriculture, and industrial facilities.
Definition and scope
A mobile robot, as defined in ISO 8373:2021, is a robot "able to travel under its own control." That deceptively brief definition carries substantial engineering consequence: it excludes fixed manipulators, transfer machines, and any platform requiring continuous human guidance. Within the scope of mobile robot architecture, the relevant domains include autonomous mobile robots (AMRs), automated guided vehicles (AGVs), unmanned aerial vehicles (UAVs), and unmanned ground vehicles (UGVs), each subject to distinct locomotion constraints, sensor requirements, and safety envelope specifications.
The National Institute of Standards and Technology (NIST) maintains active research programs in mobile robot performance standards, including test methods for navigation autonomy and mobility evaluation that inform procurement specifications used by federal agencies. The NIST performance metric framework distinguishes between three primary autonomy dimensions: sensing, perceiving, and acting — a classification that directly shapes how system architects partition functional responsibility across subsystems.
Scope boundaries also intersect safety regulation. In the United States, the American National Standards Institute and the Association for Advancing Automation jointly publish ANSI/RIA R15.08, the industrial mobile robot safety standard, which defines minimum performance criteria for hazard detection, speed limiting, and operational domain restrictions. Any mobile robot deployed in shared human-robot workspaces must comply with these boundaries, making architectural safety design inseparable from functional design.
How it works
Mobile robot architecture is structured as a layered functional stack, where each layer has defined inputs, outputs, and interfaces. The standard decomposition, aligned with the ROS 2 Navigation framework published by Open Robotics, organizes the stack into five operational layers:
- Sensor acquisition layer — Raw data ingestion from lidar, cameras, IMUs, wheel encoders, and ultrasonic rangefinders. A hardware abstraction layer at this level decouples sensor hardware from downstream processing.
- Perception layer — Sensor fusion, object detection, and environment representation. The robotic perception pipeline transforms raw sensor streams into structured representations (point clouds, semantic maps, obstacle grids) that navigation algorithms consume.
- Localization and mapping layer — Simultaneous localization and mapping (SLAM) algorithms maintain a probabilistic estimate of robot pose within a map. Algorithms such as Extended Kalman Filter SLAM, particle filter-based FastSLAM, and graph-based SLAM operate at this layer.
- Planning layer — Global path planning (typically A* or Dijkstra-based) and local trajectory planning (Dynamic Window Approach, TEB, or model predictive control) generate motion commands. Motion planning architecture governs algorithm selection based on environment dynamism and compute constraints.
- Execution and control layer — Low-level actuator commands are issued to drive motors, steering servos, or rotor controllers through defined actuator control interfaces, with feedback loops operating at cycle times typically between 1 ms and 10 ms in real-time control systems.
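The layered decomposition above can be sketched as a pipeline of small functions, one per layer. This is a deliberately minimal illustration, not the Nav2 API: real stacks run each layer as a separate process or node communicating over middleware, and all names, thresholds, and the dead-reckoning stand-in for SLAM here are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Pose:
    x: float
    y: float
    theta: float

def acquire(raw_scan_mm):
    """Sensor acquisition: hardware abstraction converts raw mm readings to metres."""
    return [r / 1000.0 for r in raw_scan_mm]

def perceive(ranges_m, obstacle_threshold_m=1.0):
    """Perception: flag beams shorter than the threshold as obstacles."""
    return [r < obstacle_threshold_m for r in ranges_m]

def localize(pose, dx, dtheta):
    """Localization: dead-reckoning pose update (a stand-in for SLAM)."""
    return Pose(pose.x + dx, pose.y, pose.theta + dtheta)

def plan(obstacle_flags):
    """Planning: stop if anything is flagged, else command a cruise speed (m/s)."""
    return 0.0 if any(obstacle_flags) else 0.5

def control_cycle(pose, raw_scan_mm):
    """Execution: one tick of the stack, top layer to bottom."""
    ranges = acquire(raw_scan_mm)
    obstacles = perceive(ranges)
    pose = localize(pose, dx=0.01, dtheta=0.0)
    velocity = plan(obstacles)
    return pose, velocity
```

In a deployed system, `control_cycle` would be replaced by the middleware's executor scheduling each layer independently, which is what allows the execution layer to run at 1–10 ms cycle times while perception runs slower.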
Sensor fusion architecture is the critical integration mechanism between the sensor acquisition and perception layers. Kalman filter variants and deep learning-based fusion models combine heterogeneous data streams to produce estimates with lower uncertainty than any single sensor could achieve alone. The middleware layer — described in detail under middleware selection for robotics — manages inter-layer communication, with ROS 2 using a DDS (Data Distribution Service) transport layer that supports Quality of Service (QoS) profiles configurable for real-time or best-effort delivery.
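The uncertainty-reduction property of fusion can be shown with the scalar measurement-update step of a Kalman filter, which combines two independent estimates of the same quantity weighted by their variances. This is a one-dimensional sketch for intuition only; production fusion operates on full state vectors with covariance matrices.

```python
def fuse(mean_a, var_a, mean_b, var_b):
    """Variance-weighted fusion of two independent scalar estimates of the
    same quantity (the measurement-update step of a 1-D Kalman filter).
    The fused variance is always smaller than either input variance."""
    k = var_a / (var_a + var_b)            # Kalman gain: trust b more as var_a grows
    mean = mean_a + k * (mean_b - mean_a)  # fused estimate lies between the inputs
    var = (1.0 - k) * var_a                # uncertainty shrinks after fusion
    return mean, var
```

For example, fusing a wheel-odometry range estimate of 2.0 m (variance 0.04) with a lidar estimate of 2.2 m (variance 0.09) yields a fused estimate between the two with variance below 0.04, which is why fused localization outperforms any single sensor.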
Common scenarios
Mobile robot architecture manifests differently across deployment contexts, with each scenario imposing distinct constraints on sensor selection, compute allocation, and communication topology.
Warehouse and logistics AMRs operate in semi-structured indoor environments where reflective floors and repetitive rack geometry challenge lidar-based SLAM. Systems in this class, such as those conforming to MHI (Material Handling Institute) standards, typically use 2D lidar at a height of 200–400 mm above floor level combined with ceiling-mounted fiducial markers for localization correction. Payload capacities in this class range from 100 kg to over 1,500 kg, with navigation accuracy requirements of ±10 mm for docking operations.
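The fiducial-correction pattern described above can be sketched as a weighted pull of the drifting onboard estimate toward a surveyed marker position, plus a check against the ±10 mm docking tolerance. The blend weight and function names are hypothetical; real systems fold marker observations into the SLAM filter rather than blending poses directly.

```python
def correct_with_fiducial(est_xy, fiducial_xy, trust=0.8):
    """Pull a drifting SLAM/odometry estimate toward a ceiling marker whose
    surveyed position is known. `trust` is a hypothetical weight on the
    marker fix (1.0 would snap the estimate to the marker)."""
    ex, ey = est_xy
    fx, fy = fiducial_xy
    return (ex + trust * (fx - ex), ey + trust * (fy - ey))

def within_docking_tolerance(est_xy, dock_xy, tol_m=0.010):
    """Check the +/-10 mm per-axis docking accuracy requirement."""
    return (abs(est_xy[0] - dock_xy[0]) <= tol_m
            and abs(est_xy[1] - dock_xy[1]) <= tol_m)
```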
Outdoor UGVs in agricultural or defense applications must handle unstructured terrain, variable lighting, and GPS-degraded environments. These platforms integrate 3D lidar (commonly 32-beam or 64-beam units), stereo cameras, and RTK-GPS with IMU fusion to maintain localization accuracy under tree canopy or in GPS-contested zones. The edge computing architecture required for these platforms is substantially more demanding than indoor equivalents because cloud connectivity cannot be assumed.
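One common building block of the GPS-plus-IMU fusion mentioned above is a complementary filter for heading: the gyro integral is trusted at short timescales (it is smooth but drifts) and the absolute RTK-GPS or compass heading at long timescales (it is noisy but drift-free). The blend weight `alpha` below is a hypothetical tuning value.

```python
import math

def complementary_heading(prev_heading, gyro_rate, dt, abs_heading, alpha=0.98):
    """Complementary filter for heading (radians). `alpha` near 1.0 favours
    the integrated gyro; the remainder corrects slowly toward the absolute
    (RTK-GPS/compass) heading so drift is bounded."""
    predicted = prev_heading + gyro_rate * dt
    # wrap the innovation to (-pi, pi] so headings near the +/-pi seam blend sanely
    err = math.atan2(math.sin(abs_heading - predicted),
                     math.cos(abs_heading - predicted))
    return predicted + (1.0 - alpha) * err
```

When the GPS fix degrades under canopy, `alpha` can be raised toward 1.0 so the platform coasts on the IMU, which is the architectural reason these platforms carry high-grade inertial units.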
Hospital and healthcare mobile platforms operate under specific electromagnetic compatibility constraints imposed by FDA guidance on medical device electromagnetic interference. Navigation speed is typically limited to 0.5 m/s in occupied corridors, and safety architecture must implement ANSI/RIA R15.08 protective field configurations.
Multi-robot deployments, covered in depth under multi-robot system architecture, introduce fleet management as an architectural layer above individual robot stacks, requiring task allocation algorithms, traffic coordination, and shared map management.
Decision boundaries
Architectural choices in mobile robot design involve structured tradeoffs where no single solution dominates across all operational contexts. The principal decision boundaries are:
AGV vs. AMR: Automated guided vehicles rely on fixed infrastructure — magnetic tape, floor QR codes, or wire guidance — to define paths, requiring zero onboard mapping capability but demanding significant facility modification. AMRs use onboard SLAM and obstacle avoidance to navigate dynamically, requiring more sophisticated sensor and compute hardware but tolerating infrastructure change. ANSI/RIA R15.08 applies to both categories, but the safety architecture differs materially: AGVs rely on path-boundary monitoring while AMRs rely on volumetric protective field scanning.
Onboard compute vs. edge/cloud offload: Processing SLAM and perception entirely onboard eliminates latency and connectivity dependencies but constrains compute power by payload and thermal limits. Cloud robotics architecture offloads compute-intensive tasks such as semantic mapping updates or fleet optimization to remote infrastructure, introducing round-trip latency typically between 20 ms and 150 ms on enterprise Wi-Fi 6 networks — a figure that renders cloud-dependent path planning impractical for high-speed platforms but acceptable for map update workflows.
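The offload decision above can be framed as a latency-budget check: a task may move to remote infrastructure only if round-trip time plus remote compute fits inside some fraction of the task's control period. The margin value is a hypothetical engineering allowance, not a standardized figure.

```python
def can_offload(control_period_ms, rtt_ms, remote_compute_ms, margin=0.5):
    """Offloading is viable only when network round trip plus remote compute
    fits inside `margin` of the control period. With 20-150 ms enterprise
    Wi-Fi RTTs, a 10 ms control loop never qualifies, while a map-update
    workflow with a multi-second period easily does."""
    return rtt_ms + remote_compute_ms <= margin * control_period_ms
```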
Proprietary vs. open-source stack: Proprietary navigation stacks from hardware vendors offer validated performance guarantees but limit integration flexibility. Open-source robotics architecture frameworks, particularly ROS 2, provide modular extensibility across more than 200 published navigation packages but impose integration and maintenance overhead that requires dedicated software engineering resources.
Safety architecture integration: Robot safety architecture must be designed as a functional layer with defined fail-safe states — not retrofitted onto a completed navigation stack. ISO 3691-4:2020, covering industrial trucks including AMRs, specifies performance level requirements for safety-critical functions that constrain sensor redundancy choices and actuator braking architecture from the initial design phase.
The full landscape of mobile robot platforms, frameworks, and vendor solutions is indexed at the Robotics Architecture Authority, which organizes the domain across hardware, software, and standards dimensions for professional and procurement audiences.
References
- ISO 8373:2021 — Robotics: Vocabulary (ISO)
- NIST Robot Systems Program
- ANSI/RIA R15.08 — Industrial Mobile Robots Safety Standard (A3 Automate)
- ROS 2 Navigation Documentation — Open Robotics
- ISO 3691-4:2020 — Industrial Trucks, Safety Requirements, Part 4: Driverless Industrial Trucks and Their Systems (ISO)
- International Federation of Robotics — World Robotics Reports