Resilience Meets Autonomy: Governing Embodied AI in Critical Infrastructure


Authors: Puneet Sharma, Christer Henrik Pursiainen

Resilience Meets Autonomy: Governing Embodied AI in Critical Infrastructure

Puneet Sharma, Department of Automation and Process Engineering (IAP), UiT The Arctic University of Norway, Tromsø, Norway, puneet.sharma@uit.no
Christer Henrik Pursiainen, Department of Technology and Safety (ITS), UiT The Arctic University of Norway, Tromsø, Norway, christer.h.pursiainen@uit.no

Abstract—Critical infrastructure increasingly incorporates embodied AI for monitoring, predictive maintenance, and decision support. However, AI systems designed to handle statistically representable uncertainty struggle with cascading failures and crisis dynamics that exceed their training assumptions. This paper argues that Embodied AI's resilience depends on bounded autonomy within a hybrid governance architecture. We outline four oversight modes and map them to critical infrastructure sectors based on task complexity, risk level, and consequence severity. Drawing on the EU AI Act, ISO safety standards, and crisis management research, we argue that effective governance requires a structured allocation of machine capability and human judgement.

Index Terms—critical infrastructure, artificial intelligence, resilience, embodied artificial intelligence

I. INTRODUCTION

Critical infrastructure (CI) became a major public policy concern in the early 2000s, driven by rapid technological development, increasing interdependence, and shifting threat perceptions. Using the European Union as a reference, CI may be understood as physical or virtual assets or systems essential to maintaining vital societal functions, including health, safety, security, and economic or social well-being [13].

Two paradigm shifts have reshaped the field. The original protective logic—centred on risk prevention—gave way to a "resilience shift" redirecting attention toward infrastructures' capacity to absorb shocks, recover, and adapt [32], [39]. A further "AI-driven shift" has since emerged, as autonomous and robotic systems promise gains in monitoring, predictive maintenance, anomaly detection, and decision support across infrastructure sectors [4], [23], [28], [41].

Yet critical infrastructure operates under systemic uncertainty, where cascading failures, unexpected disruptions, and crisis dynamics are inherent to complex sociotechnical systems. Autonomous AI systems are typically designed to manage statistically representable uncertainty, whereas resilience governance must also confront abnormal deviations, cascading effects, and systemic surprises that exceed prior modelling assumptions. The central question, therefore, is where the boundary between machine autonomy and human judgement should lie in safety-critical environments.

Focusing on this tension, and particularly on embodied AI (EAI) within the broader context of AI-enabled CI, this paper argues that such systems' resilience value depends on bounded autonomy embedded in a hybrid governance architecture. Neither full automation nor simple human override is sufficient; what is required is a structured combination of machine capability and human contextual judgement. The following sections examine why fully autonomous AI remains inadequate under conditions of systemic surprise and identify the key elements of a hybrid governance architecture for AI-enabled CI.
II. AUTONOMY AND THE PROBLEM OF SYSTEMIC SURPRISE

AI is typically designed to handle normal variation through training data and probabilistic models, yet this reliance on historical data can introduce bias and limit its ability to generalise to novel situations. Unexpected events and cascading failures often exceed the system's representational capacity. This is especially true of risks whose nature we do not fully understand (known unknowns), risks whose existence we know but fail to recognise as risks (unknown knowns), and risks we cannot even imagine (unknown unknowns) [35]. While many risks can be anticipated and mitigated in advance, others permit neither preparation nor prevention [17]. Some are so complex that cause-and-effect relationships become clear only after the crisis, while others are so chaotic that they remain unintelligible even in hindsight [42].

The persistence of crisis surprise is therefore not an anomaly to be overcome by ever more advanced technologies such as AI. Rather, it reflects structural uncertainty and unexpectedness across the interconnected domains of hazard, risk, and crisis.

In CI, the most consequential AI vulnerabilities arise less from outright malfunction than from socio-technical design and interaction that fail to anticipate surprise. Central are errors of omission and commission, in which systems either fail to act when they should or act inappropriately under operational pressure. Such failures may originate in inputs, algorithmic processing, or output execution. When confronted with genuinely novel conditions for which no adequate rules exist, behaviour may become unpredictable: systems may fail to act, revert to brittle routines, repeat earlier responses, or behave erratically [5], [8].

These vulnerabilities are amplified by contemporary AI architectures. In CI, AI—especially embodied systems—often operates within distributed and tightly coupled arrangements that integrate information and coordinate agents across organisational and technical layers. Such systems are vulnerable to what the safety literature calls "normal accidents," in which small internal disruptions propagate non-linearly through network dependencies and escalate into system-level outages [16]. Because infrastructures are interdependent—with electricity and ICT often acting as escalation hubs—an initial AI failure can cascade far beyond its point of origin.
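This cascade logic can be made concrete with a toy model: a dependency graph in which a node fails once a sufficient share of its inputs has failed. The sketch below is purely illustrative; the sector names, dependency structure, and the 0.5 threshold are hypothetical assumptions, not empirical estimates.

```python
# Illustrative cascade model: hypothetical sectors and dependencies.
DEPENDS_ON = {
    "electricity": [],
    "ict":         ["electricity"],
    "water":       ["electricity", "ict"],
    "health":      ["electricity", "water", "ict"],
    "transport":   ["electricity", "ict"],
}

def cascade(initial_failures, depends_on, threshold=0.5):
    """Propagate failures to a fixed point: a node fails once the failed
    share of its dependencies reaches `threshold`."""
    failed = set(initial_failures)
    changed = True
    while changed:
        changed = False
        for node, deps in depends_on.items():
            if node in failed or not deps:
                continue
            if sum(d in failed for d in deps) / len(deps) >= threshold:
                failed.add(node)
                changed = True
    return failed

# A single fault in the electricity hub propagates to every other sector here.
print(sorted(cascade({"electricity"}, DEPENDS_ON)))
```

Even this minimal model shows why electricity and ICT behave as escalation hubs: every path to the remaining sectors runs through them.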
Although the literature often emphasises AI's contribution to pre-crisis anticipation, risk management, and post-crisis learning [2], [3], its role during crisis is usually more limited: it supports sense-making, but struggles with genuinely novel situations that deviate from trained patterns and lack robust historical data. Under such conditions, fully automated decision-making may become unpredictable without human oversight.

Most importantly, threats may also arise from the external environment. Such exogenous threats introduce a complementary set of vulnerabilities. AI can strengthen cyber defence by improving anomaly detection and response, yet it also enlarges the attack surface and creates new opportunities for manipulation. Adversaries may poison data or implant backdoors so that compromised models appear to function normally until a trigger—sometimes a concealed "kill switch"—is activated. The effects of such attacks are often amplified by governance and procedural weaknesses, allowing seemingly minor manipulations to escalate into substantial harm [40], [43], [46].

Embodied AI (EAI) brings these fragilities into sharper focus. Exogenous vulnerabilities arise from dynamic environments and hostile interference that mislead perception and decision-making. Endogenous vulnerabilities originate within the system itself—sensor limitations and failures, hardware wear, and software defects or design flaws that can trigger unsafe behaviour even without an attacker. Mixed vulnerabilities emerge when external pressures exploit or accelerate internal weaknesses, producing coupled failures across perception, control, and action [36], [48].

The central task, then, is to design governance for AI-enabled CI that can withstand not only expected risks but also unexpected disruptions and systemic crises. Because these infrastructures underpin basic needs and vital societal functions, their AI integration raises acute questions of fairness, privacy, transparency, accountability, and liability [27], [45], [46]. Delegating operational decision-making to autonomous systems may improve efficiency, but it also strains traditional governance in safety-critical domains by blurring responsibility among designers, operators, infrastructure owners, and the system itself. The problem, therefore, is not whether AI should be used in critical infrastructure, but how its autonomy should be bounded, supervised, and assigned within a credible architecture of human responsibility.

III. HUMAN OVERSIGHT AND BOUNDED AUTONOMY IN CRITICAL INFRASTRUCTURE

If the problem is how AI autonomy should be bounded and supervised in critical infrastructure, governance is where that problem must be addressed. Yet the governance of AI is not a new issue, and it remains unresolved, as regulators struggle to keep pace with technological change. In Europe, the EU AI Act (2024) [14], the world's first comprehensive and binding legal framework in this field, sets boundary conditions for "high-risk" AI-enabled CI. This includes AI systems used as "safety components" in the management and operation of critical digital infrastructure, road traffic, and essential utilities such as water, gas, heating, and electricity—that is, components whose failure or malfunction may endanger health, safety, or property, and cause serious disruption to CI operations.

The Act subjects such systems to strict ex ante and life-cycle obligations, including conformity requirements, documentation, robustness, post-market monitoring, and incident reporting. It does not prohibit fully automated operation, but requires high-risk AI systems to be designed so that natural persons can oversee their functioning, ensure they are used as intended, and address their impacts throughout the lifecycle. Appropriate human oversight measures must therefore be identified, including, where relevant, in-built operational constraints that the system cannot override itself, responsiveness to the human operator, and oversight by persons with the necessary competence, training, and authority.
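At the code level, such measures might be approximated by a thin supervisory wrapper around an AI controller: a hard operational envelope fixed at design time that the AI cannot modify, and a human override channel that takes precedence. The following is a minimal sketch under these assumptions; the Action type, setpoint range, and function names are hypothetical illustrations, not prescribed by the Act.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Action:
    setpoint: float  # hypothetical actuator command, e.g. a load setpoint

# In-built operational constraint: fixed at design time, outside AI control.
SETPOINT_MIN, SETPOINT_MAX = 0.0, 0.8

def enforce_envelope(action: Action) -> Action:
    """Clamp any proposed action into the statically defined safe envelope."""
    return Action(min(max(action.setpoint, SETPOINT_MIN), SETPOINT_MAX))

def control_step(ai_policy: Callable[[], Action],
                 human_override: Optional[Action]) -> Action:
    """One supervised step: a pending human command pre-empts the AI
    proposal, and both pass through the non-overridable constraint."""
    proposed = human_override if human_override is not None else ai_policy()
    return enforce_envelope(proposed)

# The AI proposes an out-of-envelope setpoint; the wrapper clamps it to 0.8.
print(control_step(lambda: Action(1.2), human_override=None))
# The operator's command takes precedence over the AI proposal.
print(control_step(lambda: Action(1.2), human_override=Action(0.3)))
```

Note one design choice embedded here: the human command is also clamped, so the operator can redirect the system but not push it outside the engineered envelope; whether operators may exceed that envelope in emergencies is itself a governance decision rather than a technical one.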
The weight attached to regulatory power, however, varies. While the 144-page EU AI Act [14] can be read as precautionary, the 22-page U.S. National Security Memorandum on AI (NSM-25) [44] places the principal check on AI autonomy in human judgement, supported by centralised testing, evaluation, and security. Yet such policy instruments define only broad boundary conditions; they rarely operationalise them in detail.

Standardisation is more voluntary than formal regulation, but often more precise. Recent ISO work suggests that the boundary between AI autonomy and human control is best understood not as a fixed divide but as a question of controllability and functional safety. ISO/IEC TR 5469:2024 [24] addresses AI in safety-related functions through the broader problem of assuring safe system performance, including the use of non-AI safety functions around AI-controlled equipment, while ISO/IEC TS 8200:2024 [25] emphasises observability, transfer of control, reaction to uncertainty, and verification of controllability. In critical infrastructure, this implies that higher AI autonomy is acceptable only where the system remains monitorable, interruptible, and capable of timely control transfer, with human authority retained over high-consequence and strategic decisions.

Drawing on recent literature [10], [26], [27] and aligning with recent ISO work on functional safety and controllability, we distinguish four modes of integrating human oversight with AI systems, summarised in Table I and briefly discussed below.

TABLE I: AI AUTONOMY AND THE OVERSIGHT MODES IN CRITICAL INFRASTRUCTURE

Fully AI-Automated (Human-out-of-the-Loop)
- Human role: None (autonomous).
- Key characteristics: High operational autonomy; rapid response and scalability; operates without routine human intervention, but within predefined safety constraints and fallback mechanisms.
- Main selection criteria: Routine operation of infrastructure and instantaneous physical stabilisation. Used, e.g., for millisecond-level load balancing in smart grids or preventing cascading electrical failures where humans are physically too slow to act, provided that functional safety is assured by system design and safeguards.

Human-on-the-Loop (HOTL)
- Human role: Supervisor (passive).
- Key characteristics: The system operates autonomously under human supervision; the human can monitor, interrupt, override, or reclaim control if necessary.
- Main selection criteria: Predictive maintenance and steady-state operations. Used when the system is stable, but where a human must be able to override the AI and assume control if it misinterprets a physical anomaly or system state.

Human-in-the-Loop (HITL)
- Human role: Gatekeeper (active).
- Key characteristics: Humans are a mandatory part of the decision chain; the system cannot proceed without approval in high-impact or safety-critical actions.
- Main selection criteria: Service restoration and reconfiguration. Used for high-consequence decisions, where control transfer to the human must occur before execution because errors may cause major physical, social, or environmental harm.

Human-in-Command (HIC)
- Human role: Policy-maker (strategic).
- Key characteristics: Humans define goals, safety limits, rules of engagement, and escalation thresholds; AI operates only within these externally set constraints.
- Main selection criteria: Crisis management, especially in unexpected events, and disaster recovery. Used during large-scale infrastructure failures or cyber-warfare scenarios where strategic trade-offs, exceptional uncertainty, or crisis escalation require political, legal, and ethical accountability beyond automated decision-making.
The four modes in Table I are best understood as ideal-typical ways of distributing autonomy and oversight within a hybrid system. In fully automated systems, decisions are made and executed without routine human participation; in HOTL arrangements, the system operates autonomously under human supervision, with intervention possible in exceptional cases; in HITL arrangements, action requires explicit human approval; and in HIC arrangements, the human defines goals, limits, and rules of engagement at the strategic level. The crucial distinction is between HOTL and HITL: the former allows supervisory override, whereas the latter makes human approval part of the decision itself.

In practice, these modes should be treated as complementary elements of a hybrid system rather than as fixed design choices [10]. Their appropriate use depends on task complexity, risk level, error consequences, regulatory and ethical requirements, response-time demands, cognitive load, and the respective strengths of humans and AI. AI-enabled CI systems may therefore need to switch between modes, or operate several of them concurrently, depending on function. Fully automated operation may suit low-risk or time-critical tasks; HOTL is appropriate for predictive maintenance and steady-state operations; HITL for high-consequence decisions; and HIC for strategic or unexpected situations where routine data and procedures are insufficient and ethical trade-offs arise.

Optimising such a system requires not only robust design, domain-specific calibration, and operational testing, but also substantial investment in simulation-based training, exercises, and operator experience. This is especially important in crises, classically understood as abnormal or extraordinary situations that pose an existential threat and demand timely strategic response under inherent uncertainty [20], [21]. Under such conditions, neither AI nor human judgement is sufficient on its own: AI may fail when unexpectedness exceeds its modelled assumptions, while human decision-making is degraded by stress, information overload, and cognitive bias. Human misjudgements increase, attention becomes more selective, and tolerance for complexity declines [37].

At the same time, humans retain advantages in contextual interpretation, normative judgement, and adaptive problem framing. Resilience may be understood as an emergent capacity in individual and team behaviour under adversity [7], often expressed through organisational improvisation when structural expectations are disrupted and multiple futures appear plausible. In such moments, human actors draw selectively on past repertoires, imagine alternative courses of action, and make practical as well as moral judgements [1]. This becomes especially clear in embodied AI systems operating directly in physical critical infrastructure environments, where the relation between autonomy, controllability, and human responsibility must be specified in operational terms.
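Before turning to embodied systems, a brief sketch may help show how the allocation in Table I could be operationalised as a mode-selection policy. The task-profile fields and thresholds below are hypothetical illustrations, not values taken from the AI Act or the ISO documents.

```python
from dataclasses import dataclass
from enum import Enum

class OversightMode(Enum):
    FULLY_AUTOMATED = "human-out-of-the-loop"
    HOTL = "human-on-the-loop"
    HITL = "human-in-the-loop"
    HIC = "human-in-command"

@dataclass
class TaskProfile:
    consequence: str        # "low", "high", or "catastrophic"
    response_time_s: float  # time available before action must be taken
    novelty: float          # 0.0 routine .. 1.0 unprecedented situation
    crisis: bool            # declared crisis with strategic trade-offs

def select_mode(task: TaskProfile) -> OversightMode:
    """Ideal-typical allocation following Table I; a real system would
    calibrate these thresholds per sector, function, and regulation."""
    if task.crisis or task.novelty > 0.8:
        return OversightMode.HIC              # strategic, unexpected situations
    if task.response_time_s < 0.1:
        # Humans are physically too slow; assumes assured functional safety.
        return OversightMode.FULLY_AUTOMATED
    if task.consequence in ("high", "catastrophic"):
        return OversightMode.HITL             # mandatory human approval
    return OversightMode.HOTL                 # supervised steady-state autonomy

# Millisecond-level grid balancing vs. post-outage service restoration:
print(select_mode(TaskProfile("low", 0.001, 0.1, False)).value)   # human-out-of-the-loop
print(select_mode(TaskProfile("high", 60.0, 0.3, False)).value)   # human-in-the-loop
```

In a hybrid architecture, such a policy would itself sit under human-in-command authority: the thresholds are governance parameters that operators and regulators, not the AI, set and revise.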
IV. EMBODIED AI IN CRITICAL INFRASTRUCTURE: OPERATIONAL PROMISE AND GOVERNANCE LIMITS

EAI represents the fusion of artificial intelligence with physical systems, enabling robots to perceive, act, and learn by interaction in the real world [11]. For robots operating in such environments, this involves two critical capabilities: navigating safely—avoiding collisions with obstacles while achieving their goals—and manipulating targets effectively, such as turning a valve or picking up an object. Although these tasks may seem trivial for humans, they present significant challenges for robots due to a combination of factors. These include the complexities of the environment, which lead to exogenous vulnerabilities; the limitations and uncertainties of available sensors, as well as the robot's physical body and its constraints, which contribute to endogenous vulnerabilities; and the complexity of the task itself, which often results in mixed vulnerabilities.

Building on the challenges posed by exogenous, endogenous, and mixed vulnerabilities, robots deployed in different domains—whether aerial, ground, underwater, or surface—face unique, domain-specific obstacles that further complicate their ability to perceive, act, and learn in real-world environments. For instance, an autonomous drone operating in the aerial domain must make rapid decisions to avoid obstacles or adapt to sudden changes in wind conditions. Underwater robots must handle dynamic environments like ocean currents, which can unpredictably alter their trajectory. A legged robot must navigate different types of ground surfaces (sand, soil, rock) and ground conditions such as wetness, snow, and ice. Furthermore, the environment may be unstructured, cluttered, hazardous, and limited in resources such as communications, GPS, and visibility [47]. These environmental factors amplify the need for robots not only to sense their surroundings but also to adapt their actions in real time, underscoring the importance of robust EAI systems capable of handling such complexities.

Simultaneous Localization and Mapping (SLAM) enables a robot to navigate and map an unknown environment without any prior knowledge about it while simultaneously keeping track of its own location [9]. SLAM typically begins with the robot collecting sensor data from sources such as laser scanners, cameras, and motion sensors, and extracting distinctive features or landmarks from the environment [12]. A critical challenge within SLAM is the data association problem—correctly matching newly observed features to previously recorded landmarks—since incorrect associations can cause catastrophic drift in both the map and the localization estimate, leading to endogenous vulnerabilities. While early SLAM approaches used visual sensors such as stereo cameras together with odometry data, the inclusion of more advanced robot-mountable sensors such as ultrasonic scanners, laser scanners, and radar has led to significant improvements in navigation abilities under degraded visual conditions such as dust, fog, rain, or darkness [19].

It is important to note that SLAM plays an integral part in an autonomous system's decision-making process, but an autonomous system is inherently more complex, as it integrates data from multiple streams and modalities to analyze, interpret, and execute tasks effectively in pursuit of objectives defined by humans. This capability is increasingly employed in the deployment of autonomous systems such as unmanned aerial vehicles (UAVs), autonomous vehicles, autonomous ships, unmanned surface vehicles (USVs), legged robots, and autonomous underwater vehicles (AUVs) for monitoring, maintenance, and surveillance of critical infrastructures.
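The data association step can be illustrated with the classical Mahalanobis-gated nearest-neighbour rule: an observed feature is matched to the closest mapped landmark only if their statistical distance falls inside a validation gate, and otherwise initialises a new landmark. The 2-D example below uses hypothetical coordinates and covariances; production SLAM front-ends typically use more robust schemes such as joint compatibility testing.

```python
import numpy as np

def associate(observation, landmarks, covariances, gate=9.21):
    """Return the index of the matched landmark, or None for a new one.
    `gate` is roughly the chi-square 99% threshold for 2 degrees of freedom."""
    best_idx, best_d2 = None, gate
    for i, (lm, cov) in enumerate(zip(landmarks, covariances)):
        innovation = observation - lm                      # residual
        d2 = innovation @ np.linalg.inv(cov) @ innovation  # squared Mahalanobis distance
        if d2 < best_d2:
            best_idx, best_d2 = i, d2
    return best_idx

# Two mapped landmarks with their (hypothetical) position uncertainty.
landmarks = [np.array([2.0, 1.0]), np.array([10.0, 4.0])]
covariances = [0.25 * np.eye(2), 0.25 * np.eye(2)]

print(associate(np.array([2.2, 0.9]), landmarks, covariances))  # 0: matches first
print(associate(np.array([6.0, 6.0]), landmarks, covariances))  # None: new landmark
```

A wrong match here corrupts the map and the pose estimate simultaneously, which is precisely the endogenous drift failure described above.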
The European Union's 2022 Directive on the Resilience of Critical Entities [13] identifies 11 sectors as part of critical infrastructure: Energy; Transport; Banking; Finance; Health; Drinking Water; Wastewater; Digital Infrastructure; Public Administration; Space; and the Production, Processing, and Distribution of Food. Among these, the Banking, Finance, and Public Administration sectors primarily face challenges related to cybersecurity resilience, with limited use of autonomous systems such as robots. In these sectors, autonomous systems are mainly employed for tasks such as physical branch security, perimeter monitoring, and surveillance, where HOTL operators play a crucial role in ensuring resilience in the physical domain. In the remaining sectors, however, the adoption of autonomous systems is becoming increasingly significant, with degrees of AI autonomy ranging from human-out-of-the-loop to human-in-command (HIC) across heterogeneous aspects of CIs.

To that effect, Table II translates the general framework of bounded autonomy (Table I) into selected embodied AI applications in critical infrastructure, linking operational promise to dominant vulnerability patterns and the most appropriate modes of human oversight.

Autonomous systems, powered by interpretable AI and supported by human oversight mechanisms (HITL, HOTL, HIC), are revolutionizing CI sectors by enhancing resilience, operational efficiency, and safety. Across energy, transport, and digital infrastructure, UAVs, USVs, and mobile robots support continuous monitoring, fault detection, and predictive maintenance of assets ranging from transmission towers [31] and pipelines to offshore wind farms [33] and subsea infrastructure [22]. Autonomous vehicles and Maritime Autonomous Surface Ships (MASS) [6], [34] extend this logic to transport, enabling real-time navigation and operational continuity. In water, wastewater, and digital infrastructure, crawling robots, UAVs, and AUVs [29] reduce human exposure in hazardous or inaccessible environments while providing structural health monitoring. In space and other extreme environments, orbital and planetary systems [47] operate where direct human action is impossible, tolerating communication delays and high environmental uncertainty. In health and food systems, delivery robots and robot-assisted surgery platforms [15], [18] improve logistics and clinical precision, while autonomous agricultural robots [38] enhance efficiency and resource optimisation across unstructured field environments. Across all domains, these systems excel in environments where human perception is limited, providing actionable insights and reducing the likelihood of catastrophic failures.

While AI decision support systems have proven invaluable in enhancing situational awareness and decision-making, they also introduce the risk of cognitive overload for human operators. The continuous stream of alerts and actionable insights generated by these systems can overwhelm operators, particularly in high-stakes or time-sensitive scenarios. This cognitive strain may result in delayed responses, misinterpretation of critical information, or decision fatigue, ultimately undermining the intended benefits of AI assistance.
This challenge is particularly relevant in the context of CI, where the integration of AI systems is shifting from a purely protective logic to one of resilience, emphasizing the ability to absorb shocks, recover, and adapt. As AI systems increasingly take on roles in monitoring, predictive maintenance, and decision support across CI sectors, the boundary between machine autonomy and human judgement becomes a critical consideration.

TABLE II: EMBODIED AI IN CRITICAL INFRASTRUCTURE: OPERATIONAL PROMISE, DOMINANT VULNERABILITIES, AND MOST APPROPRIATE OVERSIGHT MODES

Energy infrastructure
- Typical embodied AI application: UAVs, USVs, and mobile robots for inspection of transmission towers [31], pipelines, offshore wind farms [33], and subsea assets [22].
- Main resilience contribution: Continuous monitoring, early fault detection, reduced human exposure, faster maintenance response.
- Dominant vulnerability: Misinterpreted anomalies, cyber-physical manipulation, and cascading failure risk.
- Most appropriate oversight modes: HOTL / HITL; HIC in large-scale outages or crisis escalation.

Transport systems
- Typical embodied AI application: Autonomous vehicles [6], drones, and maritime autonomous surface ships [34].
- Main resilience contribution: Real-time navigation, operational continuity, and safety optimisation.
- Dominant vulnerability: Edge cases, unpredictable environments, accident response, and contested responsibility in emergencies.
- Most appropriate oversight modes: HOTL in routine supervision; HITL / HIC in emergencies and high-consequence decisions.

Water, wastewater, and digital infrastructure
- Typical embodied AI application: Crawling or climbing robots, UAVs, and AUVs for inspection of pipes, tunnels, dams, towers, undersea cables, and offshore structures [29].
- Main resilience contribution: Inspection in hazardous or inaccessible environments, structural health monitoring, and continuous surveillance.
- Dominant vulnerability: Sensor degradation, poor visibility, communication limits, hidden defects, and sabotage or tampering.
- Most appropriate oversight modes: HOTL in routine inspection; HITL where intervention decisions affect safety or service continuity.

Health and hospital logistics
- Typical embodied AI application: Delivery robots [15], monitoring systems, perimeter security [30], and robot-assisted surgery [18].
- Main resilience contribution: Efficiency, reduced routine burden, improved logistics, and potential clinical precision.
- Dominant vulnerability: Life-critical error consequences, explainability and trust problems, cognitive overload, and accountability.
- Most appropriate oversight modes: HITL as the default; HIC for strategic and ethically sensitive decisions.

Food and agricultural systems
- Typical embodied AI application: Autonomous agricultural robots for spraying, weeding, planting, harvesting, and field monitoring [38].
- Main resilience contribution: Precision, efficiency, resource optimisation, and continuity of food production.
- Dominant vulnerability: Unstructured environments, mixed vulnerabilities, and variable safety and reliability under changing field conditions.
- Most appropriate oversight modes: HOTL in routine operations; HITL where failure may affect safety, environment, or supply continuity.

Space and other harsh environments
- Typical embodied AI application: Orbital, planetary, and subsea robots for inspection, maintenance, retrieval, docking, and exploration [47].
- Main resilience contribution: Enables operation where direct human action is impossible or highly constrained.
- Dominant vulnerability: Extreme uncertainty, communication delays, environmental unpredictability, and limited recoverability after failure.
- Most appropriate oversight modes: Fully automated / HOTL in routine operation; HIC for mission-level strategic choices.

To address this tension, it is imperative to design human-centric [26] autonomous systems that effectively prioritize and filter information, presenting only the most relevant insights in a clear and manageable format. By aligning AI capabilities with the cognitive strengths and limitations of human operators, these systems can foster optimal collaboration.
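A minimal sketch of such prioritisation might score alerts by expected impact, collapse near-duplicates from the same source, and cap what is surfaced to the operator. The fields, weights, and cap below are hypothetical assumptions for illustration only.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    source: str        # e.g. a hypothetical "substation-UAV-7"
    kind: str          # e.g. "thermal-anomaly", "corrosion"
    severity: float    # 0.0 .. 1.0 estimated consequence
    confidence: float  # 0.0 .. 1.0 model confidence

def triage(alerts, cap=5):
    """Rank alerts by severity x confidence, keep one per (source, kind)
    pair, and surface at most `cap` items to limit operator load."""
    seen, kept = set(), []
    ranked = sorted(alerts, key=lambda a: a.severity * a.confidence, reverse=True)
    for alert in ranked:
        key = (alert.source, alert.kind)
        if key not in seen:
            seen.add(key)
            kept.append(alert)
    return kept[:cap]

feed = [
    Alert("substation-UAV-7", "thermal-anomaly", 0.9, 0.8),
    Alert("substation-UAV-7", "thermal-anomaly", 0.9, 0.7),  # duplicate, suppressed
    Alert("pipeline-crawler-2", "corrosion", 0.4, 0.9),
]
for a in triage(feed, cap=2):
    print(a.source, a.kind)
```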
Mechanisms such as HITL, HOTL, and HIC provide varying degrees of human oversight across different aspects of CI. However, further research is needed to explore the intersection of EAI and the CI domain through the lens of hybrid governance. Such studies will be critical in ensuring that AI systems not only enhance operational efficiency but also support human decision-making in a sustainable and resilient manner.

V. CONCLUSION

The article has elaborated the notion that AI can strengthen the resilience of critical infrastructure, but not by replacing human judgement. Under conditions of systemic uncertainty, cascading interdependence, and crisis surprise, resilience depends on bounded autonomy embedded in a hybrid governance architecture. The challenge is therefore not whether AI should be used in critical infrastructure, but how different degrees of autonomy can be matched with appropriate forms of human oversight, controllability, and responsibility.

REFERENCES

[1] A. C. M. Abrantes, M. P. Cunha, and S. Clegg. Organizational improvisation theory: Integrating knowledge on management in the face of the unexpected. Academy of Management Collections, 4(3):96–114, 2025.
[2] B. A. Alkhaleel. Current applications and future trends of artificial intelligence and machine learning in the resilience of interdependent critical infrastructures. In Proceedings of the 14th Annual International Conference on Industrial Engineering and Operations Management, pages 460–471, Dubai, United Arab Emirates, February 2024.
[3] B. A. Alkhaleel. Machine learning applications in the resilience of interdependent critical infrastructure systems—a systematic literature review. International Journal of Critical Infrastructure Protection, 44:100646, 2024.
[4] S. A. Argyroudis, S. A. Mitoulis, E. Chatzi, J. W. Baker, and I. Brilakis. Digital technologies can enhance climate resilience of critical infrastructure. Climate Risk Management, 35:100387, 2022.
[5] D. N. Banerjee and S. S. Chanda. AI failures: A review of underlying issues, 2020.
[6] V. Bolbot, C. Gkerekos, G. Theotokatos, and E. Boulougouris. Automatic traffic scenarios generation for autonomous ships collision avoidance system testing. Ocean Engineering, 254:111309, 2022.
[7] C. Bowers, C. Kreutzer, J. Cannon-Bowers, and J. Lamb. Team resilience as a second-order emergent state: A theoretical model and research directions. Frontiers in Psychology, 8:1360, 2017.
[8] S. S. Chanda and D. N. Banerjee. Omission and commission errors underlying AI failures. AI & Society, 39(3):937–960, 2024.
[9] T. Chen, S. Gupta, and A. Gupta. Learning exploration policies for navigation, 2019.
[10] R. Cheruvu. Human-in/on-the-loop design for human controllability. In Handbook of Human-Centered Artificial Intelligence, pages 1–47. Springer Nature Singapore, Singapore, 2025.
[11] J. Duan, S. Yu, H. L. Tan, H. Zhu, and C. Tan. A survey of embodied AI: From simulators to research tasks. IEEE Transactions on Emerging Topics in Computational Intelligence, 6(2):230–244, 2022.
[12] H. Durrant-Whyte and T. Bailey. Simultaneous localization and mapping: Part I. IEEE Robotics & Automation Magazine, 13(2):99–110, 2006.
[13] European Union. Directive (EU) 2022/2557 of the European Parliament and of the Council of 14 December 2022 on the resilience of critical entities. Official Journal of the European Union, L333, pp. 164–198, Dec. 27, 2022.
[14] European Union. Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (AI Act). Official Journal of the European Union, OJ L, pp. 1–144, Jul. 12, 2024.
[15] M. P. Fanti, A. M. Mangini, M. Roccotelli, and B. Silvestri. Hospital drugs distribution with autonomous robot vehicles. In 2020 IEEE 16th International Conference on Automation Science and Engineering (CASE), pages 1025–1030, August 2020.
[16] V. Galaz, M. A. Centeno, P. W. Callahan, A. Causevic, T. Patterson, I. Brass, S. Baum, D. Farber, J. Fischer, D. Garcia, and T. McPhearson. Artificial intelligence, systemic risks, and sustainability. Technology in Society, 67:101741, 2021.
[17] S. Gundel. Towards a new typology of crises. Journal of Contingencies and Crisis Management, 13(3):106–115, 2005.
[18] J. Han, J. Davids, H. Ashrafian, A. Darzi, D. S. Elson, and M. Sodergren. A systematic review of robotic surgery: From supervised paradigms to fully autonomous robotic approaches. The International Journal of Medical Robotics and Computer Assisted Surgery, 18(2):e2358, 2022.
[19] J. Hatleskog, M. Nissov, and K. Alexis. IMU-preintegrated radar factors for asynchronous radar-lidar-inertial SLAM. In 2025 IEEE International Conference on Advanced Robotics (ICAR), pages 725–732, San Juan, Argentina, 2025.
[20] C. F. Hermann. Some consequences of crises which limit the viability of organizations. Administrative Science Quarterly, 8(1):61–82, 1963. [Online]. Available: https://www.jstor.org/stable/2390887.
[21] International Organization for Standardization. Security and resilience—Vocabulary. ISO 22300:2023, ISO, Geneva, Switzerland, 2023. [Online]. Available: https://www.iso.org/standard/85749.html.
[22] G. Ioannou et al. Underwater inspection and monitoring: Technologies for autonomous operations. IEEE Aerospace and Electronic Systems Magazine, 39(5):4–16, 2024.
[23] S. N. Islam and A. Biswas. Artificial intelligence for critical power infrastructure: Challenges and opportunities. Applied IT & Engineering, 1(1):1–9, 2023.
[24] ISO/IEC. Artificial intelligence—Functional safety and AI systems. Technical Report ISO/IEC TR 5469:2024, International Organization for Standardization, Geneva, Switzerland, 2024.
[25] ISO/IEC. Information technology—Artificial intelligence—Controllability of automated artificial intelligence systems. Technical Specification ISO/IEC TS 8200:2024, International Organization for Standardization, Geneva, Switzerland, 2024.
[26] D. Kaber. From automation to autonomy through AI: Enabling and retaining human controllability. In W. Xu, editor, Handbook of Human-Centered Artificial Intelligence. Springer, Singapore, 2025.
[27] E. E. Kim. Ethical AI standards and governance: A perspective of human-centered AI. In W. Xu, editor, Handbook of Human-Centered Artificial Intelligence. Springer, Singapore, 2026.
[28] A. Kumar and B. J. Choi. Benchmarking machine learning based detection of cyber attacks for critical infrastructure. In Proc. Int. Conf. Information Networking (ICOIN), pages 24–29, 2022.
[29] D. Lattanzi and G. Miller. Review of robotic infrastructure inspection systems. Journal of Infrastructure Systems, 23(3):04017004, 2017.
[30] P. Marmaglio, D. Consolati, C. Amici, and M. Tiboni. Autonomous vehicles for healthcare applications: A review on mobile robotic systems and drones in hospital and clinical environments. Electronics, 12(23):4791, 2023.
[31] C. Martinez, C. Sampedro, A. Chauhan, and P. Campoy. Towards autonomous detection and tracking of electric towers for aerial power line inspection. In 2014 International Conference on Unmanned Aircraft Systems (ICUAS), pages 284–295, Orlando, FL, USA, 2014.
[32] A. Mottahedi, F. Sereshki, M. Ataei, A. Nouri Qarahasanlou, and A. Barabadi. The resilience of critical infrastructure systems: A systematic literature review. Energies, 14(6):1571, 2021.
[33] M. H. Nordin, S. Sharma, A. Khan, M. Gianni, S. Rajendran, and R. Sutton. Collaborative unmanned vehicles for inspection, maintenance, and repairs of offshore wind turbines. Drones, 6(6):137, 2022.
[34] I. O. Olayode, B. Du, A. Severino, T. Campisi, and F. J. Alex. Systematic literature review on the applications, impacts, and public perceptions of autonomous vehicles in road transportation system. Journal of Traffic and Transportation Engineering (English Edition), 10(6):1037–1060, 2023.
[35] R. Pawson, G. Wong, and L. Owen. Known knowns, known unknowns, unknown unknowns: The predicament of evidence-based policy. American Journal of Evaluation, 32(4):518–546, 2011.
[36] J. Perlo, A. Robey, F. Barez, L. Floridi, and J. Mökander. Embodied AI: Emerging risks and opportunities for policy action, 2025.
[37] C. Pursiainen and T. Forsberg. Biased decisions. In The Psychology of Foreign Policy, pages 163–207. Springer International Publishing, Cham, 2021.
[38] R. Rahmadian and M. Widyartono. Autonomous robotic in agriculture: A review. In 2020 Third International Conference on Vocational Education and Electrical Engineering (ICVEE), pages 1–6, Surabaya, Indonesia, 2020.
[39] B. Rød, D. Lange, M. Theocharidou, and C. Pursiainen. From risk management to resilience management in critical infrastructure. Journal of Management in Engineering, 36(4):04020039, 2020.
[40] L. Sambucci and E.-A. Paraschiv. The accelerated integration of artificial intelligence systems and its potential to expand the vulnerability of the critical infrastructure. Romanian Journal of Information Technology and Automatic Control, 34(3):131–148, 2024.
[41] I. H. Sarker. Introduction to AI-driven cybersecurity and threat intelligence. In AI-Driven Cybersecurity and Threat Intelligence: Cyber Automation, Intelligent Decision-Making and Explainability, pages 3–19. Springer Nature Switzerland, Cham, Switzerland, 2024.
[42] D. J. Snowden. Multi-ontology sense making: A new simplicity in decision making. Informatics in Primary Care, 13(1):45–53, 2005. [Online]. Available: https://www.agileleanhouse.com/lib/lib/People/DaveSnowden/SNowden578-1421-1-PB.pdf.
[43] M. Taddeo, T. McCutcheon, and L. Floridi. Trusting artificial intelligence in cybersecurity is a double-edged sword. Nature Machine Intelligence, 1:557–560, 2019.
[44] The White House. National Security Memorandum/NSM-25 on advancing the United States' leadership in artificial intelligence; harnessing artificial intelligence to fulfill national security objectives; and fostering the safety, security, and trustworthiness of artificial intelligence, October 2024.
[45] P. Timmers. Ethics of AI and cybersecurity when sovereignty is at stake. Minds and Machines, 29(4):635–645, 2019.
[46] M. F. Umakor. Threat modelling for artificial intelligence governance: Integrating ethical considerations into adversarial attack simulations for critical infrastructure using generative AI. World Journal of Advanced Research and Reviews, 15(2):873–890, 2022.
[47] C. Wong, E. Yang, X.-T. Yan, and D. Gu. An overview of robotics and autonomous systems for harsh environments. In 2017 23rd International Conference on Automation and Computing (ICAC), pages 1–6, Huddersfield, UK, 2017.
[48] W. Xing, M. Li, M. Li, and M. Han. Towards robust and secure embodied AI: A survey on vulnerabilities and attacks, 2025.
