The Absence of AI Self-Monitoring and Seven Core Deficiencies on the Path to Cognitive Autonomy

Reading time: 5 minutes

📝 Original Info

  • Title: The Absence of AI Self-Monitoring and Seven Core Deficiencies on the Path to Cognitive Autonomy
  • ArXiv ID: 2512.02280
  • Date: Pending
  • Authors: Noorbakhsh Amiri Golilarz, Sindhuja Penchala, Shahram Rahimi (Department of Computer Science, The University of Alabama, Tuscaloosa, AL, USA). Email: {noor.amiri, srahimi1}@ua.edu, spenchala@crimson.ua.edu

📝 Abstract

Artificial intelligence has advanced rapidly across perception, language, reasoning, and multimodal domains. Yet despite these achievements, modern AI systems remain fundamentally limited in their ability to self-monitor, self-correct, and regulate their behavior autonomously in dynamic contexts. This paper identifies and analyzes seven core deficiencies that constrain contemporary AI models: the absence of intrinsic self-monitoring, lack of meta-cognitive awareness, fixed and non-adaptive learning mechanisms, inability to restructure goals, lack of representational maintenance, insufficient embodied feedback, and the absence of intrinsic agency. Alongside identifying these limitations, we also outline a forward-looking perspective on how AI may evolve beyond them through architectures that mirror neurocognitive principles. We argue that these structural limitations prevent current architectures, including deep learning and transformer-based systems, from achieving robust generalization, lifelong adaptability, and real-world autonomy. Drawing on a comparative analysis of artificial systems and biological cognition [7], and integrating insights from AI research, cognitive science, and neuroscience, we outline how these capabilities are absent in current models and why scaling alone cannot resolve them. We conclude by advocating for a paradigmatic shift toward cognitively grounded AI (cognitive autonomy) capable of self-directed adaptation, dynamic representation management, and intentional, goal-oriented behavior, paired with reformative oversight mechanisms [8] that ensure autonomous systems remain interpretable, governable, and aligned with human values.

💡 Deep Analysis

Figure 1

📄 Full Content

Bridging the Gap: Toward Cognitive Autonomy in Artificial Intelligence
Noorbakhsh Amiri Golilarz, Sindhuja Penchala, Shahram Rahimi
Department of Computer Science, The University of Alabama, Tuscaloosa, AL, USA
{noor.amiri, srahimi1}@ua.edu, spenchala@crimson.ua.edu
Index Terms—Artificial intelligence, self-monitoring, biological cognition, cognitive autonomy.

I. INTRODUCTION

Artificial intelligence has undergone a period of rapid acceleration driven chiefly by advances in deep learning and transformer architectures, enabling breakthroughs across language modeling, computer vision, scientific discovery, and multimodal reasoning [22] [20] [13]. Despite this progress, contemporary AI systems continue to exhibit fundamental limitations in autonomy, adaptability, and self-regulation. While large models can generate fluent language, solve complex recognition tasks, and perform sophisticated reasoning under some constraints, their capabilities remain structurally bounded by a static learning paradigm, the absence of intrinsic self-evaluation, and their dependency on externally imposed objectives [14] [10].

In contrast, biologically grounded intelligence demonstrates continual self-assessment, context-dependent strategy adjustment, adaptive learning across multiple timescales, and embodied, exploratory capability that unfolds without external supervision [11] [5] [9]. Metacognitive monitoring in humans, supported by prefrontal circuitry, enables the brain to evaluate confidence in its own decisions and adjust behavior accordingly [6]. Learning itself is distributed across fast and slow timescales, with short-term and long-term synaptic plasticity jointly supporting rapid adaptation and stable knowledge acquisition [16]. At the level of perception and action, predictive processing accounts of the brain emphasize closed perception–action loops [7] in which agents actively sample and reshape their environment to minimize prediction error, rather than passively receiving inputs [2]. Moreover, intrinsic motivation and curiosity drive spontaneous exploration and open-ended skill acquisition, providing an internal engine for self-directed learning even in the absence of explicit external rewards [18].
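The closed perception–action loop described above can be illustrated with a toy sketch: an agent that revises an internal estimate from prediction error and acts on that estimate, so its own behavior shapes the inputs it will see next. The agent class, learning rate, and noisy one-dimensional environment below are illustrative assumptions, not constructs from the paper.

```python
import random

class PredictiveAgent:
    """Toy closed perception-action loop: the agent maintains an internal
    estimate of a 1-D environmental state and acts to reduce its own
    prediction error, rather than passively receiving inputs."""

    def __init__(self, learning_rate=0.3):
        self.estimate = 0.0            # internal model of the world state
        self.learning_rate = learning_rate

    def step(self, observation):
        # Prediction error: mismatch between the model and the sensed input.
        error = observation - self.estimate
        # Perceptual update: revise the internal model toward the input.
        self.estimate += self.learning_rate * error
        # Action: commit to the predicted state, reshaping future sampling.
        action = self.estimate
        return error, action

def run_loop(agent, target=1.0, noise=0.05, steps=50):
    """Drive the agent against a noisy but stationary environment."""
    errors = []
    for _ in range(steps):
        observation = target + random.uniform(-noise, noise)
        error, _action = agent.step(observation)
        errors.append(abs(error))
    return errors

random.seed(0)
agent = PredictiveAgent()
errors = run_loop(agent)
# Prediction error shrinks as the loop closes on the environment.
print(f"first error: {errors[0]:.2f}, final error: {errors[-1]:.2f}")
```

Even in this trivial setting, error minimization emerges from the loop itself rather than from an external supervisor, which is the property the predictive-processing account emphasizes.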
Artificial intelligence systems, by comparison, operate in a narrow reactive mode. This contrast raises a foundational question: what essential components of cognition are missing from today’s AI systems to achieve cognitive autonomy? The present paper addresses this question directly. We argue that there exist foundational deficiencies in contemporary AI architectures that prevent current systems from achieving robust autonomy, human-like adaptability, and cognitively grounded behavior. These deficiencies relate to missing capacities in self-monitoring, meta-awareness, adaptive plasticity, goal restructuring, representational repair, embodied feedback, and autonomous initiative. We discuss these deficiencies conceptually, grounding each in both AI practice and cognitive science theory, and we organize them into coherent structural dimensions that reveal why modern AI succeeds spectacularly in narrow contexts yet fails in open, dynamic, and uncertain environments. Our objective is not merely to critique the current paradigm, but to arti
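The first of these missing capacities, intrinsic self-monitoring, is today usually approximated by bolting a confidence gate onto a fixed model from the outside. A minimal sketch of that extrinsic pattern (the classifier, threshold, and return convention are hypothetical, chosen only for illustration):

```python
def with_self_monitoring(model, threshold=0.7):
    """Wrap a predictor so its low-confidence outputs are flagged.

    `model` is any callable returning (label, confidence). The monitoring
    lives outside the model, illustrating the paper's point that such
    oversight is bolted on rather than intrinsic to the system.
    """
    def monitored(x):
        label, confidence = model(x)
        if confidence < threshold:
            return label, confidence, "defer"   # abstain or escalate
        return label, confidence, "accept"
    return monitored

# Hypothetical toy classifier for demonstration only.
def toy_classifier(x):
    return ("positive" if x > 0 else "negative", min(1.0, abs(x)))

monitored = with_self_monitoring(toy_classifier)
print(monitored(0.9))   # high confidence -> accepted
print(monitored(0.2))   # low confidence -> deferred
```

The wrapper never inspects or revises the model's internal state; a cognitively autonomous system, on the paper's account, would instead evaluate and correct itself from within.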

📸 Image Gallery

Drawing_16_Cognitive.png Drawing_16_Structural.png

Reference

This content is AI-processed based on open access ArXiv data.
