Reexamining the Reality and Risks of the Superintelligent-AI Threat Debate


📝 Abstract

Two 2025 publications, “AI 2027” (Kokotajlo et al., 2025) and “If Anyone Builds It, Everyone Dies” (Yudkowsky & Soares, 2025), assert that superintelligent artificial intelligence will almost certainly destroy or render humanity obsolete within the next decade. Both rest on the classic chain formulated by Good (1965) and Bostrom (2014): intelligence explosion, superintelligence, lethal misalignment. This article subjects each link to the empirical record of 2023-2025. Sixty years after Good’s speculation, none of the required phenomena (sustained recursive self-improvement, autonomous strategic awareness, or intractable lethal misalignment) have been observed. Current generative models remain narrow, statistically trained artefacts: powerful, opaque, and imperfect, but devoid of the properties that would make the catastrophic scenarios plausible. Following Whittaker (2025a, 2025b, 2025c) and Zuboff (2019, 2025), we argue that the existential-risk thesis functions primarily as an ideological distraction from the ongoing consolidation of surveillance capitalism and extreme concentration of computational power. The thesis is further inflated by the 2025 AI speculative bubble, where trillions in investments in rapidly depreciating “digital lettuce” hardware (McWilliams, 2025) mask lagging revenues and jobless growth rather than heralding superintelligence. The thesis remains, in November 2025, a speculative hypothesis amplified by a speculative financial bubble rather than a demonstrated probability.

📄 Content

Humanity in the Age of AI: Reassessing 2025’s Existential-Risk Narratives
A Critical Re-Examination of the 2025 Existential-Risk Claims

Mohamed El Louadi
Higher Institute of Management, University of Tunis
41 rue de la Liberté, Cité Bouchoucha, 2000 Le Bardo, Tunisia
Email: mohamed.louadi@isg.rnu.tn (ORCID: 0000-0003-1321-4967)

30 November 2025
Keywords: artificial intelligence, superintelligence, AGI, intelligence explosion, alignment, confabulation, existential risk.

arXiv:2512.04119v1 [cs.CY] 1 Dec 2025

1. Introduction

The year 2025 has seen a dramatic resurgence of existential-risk claims about artificial intelligence. Two publications in particular, “AI 2027” (Kokotajlo et al., 2025) and “If Anyone Builds It, Everyone Dies” (Yudkowsky & Soares, 2025), present superintelligence as an imminent civilizational threat. These claims revive the theoretical edifice erected by Good (1965) and Bostrom (2014). Yet, as Meredith Whittaker and Shoshana Zuboff have repeatedly emphasized throughout 2025, the dominant narrative of an imminent superintelligent takeover serves a crucial ideological function: it diverts public and regulatory attention from the concrete concentration of economic and computational power that is already reshaping global society (Whittaker, 2025a; Zuboff, 2025).

This distraction is amplified by the AI investment bubble. Trillions of dollars flow into hardware dubbed “digital lettuce” by economist David McWilliams: perishable assets like GPUs that rapidly lose value through technological obsolescence. Revenues lag behind valuations, and no net job creation occurs (McWilliams, 2025).

This article examines the empirical foundations of the classic existential-risk chain against the record of 2023-2025. We introduce an AI Risk Hierarchy by Evidentiary Status and Tractability (Table I) that systematically distinguishes observable Level 1 risks (labour displacement, power concentration) from speculative Level 2 risks (superintelligence misalignment).
2. The Intelligence Explosion: Sixty Years Without Empirical Validation

The foundational premise of the existential-risk thesis is the intelligence-explosion hypothesis first articulated by I. J. Good in 1965. Good’s argument was deceptively simple: an ultraintelligent machine would be capable of redesigning itself faster and more effectively than human engineers could, thereby triggering a positive feedback loop of accelerating improvement. Each iteration would produce a machine better able to design the next, yielding a runaway process that would culminate in intelligence “far beyond the human level in all respects” (Good, 1965, p. 33). Bostrom (2014) formalised this idea under the rubric of “superintelligence,” distinguishing multiple possible pathways (recursive self-improvement, whole-brain emulation, biological cognitive enhancement, and brain-computer interfaces) and between slow and fast take-off scenarios.

It is worth noting that the term “ultraintelligence,” once central to speculative debates, has largely disappeared from serious AI research discourse by 2025. Contemporary technical literature prefers terms such as “frontier AI,” “general-purpose AI,” or “transformative AI,” which emphasize measurable capabilities and tractable risks rather than hypothetical runaway cognition. This linguistic shift underscores the growing consensus that empirical scaling laws and observed limitations provide a more reliable foundation for
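Good’s feedback-loop argument can be restated as a toy recurrence, which makes the crux of the empirical dispute visible: everything hinges on the assumed returns to self-improvement. The sketch below is purely illustrative; the exponent `alpha`, the `rate`, and the `cap` are arbitrary assumptions for demonstration, not measurements of any real system.

```python
# Toy model of recursive self-improvement (illustrative only):
#   capability <- capability + rate * capability ** alpha
# alpha > 1: compounding returns, capability blows up (Good's "explosion")
# alpha = 1: plain exponential growth
# alpha < 1: diminishing returns, growth is merely polynomial -- no runaway
def trajectory(alpha, rate=0.1, start=1.0, steps=40, cap=1e9):
    """Final capability after `steps` rounds of self-improvement.

    `cap` only prevents float overflow in the explosive regime.
    """
    c = start
    for _ in range(steps):
        c = min(cap, c + rate * c ** alpha)
    return c

print(trajectory(alpha=1.5))  # runaway: hits the cap within 40 rounds
print(trajectory(alpha=0.5))  # diminishing returns: stays below 10
```

The point of the sketch is not the numbers but the structure of the argument: whether a feedback loop “explodes” is entirely determined by an exponent that must be measured, not assumed, and the observed sublinear scaling laws correspond to the non-explosive regime.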
