The Specification Trap: Why Content-Based AI Value Alignment Cannot Produce Robust Alignment
I argue that content-based AI value alignment, meaning any approach that treats alignment as optimizing toward a formal value-object (a reward function, utility function, constitutional principles, or learned preference representation), cannot, by itself, produce robust alignment under capability scaling, distributional shift, and increasing autonomy. This limitation arises from three philosophical results: Hume's is-ought gap (behavioral data cannot entail normative conclusions), Berlin's value pluralism (human values are irreducibly plural and incommensurable), and the extended frame problem (any value encoding will misfit future contexts that advanced AI creates). I show that RLHF, Constitutional AI, inverse reinforcement learning, and cooperative assistance games each instantiate this specification trap, and that their failure modes are structural, not engineering limitations. Proposed escape routes (continual updating, meta-preferences, moral realism) relocate the trap rather than exit it. Drawing on Fischer and Ravizza's compatibilist theory, I argue that behavioral compliance does not constitute alignment: there is a principled distinction between simulated value-following and genuine reasons-responsiveness, and specification-based methods cannot produce the latter. The specification trap establishes a ceiling on content-based approaches, not their uselessness, but this ceiling becomes safety-critical at the capability frontier. The alignment problem must be reframed from value specification to value emergence.
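As a minimal illustration of the misfit claim, and not an example from the paper itself, the following toy sketch shows how a fixed value specification that looks aligned on its training distribution diverges from the intended objective once the context distribution shifts. All names here (`true_value`, `proxy_reward`, `spec_target`) are hypothetical constructions for this sketch, under the assumption that "content-based specification" can be caricatured as freezing a proxy at specification time.

```python
import random

random.seed(0)

# Toy "true value": the ideal action depends on the current context.
def true_value(action, context):
    return -abs(action - context)

# Content-based specification: a fixed proxy fitted on training contexts.
# Here the proxy simply targets the mean training context, frozen at
# specification time, and thereafter ignores the live context entirely.
train_contexts = [random.gauss(0.0, 1.0) for _ in range(1000)]
spec_target = sum(train_contexts) / len(train_contexts)  # roughly 0.0

def proxy_reward(action):
    return -abs(action - spec_target)

def optimize(reward, candidates):
    # An optimizer that faithfully maximizes whatever it is given.
    return max(candidates, key=reward)

candidates = [x / 10 for x in range(-100, 101)]  # actions in [-10, 10]

# In-distribution: the specification appears aligned.
ctx = 0.1
a = optimize(proxy_reward, candidates)
print(true_value(a, ctx))   # near-zero regret

# After distributional shift: the same specification, now badly misfit.
ctx = 7.0
a = optimize(proxy_reward, candidates)  # still chases spec_target
print(true_value(a, ctx))   # large regret: the trap, in miniature
```

The point of the sketch is that the failure is structural: no amount of harder optimization of `proxy_reward` recovers `true_value` in the shifted context, because the specification was fixed against contexts that no longer obtain.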