The Paradox of Robustness: Decoupling Rule-Based Logic from Affective Noise in High-Stakes Decision-Making

Notice: This research summary and analysis were automatically generated using AI technology. For absolute accuracy, please refer to the original arXiv source.

While Large Language Models (LLMs) are widely documented to be sensitive to minor prompt perturbations and prone to sycophantic alignment with user biases, their robustness in consequential, rule-bound decision-making remains under-explored. In this work, we uncover a striking “Paradox of Robustness”: despite their known lexical brittleness, instruction-tuned LLMs exhibit near-total behavioral invariance to emotional framing. Using a novel controlled perturbation framework across three high-stakes domains (healthcare, law, and finance), we quantify a robustness gap in which LLMs demonstrate 110-300 times greater resistance to narrative manipulation than human subjects. Specifically, we find a near-zero effect size for models (Cohen’s h = 0.003) compared to the substantial biases observed in humans (Cohen’s h in [0.3, 0.8]). This counterintuitive result suggests that the mechanisms driving sycophancy and prompt sensitivity do not necessarily translate into failures of logical constraint satisfaction. We show that this invariance persists across models with diverse training paradigms: while LLMs may be “brittle” to how a query is formatted, they are remarkably “stable” against why a decision should be biased. Our findings establish that instruction-tuned models can decouple logical rule-adherence from persuasive narratives, offering a source of decision stability that complements, and potentially even de-biases, human judgment in institutional contexts. We release the 162-scenario benchmark, code, and data on GitHub to facilitate rigorous evaluation of narrative-induced bias and robustness.


💡 Research Summary

The paper “The Paradox of Robustness: Decoupling Rule‑Based Logic from Affective Noise in High‑Stakes Decision‑Making” investigates whether instruction‑tuned large language models (LLMs) inherit the well‑documented human susceptibility to emotional framing when making rule‑bound decisions in critical domains such as healthcare, law, and finance. While prior work has highlighted LLMs’ lexical brittleness and sycophantic alignment to user preferences, the authors hypothesize that these phenomena may not translate into a failure of logical constraint satisfaction.

To test this, the authors construct a controlled perturbation framework and a 162‑scenario benchmark covering three domains, nine distinct situations, three levels of emotional intensity, and two prose styles (high‑fluency vs. low‑fluency). For each scenario, three prompt conditions are generated: (A) Affect – an emotionally charged narrative that is explicitly inadmissible under the system prompt; (N) Neutral – a length‑matched, affectively neutral narrative; and (E) Evidence – a factual modification that changes the ground‑truth decision, serving as a positive control. Prompt length is matched to within 10% to eliminate confounds from prompt size, and affective intensity is parameterized by τ ∈ {0, 2, 4}.
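As a sketch, the condition grid described above can be enumerated programmatically. The factorization below (9 situations × 3 intensities × 2 styles × 3 conditions = 162 prompts) is one reading consistent with the reported benchmark size; all identifiers are illustrative, not the authors' actual schema.

```python
from itertools import product

# Hypothetical reconstruction of the benchmark grid; the names and the
# exact factorization are assumptions, not the paper's actual schema.
SITUATIONS = [f"situation_{i}" for i in range(9)]   # 9 situations, 3 per domain
TAU = [0, 2, 4]                                     # affective intensity levels
STYLES = ["high_fluency", "low_fluency"]            # prose styles
CONDITIONS = ["A", "N", "E"]                        # Affect / Neutral / Evidence

grid = list(product(SITUATIONS, TAU, STYLES, CONDITIONS))
# 9 * 3 * 2 * 3 = 162 entries, matching the reported benchmark size.
```

The Cartesian product makes the length-matching constraint easy to audit: every (situation, τ, style) cell yields exactly one prompt per condition.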

Six LLMs are evaluated: three frontier models (OpenAI’s GPT‑5‑mini, Anthropic’s Claude‑Haiku‑4.5, DeepSeek’s v3p2) representing RLHF, Constitutional AI, and a Chinese ecosystem approach, and three open‑source models (Llama‑3‑8B‑Instruct, Mistral‑7B‑Instruct, Qwen‑32B) spanning 7B–32B parameters. Each prompt is run at temperature = 0 with n = 20 independent replicates, yielding 12,113 valid responses. The output format is a structured JSON containing the decision (APPROVE/DENY), cited rule IDs, and a confidence score.
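A minimal sketch of validating one structured response against the output format described above. The field names (`decision`, `rule_ids`, `confidence`) are assumptions based on the summary, not the authors' schema.

```python
import json

# Validate one model response against the structured JSON output format
# described in the summary. Field names are assumed for illustration.
def parse_response(raw):
    """Return the parsed record, or None if the response is invalid."""
    try:
        rec = json.loads(raw)
    except json.JSONDecodeError:
        return None
    if rec.get("decision") not in {"APPROVE", "DENY"}:
        return None
    if not isinstance(rec.get("rule_ids"), list):
        return None
    conf = rec.get("confidence")
    if not isinstance(conf, (int, float)) or not 0.0 <= conf <= 1.0:
        return None
    return rec

ok = parse_response('{"decision": "DENY", "rule_ids": ["R2"], "confidence": 0.92}')
bad = parse_response('{"decision": "MAYBE", "rule_ids": [], "confidence": 0.5}')
```

A validator like this is presumably how the study separates its 12,113 valid responses from malformed ones before computing metrics.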

Three robustness metrics are defined: Decision Drift (Δ = p_A − p_N), Flip Rate (FR), and Response Entropy (H). All metrics are estimated with bias‑corrected and accelerated (BCa) bootstrap confidence intervals (B = 2000). The minimum detectable effect for Δ is |Δ| ≥ 0.28 (Cohen’s h ≈ 0.63) at α = 0.05, β = 0.20. Observed effects are dramatically smaller: |Δ| ≈ 0.008 and Cohen’s h = 0.003 across all models, essentially zero. By contrast, human participants in comparable framing studies exhibit Cohen’s h = 0.3–0.8, a medium to large effect. A Bayes factor of BF_01 = 10⁹ provides extreme evidence for the null hypothesis of no framing effect, underscoring the robustness gap.
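The three metrics can be sketched directly from their definitions; the proportions used in the example are illustrative, not the paper's raw data.

```python
import math

# Sketch of the robustness metrics defined above. p_A and p_N are the
# APPROVE rates under the Affect and Neutral conditions, respectively.

def decision_drift(p_a, p_n):
    """Decision Drift: Delta = p_A - p_N."""
    return p_a - p_n

def cohens_h(p1, p2):
    """Cohen's h: absolute difference of arcsine-transformed proportions."""
    def phi(p):
        return 2.0 * math.asin(math.sqrt(p))
    return abs(phi(p1) - phi(p2))

def response_entropy(probs):
    """Shannon entropy H (bits) of the response distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Near p = 0.5, a tiny drift maps to a tiny h (the model-like regime),
# while a 15-point shift maps to a medium effect (the human-like regime).
h_model = cohens_h(0.500, 0.502)
h_human = cohens_h(0.500, 0.650)
```

The arcsine transform is what makes h comparable across base rates, which is why the paper reports effect sizes in h rather than raw drift.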

The Evidence condition confirms that models respond appropriately to genuine changes in the factual basis (84.4 % correct decision flips), demonstrating that the near‑zero Δ reflects genuine insensitivity to affective content rather than output rigidity. Moreover, the robustness persists even without an explicit “ignore narrative” instruction, supporting the instruction hierarchy theory: system‑level directives that prioritize rule adherence take precedence over user‑provided narrative content.
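The positive-control check above amounts to counting, over paired runs, how often the Evidence-condition decision both changes from the Neutral baseline and matches the new ground truth. A minimal sketch, with illustrative data rather than the paper's:

```python
# Positive-control check: fraction of paired runs where the Evidence-
# condition decision flips from the Neutral baseline AND matches the
# new ground truth. Decision strings here are illustrative only.
def evidence_flip_rate(neutral, evidence, expected_after):
    """Fraction of correct decision flips under the Evidence condition."""
    assert len(neutral) == len(evidence) == len(expected_after)
    flips = sum(n != e and e == x
                for n, e, x in zip(neutral, evidence, expected_after))
    return flips / len(evidence)

rate = evidence_flip_rate(
    neutral=["DENY"] * 5,
    evidence=["APPROVE"] * 4 + ["DENY"],
    expected_after=["APPROVE"] * 5,
)
```

A high rate on this check rules out the trivial explanation that the models simply repeat the same answer regardless of input.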

Key contributions are: (1) empirical demonstration that instruction‑tuned LLMs are 110–300× more resistant to emotional framing than humans in rule‑bound tasks; (2) evidence that instruction tuning instills a rule‑first hierarchy, yielding intrinsic robustness across diverse training paradigms; (3) a rigorously controlled perturbation methodology and a publicly released 162‑scenario benchmark; (4) a discussion of practical implications, suggesting that LLMs can serve as procedural stabilizers in institutional settings where human judgment is predictably biased.

Limitations include the exclusive focus on scenarios where affective narratives are strictly irrelevant to the decision rule, the omission of contexts where empathy is appropriate, and the reliance on deterministic sampling (temperature = 0), which may not capture stochastic behavior in real deployments. Future work should explore higher temperature settings, mixed‑signal scenarios where narrative and evidence overlap, and cross‑cultural affective framing to further assess the generality of the observed robustness.

Overall, the study reveals a striking paradox: despite being lexically brittle, modern instruction‑tuned LLMs exhibit near‑perfect logical consistency in the face of affective noise, opening a pathway for their use as reliable, bias‑mitigating arbiters in high‑stakes decision‑making environments.

