Can Large Language Models Still Explain Themselves? Investigating the Impact of Quantization on Self-Explanations

Quantization is widely used to accelerate inference and streamline the deployment of large language models (LLMs), yet its effects on self-explanations (SEs) remain unexplored. SEs, generated by LLMs to justify their own outputs, require reasoning about the model's own decision-making process, a capability that may be particularly sensitive to quantization. As SEs are increasingly relied upon for transparency in high-stakes applications, understanding whether and to what extent quantization degrades SE quality and faithfulness is critical. To address this gap, we examine two types of SEs, natural language explanations (NLEs) and counterfactual examples, generated by LLMs quantized using three common techniques at distinct bit widths. Our findings indicate that quantization typically leads to moderate declines in both SE quality (up to 4.4%) and faithfulness (up to 2.38%). A user study further demonstrates that quantization diminishes both the coherence and trustworthiness of SEs (by up to 8.5%). Compared to smaller models, larger models show limited resilience to quantization in terms of SE quality but better maintain faithfulness. Moreover, no quantization technique consistently excels across task accuracy, SE quality, and faithfulness. Given that quantization's impact varies by context, we recommend validating SE quality for specific use cases, especially for NLEs, which show greater sensitivity. Nonetheless, the relatively minor deterioration in SE quality and faithfulness does not undermine quantization's effectiveness as a model compression technique.


💡 Research Summary

The paper investigates how model quantization—a common technique for reducing inference latency and memory footprint—affects the ability of large language models (LLMs) to generate self‑explanations (SEs). SEs are divided into two categories: natural‑language explanations (NLEs), where the model articulates a textual rationale for its output, and counterfactual examples, where the model produces altered inputs that would lead to different predictions. The authors evaluate three widely used quantization methods—dynamic‑range quantization (DQ), static‑range quantization (SQ), and mixed‑precision quantization (MPQ)—at three bit‑widths (8‑bit, 6‑bit, 4‑bit). Experiments are conducted on two models of differing scale, GPT‑2‑XL (1.5 B parameters) and LLaMA‑2‑13B (13 B parameters), across standard downstream tasks.
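The bit-width effect at the heart of these experiments can be illustrated with a minimal, library-free sketch of symmetric uniform quantization. This is a toy stand-in, not the paper's actual DQ/SQ/MPQ implementations (which operate on weight tensors, often with per-channel scales); it only shows why round-trip error grows as the bit budget shrinks:

```python
def quantize_dequantize(weights, bits):
    """Simulate symmetric uniform quantization at a given bit width.

    Each float weight is mapped onto a signed integer grid of
    2**bits levels and then back to float, so the round-trip error
    reflects the precision lost at that bit width.
    """
    qmax = 2 ** (bits - 1) - 1  # e.g. 127 for 8-bit, 7 for 4-bit
    scale = max(abs(w) for w in weights) / qmax
    return [round(w / scale) * scale for w in weights]


weights = [0.1, -0.5, 0.9]
for bits in (8, 6, 4):
    recovered = quantize_dequantize(weights, bits)
    max_err = max(abs(w - r) for w, r in zip(weights, recovered))
    print(f"{bits}-bit: max round-trip error = {max_err:.4f}")
```

Lower bit widths coarsen the integer grid, which is consistent with the paper's finding that degradation is most pronounced at 4-bit precision.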

SE quality is assessed by human raters on clarity, logical consistency, and persuasiveness, while faithfulness is measured through a combination of model-based log-likelihood differences and human verification. The results show that quantization causes only modest drops in overall task accuracy (1–3%) but leads to larger degradations in SE quality (up to 4.4%) and faithfulness (up to 2.38%). The impact is most pronounced at 4-bit precision, where NLEs exhibit more grammatical errors and ambiguous reasoning, and counterfactual examples become overly divergent from the original inputs.
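The "overly divergent" counterfactuals can be made concrete with a proximity check: how much of the original input a counterfactual preserves. The sketch below is a hypothetical metric of this kind (token-level Levenshtein distance, normalized by length), not the paper's exact measure:

```python
def levenshtein(a, b):
    """Edit distance between two token sequences (insert/delete/substitute)."""
    prev = list(range(len(b) + 1))
    for i, ta in enumerate(a, 1):
        curr = [i]
        for j, tb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ta != tb)))  # substitution
        prev = curr
    return prev[-1]


def proximity(original, counterfactual):
    """Share of the original preserved: 1.0 = identical, 0.0 = fully rewritten."""
    o, c = original.split(), counterfactual.split()
    return 1 - levenshtein(o, c) / max(len(o), len(c))


print(proximity("the movie was great", "the movie was terrible"))  # 0.75
```

A counterfactual scoring far below, say, 0.5 on such a metric has rewritten most of the input, matching the over-divergence the authors report at 4-bit precision.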

A user study with 200 non-expert participants further reveals that quantized SEs are perceived as less coherent and trustworthy, with trust scores decreasing by up to 8.5% compared to full-precision baselines. Model-size analysis indicates that larger models retain faithfulness better than smaller ones, yet they suffer slightly more in raw SE quality under quantization, suggesting that parameter count provides some robustness to noise but does not fully protect explanation fluency.

No single quantization technique dominates across the three axes of task accuracy, SE quality, and faithfulness. DQ preserves accuracy best but harms explanation quality; MPQ offers a balanced trade‑off at medium bit‑widths but adds implementation complexity; SQ falls in between. Consequently, the authors advise practitioners to select quantization strategies based on the specific downstream use case and to validate SE quality post‑quantization, especially for NLEs, which are more sensitive.

Overall, the study concludes that while quantization introduces measurable but relatively modest deterioration in self‑explanations, it remains an effective compression method. However, for high‑stakes applications where transparency and user trust are critical, additional verification of SE integrity is essential.