RobustExplain: Evaluating Robustness of LLM-Based Explanation Agents for Recommendation


Large Language Models (LLMs) are increasingly used to generate natural-language explanations in recommender systems, acting as explanation agents that reason over user behavior histories. While prior work has focused on explanation fluency and relevance under fixed inputs, the robustness of LLM-generated explanations to realistic user behavior noise remains largely unexplored. In real-world web platforms, interaction histories are inherently noisy due to accidental clicks, temporal inconsistencies, missing values, and evolving preferences, raising concerns about explanation stability and user trust. We present RobustExplain, the first systematic evaluation framework for measuring the robustness of LLM-generated recommendation explanations. RobustExplain introduces five realistic user behavior perturbations evaluated across multiple severity levels and a multi-dimensional robustness metric capturing semantic, keyword, structural, and length consistency. Our goal is to establish a principled, task-level evaluation framework and initial robustness baselines, rather than to provide a comprehensive leaderboard across all available LLMs. Experiments on four representative LLMs (7B–70B) show that current models exhibit only moderate robustness, with larger models achieving up to 8% higher stability. Our results establish the first robustness benchmarks for explanation agents and highlight robustness as a critical dimension for trustworthy, agent-driven recommender systems at web scale.


💡 Research Summary

The paper introduces RobustExplain, the first systematic framework for evaluating how robust large language model (LLM)–generated explanations are when user interaction histories are perturbed. Recognizing that real‑world recommender systems contend with noisy clicks, timestamp errors, missing metadata, and evolving preferences, the authors define five realistic perturbation types: noise injection, temporal shuffle, behavior dilution, category drift, and missing values. Each type is parameterized with five severity levels, allowing fine‑grained analysis of model sensitivity.
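The paper does not spell out how each perturbation is parameterized, but two of the five types can be sketched as follows. This is a hypothetical reconstruction: the mapping of severity levels 1–5 to perturbation fractions (10% of history length per level for noise, 20% for shuffling) is an assumption, not the authors' specification.

```python
import random

def noise_injection(history, severity, catalog, rng=None):
    """Insert random catalog items into the history at random positions.

    Assumes severity in 1..5 maps linearly to the number of injected
    items (10% of the history length per level -- an assumption).
    """
    rng = rng or random.Random(0)
    n_noise = int(len(history) * 0.1 * severity)
    out = list(history)
    for _ in range(n_noise):
        out.insert(rng.randrange(len(out) + 1), rng.choice(catalog))
    return out

def temporal_shuffle(history, severity, rng=None):
    """Shuffle a severity-dependent subset of interaction positions,
    leaving the remaining interactions in their original order."""
    rng = rng or random.Random(0)
    k = max(2, int(len(history) * 0.2 * severity))  # assumed fraction
    idx = rng.sample(range(len(history)), min(k, len(history)))
    vals = [history[i] for i in idx]
    rng.shuffle(vals)
    out = list(history)
    for i, v in zip(idx, vals):
        out[i] = v
    return out
```

At severity 5 the shuffle touches the whole history, which matches the intuition that higher levels should degrade temporal signal more aggressively.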

To quantify robustness, the framework computes four complementary metrics between the original explanation and the one generated from a perturbed history: (1) Semantic Similarity (bag‑of‑words cosine), (2) Keyword Stability (Jaccard of extracted nouns, product names, and category terms), (3) Structural Consistency (BLEU score for n‑gram overlap), and (4) Length Stability (relative length difference). A weighted sum (α1·Sem + α2·Key + α3·Struct + α4·Len) yields a single robustness score, with the highest weight assigned to semantic preservation because it directly reflects the user‑facing meaning.
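The four metrics and their weighted aggregation can be sketched in a few lines. This is a minimal sketch under stated assumptions: the weights `w` are illustrative (the paper's α values are not given here), keyword extraction is assumed to happen upstream, and a single-order modified n-gram precision stands in for full BLEU.

```python
import math
from collections import Counter

def bow_cosine(a, b):
    """Semantic similarity: cosine over bag-of-words term counts."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def keyword_jaccard(terms_a, terms_b):
    """Keyword stability: Jaccard overlap of pre-extracted key terms."""
    sa, sb = set(terms_a), set(terms_b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 1.0

def ngram_precision(a, b, n=2):
    """Structural consistency: modified n-gram precision (BLEU stand-in)."""
    ta, tb = a.lower().split(), b.lower().split()
    ga = Counter(tuple(ta[i:i + n]) for i in range(len(ta) - n + 1))
    gb = Counter(tuple(tb[i:i + n]) for i in range(len(tb) - n + 1))
    match = sum(min(ga[g], gb[g]) for g in gb)
    return match / max(1, sum(gb.values()))

def length_stability(a, b):
    """Length stability: 1 minus relative word-count difference."""
    la, lb = len(a.split()), len(b.split())
    return 1.0 - abs(la - lb) / max(la, lb)

def robustness(orig, pert, kw_orig, kw_pert,
               w=(0.40, 0.25, 0.20, 0.15)):  # assumed weights, semantic heaviest
    scores = (bow_cosine(orig, pert),
              keyword_jaccard(kw_orig, kw_pert),
              ngram_precision(orig, pert),
              length_stability(orig, pert))
    return sum(wi * si for wi, si in zip(w, scores))
```

Because the weights sum to 1 and each component lies in [0, 1], the aggregated score is directly comparable across models and perturbation types.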

Experiments use a synthetic e‑commerce dataset containing 200 items across seven categories and generate user histories of 20–50 interactions. Four open‑source LLMs of increasing size (7B, 13B, 30B, 70B parameters) serve as explanation agents. For each model, the authors generate explanations for a fixed recommended item from both original and perturbed histories, then compute the four metrics and the aggregated robustness score.
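The synthetic setup described above can be reconstructed roughly as follows; the category names and the uniform sampling of items per history are assumptions for illustration, not details from the paper.

```python
import random

def build_catalog(n_items=200, n_categories=7, rng=None):
    """Build a catalog of 200 items spread across seven categories
    (category labels are hypothetical placeholders)."""
    rng = rng or random.Random(0)
    cats = [f"cat_{c}" for c in range(n_categories)]
    return [{"item_id": i, "category": rng.choice(cats)}
            for i in range(n_items)]

def sample_history(catalog, rng=None):
    """Sample a user history of 20-50 interactions, drawn uniformly
    from the catalog (an assumed sampling scheme)."""
    rng = rng or random.Random(0)
    n = rng.randint(20, 50)
    return [rng.choice(catalog) for _ in range(n)]
```

Each sampled history would then be fed to an explanation agent twice, once as-is and once perturbed, and the resulting explanation pair scored with the robustness metrics above.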

Results show that current models achieve moderate robustness, with average scores around 0.50. Larger models consistently outperform smaller ones, achieving up to an 8% increase in stability; the 70B model reaches a mean score of 0.58 versus 0.44 for the 7B model. Sensitivity varies by perturbation: temporal shuffle and missing‑value removal cause the greatest drops, especially in structural and length dimensions, indicating a reliance on accurate timestamps and metadata. Noise injection and behavior dilution have a milder impact, suggesting that LLMs can still isolate core preference signals amid random or peripheral interactions. Structural consistency and length stability appear largely independent of model size, implying that models may freely restructure sentences while preserving core meaning.

The authors highlight several contributions: (1) a novel perturbation taxonomy tailored to user‑behavior noise, (2) a multi‑dimensional robustness metric suite, (3) baseline robustness benchmarks across four LLM scales, and (4) actionable insights for building more trustworthy explanation agents. They acknowledge limitations, including reliance on synthetic data and the use of simple bag‑of‑words semantics, and propose future work involving real‑world logs, richer semantic similarity measures (e.g., BERTScore), and internal attention analysis to better understand why models fail under specific perturbations.

Overall, RobustExplain provides a concrete, reproducible methodology for assessing and improving the stability of LLM‑driven recommendation explanations, positioning robustness as a critical quality dimension for next‑generation, agent‑centric recommender systems deployed at web scale.

