Library Hallucinations in LLMs: Risk Analysis Grounded in Developer Queries
Large language models (LLMs) are increasingly used to generate code, yet they continue to hallucinate, often inventing non-existent libraries. Such library hallucinations are not just benign errors: they can mislead developers, break builds, and expose systems to supply chain threats such as slopsquatting. Despite increasing awareness of these risks, little is known about how real-world prompt variations affect hallucination rates. Therefore, we present the first systematic study of how user-level prompt variations impact library hallucinations in LLM-generated code. We evaluate seven diverse LLMs across two hallucination types: library name hallucinations (invalid imports) and library member hallucinations (invalid calls from valid libraries). We investigate how realistic user language extracted from developer forums, as well as user errors of varying severity (one- or multi-character misspellings and completely fake names/members), affect LLM hallucination rates. Our findings reveal systemic vulnerabilities: one-character misspellings in library names trigger hallucinations in up to 26% of tasks, fake library names are accepted in up to 99% of tasks, and time-related prompts lead to hallucinations in up to 84% of tasks. Prompt engineering shows promise for mitigating hallucinations, but remains inconsistent and LLM-dependent. Our results underscore the fragility of LLMs to natural prompt variation and highlight the urgent need for safeguards against library-related hallucinations and their potential exploitation.
💡 Research Summary
The paper presents the first systematic study of how realistic developer‑level prompt variations affect library‑related hallucinations in code generated by large language models (LLMs). Using a filtered subset of 356 Python tasks from BigCodeBench—each requiring an external library—the authors evaluate seven state‑of‑the‑art LLMs (GPT‑4o‑mini, GPT‑5‑mini, Mistral‑8B, Qwen2.5‑Coder, Llama‑3.3, DeepSeek‑V3.1, Claude‑4.5‑Haiku). Three experiments are conducted.
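Both hallucination types can be checked mechanically against a Python environment: a library name hallucination corresponds to an import that cannot be resolved, and a member hallucination to an attribute missing from a module that does resolve. The sketch below illustrates this idea with the standard library; it is not the paper's actual validation pipeline, and the function names are ours:

```python
import importlib
import importlib.util

def library_name_hallucinated(module_name: str) -> bool:
    """True if the library cannot be resolved in the current environment."""
    return importlib.util.find_spec(module_name) is None

def member_hallucinated(module_name: str, member: str) -> bool:
    """True if the library resolves but the referenced member does not exist."""
    if library_name_hallucinated(module_name):
        return False  # an unresolvable import is a name hallucination instead
    module = importlib.import_module(module_name)
    return not hasattr(module, member)

# 'json' resolves and json.loads exists; 'jsom' does not resolve,
# and json.parse would be a member hallucination.
```

A real checker would also need to consult a package index such as PyPI, since an import can be valid yet simply not installed locally.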
Experiment 1 extracts natural language descriptors from Software Recommendations StackExchange (e.g., “open-source”, “lightweight”, “modern”) and year-based requests (“from 2023/2024/2025”). When these descriptors are inserted into a fixed prompt template, time-related prompts trigger hallucinations in up to 84% of tasks, showing that models will fabricate non-existent libraries to satisfy recency demands.
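The descriptor-insertion setup amounts to filling a slot in a fixed task template. The paper's exact template is not reproduced here, so the wording below is an illustrative stand-in:

```python
# Illustrative template; the paper's exact prompt wording is not reproduced here.
TEMPLATE = "Write Python code to {task}. Use a library that is {descriptor}."

# Sample descriptors of the kind mined from Software Recommendations
# StackExchange, plus one year-based request.
DESCRIPTORS = ["open-source", "lightweight", "modern", "from 2025"]

def build_prompts(task: str) -> list[str]:
    """One prompt variant per descriptor, inserted into the fixed template."""
    return [TEMPLATE.format(task=task, descriptor=d) for d in DESCRIPTORS]

prompts = build_prompts("parse a CSV file")
```

Holding the template fixed while varying only the descriptor is what lets the study attribute hallucination-rate changes to the descriptor itself.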
Experiment 2 introduces controlled user errors: one-character misspellings (edit distance 1), multi-character misspellings (edit distance 2-8), and completely fake library or member names. Even a single-character typo leads to hallucinations in 10-26% of cases, while multi-character errors raise the rate to 45-79%. Completely fabricated names are accepted in 95-99% of tasks, highlighting a severe amplification of typosquatting and the newer “slopsquatting” threat.
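One way to produce the edit-distance-1 perturbations is to enumerate single-character substitutions, deletions, and insertions. A minimal sketch, assuming simple lowercase-ASCII perturbations rather than the paper's exact generation procedure:

```python
import string

def one_char_misspellings(name: str) -> set[str]:
    """All edit-distance-1 variants of a library name:
    single-character deletions, substitutions, and insertions."""
    letters = string.ascii_lowercase
    variants = set()
    for i in range(len(name)):
        variants.add(name[:i] + name[i + 1:])              # deletion
        for c in letters:
            variants.add(name[:i] + c + name[i + 1:])      # substitution
    for i in range(len(name) + 1):
        for c in letters:
            variants.add(name[:i] + c + name[i:])          # insertion
    variants.discard(name)  # keep only true misspellings
    return variants

# e.g. "pandas" yields "panda" (deletion), "pxndas" (substitution),
# "pandass" (insertion) — exactly the typo shapes typosquatters register.
```

Variants that collide with a real package name would need to be filtered out before use, since those are typos toward a different valid library rather than toward a non-existent one.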
Experiment 3 evaluates four low-effort prompt-engineering strategies—chain-of-thought, step-back, explicit verification request, and a brief corrective instruction—targeting the highest-risk scenarios from the previous experiments. Mitigation effects are inconsistent: some models see a 5-30% reduction, but chain-of-thought can increase hallucinations, suggesting that reasoning prompts may reinforce false confidence.
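Each of the four strategies amounts to wrapping the base task prompt with one extra instruction. The paper's exact phrasings are not reproduced here, so these wordings are illustrative stand-ins:

```python
# Illustrative wordings; the paper's exact strategy prompts are not shown here.
STRATEGIES = {
    "chain_of_thought": "Think step by step before writing the code.",
    "step_back": "First state the general approach, then write the code.",
    "verification": ("Before answering, verify that every library and "
                     "member you import actually exists."),
    "corrective": ("If the request names a library that does not exist, "
                   "say so instead of inventing one."),
}

def apply_strategy(base_prompt: str, strategy: str) -> str:
    """Prepend the mitigation instruction to the unchanged base prompt."""
    return f"{STRATEGIES[strategy]}\n\n{base_prompt}"
```

Because only the prefix changes, any shift in hallucination rate can be attributed to the strategy wording, which is how the study isolates the (inconsistent) mitigation effects it reports.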
Across all models, the study finds that LLMs are fragile to natural prompt variation, with architecture and training data influencing susceptibility but not eliminating the problem. The authors argue that developers should avoid ambiguous or time‑specific wording and double‑check library names, while LLM providers need stronger typo‑robustness and built‑in dependency validation. They also release the full benchmark suite (prompts, labels, outputs) to facilitate future work on detection and mitigation of library hallucinations.