Multilingual Hidden Prompt Injection Attacks on LLM-Based Academic Reviewing

Reading time: 1 minute
...

📝 Original Info

  • Title: Multilingual Hidden Prompt Injection Attacks on LLM-Based Academic Reviewing
  • ArXiv ID: 2512.23684
  • Date: 2025-12-29
  • Authors: Panagiotis Theocharopoulos, Ajinkya Kulkarni, Mathew Magimai. -Doss

📝 Abstract

Large language models (LLMs) are increasingly considered for use in high-impact workflows, including academic peer review. However, LLMs are vulnerable to document-level hidden prompt injection attacks. In this work, we construct a dataset of approximately 500 real academic papers accepted to ICML and evaluate the effect of embedding hidden adversarial prompts within these documents. Each paper is injected with semantically equivalent instructions in four different languages and reviewed using an LLM. We find that prompt injection induces substantial changes in review scores and accept/reject decisions for English, Japanese, and Chinese injections, while Arabic injections produce little to no effect. These results highlight the susceptibility of LLM-based reviewing systems to document-level prompt injection and reveal notable differences in vulnerability across languages.

📄 Full Content

...(본문 내용이 길어 생략되었습니다. 사이트에서 전문을 확인해 주세요.)

Start searching

Enter keywords to search articles

↑↓
ESC
⌘K Shortcut