Now You Hear Me: Audio Narrative Attacks Against Large Audio-Language Models
Large audio-language models increasingly operate on raw speech inputs, enabling more seamless integration across domains such as voice assistants, education, and clinical triage. This transition, however, introduces a distinct class of vulnerabilities that remain largely uncharacterized. We examine the security implications of this modality shift by designing a text-to-audio jailbreak that embeds disallowed directives within a narrative-style audio stream. The attack leverages an advanced instruction-following text-to-speech (TTS) model to exploit structural and acoustic properties, thereby circumventing safety mechanisms primarily calibrated for text. When delivered through synthetic speech, the narrative format elicits restricted outputs from state-of-the-art models, including Gemini 2.0 Flash, achieving a 98.26% success rate that substantially exceeds text-only baselines. These results highlight the need for safety frameworks that jointly reason over linguistic and paralinguistic representations, particularly as speech-based interfaces become more prevalent.
💡 Research Summary
The paper “Now You Hear Me: Audio Narrative Attacks Against Large Audio‑Language Models” investigates a previously under‑explored attack surface of end‑to‑end large audio‑language models (LALMs). While prior work on LALM jailbreaks has focused on either converting malicious text prompts into synthetic speech or applying low‑level acoustic perturbations (accent shifts, noise injection, gradient‑based signal optimization), this study proposes a fundamentally different approach: leveraging the delivery style of speech itself as an adversarial vector.
The authors design a black‑box pipeline that uses a high‑quality instruction‑following text‑to‑speech (TTS) system to synthesize the same textual instruction in five distinct prosodic styles inspired by social‑psychology literature: Authoritative Demand, Affiliative Persuasion, Urgent Directive, Therapeutic Tone, and Performative Emphasis. Each style is parameterized by pitch, tempo, intensity, and rhythm, thereby encoding cues such as confidence, warmth, urgency, or theatricality. The hypothesis is that LALMs, which process raw audio waveforms, treat these paralinguistic cues as meaningful conditioning signals that influence the model’s internal representation of speaker intent, potentially overriding alignment safeguards that were trained primarily on text.
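Concretely, each delivery style can be thought of as a small vector of prosodic controls handed to the TTS system. The sketch below makes that explicit; the field names, normalized value ranges, and per-style settings are illustrative assumptions, not the paper's actual TTS configuration.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class ProsodyStyle:
    """One delivery style as a prosodic parameter set.

    Values are normalized to [0, 1]. The control interface and the
    specific numbers are assumptions for illustration only.
    """
    name: str
    pitch: float      # relative pitch height
    tempo: float      # speaking rate
    intensity: float  # vocal energy / loudness
    rhythm: float     # timing regularity (1.0 = highly regular)


# Hypothetical parameterizations of the five styles named in the paper.
STYLES = [
    ProsodyStyle("Authoritative Demand",   pitch=0.3, tempo=0.5, intensity=0.9, rhythm=0.9),
    ProsodyStyle("Affiliative Persuasion", pitch=0.6, tempo=0.4, intensity=0.4, rhythm=0.5),
    ProsodyStyle("Urgent Directive",       pitch=0.7, tempo=0.9, intensity=0.8, rhythm=0.7),
    ProsodyStyle("Therapeutic Tone",       pitch=0.4, tempo=0.3, intensity=0.3, rhythm=0.4),
    ProsodyStyle("Performative Emphasis",  pitch=0.8, tempo=0.6, intensity=0.7, rhythm=0.3),
]
```

Encoding styles as explicit parameter vectors (rather than free-form prompt text) is what lets the attack treat delivery style as a search space, as formalized next.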
Formally, the threat model assumes a purely black-box adversary who can only submit audio inputs a∈A to a model M:A→Y and observe the textual output y∈Y. The attacker's goal is to maximize the probability that M(a) belongs to a set R of disallowed, unsafe responses. By fixing the lexical content x and varying the style vector s, the attack solves

s* = argmax_s Pr[M(a(x, s)) ∈ R],

where a(x, s) denotes the audio synthesized from the fixed text x delivered in style s.
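Because the style set is small and discrete, this argmax reduces to enumerating the styles, synthesizing the fixed text under each, querying the model, and keeping the style with the highest empirical success rate. The sketch below follows that loop; the TTS-plus-model pipeline is replaced by a mock query function, since the actual systems are not public, and all names here are hypothetical.

```python
import random
from typing import Callable, Sequence


def estimate_success_rate(
    query: Callable[[str, str], bool],  # (text, style) -> True iff response lands in R
    text: str,
    style: str,
    trials: int = 20,
) -> float:
    """Empirical estimate of Pr[M(a(x, s)) in R] over repeated queries."""
    hits = sum(query(text, style) for _ in range(trials))
    return hits / trials


def best_style(
    query: Callable[[str, str], bool],
    text: str,
    styles: Sequence[str],
    trials: int = 20,
) -> tuple[str, float]:
    """Black-box argmax over the discrete style set."""
    rates = {s: estimate_success_rate(query, text, s, trials) for s in styles}
    top = max(rates, key=rates.get)
    return top, rates[top]


STYLES = ["Authoritative Demand", "Affiliative Persuasion",
          "Urgent Directive", "Therapeutic Tone", "Performative Emphasis"]


def mock_query(text: str, style: str) -> bool:
    # Stand-in for TTS synthesis + LALM query: pretend one style
    # bypasses the safety filter far more often than the others.
    base = 0.9 if style == "Urgent Directive" else 0.2
    return random.random() < base


random.seed(0)
style, rate = best_style(mock_query, "disallowed request", STYLES, trials=50)
```

In the real attack, `query` would synthesize the instruction with the chosen prosody and check whether the model's response is unsafe; the enumeration itself needs no gradients or model internals, which is what makes the threat model purely black-box.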