AuditoryBench++: Can Language Models Understand Auditory Knowledge without Hearing?

Even without directly hearing sounds, humans can effortlessly reason about auditory properties, such as pitch, loudness, or sound-source associations, drawing on auditory commonsense. In contrast, language models often lack this capability, limiting their effectiveness in multimodal interactions. As an initial step to address this gap, we present AuditoryBench++, a comprehensive benchmark for evaluating auditory knowledge and reasoning in text-only settings. The benchmark encompasses tasks that range from basic auditory comparisons to contextually grounded reasoning, enabling fine-grained analysis of how models process and integrate auditory concepts. In addition, we introduce AIR-CoT, a novel auditory imagination reasoning method that generates and integrates auditory information during inference through span detection with special tokens and knowledge injection. Extensive experiments with recent LLMs and Multimodal LLMs demonstrate that AIR-CoT generally outperforms both the off-the-shelf models and those augmented with auditory knowledge. The project page is available at https://auditorybenchpp.github.io.


💡 Research Summary

AuditoryBench++ introduces a comprehensive, text‑only benchmark for evaluating the auditory commonsense and reasoning abilities of large language models (LLMs). Building on the earlier AuditoryBench, the authors expand the evaluation to five distinct tasks: pitch comparison, duration comparison, loudness comparison, animal‑sound recognition, and auditory context reasoning. Each task is carefully constructed from existing resources such as the AudioTime dataset and the MMAU benchmark, with multiple stages of statistical filtering, outlier removal, and human verification to ensure high‑quality, unambiguous examples. The benchmark therefore probes whether models can infer acoustic properties and relationships without any actual audio input, a scenario common in purely textual interactions.
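To make the text-only evaluation setting concrete, the sketch below shows what a single benchmark item for one of the comparison tasks might look like. The field names and the specific question are illustrative assumptions, not the released AuditoryBench++ data format.

```python
# Hypothetical example of a text-only auditory comparison item.
# Field names and wording are assumed for illustration; the released
# benchmark may structure its items differently.
example_item = {
    "task": "pitch_comparison",
    "question": "Which sound is typically higher in pitch: "
                "a mosquito buzzing or a cow mooing?",
    "options": ["mosquito buzzing", "cow mooing"],
    "answer": "mosquito buzzing",  # model must answer without hearing any audio
}
```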

To equip LLMs with the capability to “imagine” sounds, the paper proposes AIR‑CoT (Auditory Imagination Reasoning Chain‑of‑Thought). AIR‑CoT operates in two training stages. In Stage 1, the model is fine‑tuned to emit special tokens that mark spans of its reasoning where auditory knowledge is needed (span detection); in Stage 2, auditory knowledge is generated for these marked spans and injected back into the ongoing chain of thought during inference (knowledge injection), so that imagined acoustic information is integrated into the model's reasoning.
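The following minimal sketch illustrates this two-step inference loop. It assumes hypothetical special tokens `<imagine>` and `</imagine>` and a generic `model.generate(text) -> str` interface; the actual token names, knowledge-injection mechanism, and decoding details in the paper may differ.

```python
# Sketch of AIR-CoT-style inference: detect spans marked by special tokens,
# inject auditory knowledge, and resume generation. All names are assumptions.

def lookup_auditory_knowledge(span: str) -> str:
    """Hypothetical knowledge source. In the paper this would be produced by the
    trained model or an auxiliary module; here it returns a placeholder fact."""
    return f"typical acoustic properties of: {span}"

def air_cot_generate(model, prompt: str, max_rounds: int = 4) -> str:
    """Generate a chain of thought, pausing whenever the model emits an
    <imagine>...</imagine> span (Stage 1 behaviour) and injecting auditory
    knowledge before resuming (Stage 2 behaviour). `model` is assumed to
    expose generate(text) -> str."""
    text = prompt
    for _ in range(max_rounds):
        continuation = model.generate(text)          # plain autoregressive decoding
        text += continuation
        if "<imagine>" not in continuation:
            break                                    # no more spans to imagine
        # Extract the most recent span marked for auditory imagination.
        span = continuation.split("<imagine>")[-1].split("</imagine>")[0]
        knowledge = lookup_auditory_knowledge(span)
        # Inject the imagined auditory information into the running context.
        text += f" [auditory knowledge: {knowledge}] "
    return text
```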

