Perceptually Aligning Representations of Music via Noise-Augmented Autoencoders
Reading time: 1 minute
📝 Original Info
- Title: Perceptually Aligning Representations of Music via Noise-Augmented Autoencoders
- ArXiv ID: 2511.05350
- Date: 2025-11-07
- Authors: Not provided (the source does not list the paper's authors.)
📝 Abstract
We argue that training autoencoders to reconstruct inputs from noised versions of their encodings, when combined with perceptual losses, yields encodings that are structured according to a perceptual hierarchy. We demonstrate the emergence of this hierarchical structure by showing that, after training an audio autoencoder in this manner, perceptually salient information is captured in coarser representation structures than with conventional training. Furthermore, we show that such perceptual hierarchies improve latent diffusion decoding in the context of estimating surprisal in music pitches and predicting EEG-brain responses to music listening. Pretrained weights are available on github.com/CPJKU/pa-audioic.
💡 Deep Analysis
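The core training idea in the abstract, reconstructing the input from a *noised* version of its encoding, can be illustrated with a toy sketch. This is not the paper's implementation: the paper trains a deep audio autoencoder with perceptual losses, while the sketch below uses a linear encoder/decoder, plain MSE, and hand-derived gradients purely to show where the noise enters the objective. All names and hyperparameters here are illustrative assumptions.

```python
import numpy as np

# Toy noise-augmented autoencoder (illustrative sketch, not the paper's model).
# Objective: minimize ||decode(encode(x) + noise) - x||^2, so the decoder must
# recover x from a corrupted code, pushing salient information into the code's
# coarse (noise-robust) structure.

rng = np.random.default_rng(0)
d_in, d_code = 8, 3   # input / latent sizes (assumed, for illustration)
sigma = 0.1           # std of the Gaussian noise added to the encoding
lr = 0.01             # gradient-descent step size

W_enc = rng.normal(0.0, 0.3, (d_code, d_in))
W_dec = rng.normal(0.0, 0.3, (d_in, d_code))

def step(x):
    """One gradient step on the noised-encoding reconstruction loss."""
    global W_enc, W_dec
    z = W_enc @ x                                    # encode
    z_noisy = z + sigma * rng.normal(size=z.shape)   # noise the encoding
    x_hat = W_dec @ z_noisy                          # decode the noised code
    err = x_hat - x
    loss = float(err @ err)
    # Gradients of ||x_hat - x||^2 for this linear model (computed before updating)
    g_dec = 2.0 * np.outer(err, z_noisy)
    g_enc = 2.0 * np.outer(W_dec.T @ err, x)
    W_dec -= lr * g_dec
    W_enc -= lr * g_enc
    return loss

x = rng.normal(size=d_in)                 # a single toy "signal" to reconstruct
losses = [step(x) for _ in range(500)]
print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

Because the decoder only ever sees corrupted codes, the reconstruction loss cannot be driven to zero; it converges toward a floor set by the noise level, which is the mechanism the paper argues induces the perceptual hierarchy (replacing MSE with a perceptual loss makes that hierarchy perceptual rather than Euclidean).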
📄 Full Content
Reference
This content is AI-processed based on open access ArXiv data.