STACodec: Semantic Token Assignment for Balancing Acoustic Fidelity and Semantic Information in Audio Codecs
Neural audio codecs are widely used for audio compression and can be integrated into token-based language models. Traditional codecs preserve acoustic details well but lack semantic information. Recent hybrid codecs attempt to incorporate semantic information through distillation, but this often degrades reconstruction performance, making it difficult to achieve both. To address this limitation, we introduce STACodec, a unified codec that integrates semantic information from self-supervised learning (SSL) models into the first layer of residual vector quantization (RVQ-1) via semantic token assignment (STA). To further eliminate reliance on SSL-based semantic tokenizers and improve efficiency during inference, we propose a semantic pre-distillation (SPD) module, which predicts semantic tokens directly for assignment to the first RVQ layer during inference. Experimental results show that STACodec outperforms existing hybrid codecs in both audio reconstruction and downstream semantic tasks, demonstrating a better balance between acoustic fidelity and semantic capability.
💡 Research Summary
STACodec tackles the long‑standing trade‑off between acoustic fidelity and semantic richness in neural audio codecs. Traditional codecs excel at reconstructing high‑quality waveforms but produce tokens that lack semantic awareness, limiting their usefulness for token‑based language models. Recent hybrid approaches try to inject semantic information via distillation or auxiliary supervision, yet this often pulls codebook embeddings away from accurate acoustic representations, degrading reconstruction quality.
The proposed solution consists of two complementary mechanisms. First, Semantic Token Assignment (STA) directly substitutes the index of the first residual vector quantizer (RVQ‑1) with a pre‑computed semantic token obtained from a self‑supervised learning (SSL) model (e.g., K‑means clusters of WavLM‑large or HuBERT‑base embeddings). Concretely, for each time step t, the code index c₁,t is forced to equal the semantic token cₛ,t, and the corresponding codebook entry C₁[cₛ,t] serves as the quantized output of RVQ‑1, leaving the remaining RVQ layers to encode the acoustic residual. Second, a semantic pre‑distillation (SPD) module learns to predict the semantic tokens directly, so that at inference time the assignment to RVQ‑1 no longer requires running the SSL tokenizer.
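The STA step can be sketched as follows. This is a minimal illustration, not the authors' implementation: the codebook, latent dimensions, and the `rvq1_with_sta` helper are hypothetical, and the semantic tokens stand in for K‑means cluster IDs from an SSL model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: K codebook entries, D-dim latents, T frames.
K, D, T = 8, 4, 5
codebook = rng.normal(size=(K, D))        # RVQ-1 codebook C1 (assumed)
latents = rng.normal(size=(T, D))         # encoder output z_t (assumed)
semantic_tokens = rng.integers(0, K, T)   # c_s,t from an SSL tokenizer

def rvq1_with_sta(z, sem_tokens, C1):
    """Semantic token assignment for the first RVQ layer.

    Instead of the usual nearest-neighbor codebook search, the index
    c_{1,t} is forced to equal the semantic token c_{s,t}. The residual
    z_t - C1[c_{s,t}] is what the later RVQ layers would then quantize.
    """
    quantized = C1[sem_tokens]   # lookup with the assigned indices
    residual = z - quantized     # acoustic remainder for RVQ-2..N
    return sem_tokens, quantized, residual

idx, q, res = rvq1_with_sta(latents, semantic_tokens, codebook)
```

The key design point is that RVQ‑1 indices become semantic by construction, while reconstruction quality is preserved by letting the subsequent residual layers absorb whatever acoustic detail the forced assignment leaves unexplained.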