Q2D2: A Geometry-Aware Audio Codec Leveraging Two-Dimensional Quantization
Reading time: 5 minutes
...
📝 Original Info
Title: Q2D2: A Geometry-Aware Audio Codec Leveraging Two-Dimensional Quantization
ArXiv ID: 2512.01537
Date: 2025-12-01
Authors: **Eliya Nachmani, Tal Shuster**
📝 Abstract
Recent neural audio codecs have achieved impressive reconstruction quality, typically relying on quantization methods such as Residual Vector Quantization (RVQ), Vector Quantization (VQ), and Finite Scalar Quantization (FSQ). However, these quantization techniques constrain the geometric structure of the latent space and make it harder to capture correlations between features, leading to inefficiency in representation learning, codebook utilization, and token rate. In this paper we introduce Two-Dimensional Quantization (Q2D2), a quantization scheme in which feature pairs are projected onto structured 2D grids, such as hexagonal, rhombic, or rectangular tilings, and quantized to the nearest grid values, yielding an implicit codebook defined by the product of grid levels, with codebook sizes comparable to conventional methods. Despite its simple geometric formulation, Q2D2 improves audio compression efficiency, achieving low token rates and high codebook utilization while maintaining state-of-the-art reconstruction quality. Specifically, Q2D2 achieves competitive to superior performance on various objective and subjective reconstruction metrics in extensive experiments in the speech domain, compared to state-of-the-art models. Comprehensive ablation studies further confirm the effectiveness of our design choices.
📄 Full Content
Q2D2: A GEOMETRY-AWARE AUDIO CODEC LEVERAGING TWO-DIMENSIONAL QUANTIZATION
Eliya Nachmani, Tal Shuster
Department of Electronics and Computing Engineering
Ben-Gurion University, Israel
Audio samples: https://tashq.github.io/Q2D2/
ABSTRACT
Recent neural audio codecs have achieved impressive reconstruction quality, typically relying on quantization methods such as Residual Vector Quantization (RVQ), Vector Quantization (VQ), and Finite Scalar Quantization (FSQ). However, these quantization techniques constrain the geometric structure of the latent space and make it harder to capture correlations between features, leading to inefficiency in representation learning, codebook utilization, and token rate. In this paper we introduce Two-Dimensional Quantization (Q2D2), a quantization scheme in which feature pairs are projected onto structured 2D grids, such as hexagonal, rhombic, or rectangular tilings, and quantized to the nearest grid values, yielding an implicit codebook defined by the product of grid levels, with codebook sizes comparable to conventional methods. Despite its simple geometric formulation, Q2D2 improves audio compression efficiency, achieving low token rates and high codebook utilization while maintaining state-of-the-art reconstruction quality. Specifically, Q2D2 achieves competitive to superior performance on various objective and subjective reconstruction metrics in extensive experiments in the speech domain, compared to state-of-the-art models. Comprehensive ablation studies further confirm the effectiveness of our design choices.
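To make the scheme concrete before the formal definition, the following is a minimal sketch of pairwise grid quantization as the abstract describes it, using the separable rectangular tiling for simplicity. The function name `q2d2_rect_quantize`, the tanh bounding, and the straight-through gradient are illustrative assumptions in the style of FSQ-type quantizers, not the paper's exact recipe.

```python
import torch

def q2d2_rect_quantize(z: torch.Tensor, levels: int = 7) -> torch.Tensor:
    """Snap latent features to a uniform rectangular 2D grid, pair by pair.

    Hypothetical sketch: the grid range, tanh bounding, and straight-through
    gradient are assumptions borrowed from FSQ-style quantizers.
    """
    _, dim, _ = z.shape  # (batch, features, frames)
    assert dim % 2 == 0, "features are consumed as (x, y) pairs"
    # Bound every coordinate to (-1, 1) so the finite grid covers the latents.
    z = torch.tanh(z)
    # Round each coordinate to one of `levels` uniform values in [-1, 1]
    # (odd `levels` keeps the grid symmetric around zero). A rectangular
    # tiling is separable, so per-coordinate rounding is exactly
    # nearest-grid-value quantization of each (x, y) pair.
    half = (levels - 1) / 2
    z_q = torch.round(z * half) / half
    # Straight-through estimator: forward pass uses z_q, gradients see z.
    return z + (z_q - z).detach()

# Each pair indexes one of levels * levels implicit codewords, so the
# codebook size is a product of grid levels (49 per pair at 7 levels).
```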
1 INTRODUCTION
In recent years, Large Language Models (LLMs) (Brown et al., 2020) have demonstrated remarkable
progress in audio generation tasks, ranging from multi-speaker speech synthesis (Wang et al., 2023;
Kharitonov et al., 2023; Jiang et al., 2023; Ji et al., 2024a) to music generation (Agostinelli et al.,
2023) and general-purpose audio synthesis (Kreuk et al., 2022). At the same time, growing attention
has been devoted to incorporating speech as a modality within large multimodal systems, as seen
in models such as SpeechGPT (Zhang et al., 2023a), AnyGPT (Zhan et al., 2024), GPT-4o, GPT-5,
and Moshi (Défossez et al., 2024). A key enabler of these advances has been the use of discrete
acoustic representations produced by neural codecs (Zeghidour et al., 2021; Défossez et al., 2022;
Kumar et al., 2023; Ji et al., 2024b). By converting high-rate speech signals into compact sequences
of discrete tokens, acoustic codec models provide the crucial link between continuous audio and
token-based language models, thereby enabling the direct application of LLM architectures to audio.
Most end-to-end discrete codec models (Défossez et al., 2022; Wu et al., 2023) adopt a three-stage structure consisting of an encoder, an RVQ module (Lee et al., 2022), and a decoder. The encoder downsamples the audio signal in the time domain to obtain compressed audio frames. Each compressed audio frame is then quantized by a series of quantizers, with each quantizer operating on the residual of the previous one. The number of quantizers determines the overall bitrate. The decoder, on the other hand, upsamples in the time domain to reconstruct the audio signal from the quantizer outputs. Existing acoustic codec models (Kumar et al., 2023; Défossez et al., 2022; Siuzdak, 2023) demonstrate impressive reconstruction quality, and generative models based on discrete codecs are now capable of synthesizing speech at near-human levels. In response, Ji et al. (2024b) proposed a much simpler design: instead of stacked RVQ, it uses a single VQ layer (Gray, 1984) over features, showing that efficient tokenization can be achieved without deep quantizer hierarchies. Additional models have contributed to the expansion of the codec landscape.
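As a rough illustration of the quantizer stack in this encoder-RVQ-decoder pipeline, the sketch below snaps the running residual to its nearest codeword at each stage, with depth controlling the bitrate. The class name, codebook size, depth, and dimensions are placeholders, and the nearest-neighbor search is written naively; this is not the implementation of any cited codec.

```python
import torch
import torch.nn as nn

class ResidualVQ(nn.Module):
    """Naive sketch of a stacked residual vector quantizer (RVQ)."""

    def __init__(self, num_quantizers=8, codebook_size=1024, dim=128):
        super().__init__()
        self.codebooks = nn.ModuleList(
            nn.Embedding(codebook_size, dim) for _ in range(num_quantizers)
        )

    def forward(self, z):
        # z: (batch, time, dim) compressed audio frames from the encoder.
        residual = z
        quantized = torch.zeros_like(z)
        indices = []
        for codebook in self.codebooks:
            # Nearest codeword for the current residual (squared L2 search).
            dists = (residual.unsqueeze(-2) - codebook.weight).pow(2).sum(-1)
            idx = dists.argmin(dim=-1)      # (batch, time) token ids
            chosen = codebook(idx)          # (batch, time, dim)
            quantized = quantized + chosen  # running reconstruction
            residual = residual - chosen    # what the next stage sees
            indices.append(idx)
        # A single-VQ codec is simply the num_quantizers == 1 special case.
        return quantized, torch.stack(indices, dim=-1)
```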
[Figure 1: three quantization-grid panels, (a) hexagon grid, (b) rectangle grid, (c) rhombic grid, each plotted over x and y axes with quantized output ẑ]
Figure 1: Visualization of quantization grids used in Q2D2. Hexagonal Grid (a): a hexagonal tiling with 9 quantization levels on the x and y axes. Rectangle Grid (b): a rectangular tiling with 7 quantization levels on the x and y axes. Rhombic Grid (c): a rhombic tiling with 7 quantization levels on the x axis, and 6 levels yielding 11 quantization levels on the y axis.
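Unlike the rectangular grid, the hexagonal tiling in panel (a) is not axis-separable, so snapping a feature pair to its nearest grid value requires a small lattice search. A minimal sketch, assuming the standard hexagonal (A2) lattice basis and brute-forcing a 3x3 candidate neighborhood; a finite grid as in the figure would additionally clamp the integer coordinates, and this is not necessarily the paper's implementation.

```python
import numpy as np

# Basis of the hexagonal (A2) lattice: columns are the generator vectors.
HEX_BASIS = np.array([[1.0, 0.5],
                      [0.0, np.sqrt(3) / 2]])
HEX_INV = np.linalg.inv(HEX_BASIS)

def nearest_hex_point(p: np.ndarray) -> np.ndarray:
    """Snap a 2D point to the nearest hexagonal-lattice site.

    Rounds in lattice coordinates, then compares a 3x3 neighborhood of
    candidate sites in Euclidean distance, which is sufficient for this
    basis.
    """
    u = np.rint(HEX_INV @ p)  # rounded lattice coordinates
    best, best_d = None, np.inf
    for du in (-1, 0, 1):
        for dv in (-1, 0, 1):
            cand = HEX_BASIS @ (u + np.array([du, dv]))
            d = np.sum((p - cand) ** 2)
            if d < best_d:
                best, best_d = cand, d
    return best

print(nearest_hex_point(np.array([0.6, 0.7])))  # -> [0.5, 0.8660...]
```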
Some models (Pan et al., 2024; Yang et al., 2023; Zhang et al., 2023b) enhanced robustness, controllability, and synthesis quality through architectural and training innovations, while other models (Li et al., 2024; Liu et al., 2024; Xin et al., 2024) aimed for universality and scalability, either by unifying audio and speech tasks under a single tokenizer or by increasing codec capacity. Complementary efforts refined training strategies, with stronger discriminators advancing adversarial learning (Ahn et al., 2024a;b).
Despite these successes, existing quantization schemes based on the Vector Quantized-Variational AutoEncoder (VQ-VAE) and RVQ are challenging to optimize, and lead