Lattices for Distributed Source Coding: Jointly Gaussian Sources and Reconstruction of a Linear Function

Consider a pair of correlated Gaussian sources (X1,X2). Two separate encoders observe the two components and communicate compressed versions of their observations to a common decoder. The decoder is interested in reconstructing a linear combination of X1 and X2 to within a mean-square distortion of D. We obtain an inner bound to the optimal rate-distortion region for this problem. A portion of this inner bound is achieved by a scheme that reconstructs the linear function directly rather than reconstructing the individual components X1 and X2 first. This results in a better rate region for certain parameter values. Our coding scheme relies on lattice coding techniques in contrast to more prevalent random coding arguments used to demonstrate achievable rate regions in information theory. We then consider the case of linear reconstruction of K sources and provide an inner bound to the optimal rate-distortion region. Some parts of the inner bound are achieved using the following coding structure: lattice vector quantization followed by “correlated” lattice-structured binning.


💡 Research Summary

The paper addresses a distributed source‑coding problem in which two correlated Gaussian sources \(X_{1}\) and \(X_{2}\) are observed separately, each encoder compresses its observation, and a common decoder wishes to reconstruct a prescribed linear combination \(L = aX_{1}+bX_{2}\) to within a mean‑square distortion \(D\). Classical approaches (Slepian‑Wolf, Wyner‑Ziv) typically reconstruct each source individually and then form the linear function, which is sub‑optimal when the decoder’s interest lies solely in the function. The authors propose a fundamentally different “function‑direct‑reconstruction” strategy based on lattice coding.

Key ingredients of the scheme

  1. Lattice quantization – Each encoder quantizes its observation using a high‑density lattice \(\Lambda_i\). The quantization error is governed by the lattice’s normalized second moment \(G(\Lambda_i)\).
  2. Correlated lattice binning – The quantized values are further binned using a shared “correlation lattice” \(\Lambda_c\). This binning operation is essentially a modulo‑\(\Lambda_c\) reduction that exploits the statistical dependence between \(X_{1}\) and \(X_{2}\).
  3. Decoder operation – From the received bin indices, the decoder reconstructs the lattice point that corresponds to the quantized value of the linear combination \(L\). No explicit reconstruction of \(X_{1}\) or \(X_{2}\) is performed.
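The three steps above can be sketched in one dimension, with scaled integer lattices \(s\mathbb{Z}\) standing in for the high‑dimensional nested lattices of the paper (all parameter values below are illustrative, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

step = 0.05    # fine quantization lattice: step * Z (illustrative)
coarse = 4.0   # coarse binning lattice Lambda_c: coarse * Z (illustrative)

def quantize(x, s):
    """Map x to the nearest point of the scaled integer lattice s*Z."""
    return s * np.round(x / s)

def mod_lattice(x, s):
    """Reduce x modulo s*Z into the fundamental cell [-s/2, s/2]."""
    return x - s * np.round(x / s)

# Correlated Gaussian pair; the decoder only wants the difference L = X1 - X2.
n = 10_000
x1 = rng.normal(0.0, 1.0, n)
x2 = x1 + rng.normal(0.0, 0.1, n)   # strong correlation -> small difference

# Encoders: lattice quantization followed by modulo-Lambda_c binning.
y1 = mod_lattice(quantize(x1, step), coarse)
y2 = mod_lattice(quantize(x2, step), coarse)

# Decoder: combine the bin values and reduce mod Lambda_c.  This recovers
# Q(x1) - Q(x2) exactly whenever the true difference lies inside the coarse
# cell -- the individual sources x1 and x2 are never reconstructed.
l_hat = mod_lattice(y1 - y2, coarse)
mse = np.mean((l_hat - (x1 - x2)) ** 2)
```

Because each quantization error lies in \([-s/2, s/2]\), the decoded difference stays within one fine step of the true one, so `mse` remains on the order of \(s^{2}/6\).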

Theoretical analysis
Assuming \(\mathbf{X} = (X_{1},X_{2})^{\top}\) follows a zero‑mean Gaussian distribution with covariance matrix \(\mathbf{K}\), the authors derive inner bounds on the achievable rate‑distortion region. The distortion contributed by each lattice quantizer equals its second moment per dimension, \(D_i = \sigma^{2}(\Lambda_i) = G(\Lambda_i)\,V(\Lambda_i)^{2/n}\), where \(G(\Lambda_i)\) is the normalized second moment and \(V(\Lambda_i)\) the volume of the fundamental cell. The individual rates satisfy
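As a concrete check of the second‑moment quantities above: for the integer lattice \(\mathbb{Z}\) (cell volume \(V = 1\)), the normalized second moment is \(G(\mathbb{Z}) = 1/12\), which a quick Monte Carlo estimate confirms (the simulation itself is illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# A point uniform over the fundamental cell [-1/2, 1/2) of the integer
# lattice Z has mean-square quantization error sigma^2 = G(Z) * V^2 with
# V = 1, so the estimate below should be close to 1/12 ~= 0.0833.
u = rng.uniform(-0.5, 0.5, 1_000_000)
sigma2_hat = np.mean(u ** 2)
```

Better lattices in higher dimensions have \(G(\Lambda)\) below \(1/12\), which is what makes high‑density lattice quantizers attractive in this scheme.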