Single-shot lossy compression: mutual information bounds
For several styles of fidelity constraints (guaranteed distortion, conditional excess distortion, and excess distortion), we establish mutual-information upper bounds on the minimum expected description length needed to represent a random variable. Coupled with the corresponding converses, these results show that, as long as the information content of the data is not too low, minimizing mutual information under an appropriate fidelity constraint serves as a reasonable proxy for the minimum description length of the data. We provide alternative characterizations of all three convex proxies, shedding light on the structure of their solutions.
💡 Research Summary
The paper investigates single‑shot (one‑sample) lossy compression from an information‑theoretic perspective, focusing on three increasingly permissive fidelity constraints: (i) guaranteed distortion, (ii) conditional excess distortion, and (iii) excess distortion. For each constraint the authors define a minimum‑expected‑description‑length quantity (the entropy of the quantizer output) and a convex proxy obtained by relaxing deterministic quantizers to stochastic kernels and replacing output entropy with mutual information I(X;Y).
For guaranteed distortion (ε = 0), they define H_X(d,0) as the infimum of H(f(X)) over all deterministic mappings f satisfying d(X,f(X)) ≤ d almost surely, and R_X(d,0) as the infimum of I(X;Y) over all conditional distributions P_{Y|X} satisfying the same hard constraint. They prove a tight sandwich bound:
R_X(d,0) ≤ H_X(d,0) ≤ R_X(d,0)+log₂(R_X(d,0)+1)+log₂ e.
This improves earlier results by removing an unspecified universal constant and by not requiring the distortion measure to be a metric. Moreover, they show that R_X(d,0) equals a “plus-version” R⁺_X(d,0) = inf_{P_Y} E[log₂(1/P_Y(B_d(X)))], an optimization over output distributions P_Y rather than over kernels, where B_d(x) = {y : d(x,y) ≤ d} denotes the distortion ball around x.
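This output-distribution form follows from the variational identity I(X;Y) = inf_{P_Y} E[D(P_{Y|X}(·|X) ‖ P_Y)], with the infimum attained at the true output marginal. A derivation sketch, reconstructed here from standard arguments rather than quoted from the paper:

```latex
% All kernel infima range over P_{Y|X} with d(X,Y) <= d almost surely.
\begin{align*}
\inf_{P_{Y|X}} I(X;Y)
  &= \inf_{P_{Y|X}} \inf_{P_Y} \mathbb{E}\!\left[ D\!\left(P_{Y|X}(\cdot \mid X) \,\middle\|\, P_Y\right) \right] \\
  &= \inf_{P_Y} \inf_{P_{Y|X}} \mathbb{E}\!\left[ D\!\left(P_{Y|X}(\cdot \mid X) \,\middle\|\, P_Y\right) \right]
   = \inf_{P_Y} \mathbb{E}\!\left[ \log_2 \frac{1}{P_Y(B_d(X))} \right].
\end{align*}
```

The two infima commute because an infimum over a pair of variables may be taken in either order, and the inner minimization uses the fact that the divergence-closest distribution to P_Y among those supported on B_d(x) is P_Y conditioned on B_d(x), achieving divergence log₂(1/P_Y(B_d(x))).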
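To make the two quantities concrete, here is a minimal, self-contained Python sketch (the function names and toy distortion structure are illustrative, not from the paper). It computes H_X(d,0) by exhaustive search over feasible deterministic maps and approximates R_X(d,0) by Blahut–Arimoto-style alternating minimization, with the hard constraint enforced through the support of the kernel:

```python
import numpy as np
from itertools import product

def min_entropy_guaranteed(p_x, allowed):
    """H_X(d,0): exhaustive search over deterministic maps f with
    d(x, f(x)) <= d for every x; allowed[x, y] = 1 iff d(x, y) <= d.
    Exponential in the alphabet size -- toy examples only."""
    feasible_ys = [np.flatnonzero(row) for row in allowed]
    best = np.inf
    for f in product(*feasible_ys):          # one feasible y per source symbol
        q = np.zeros(allowed.shape[1])
        for x, y in enumerate(f):
            q[y] += p_x[x]                   # distribution of f(X)
        nz = q[q > 0]
        best = min(best, float(-np.sum(nz * np.log2(nz))))
    return best

def min_mutual_info_guaranteed(p_x, allowed, n_iter=2000):
    """Approximates R_X(d,0): minimize I(X;Y) over kernels P_{Y|X}
    supported on the allowed pairs, via alternating minimization.
    Assumes every x has at least one feasible y."""
    m = allowed.shape[1]
    q_y = np.full(m, 1.0 / m)                # initial output marginal
    for _ in range(n_iter):
        w = allowed * q_y                    # restrict q_y to each ball B_d(x) ...
        w /= w.sum(axis=1, keepdims=True)    # ... and renormalize (optimal kernel)
        q_y = p_x @ w                        # induced output marginal
    joint = p_x[:, None] * w                 # final joint distribution P(x, y)
    ref = p_x[:, None] * q_y                 # product of marginals P(x) Q(y)
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log2(joint[nz] / ref[nz])))

# Toy check on a 4-symbol source where each x may be represented by
# itself or a neighbor (a stand-in for the constraint d(x, y) <= d).
p_x = np.array([0.4, 0.3, 0.2, 0.1])
allowed = np.array([[1, 1, 0, 0],
                    [1, 1, 1, 0],
                    [0, 1, 1, 1],
                    [0, 0, 1, 1]], dtype=float)
r = min_mutual_info_guaranteed(p_x, allowed)
h = min_entropy_guaranteed(p_x, allowed)
ub = r + np.log2(r + 1) + np.log2(np.e)      # sandwich upper bound from the paper
assert r <= h + 1e-9 and h <= ub + 1e-9
print(f"R_X(d,0) ≈ {r:.3f} <= H_X(d,0) = {h:.3f} <= {ub:.3f} bits")
```

On this toy source the stochastic relaxation is strictly better than any deterministic quantizer (roughly 0.36 bits versus 0.47 bits), illustrating both inequalities of the sandwich.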