Bounding Lossy Compression using Lossless Codes at Reduced Precision
An alternative approach to two-part ‘critical compression’ is presented. Whereas previous results were based on summing a lossless code at reduced precision with a lossy-compressed error or noise term, the present approach uses a similar lossless code at reduced precision to establish absolute bounds which constrain an arbitrary lossy data compression algorithm applied to the original data.
💡 Research Summary
The paper introduces a novel framework for guaranteeing the quality of lossy‑compressed data while preserving high compression ratios. Traditional “critical compression” schemes store a reduced‑precision lossless representation of the most significant bits and then encode the remaining information with a lossy coder. Although this two‑part approach reduces overall bitrate, it lacks an explicit model of how the lossy component interacts with the lossless part, making it difficult to predict or enforce reconstruction fidelity.
The authors propose to replace the additive error‑term model with an absolute‑bounds strategy. Let the original signal X be represented with L bits (e.g., 8‑bit image samples, 16‑bit audio). Choose a precision parameter n (1 ≤ n ≤ L) that determines how many most‑significant bits will be stored losslessly. The lossless code encodes Y = ⌊X / 2^{L‑n}⌋·2^{L‑n}, i.e., the value obtained by zeroing the lower (L‑n) bits of X. Y can be recovered exactly from the lossless stream. Simultaneously, the full‑resolution X is processed by any arbitrary lossy compressor (JPEG, HEVC, BPG, etc.) producing a decoded approximation X̂.
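The reduced‑precision mapping above amounts to a simple bit mask. A minimal sketch (the function name and example values are illustrative, not from the paper), assuming unsigned integer samples:

```python
import numpy as np

def reduced_precision(x: np.ndarray, L: int, n: int) -> np.ndarray:
    """Return Y = floor(x / 2**(L-n)) * 2**(L-n):
    x with its lower (L - n) bits zeroed, keeping the top n bits."""
    shift = L - n
    return (x >> shift) << shift

# Example: 8-bit samples (L = 8), keeping the top n = 3 bits losslessly.
x = np.array([0, 37, 200, 255], dtype=np.uint8)
y = reduced_precision(x, L=8, n=3)
# y == [0, 32, 192, 224]: each sample rounded down to a multiple of 2**5 = 32.
```

Because Y is stored losslessly, the decoder always knows that the true sample lies in the half‑open interval [Y, Y + 2^{L−n}), which is what allows the bounds check on X̂ described next.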
During reconstruction the algorithm checks whether X̂ lies inside the interval