On Detection With Partial Information In The Gaussian Setup

We introduce the problem of communication with partial information, where there is an asymmetry between the transmitter and receiver codebooks. Practical applications of the proposed setup include the robust signal hashing problem in multimedia security and asymmetric communications with resource-constrained receivers. We study this setup in a binary detection theoretic context for the additive colored Gaussian noise channel. In our proposed setup, the partial information available at the detector consists of dimensionality-reduced versions of the transmitter codewords, where the dimensionality reduction is achieved via a linear transform. We first derive the MAP-optimal detection rule and the resulting conditional probability of error (conditioned on the partial information the detector possesses). Then, we constructively characterize an optimal class of linear transforms, where the cost function is the expected Chernoff bound on the conditional probability of error of the MAP-optimal detector.


💡 Research Summary

The paper introduces a novel communication scenario termed “communication with partial information,” in which the transmitter and receiver do not share identical codebooks. This asymmetry reflects practical constraints such as limited memory, computational resources, or security requirements on the receiver side. The authors study the problem in the context of binary hypothesis testing over an additive colored Gaussian noise channel. The transmitter sends one of two possible codewords, $s_0$ or $s_1$, corresponding to hypotheses $H_0$ and $H_1$. The receiver, however, only possesses dimensionality-reduced versions of these codewords, $x_i = A s_i$ for $i = 0, 1$, where $A$ is an $m \times n$ linear transform with $m < n$. The channel adds zero-mean colored Gaussian noise $w \sim \mathcal{N}(0, \Sigma_w)$ (written $w$ here to avoid a clash with the dimension $n$), yielding the observation $y = s_i + w$.
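As a concrete illustration, the signal model above can be simulated in a few lines of NumPy. The dimensions (n = 16, m = 4), the random choice of codewords and transform, and the noise covariance construction are all illustrative assumptions, not values from the paper; the noise is written `w` to avoid clashing with the dimension n:

```python
import numpy as np

rng = np.random.default_rng(0)

n, m = 16, 4                       # signal dimension and reduced dimension (m < n); arbitrary choices
s0 = rng.standard_normal(n)        # hypothetical codeword for H0
s1 = rng.standard_normal(n)        # hypothetical codeword for H1
A = rng.standard_normal((m, n))    # linear dimensionality-reducing transform

# Partial information available at the detector
x0, x1 = A @ s0, A @ s1

# Colored Gaussian noise: covariance built as B B^T + I to guarantee positive definiteness
B = rng.standard_normal((n, n))
Sigma_w = B @ B.T + np.eye(n)
w = rng.multivariate_normal(np.zeros(n), Sigma_w)

# Channel output, here under hypothesis H1
y = s1 + w
print(x0.shape, x1.shape, y.shape)   # (4,) (4,) (16,)
```

Note that the detector sees only `x0`, `x1`, and `y`; the full codewords `s0`, `s1` remain at the transmitter side.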

First, the authors derive the MAP (maximum a-posteriori) detector that uses the partial information $\{x_0, x_1\}$ together with the observation $y$. By incorporating prior probabilities $\pi_0, \pi_1$ and the known transform $A$, the log-likelihood ratio reduces to a linear discriminant:
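To make the linear-discriminant structure concrete, the sketch below implements a reduced-dimension baseline detector: project the observation through $A$, so that under $H_i$ the reduced observation $z = Ay$ has mean $x_i$ and covariance $A \Sigma_w A^T$, and apply the standard binary Gaussian MAP rule in that space. This is an illustrative baseline consistent with the discriminant form described above, not necessarily the paper's optimal conditional rule; the function name and all dimensions are assumptions:

```python
import numpy as np

def reduced_map_detect(y, A, x0, x1, Sigma_w, pi0=0.5, pi1=0.5):
    """Binary MAP detection after projecting y through A.

    Under H_i the reduced observation z = A y is Gaussian with mean
    x_i = A s_i and covariance C = A Sigma_w A^T, so the log-likelihood
    ratio is linear in z. Decide H1 when the LLR exceeds log(pi0/pi1).
    """
    z = A @ y
    C = A @ Sigma_w @ A.T
    d = np.linalg.solve(C, x1 - x0)          # C^{-1} (x1 - x0)
    llr = d @ (z - 0.5 * (x0 + x1))          # linear discriminant in z
    return 1 if llr > np.log(pi0 / pi1) else 0

# Noiseless sanity check: with y = s1 the detector should declare H1.
rng = np.random.default_rng(1)
n, m = 16, 4
s0, s1 = rng.standard_normal(n), rng.standard_normal(n)
A = rng.standard_normal((m, n))
Sigma_w = np.eye(n)   # white noise just for the sanity check
print(reduced_map_detect(s1, A, A @ s0, A @ s1, Sigma_w))  # 1
```

With equal priors the threshold is $\log(\pi_0/\pi_1) = 0$, so in the noiseless check the LLR evaluates to the positive quadratic form $\tfrac{1}{2}(x_1-x_0)^T C^{-1}(x_1-x_0)$ and the detector correctly declares $H_1$.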