On the Communication Complexity of Secure Computation


Information-theoretically secure multi-party computation (MPC) is a central primitive of modern cryptography. However, relatively little is known about the communication complexity of this primitive. In this work, we develop powerful information-theoretic tools to prove lower bounds on the communication complexity of MPC. We restrict ourselves to a 3-party setting in order to bring out the power of these tools without introducing too many complications. Our techniques include the use of a data processing inequality for residual information (i.e., the gap between mutual information and Gács-Körner common information), a new information inequality for 3-party protocols, and the idea of distribution switching, by which lower bounds computed under certain worst-case scenarios can be shown to apply in general. Using these techniques we obtain tight bounds on the communication complexity of MPC protocols for various interesting functions. In particular, we show concrete functions that have “communication-ideal” protocols, which achieve the minimum communication simultaneously on all links in the network. Also, we obtain the first explicit example of a function that incurs a higher communication cost than the input length in the secure computation model of Feige, Kilian and Naor (1994), who had shown that such functions exist. We also show that our communication bounds imply tight lower bounds on the amount of randomness required by MPC protocols for many interesting functions.


💡 Research Summary

This paper tackles the largely open problem of establishing lower bounds on the communication complexity of information‑theoretically secure multi‑party computation (MPC). Focusing on a three‑party setting—two parties (Alice and Bob) hold inputs X and Y, while a third party (Charlie) must obtain an output Z that may be randomized—the authors develop a suite of information‑theoretic tools that yield generic, function‑specific lower bounds on the expected number of bits exchanged over each private link.

The first tool is a data‑processing inequality for residual information, defined as the gap between mutual information I(X;Y) and Gács‑Körner common information K_GK(X;Y). By showing that residual information cannot increase under processing, the authors relate the entropy of the communication transcript on any link to the conditional entropy of the secrets, obtaining a baseline bound (Theorem 1).
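To make the quantity concrete, the following sketch computes residual information for a finite joint distribution. The function names are illustrative (not from the paper); Gács-Körner common information is computed here via its standard characterization as the entropy of the connected components of the bipartite graph that links x and y whenever p(x, y) > 0.

```python
from collections import defaultdict
from math import log2

def mutual_information(p):
    """I(X;Y) in bits, for a joint pmf given as {(x, y): prob}."""
    px, py = defaultdict(float), defaultdict(float)
    for (x, y), pr in p.items():
        px[x] += pr
        py[y] += pr
    return sum(pr * log2(pr / (px[x] * py[y]))
               for (x, y), pr in p.items() if pr > 0)

def gk_common_information(p):
    """Gacs-Korner common information K_GK(X;Y): the entropy of the
    connected components of the bipartite support graph of p."""
    parent = {}                      # union-find over symbols
    def find(a):
        parent.setdefault(a, a)
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    for (x, y), pr in p.items():
        if pr > 0:                   # x and y lie in the same component
            parent[find(('x', x))] = find(('y', y))
    comp_mass = defaultdict(float)
    for (x, y), pr in p.items():
        comp_mass[find(('x', x))] += pr
    return -sum(m * log2(m) for m in comp_mass.values() if m > 0)

def residual_information(p):
    """RI(X;Y) = I(X;Y) - K_GK(X;Y), the gap the paper's data
    processing inequality is stated for."""
    return mutual_information(p) - gk_common_information(p)
```

For perfectly correlated bits the common part equals everything that is mutually known, so residual information vanishes; for a "noisy" joint distribution with full support the support graph is connected, K_GK is zero, and residual information equals the mutual information.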

The second tool, called distribution switching, exploits the security requirement that the transcript distribution on certain links must be independent of the inputs. This allows the authors to optimize the lower bound over all full‑support input distributions, even when the protocol is allowed to adapt to the input distribution. The resulting bound (Theorem 2) holds universally for any such distribution.

A third contribution is a novel information inequality tailored to interactive three‑party protocols (Lemma 4). While correlated multiple secret sharing (CMSS) schemes provide a natural way to view the transcripts as shares of the inputs and output, CMSS lower bounds are generally weak because they assume a non‑interactive dealer. By leveraging the interactive nature of MPC, the new inequality yields stronger bounds (Theorem 3) that surpass those obtainable from CMSS alone.
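To ground the secret-sharing view, here is a minimal additive-sharing sketch. It is illustrative only: CMSS as discussed above shares multiple correlated secrets to overlapping sets of parties, but the core property (any incomplete set of shares is uniform and reveals nothing) is the same one that makes transcript-as-share arguments work.

```python
import secrets

def additive_share(secret, n, modulus=256):
    """Split `secret` into n additive shares over Z_modulus.
    Any n-1 shares are jointly uniform, hence independent of the secret."""
    shares = [secrets.randbelow(modulus) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % modulus)
    return shares

def reconstruct(shares, modulus=256):
    """All n shares together determine the secret exactly."""
    return sum(shares) % modulus
```

The weakness the paper identifies is that a CMSS dealer distributes such shares in one shot, whereas an MPC transcript is built up interactively; the new inequality exploits that interactivity to prove bounds a one-shot dealer argument cannot.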

Combining these tools, the paper derives tight communication lower bounds for several concrete functions. For group addition, controlled‑erasure, and remote oblivious transfer, the authors exhibit protocols that achieve the minimum possible communication simultaneously on all three links; they term such protocols communication‑ideal. Conversely, they prove that for the Boolean AND function the share size in an optimal CMSS is strictly smaller than the transcript size required by any secure protocol, thereby separating secret sharing from secure computation.

A particularly striking result is an explicit deterministic function f : {0,1}ⁿ × {0,1}ⁿ → {0,1}^{n−1} for which any secure protocol must incur a total communication of at least 3n − 1 bits to Charlie. This improves upon the earlier existential result of Feige, Kilian, and Naor (1994), which only showed the existence of functions with communication exceeding the input length in a non‑interactive model.

Finally, the authors show that the same communication lower bounds automatically imply lower bounds on the amount of shared randomness required by MPC protocols. For the functions studied (group‑add, controlled‑erasure, remote‑OT, sum), the presented protocols are shown to be randomness‑optimal, using the minimum possible amount of randomness.

In summary, the paper introduces three powerful information‑theoretic techniques—residual‑information data processing, distribution switching, and a new three‑party interactive inequality—to obtain the first generic, tight lower bounds on communication for secure three‑party computation. These results not only identify functions with communication‑ideal protocols but also demonstrate separations between secret sharing and secure computation and provide the first explicit functions whose secure computation necessarily exceeds the input size, thereby advancing our understanding of the fundamental communication costs inherent in information‑theoretic MPC.

