Can Alice and Bob be random: a study on human playing zero knowledge protocols


The research described in this abstract was initiated by discussions between the author and Giovanni Di Crescenzo in Barcelona in early 2004, during the Advanced Course on Contemporary Cryptology at which Di Crescenzo lectured on zero-knowledge protocols (ZKP); see [1]. After that course we started to play with unorthodox ideas for breaking ZKPs, especially one based on graph 3-coloring, which was chosen for investigation because it is considered a “benchmark” ZKP; see [2], [3]. At this point we briefly recall the protocol’s description.


💡 Research Summary

The paper originates from a discussion in early 2004 between the author and Professor Giovanni Di Crescenzo during an Advanced Course on Contemporary Cryptology in Barcelona. After Di Crescenzo’s lecture on zero‑knowledge protocols (ZKP), the two decided to explore unconventional attacks on a benchmark ZKP based on graph 3‑coloring. The authors selected this protocol because it is widely cited as a standard example in the literature.

The study investigates the practical security implications when human participants act as prover (Alice) and verifier (Bob) in the graph 3‑coloring ZKP. The protocol works as follows: Alice first colors the vertices of a public graph with three colors, then applies a random permutation to the color‑vertex mapping. Bob selects a random edge and asks Alice to reveal the colors of its two endpoints. If the colors differ, the round is accepted; otherwise, it is rejected. Repeating the round sufficiently many times gives a high probability that an honest prover convinces an honest verifier, while a cheating prover, who has no proper coloring, must leave at least one monochromatic edge and is caught whenever the verifier happens to pick it — so the chance of cheating successfully shrinks with every round.
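A minimal sketch of one such round is shown below. This is an illustration under simplifying assumptions, not the paper's implementation: the commitment step of the real protocol is abstracted away (Alice would commit to all permuted colors and open only the two requested ones), and the function names are ours.

```python
import random

def zk_round(edges, coloring, rng=random):
    """Simulate one round of the graph 3-coloring ZKP (commitments omitted).

    edges: list of (u, v) pairs; coloring: dict vertex -> color in {0, 1, 2}.
    Returns True if the verifier accepts the round.
    """
    # Prover: randomly permute the three colors before revealing anything.
    perm = [0, 1, 2]
    rng.shuffle(perm)
    permuted = {v: perm[c] for v, c in coloring.items()}

    # Verifier: pick a uniformly random edge and ask for its endpoint colors.
    u, v = rng.choice(edges)

    # Accept iff the two revealed colors differ.
    return permuted[u] != permuted[v]

# A triangle is 3-colorable, so an honest prover passes every round.
triangle = [(0, 1), (1, 2), (2, 0)]
proper = {0: 0, 1: 1, 2: 2}
assert all(zk_round(triangle, proper) for _ in range(100))
```

With an improper coloring (two adjacent vertices sharing a color), each round fails with probability 1/3 on the triangle, which is what drives the soundness of the repeated protocol.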

To assess human performance, the authors designed two experimental conditions. In the “honest” condition, participants were instructed to generate truly random permutations and colorings. In the “adversarial” condition, participants were told to act as attackers, using only the limited information available to a cheating prover (the edge chosen by the verifier) to try to fool the verifier. All interactions were mediated by a custom software interface that logged the exact permutation, the distribution of colors among vertices, response times, and the outcome of each round.

Statistical analysis of the collected data revealed systematic deviations from the ideal random behavior assumed in the theoretical protocol. Permutation tests showed a significant bias toward certain vertex indices, especially in early rounds, suggesting a “recency” or “pattern‑recognition” effect in human decision making. Color assignments were also uneven; participants tended to over‑use one or two colors, leading to a non‑uniform color distribution across the graph. These biases directly affected the verifier’s success probability because the verifier’s random edge selection interacts with the prover’s color bias.
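A standard way to quantify this kind of deviation is a Pearson chi-square test against the uniform null hypothesis. The sketch below is illustrative (the sample data and function name are ours, not taken from the paper's dataset):

```python
from collections import Counter

def chi_square_stat(samples, categories):
    """Pearson chi-square statistic against a uniform null distribution."""
    counts = Counter(samples)
    expected = len(samples) / len(categories)
    return sum((counts.get(c, 0) - expected) ** 2 / expected
               for c in categories)

# Hypothetical data: a participant over-uses color 0, mimicking the
# non-uniform color distribution reported in the study.
human_colors = [0] * 50 + [1] * 30 + [2] * 20
stat = chi_square_stat(human_colors, [0, 1, 2])
# With 2 degrees of freedom the 5% critical value is about 5.99,
# so a statistic of 14.0 rejects uniformity decisively.
```

The same statistic applies to the verifier's edge choices by replacing the color categories with the graph's edges.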

In the adversarial experiments, participants exploited the observed human biases to increase their cheating success rate. By anticipating which edges the verifier was more likely to pick (based on the verifier’s own biased edge selection), attackers could deliberately assign the same color to the endpoints of those edges, thereby passing the verification step with a probability that exceeded the theoretical bound by more than 30 %. This demonstrates that the security guarantees of the protocol—both completeness and zero‑knowledge—are weakened when the randomness is supplied by humans rather than a cryptographically secure source.
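The effect of verifier bias on soundness can be made concrete with a simplified model (our assumption, not the paper's exact analysis): the cheating prover must leave exactly one monochromatic edge per round and places it on the edge the verifier is least likely to pick.

```python
def cheat_success_per_round(edge_probs):
    """Best per-round cheating success under a known verifier distribution.

    edge_probs: the verifier's (possibly biased) edge-selection probabilities.
    The cheater makes the least likely edge the monochromatic one, so the
    round fails only when that edge is chosen.
    """
    return 1 - min(edge_probs)

uniform = [0.25, 0.25, 0.25, 0.25]      # ideal verifier on a 4-edge graph
biased = [0.40, 0.30, 0.28, 0.02]       # hypothetical human bias
```

Over 20 rounds the gap compounds: a uniform verifier limits the cheater to about 0.75**20 ≈ 0.003, while against the biased verifier above the cheater survives with probability about 0.98**20 ≈ 0.67 — which is why even modest human bias can erase the protocol's soundness margin.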

The paper also examines the zero‑knowledge property itself. The verifier’s role requires truly random edge selection; however, human verifiers displayed measurable non‑uniformity, making it possible for a clever prover to infer the verifier’s selection pattern and adapt the coloring accordingly. Consequently, the verifier may inadvertently learn information about the prover’s secret coloring, violating the strict zero‑knowledge definition.

Based on these findings, the authors propose several practical mitigations. First, any deployment of ZKP that involves human interaction should incorporate automated random generators for permutations and edge selection to eliminate human bias. Second, security parameters such as the number of rounds should be recalibrated to account for the increased cheating probability observed in human‑driven scenarios. Third, a database of human behavior patterns could be built to model and predict biases, allowing protocol designers to introduce corrective mechanisms (e.g., bias‑aware sampling or post‑processing) that restore the intended security properties.
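The first mitigation — replacing human choices with automated random generation — is straightforward with an OS-backed entropy source. A sketch (function names are ours):

```python
import secrets

def random_edge(edges):
    """Unbiased edge choice for the verifier, drawn from OS entropy."""
    return edges[secrets.randbelow(len(edges))]

def random_color_permutation():
    """Fisher-Yates shuffle of the three colors using secrets.randbelow,
    giving the prover a uniformly random color permutation."""
    colors = [0, 1, 2]
    for i in range(len(colors) - 1, 0, -1):
        j = secrets.randbelow(i + 1)
        colors[i], colors[j] = colors[j], colors[i]
    return colors
```

Using `secrets` rather than `random` matters here: the default `random` module is not cryptographically secure, whereas `secrets.randbelow` produces unbiased draws suitable for adversarial settings.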

The authors conclude that while human participation in ZKP can be valuable for educational demonstrations and for exploring the human factor in cryptographic protocols, it is unsuitable for real‑world security applications without additional safeguards. Human‑generated randomness is insufficiently uniform, creating exploitable attack surfaces that undermine both completeness and zero‑knowledge guarantees. Future work is suggested to extend the analysis to other benchmark ZKPs, to develop more sophisticated human‑machine hybrid protocols, and to formalize a framework for quantifying and compensating for human randomness deficiencies.

