Generic case complexity and One-Way functions
The goal of this paper is to introduce the ideas and methodology of generic case complexity to the cryptography community. This relatively new approach allows one to analyze the behavior of an algorithm on “most” inputs in a simple and intuitive fashion that offers some practical advantages over classical methods based on averaging. We present an alternative definition of a one-way function using the concepts of generic case complexity and show its equivalence to the standard definition. In addition, we demonstrate the convenience of the new approach by giving a short proof that extending adversaries to a larger class of partial algorithms with errors does not change the strength of the security assumption.
💡 Research Summary
The paper introduces the framework of generic case complexity to the cryptographic community, arguing that it offers a more natural and intuitive way to reason about algorithmic behavior on “most” inputs than traditional average‑case analysis. The authors begin by formalizing generic complexity: inputs are taken from the binary string set I = {0,1}*, stratified into spheres Iₙ of fixed length n, and equipped with the uniform spherical distribution uₙ. A set R⊆I is called generic if its asymptotic density ρ(R) = 1, and strongly generic if the convergence to 1 is super‑polynomially fast. Conversely, negligible sets have density 0, and strongly negligible sets converge to 0 super‑polynomially. This quantitative notion of “most” versus “few” inputs allows one to separate the inputs on which an algorithm succeeds from the pathological ones.
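The density definitions above can be made concrete with a small computation. The sketch below (the example set and function names are illustrative, not from the paper) computes the sphere density ρₙ(R) exactly for small n; the chosen R has density 1 − 2⁻ⁿ, converging to 1 exponentially fast, so it is strongly generic and its complement strongly negligible:

```python
from fractions import Fraction

def sphere_density(in_R, n):
    """rho_n(R): exact fraction of the sphere I_n = {0,1}^n lying in R."""
    count = sum(1 for x in range(2 ** n) if in_R(format(x, f"0{n}b")))
    return Fraction(count, 2 ** n)

# Hypothetical example: R = all strings except the all-zero string.
# rho_n(R) = 1 - 2^(-n) -> 1 exponentially fast, so R is strongly
# generic; its complement (the all-zero strings) is strongly negligible.
in_R = lambda s: "1" in s

for n in (2, 4, 8):
    assert sphere_density(in_R, n) == 1 - Fraction(1, 2 ** n)
```

Exact rationals (`Fraction`) are used so the densities are not blurred by floating-point rounding.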
Using this language, the paper revisits the classic definition of one‑way functions (OWFs) due to Diffie and Hellman, which informally requires that a function be easy to compute but hard to invert for “almost all” outputs. The traditional formulation quantifies hardness by averaging over all inputs, which can mask the existence of small but hard subsets. To address this, the authors propose two new definitions based on generic complexity:
- Generically Strong One‑Way Function (Definition 2.1). A function f is efficiently computable by a deterministic polynomial‑time algorithm A′, and for every probabilistic polynomial‑time (PPT) adversary A and every constant c > 0, the set of inputs x ∈ Iₙ on which A succeeds with probability greater than n⁻ᶜ has uₙ‑measure smaller than any inverse polynomial p(n)⁻¹ for all sufficiently large n. In other words, the adversary’s success set must be strongly negligible.
- Generically Weak One‑Way Function (Definition 2.2). The same efficient‑computability requirement holds, but the hardness condition is relaxed: for every PPT adversary A and every constant c > 0 there exists a polynomial p(n) such that, for all large n, the set of inputs on which A’s success probability is below n⁻ᶜ has uₙ‑measure at least 1/p(n). This captures the intuition that “most” inputs are hard to invert, without demanding a negligible success set.
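In symbols, the two hardness conditions can be sketched as follows (the notation follows this summary; the paper’s exact quantifier placement may differ):

```latex
% Strong (Def. 2.1): the adversary's success set is strongly negligible.
\forall\, \text{PPT } A,\ \forall c > 0:\quad
  u_n\bigl(\{\, x \in I_n : \Pr[A \text{ inverts } f(x)] > n^{-c} \,\}\bigr)
  < p(n)^{-1}
  \quad \text{for every polynomial } p \text{ and all large } n.

% Weak (Def. 2.2): a non-negligible fraction of inputs is hard.
\forall\, \text{PPT } A,\ \forall c > 0\ \exists\, \text{polynomial } p:\quad
  u_n\bigl(\{\, x \in I_n : \Pr[A \text{ inverts } f(x)] < n^{-c} \,\}\bigr)
  \ge 1/p(n)
  \quad \text{for all large } n.
```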
The central technical contribution is Lemma 2.3, which shows that any PPT algorithm that inverts f on a non‑negligible fraction of inputs with probability exceeding n⁻ᶜ can be amplified—by independent repetitions and a Chernoff bound—to an algorithm that inverts f on a uniformly random input with probability exceeding an inverse polynomial. Consequently, the generic‑strong definition is equivalent to the standard average‑case definition of a strong OWF. An analogous equivalence holds for the weak notion (details are relegated to an appendix).
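The amplification idea behind Lemma 2.3 can be illustrated with a toy simulation. In the sketch below (all names and parameters are illustrative, not the paper’s), a weak inverter that succeeds with probability roughly 1/n per call is repeated t = 4n independent times, driving the per-input failure probability down to about (1 − 1/n)^(4n) ≈ e⁻⁴:

```python
import random

def amplified_inverter(weak_invert, f, y, trials):
    """Independent-repetition amplification: call a weak probabilistic
    inverter `trials` times, succeeding if any call returns a valid
    preimage of y (checked by re-applying f)."""
    for _ in range(trials):
        x = weak_invert(y)
        if x is not None and f(x) == y:
            return x
    return None

# Toy setup: a placeholder function and an inverter that finds the
# preimage with probability ~ 1/n per attempt.
random.seed(0)
n = 16
f = lambda x: x
weak = lambda y: y if random.random() < 1.0 / n else None

# After 4n independent trials, almost every run inverts successfully.
runs = 200
hits = sum(
    amplified_inverter(weak, f, f(7), trials=4 * n) is not None
    for _ in range(runs)
)
assert hits / runs > 0.8
```

The actual lemma needs a Chernoff bound because the adversary’s per-input success probability must also be estimated, but the boosting mechanism is exactly this independent repetition.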
A further result demonstrates that extending the adversary model to partial algorithms that may err on a negligible set of inputs does not weaken the security assumption. Since the error set is strongly negligible, the overall inversion probability remains bounded by the same negligible function, preserving the OWF property.
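The reasoning here is essentially a union bound, which can be sketched as follows (Eₙ and Sₙ are our shorthand for the error set and the adversary’s success set, not the paper’s notation):

```latex
% If E_n is strongly negligible and the error-free success set is
% negligible, the total success measure stays negligible:
u_n(S_n) \;\le\; u_n(S_n \setminus E_n) + u_n(E_n)
         \;\le\; \mathrm{negl}(n) + \mathrm{negl}(n),
% so allowing errors on a strongly negligible set does not help the adversary.
```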
The paper also surveys existing results in generic complexity, noting that many problems that are undecidable or NP‑complete in the worst case become generically easy (e.g., the word and conjugacy problems in finitely presented groups are decidable in linear time on a generic set of inputs). Conversely, there exist generically hard problems, such as bounded versions of the halting problem, which are generically NP‑complete. These observations motivate the potential of generic complexity as a source of cryptographic hardness: one could base OWFs on problems that are undecidable in general but easy on a generic set, thereby obtaining functions that are trivially computable yet hard to invert on almost all inputs.
In the concluding discussion, the authors outline future work: constructing concrete generic OWFs from undecidable algebraic problems, exploring reductions between generic and classical OWFs, and investigating the relationship between generic, average, and heuristic complexities. They argue that generic complexity provides a clean separation of input distributions from algorithmic randomness, simplifying security proofs and potentially leading to new cryptographic primitives with provable security based on generic hardness assumptions.