Collision-resistant hash function based on composition of functions
A cryptographic hash function is a deterministic procedure that compresses an arbitrary block of numerical data and returns a fixed-size bit string. Many hash functions exist: MD5, HAVAL, SHA, … It has been reported that these hash functions are no longer secure. Our work focuses on the construction of a new hash function based on composition of functions. The construction uses the NP-completeness of three-dimensional contingency tables and relaxes the constraint that a hash function should also be a compression function.
💡 Research Summary
The paper proposes a new collision‑resistant hash function, denoted H₃, built as a composition H₃ = H₂ ∘ H₁. H₁ is a novel, variable‑length transformation that maps an arbitrary input into a three‑dimensional 0‑1 tensor A, multiplies it element‑wise by a fixed weight tensor W, and then applies a function g₂ that concatenates the binary representations of all row, column, and file sums of the resulting tensor. The authors claim that finding a pre‑image for a given output y (i.e., solving g₂(A·W) = y) is equivalent to solving a three‑dimensional contingency table (3DCT) problem, which is known to be NP‑complete. Consequently, they argue that both pre‑image and collision attacks on H₁ are computationally infeasible. H₂ is any conventional fixed‑length hash function (MD5, SHA‑1, HAVAL, etc.) applied to the output of H₁, thereby producing a final hash of standard size.
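The core step of H₁ can be sketched in a few lines. The sketch below is a plain reading of the summary, not the authors' implementation: the bit-width of the encoding, the ordering of the sums, and the helper names (`g2`, `h1_core`) are all assumptions.

```python
import numpy as np

def g2(t):
    """Concatenate binary representations of all sums of a 3-D tensor
    taken along each of its three axes ("row", "column" and "file"
    sums).  The fixed bit-width and the sum ordering are assumptions;
    the summary does not pin down the exact encoding."""
    sums = []
    for axis in range(3):
        sums.extend(int(v) for v in t.sum(axis=axis).ravel())
    width = max(1, max(v.bit_length() for v in sums))
    return "".join(format(v, f"0{width}b") for v in sums)

def h1_core(a, w):
    """One H1 step as described in the summary: weight the 0-1 tensor A
    element-wise by W, then encode the resulting sums with g2."""
    return g2(a * w)

# Example: a 2x2x2 0-1 tensor with unit weights.
a = np.array([[[1, 0], [0, 1]],
              [[1, 1], [0, 0]]])
digest = h1_core(a, np.ones_like(a))
```

Note that even in this toy form the output length is tied to the tensor's dimensions rather than being fixed, which is the point the review below takes up.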
The paper’s structure proceeds as follows: it first reviews basic concepts of two‑dimensional 0‑1 matrices, introduces Ryser’s interchange operation, and defines a function g₁ that encodes row and column sums. It then extends to three dimensions, defines the 3DCT problem, and introduces g₂, which encodes all three families of sums. The authors describe how to construct H₁ using these tools, and they present a reduction from 3DCT to a specially crafted problem (Problem 5) that involves finding two distinct tensors A and B that produce identical g₂ values under two different weight tensors V and W. They claim this reduction proves that both pre‑image and collision finding for H₁ are NP‑complete. Finally, they suggest that by feeding H₁’s output into a standard hash H₂, the resulting H₃ inherits both the hardness of the underlying NP‑complete problem and the desirable fixed‑length output of conventional hashes.
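Ryser's interchange operation, mentioned above, is easy to state concretely: inside a 0-1 matrix, replacing a 2×2 submatrix of the form [[1,0],[0,1]] with [[0,1],[1,0]] (or vice versa) leaves every row and column sum unchanged. A minimal sketch, with a function name and interface of my own choosing:

```python
import numpy as np

def interchange(m, r1, r2, c1, c2):
    """Apply a Ryser interchange on rows (r1, r2) and columns (c1, c2)
    of a 0-1 matrix: flip [[1, 0], [0, 1]] to [[0, 1], [1, 0]] or back.
    All row and column sums are preserved."""
    out = m.copy()
    sub = out[np.ix_([r1, r2], [c1, c2])]
    if sub.tolist() not in ([[1, 0], [0, 1]], [[0, 1], [1, 0]]):
        raise ValueError("interchange needs a diagonal 2x2 pattern")
    out[np.ix_([r1, r2], [c1, c2])] = 1 - sub  # flip the four entries
    return out

m = np.array([[1, 0, 1],
              [0, 1, 0]])
n = interchange(m, 0, 1, 0, 1)   # same row/column sums, different matrix
```

Ryser's classical result is that any two 0-1 matrices with the same row and column sums are connected by a sequence of such interchanges, which is why a function built only from those sums cannot distinguish them.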
While the idea of leveraging NP‑complete problems for cryptographic hardness is intriguing, the paper suffers from several critical shortcomings.

First, NP‑completeness addresses worst‑case computational difficulty; cryptographic security requires average‑case hardness and resistance to structured attacks, which the authors do not demonstrate. The reduction from 3DCT to the pre‑image problem is informal and lacks a rigorous proof that the distribution of instances generated by typical inputs yields hard instances.

Second, the function g₂ merely concatenates row, column, and file sums. Different tensors can share identical sums, leading to trivial collisions; the existence of such tensors is well‑known, and the paper does not provide a mechanism to prevent them.

Third, H₁'s output length grows with the input size (roughly O(n² log n) bits for an n×n×n tensor, which has 3n² line sums of up to ⌈log₂ n⌉+1 bits each), making the construction impractical for real‑world hashing where constant‑size outputs are essential. No performance evaluation, memory‑consumption analysis, or implementation details are provided.

Fourth, even though H₂ is a standard hash, it cannot compensate for structural weaknesses in H₁; if an attacker can generate two distinct A and B with the same g₂ value, the final hash will collide regardless of the downstream hash.

Finally, the paper is riddled with typographical errors, inconsistent notation, and vague definitions, which undermine its credibility. In summary, the proposed H₃ is an interesting theoretical construct but lacks the rigorous security analysis, efficiency, and practical considerations required for a viable cryptographic hash function.
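The trivial-collision objection can be made concrete. Under the simplifying assumption that W is the all-ones tensor (and using the same informal reading of g₂ as in the summary — concatenated fixed-width binary sums along each axis), two distinct 2×2×2 0-1 tensors with identical marginals along all three axes collide:

```python
import numpy as np

def g2(t):
    """Informal reading of g2: concatenated fixed-width binary sums
    along each axis (the exact encoding is an assumption)."""
    sums = []
    for axis in range(3):
        sums.extend(int(v) for v in t.sum(axis=axis).ravel())
    width = max(1, max(v.bit_length() for v in sums))
    return "".join(format(v, f"0{width}b") for v in sums)

# A classic pair of distinct 2x2x2 tensors with identical axis sums:
# A has a 1 where i + j + k is even, B where it is odd.  Every 2-D
# marginal of either tensor is the all-ones 2x2 matrix.
idx = np.indices((2, 2, 2)).sum(axis=0)   # i + j + k at each cell
A = (idx % 2 == 0).astype(int)
B = (idx % 2 == 1).astype(int)

W = np.ones_like(A)                       # simplifying assumption
collision = (g2(A * W) == g2(B * W)) and not (A == B).all()
```

A nontrivial weight tensor would break this particular pair, but the paper gives no argument that structured pairs of this kind cannot be found for a given W.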