Time-Complexity Characterization of NIST Lightweight Cryptography Finalists


Lightweight cryptography is becoming essential as emerging technologies in digital identity systems and Internet of Things verification continue to demand strong cryptographic assurance on devices with limited processing power, memory, and energy. As these technologies move into routine use, they require cryptographic primitives that maintain strong security and deliver predictable performance backed by clear theoretical models of time complexity. Although NIST’s lightweight cryptography project provides empirical evaluations of the ten finalist algorithms, a unified theoretical understanding of their time-complexity behavior remains absent. This work introduces a symbolic model that decomposes each scheme into initialization, data-processing, and finalization phases, enabling formal time-complexity derivations for all ten finalists. The results clarify how design parameters shape computational scaling in constrained mobile and embedded environments. The framework provides a foundation for distinguishing algorithmic efficiency and guides the choice of primitives capable of supporting security systems in constrained environments.


💡 Research Summary

The paper addresses the lack of a unified theoretical framework for evaluating the time‑complexity of the ten finalists in the NIST Lightweight Cryptography (LWC) standardization process. Recognizing that emerging IoT, digital identity, and other resource‑constrained applications demand not only strong security but also predictable performance, the authors propose a symbolic three‑phase model that decomposes any authenticated‑encryption scheme into initialization, data‑processing, and finalization stages.

The model defines:

  • T_init = c_k + c_n, capturing fixed costs for key and nonce setup;
  • T_process = T_A + T_M, where T_A = a·(T_p + c_A) and T_M = m·(T_p + c_M) represent the costs of processing associated data and message blocks, respectively;
  • T_finalize = c_f, a constant cost for tag generation or verification.

Using this abstraction, the authors derive closed‑form asymptotic expressions for each finalist, expressed in terms of algorithm‑specific parameters such as permutation cost (b), rate (r), block size (n), and the number of processed blocks (a, m). The resulting complexities are summarized in Table I and discussed in depth.
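The three-phase decomposition can be expressed directly in code. The following Python sketch is an illustrative assumption (the paper presents the model symbolically, not as an implementation); the field names mirror the constants c_k, c_n, c_A, c_M, c_f and the permutation cost T_p defined above.

```python
from dataclasses import dataclass

@dataclass
class SchemeCosts:
    c_k: float  # key-setup cost
    c_n: float  # nonce-setup cost
    T_p: float  # cost of one underlying permutation / cipher call
    c_A: float  # per-block overhead for associated data
    c_M: float  # per-block overhead for message blocks
    c_f: float  # constant tag-generation / verification cost

def total_time(s: SchemeCosts, a: int, m: int) -> float:
    """T_total = T_init + T_process + T_finalize for `a` associated-data
    blocks and `m` message blocks, per the three-phase model."""
    t_init = s.c_k + s.c_n                              # T_init = c_k + c_n
    t_process = a * (s.T_p + s.c_A) + m * (s.T_p + s.c_M)  # T_A + T_M
    return t_init + t_process + s.c_f                   # + T_finalize
```

With all constants fixed for a given scheme, `total_time` grows linearly in a and m, matching the linear-in-blocks scaling the closed-form expressions capture.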

Key findings include:

  1. Permutation‑based designs (Ascon, PHOTON‑Beetle, Xoodyak) have their overall cost dominated by the permutation cost b, leading to O(ℓ·b) scaling where ℓ denotes the total number of processed blocks (associated data plus message). The rate r influences the ceiling functions that model padding overhead.

  2. Block‑cipher‑based designs (GIFT‑COFB, Romulus‑N, TinyJambu) exhibit linear scaling with the number of blocks but include additional constants reflecting multiple encryption rounds or dual‑permutation structures. GIFT‑COFB achieves the simplest O(ℓ_A + ℓ_M) expression because its COFB mode adds virtually no per‑block overhead.

  3. The stream‑cipher design (Grain‑128AEAD) processes data at the bit level, yielding O(|M| + |AD|) complexity without any padding penalty, which is advantageous for very small messages.

  4. The hybrid design (ISAP) combines a sponge‑based encryption core with a session‑key derivation step that is constant‑time, resulting in an overall O(|A| + |M|) complexity comparable to the simplest block‑cipher schemes while offering side‑channel resistance.

  5. The ARX‑based SPARKLE suite (Schwaemm/Esch) introduces extra terms (2·|A|/r·b, 3·|M|/r·b, d/r·b, 2b) reflecting its dual functionality as both an AEAD and a hash primitive.
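To make the padding-related ceiling terms in the findings above concrete, the sketch below (an illustration with arbitrarily chosen lengths, not code from the paper) counts the permutation calls a rate-r sponge needs versus the bit-serial steps of a stream design:

```python
from math import ceil

def sponge_calls(ad_bytes: int, msg_bytes: int, r: int) -> int:
    """Permutation calls implied by the ceiling terms:
    ceil(|A| / r) + ceil(|M| / r) for a sponge with rate r bytes."""
    return ceil(ad_bytes / r) + ceil(msg_bytes / r)

def bit_steps(ad_bytes: int, msg_bytes: int) -> int:
    """A bit-serial design such as Grain-128AEAD scales with the exact
    input length in bits -- no block-padding penalty for short inputs."""
    return 8 * (ad_bytes + msg_bytes)
```

For a 1-byte message, `sponge_calls` still charges a full permutation for the padded block, while `bit_steps` charges only 8 bit-level updates; this is the short-message advantage described in finding 3.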

The comparative analysis reveals that no single algorithm dominates across all metrics. For ultra‑low‑power sensors, the bit‑level processing of Grain‑128AEAD is most efficient; for bulk data transfer, GIFT‑COFB’s minimal per‑block overhead is preferable; when integrated hashing is required, SPARKLE offers a balanced trade‑off despite higher constant factors; and for environments where side‑channel resistance is critical, ISAP provides a compelling combination of security and linear scaling.
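The trade-offs in the preceding paragraph can be condensed into a small lookup. The scenario labels and the mapping below are assumptions distilled from this summary, not guidance given in the paper itself:

```python
# Scenario -> finalist mapping distilled from the comparative analysis
# above (illustrative; the scenario labels are this summary's own).
RECOMMENDATIONS = {
    "ultra_low_power_sensor": "Grain-128AEAD",  # bit-level, no padding penalty
    "bulk_data_transfer": "GIFT-COFB",          # minimal per-block overhead
    "integrated_hashing": "SPARKLE",            # combined AEAD + hash primitive
    "side_channel_critical": "ISAP",            # hardened session-key derivation
}

def pick_primitive(scenario: str) -> str:
    """Return the finalist favored for a scenario; raises KeyError for
    scenarios the comparison does not cover."""
    return RECOMMENDATIONS[scenario]
```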

The authors conclude that the three‑phase symbolic model provides a platform‑independent, mathematically rigorous basis for predicting algorithmic performance on constrained devices. They outline future work that will validate the theoretical predictions against real‑world mobile driver‑license and digital‑identity implementations, and they suggest extending the framework to incorporate energy‑consumption models and hardware‑specific optimizations. This work thus fills a critical gap between empirical benchmark tables and formal complexity theory, offering practitioners a clear tool for selecting the most appropriate lightweight primitive for a given constrained deployment scenario.

