Gaming security by obscurity
Shannon sought security against an attacker with unlimited computational powers: if an information source conveys some information, then Shannon's attacker will surely extract that information. Diffie and Hellman refined Shannon's attacker model by taking into account the fact that real attackers are computationally limited. This idea became one of the greatest new paradigms in computer science, and led to modern cryptography.

Shannon also sought security against an attacker with unlimited logical and observational powers, expressed through the maxim that "the enemy knows the system". This view is still endorsed in cryptography. The popular formulation, going back to Kerckhoffs, is that "there is no security by obscurity", meaning that the algorithms cannot be kept hidden from the attacker, and that security should rely only upon the secret keys. In fact, modern cryptography goes even further than Shannon or Kerckhoffs in tacitly assuming that if there is an algorithm that can break the system, then the attacker will surely find that algorithm. The attacker is no longer viewed as an omnipotent computer, but he is still construed as an omnipotent programmer. The Diffie-Hellman step from unlimited to limited computational powers has thus not been extended into a step from unlimited to limited logical or programming powers. Is the assumption that all feasible algorithms will eventually be discovered and implemented really different from the assumption that everything that is computable will eventually be computed?

The present paper explores some ways to refine the current models of the attacker, and of the defender, by taking into account their limited logical and programming powers. If the adaptive attacker actively queries the system to seek out its vulnerabilities, can the system gain some security by actively learning the attacker's methods, and adapting to them?
💡 Research Summary
The paper revisits the long‑standing Kerckhoffs' principle that "security by obscurity" is impossible and argues that modern cryptography still implicitly assumes an omnipotent attacker who will eventually discover any algorithm that can break a system. Starting from Shannon's model of an attacker with unlimited computational power, the authors trace the evolution to the Diffie‑Hellman model, which limits only computational resources while still treating the attacker as a "super‑programmer" capable of finding any feasible attack.
Recognizing that real attackers are also limited in logical reasoning, program synthesis, and the ability to infer hidden algorithms, the authors propose two complementary paradigms to capture these limits. The first is a game‑theoretic framework based on games of incomplete information. In this setting, both defender and attacker have private “types” (capabilities, goals, resources) and must act under uncertainty. The defender can observe the attacker’s probing queries, infer the attacker’s strategy, and deliberately increase the cost of information gathering. The paper illustrates this with an “attack‑vector game” where the defender’s adaptive policies are modeled as strategies in a zero‑sum game of incomplete information, and provides a sketch of a formal model in the appendix.
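To make the incomplete‑information setting concrete, here is a minimal toy sketch (our own illustration, not taken from the paper): the defender knows only a prior distribution over hypothetical attacker types, each type induces its own payoff matrix, and, since the game is zero‑sum, each type is assumed to play the response that minimizes the defender's payoff. The type names, strategies, and payoff numbers below are all invented for illustration.

```python
# Toy zero-sum game of incomplete information (illustrative only).
# The defender cannot observe the attacker's type, only a prior over types.

# Hypothetical attacker types with prior probabilities.
attacker_types = {"script_kiddie": 0.7, "expert": 0.3}

# payoffs[type][defender_strategy] is a list of defender payoffs,
# one entry per possible attacker action for that type (made-up numbers).
payoffs = {
    "script_kiddie": {"patch": [3, 1], "honeypot": [2, 4]},
    "expert":        {"patch": [1, 0], "honeypot": [0, 2]},
}

def defender_value(strategy):
    """Expected defender payoff if each attacker type best-responds,
    i.e. picks the action minimizing the defender's payoff (zero-sum)."""
    return sum(p * min(payoffs[t][strategy])
               for t, p in attacker_types.items())

# The defender maximizes expected payoff under uncertainty about the type.
best = max(["patch", "honeypot"], key=defender_value)
print(best, defender_value(best))  # honeypot 1.4
```

The paper's point about adaptivity can be read off the sketch: if the defender can observe probing queries and update the prior over types, the expected values, and hence the chosen strategy, shift accordingly.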
The second paradigm introduces the notion of logical complexity (or “one‑way programming”) inspired by algorithmic information theory and logical depth. Instead of focusing solely on computational hardness, the authors measure how difficult it is to understand or reconstruct a program, using Gödel‑Kleene indices and logical depth as proxies for the cognitive and mathematical effort required by an attacker. A program with high logical complexity would be infeasible for an attacker to reverse‑engineer, even if a theoretical attack exists, thereby providing a form of security that relies on obscurity in a quantifiable way.
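The distinction between a function and a particular program (Gödel‑Kleene index) for it can be illustrated with a small sketch of our own (not from the paper): two extensionally equal programs whose sources differ, using zlib‑compressed source length as a very crude stand‑in for description (Kolmogorov) complexity. A real logical‑depth measure would instead count the steps needed to regenerate the behavior from a shortest description; we only hint at that in a comment.

```python
# Toy illustration (ours, not the paper's): two programs computing the
# same function n -> n*(n+1)/2, i.e. two Goedel-Kleene indices for one
# extensional behavior.
import zlib

clear_src = "def f(x):\n    return x * (x + 1) // 2\n"
loop_src = (
    "def f(x):\n"
    "    s = 0\n"
    "    for i in range(x + 1):\n"
    "        s += i\n"
    "    return s\n"
)

def description_size(src: str) -> int:
    """zlib-compressed length: a rough proxy for description complexity.
    (Logical depth would instead measure the *time* a shortest
    description needs to reproduce the behavior; not computed here.)"""
    return len(zlib.compress(src.encode()))

def run_f(src: str, x: int) -> int:
    env: dict = {}
    exec(src, env)  # load the program text as an executable index
    return env["f"](x)

# Same function, different programs: equal outputs, different sources.
print(run_f(clear_src, 10), run_f(loop_src, 10))      # 55 55
print(description_size(clear_src), description_size(loop_src))
```

The sketch only shows that reasoning about programs is intensional: an attacker who must reconstruct *which* index the defender runs faces a harder problem than knowing *what* function it computes, which is the gap the paper's logical‑complexity proposal tries to quantify.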
The paper also contrasts static security—walls, gates, and cryptographic perimeters—with dynamic security that must handle insider threats, trust relationships, reputation, and social engineering. It argues that once an attacker penetrates the perimeter, the defender’s job shifts to detection, containment, and recovery, which can also be expressed as strategies in incomplete‑information games.
In the related‑work section the authors note that while attack‑defense trees and other game‑theoretic models have been used in security, true incomplete‑information games that explicitly model mutual ignorance have been largely absent. They also discuss how logical complexity builds on prior work on algorithmic information theory, logical depth, and Kleene’s realizability.
Finally, the paper outlines future research directions: (1) empirical validation of incomplete‑information security games, (2) development of tools to measure logical complexity of real‑world software, and (3) design of adaptive defense mechanisms that learn from attacker behavior. By integrating limited logical capabilities of attackers and allowing defenders to adaptively learn and respond, the authors argue that “security by obscurity” can be re‑examined as a viable, quantifiable component of modern security engineering rather than an outright myth.