Mining Determinism in Human Strategic Behavior
This work lies at the intersection of experimental economics and data mining. It continues the author's previous work on mining behaviour rules of human subjects from experimental data in settings where game-theoretic predictions partially fail. Game-theoretic predictions, i.e. equilibria, tend to succeed only with experienced subjects in specific games, which is rarely the case. Beyond game theory, contemporary experimental economics offers a number of alternative models. In the relevant literature, these models are invariably biased by psychological and near-psychological theories and are claimed to be confirmed by the data. This work introduces a data mining approach to the problem that does not rely on an extensive psychological background. Apart from determinism, no other biases are assumed. Two datasets from different human subject experiments are used for evaluation: the first is a repeated mixed strategy zero sum game, and the second a repeated ultimatum game. As a result, a method for mining deterministic regularities in human strategic behaviour is described and evaluated. As future work, the design of a new representation formalism is discussed.
💡 Research Summary
The paper bridges experimental economics and data mining to uncover deterministic regularities in human strategic behavior without relying on extensive psychological theory. The author begins by critiquing the conventional game‑theoretic approach, which predicts that rational agents will play mixed‑strategy Nash equilibria in repeated games. Empirical evidence shows that such predictions only hold for highly experienced subjects or in narrowly defined games; for most participants, especially novices, observed actions deviate markedly from the theoretical mix. Moreover, alternative models in experimental economics typically embed psychological or “near‑psychological” assumptions and then claim validation by the same data, creating a circular reasoning problem.
To avoid these biases, the author adopts a minimalist stance: assume only determinism—that human choices are governed by repeatable rules—and let the data reveal those rules. Two experimental datasets are used for validation. The first consists of a repeated mixed‑strategy zero‑sum game in which, in each round, each player simultaneously chooses one of two actions, with payoffs determined by a standard payoff matrix. Classical theory predicts a 50/50 randomization in equilibrium. The second dataset comes from a repeated ultimatum game, in which a proposer offers a split of a fixed sum and the responder either accepts (both receive the proposed shares) or rejects (both receive nothing). Standard economic theory predicts minimal offers and acceptance of any positive amount.
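The 50/50 equilibrium prediction for the zero-sum game can be verified with a few lines of code. A minimal sketch, assuming a matching-pennies style payoff matrix for illustration (the paper's exact payoffs are not reproduced here):

```python
# Assumed payoff matrix (matching pennies): row player's payoffs;
# the column player receives the negation, so the game is zero-sum.
PAYOFFS = [[1, -1],
           [-1, 1]]

def expected_payoff(p_row, q_col):
    """Row player's expected payoff when the row player picks action 0
    with probability p_row and the column player with probability q_col."""
    probs = [p_row, 1 - p_row]
    qrobs = [q_col, 1 - q_col]
    return sum(probs[i] * qrobs[j] * PAYOFFS[i][j]
               for i in range(2) for j in range(2))

# At the mixed equilibrium (0.5, 0.5) every strategy yields the same payoff,
# so no unilateral deviation helps — which is why theory predicts 50/50 play.
print(expected_payoff(0.5, 0.5))   # 0.0
print(expected_payoff(1.0, 0.5))   # 0.0 — deviating to a pure strategy gains nothing
```

The indifference property shown here is exactly what makes the observed non-random play in the data a deviation worth mining.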
Data preprocessing involved encoding each round’s action, the opponent’s previous actions, and the resulting payoff into a feature vector. The author then applied three complementary mining techniques: decision‑tree induction, rule‑learning algorithms, and association‑rule mining. Decision trees were grown using information gain, pruned pre‑emptively, and evaluated with ten‑fold cross‑validation to avoid over‑fitting. Rule learners extracted patterns that satisfied user‑defined support and confidence thresholds, while association‑rule mining identified frequent co‑occurrences of opponent behavior and subject response. The final output was a set of human‑readable conditional rules (e.g., “If opponent rejected the last two offers, increase the next offer by 10%”).
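The rule-extraction step with support and confidence thresholds can be sketched as follows. This is an illustrative toy, not the paper's actual pipeline: the round encoding, the `history` data, and the threshold values are all made up for demonstration.

```python
from collections import Counter

# Toy history: each string is one round, encoded as the opponent's action
# followed by the subject's response (H = heads, T = tails).
history = ["HH", "TH", "HT", "HH", "HH", "TH", "HT", "HH", "TT", "HH"]

def mine_rules(rounds, min_support=0.2, min_confidence=0.6):
    """Extract 'opponent did X -> subject did Y' rules that satisfy
    user-defined support and confidence thresholds."""
    pair_counts = Counter(rounds)              # how often each (X, Y) pair occurs
    cond_counts = Counter(r[0] for r in rounds)  # how often each opponent action occurs
    n = len(rounds)
    rules = []
    for pair, count in pair_counts.items():
        support = count / n
        confidence = count / cond_counts[pair[0]]
        if support >= min_support and confidence >= min_confidence:
            rules.append((f"opponent={pair[0]} -> subject={pair[1]}",
                          round(support, 2), round(confidence, 2)))
    return rules

print(mine_rules(history))
```

On this toy data the miner keeps only the two rules whose support and confidence clear the thresholds, which is the same filtering principle the summary attributes to the rule learners.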
Analysis of the zero‑sum game revealed that participants did not randomize. Instead, they displayed systematic conditional strategies: after observing an opponent repeat a particular action, subjects either switched to the opposite action (avoidance) or mirrored the opponent (retaliation). The frequency of these responses correlated with the accumulated payoff, indicating that subjects were dynamically adjusting expectations based on recent outcomes rather than adhering to a static mixed strategy.
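The avoidance/retaliation classification described above amounts to a simple conditional count over the play sequence. A sketch on a fabricated pair of action sequences (the labels and data are hypothetical):

```python
# Fabricated play sequences for illustration; H/T stand for the two actions.
opponent = ["H", "H", "H", "T", "T", "H", "H", "H"]
subject  = ["T", "T", "H", "H", "H", "T", "H", "T"]

avoid = retaliate = 0
for t in range(2, len(opponent)):
    if opponent[t - 1] == opponent[t - 2]:    # opponent just repeated an action
        if subject[t] == opponent[t - 1]:
            retaliate += 1                    # subject mirrors the repeated action
        else:
            avoid += 1                        # subject switches to the opposite action
print(avoid, retaliate)
```

Tabulating these counts per subject, and correlating them with accumulated payoff, is the kind of analysis that exposes conditional strategies hiding behind apparently random play.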
In the repeated ultimatum game, the mining process uncovered two principal deterministic patterns. Proposers tended to raise their offers incrementally after a rejection, suggesting a learning process that balances the desire to avoid further rejections with the cost of conceding more. Responders exhibited a threshold rule: offers below a certain proportion of the pie were consistently rejected, while offers above that threshold were accepted. Importantly, the threshold itself shifted slightly upward after each rejection, reflecting a form of “fairness escalation” that aligns with findings from behavioral economics but emerged here purely from data‑driven rule extraction.
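The responder's threshold rule with upward drift after rejections can be illustrated directly. The starting threshold and escalation step below are hypothetical values, since the summary reports only the qualitative pattern:

```python
# Hypothetical parameters: a 30% starting threshold and a 2-point escalation
# per rejection, standing in for whatever values the mined rules contained.
def simulate_responder(offers, start_threshold=0.3, escalation=0.02):
    """Accept offers at or above a threshold (as a share of the pie);
    nudge the threshold upward after each rejection (fairness escalation)."""
    threshold = start_threshold
    decisions = []
    for offer in offers:
        if offer >= threshold:
            decisions.append("accept")
        else:
            decisions.append("reject")
            threshold += escalation
    return decisions

print(simulate_responder([0.25, 0.25, 0.35, 0.30, 0.40]))
```

Note how the fourth offer of 0.30 is rejected even though it meets the original 0.3 threshold: the two earlier rejections have already pushed the threshold upward, which is precisely the escalation effect described above.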
The author argues that these findings demonstrate the feasibility of uncovering robust, deterministic behavioral regularities without invoking elaborate psychological constructs. The deterministic assumption proved sufficient to capture systematic deviations from equilibrium predictions, and the mined rules were both predictive (they generalized to unseen rounds) and interpretable (they could be expressed in natural language).
Beyond the empirical results, the paper discusses methodological implications. By treating human strategic interaction as a sequence mining problem, researchers can apply a wide array of existing machine‑learning tools, thereby reducing reliance on ad‑hoc theoretical models. However, the author acknowledges limitations: the current rule representation is flat and may struggle to capture deeper hierarchical strategies or long‑range dependencies. Consequently, the paper proposes future work on a richer formalism—potentially graph‑based or process‑algebraic—that can encode nested conditionalities and temporal abstractions.
In summary, the study provides a compelling case that human strategic behavior, even in classic game‑theoretic settings, often follows deterministic patterns that can be extracted through systematic data mining. This approach offers a promising alternative to traditional equilibrium analysis and opens avenues for more nuanced models of decision‑making in economics, artificial intelligence, and human‑computer interaction.