📝 Original Info
- Title: Pruning as a Game: Equilibrium-Driven Sparsification of Neural Networks
- ArXiv ID: 2512.22106
- Date: 2025-12-26
- Authors: Zubair Shah, Noaman Khan
📝 Abstract
Neural network pruning is widely used to reduce model size and computational cost. Yet, most existing methods treat sparsity as an externally imposed constraint, enforced through heuristic importance scores or training-time regularization. In this work, we propose a fundamentally different perspective: pruning as an equilibrium outcome of strategic interaction among model components. We model parameter groups such as weights, neurons, or filters as players in a continuous non-cooperative game, where each player selects its level of participation in the network to balance contribution against redundancy and competition. Within this formulation, sparsity emerges naturally when continued participation becomes a dominated strategy at equilibrium. We analyze the resulting game and show that dominated players collapse to zero participation under mild conditions, providing a principled explanation for pruning behavior. Building on this insight, we derive a simple equilibrium-driven pruning algorithm that jointly updates network parameters and participation variables without relying on explicit importance scores. This work focuses on establishing a principled formulation and empirical validation of pruning as an equilibrium phenomenon, rather than exhaustive architectural or large-scale benchmarking. Experiments on standard benchmarks demonstrate that the proposed approach achieves competitive sparsity-accuracy trade-offs while offering an interpretable, theory-grounded alternative to existing pruning methods.
📄 Full Content
Pruning as a Game: Equilibrium-Driven
Sparsification of Neural Networks
Zubair Shah
College of Science and Engineering
Hamad Bin Khalifa University
Doha, Qatar
zshah@hbku.edu.qa
Noaman Khan
College of Science and Engineering
Hamad Bin Khalifa University
Doha, Qatar
nokh88609@hbku.edu.qa
Abstract
Neural network pruning is widely used to reduce model size and computational cost. Yet, most
existing methods treat sparsity as an externally imposed constraint, enforced through heuristic
importance scores or training-time regularization. In this work, we propose a fundamentally
different perspective: pruning as an equilibrium outcome of strategic interaction among model
components. We model parameter groups such as weights, neurons, or filters as players in a
continuous non-cooperative game, where each player selects its level of participation in the network
to balance contribution against redundancy and competition. Within this formulation, sparsity
emerges naturally when continued participation becomes a dominated strategy at equilibrium. We
analyze the resulting game and show that dominated players collapse to zero participation under
mild conditions, providing a principled explanation for pruning behavior. Building on this insight,
we derive a simple equilibrium-driven pruning algorithm that jointly updates network parameters
and participation variables without relying on explicit importance scores. This work focuses
on establishing a principled formulation and empirical validation of pruning as an equilibrium
phenomenon, rather than exhaustive architectural or large-scale benchmarking. Experiments
on standard benchmarks demonstrate that the proposed approach achieves competitive
sparsity–accuracy trade-offs while offering an interpretable, theory-grounded alternative to
existing pruning methods.
1 Introduction
Neural network pruning is a central technique for reducing model size, computational cost, and energy
consumption without retraining from scratch. Over the past decade, a wide range of pruning methods
have been proposed, including magnitude-based thresholding, sensitivity and saliency metrics, and
lottery-ticket style rewinding. Despite their empirical success, these methods share a common
conceptual assumption: pruning is treated as a centralized, post-hoc decision, applied externally to a
trained model using heuristics that rank parameters by importance.
This prevailing view implicitly assumes that sparsity is something that must be imposed on a
network. Parameters are evaluated, scored, and removed by an external criterion, typically based on
magnitude, gradients, or training dynamics. While effective in practice, this perspective offers limited
insight into a more fundamental question: why does sparsity emerge in overparameterized networks
at all? In particular, existing approaches do not model the interactions among parameters that lead
some components to become redundant while others remain essential.
In this work, we argue that pruning is more naturally understood as the outcome of strategic
interaction among model components competing for limited representational resources. During
training, parameters do not contribute independently; instead, they interact through shared gradients,
overlapping activations, and redundant representations. Some components provide unique and
indispensable contributions, while others become increasingly redundant as training progresses.
From this perspective, sparsity is not an externally enforced constraint, but an emergent property of
competition and dominance among parameters.
Motivated by this observation, we propose a game-theoretic formulation of neural network
pruning. We model parameter groups such as weights, neurons, or filters as players in a game whose
strategies determine their level of participation in the network. Each player receives a payoff that
balances its contribution to the training objective against the cost of redundancy and competition
with other players. Pruning arises naturally when a player’s optimal strategy collapses to zero at
equilibrium, indicating that continued participation is no longer beneficial.
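To make the formulation concrete, one hypothetical payoff of the kind described here (the paper's exact functional form is not shown in this excerpt; the penalty coefficient λ and the gate parameterization α ⊙ w are illustrative assumptions) could be written as:

```latex
% Hypothetical payoff for player i with participation \alpha_i \in [0,1]:
% its contribution to the training objective minus a participation cost.
u_i(\alpha_i, \alpha_{-i})
  = -\,\mathcal{L}\!\big(\alpha \odot w\big) \;-\; \lambda\,\alpha_i ,
\qquad \alpha_i \in [0,1].
% Since \partial u_i / \partial \alpha_i = -\partial_{\alpha_i}\mathcal{L} - \lambda,
% player i's best response collapses to the boundary \alpha_i^\star = 0 whenever
% \partial_{\alpha_i}\mathcal{L} > -\lambda for all feasible \alpha_i: participation
% is then a dominated strategy, because its marginal contribution to reducing the
% loss never outweighs the cost \lambda.
```

Under such a payoff, pruning a component is not an external decision but the boundary equilibrium of that player's own best-response dynamics.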
Contributions. The main contributions of this paper are:
• We introduce a game-theoretic formulation of neural network pruning, modeling parameter
groups as strategic players.
• We show that sparsity emerges naturally as a stable equilibrium of the proposed game.
• We derive a simple equilibrium-driven pruning algorithm grounded in this theoretical
framework.
• We empirically demonstrate that the proposed approach achieves competitive sparsity-accuracy
trade-offs while providing a principled explanation for pruning behavior.
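As a self-contained illustration of the kind of joint update described above (this is not the paper's algorithm; the multiplicative gate a ⊙ w, the linear participation cost lam, and the projected-gradient step are all assumed for the sketch), consider a toy regression in which two of four feature "players" are pure noise. Their participation is a dominated strategy, so their participation variables collapse to zero while the useful players survive:

```python
import numpy as np

# Toy sketch: each feature is a "player" with weight w[i] and participation
# a[i] in [0, 1].  The payoff trades data-fit contribution against a
# participation cost lam; dominated players collapse to a[i] = 0.
rng = np.random.default_rng(0)

X = rng.normal(size=(200, 4))
y = 1.5 * X[:, 0] - 2.0 * X[:, 1]   # only players 0 and 1 matter; 2, 3 are noise

w = rng.normal(size=4) * 0.1        # network parameters
a = np.full(4, 0.5)                 # participation variables (strategies)
lam, lr = 0.03, 0.05                # participation cost and step size (assumed)

for _ in range(1500):
    err = X @ (a * w) - y
    grad_w = (X.T @ err) / len(y) * a        # loss gradient w.r.t. weights
    grad_a = (X.T @ err) / len(y) * w + lam  # payoff gradient: fit term + cost
    w -= lr * grad_w
    a = np.clip(a - lr * grad_a, 0.0, 1.0)   # projected best-response step

mse = float(np.mean((X @ (a * w) - y) ** 2))
pruned = [i for i in range(4) if a[i] < 1e-2]  # typically the noise players
```

No importance score is ever computed: the noise players' participation decays to the boundary of the strategy set because, once the useful players fit the data, their marginal contribution cannot cover the cost lam, mirroring the dominated-strategy collapse in the contributions above.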
2 Related Work
Early pruning methods focused on estimating the sensitivity of the loss function to parameter removal.
Optimal Brain Damage (OBD) [1] and Optimal Brain Surgeon (OBS) [2] introduced second-order
Taylor expansions to quantify the impact of pruning individual we
…(Full text truncated)…