A Statistical Measure of Complexity
In this chapter, a statistical measure of complexity is introduced and some of its properties are discussed. Also, some straightforward applications are shown.
Research Summary
The paper introduces a novel statistical measure of complexity, denoted \(C\), and explores its theoretical properties and practical applications. The authors begin by reviewing the long-standing problem of quantifying complexity in diverse scientific fields, noting that existing measures such as the LMC complexity, efficiency indices, and algorithmic complexity either depend on specific model assumptions, suffer from high computational cost, or behave unintuitively at the extremes of order and randomness. To address these shortcomings, the authors propose a definition based solely on the probability distribution of system states. Let \(\{p_i\}_{i=1}^N\) be the probabilities of the \(N\) accessible microstates. The Shannon entropy \(H = -\sum_i p_i \log p_i\) quantifies disorder, while the disequilibrium \(D = \sum_i (p_i - 1/N)^2\), the squared distance from the equiprobable distribution, captures the degree of order. The product

\[
C = H \cdot D
\]

defines the complexity: it vanishes both for a perfectly ordered system (a single certain state, so \(H = 0\)) and for a fully random, equiprobable one (\(D = 0\)), reaching its maximum at intermediate regimes.
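The product of entropy and disequilibrium can be computed directly from a discrete distribution. The sketch below is a minimal illustration, assuming the disequilibrium form \(D = \sum_i (p_i - 1/N)^2\) given above; the function name `lmc_complexity` is our own choice, not from the paper.

```python
import math

def lmc_complexity(p):
    """Statistical complexity C = H * D for a discrete distribution p.

    H is the Shannon entropy (disorder); D is the disequilibrium,
    the squared distance from the uniform distribution (order).
    """
    n = len(p)
    h = -sum(pi * math.log(pi) for pi in p if pi > 0)  # Shannon entropy
    d = sum((pi - 1.0 / n) ** 2 for pi in p)           # disequilibrium
    return h * d

# C vanishes at both extremes and is positive in between:
print(lmc_complexity([1.0, 0.0, 0.0]))    # ordered: H = 0, so C = 0
print(lmc_complexity([1/3, 1/3, 1/3]))    # equiprobable: D = 0, so C = 0
print(lmc_complexity([0.5, 0.25, 0.25]))  # intermediate: C > 0
```

Note that both limiting cases yield zero through different factors, which is the behavior at the extremes of order and randomness that motivates the measure.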