Maximum entropy principle and power-law tailed distributions


In ordinary statistical mechanics the Boltzmann-Shannon entropy is related to the Maxwell-Boltzmann distribution $p_i$ by means of a twofold link. The first link is differential and is provided by the Jaynes maximum entropy principle. The second link is algebraic and imposes that both the entropy and the distribution be expressed in terms of the same function, in direct and inverse form. Indeed, the Maxwell-Boltzmann distribution $p_i$ is expressed in terms of the exponential function, while the Boltzmann-Shannon entropy is defined as the mean value of $-\ln(p_i)$. In generalized statistical mechanics the second link is customarily relaxed. Here we consider whether, and how, it is possible to select generalized statistical theories in which the above twofold link between entropy and distribution continues to hold, as in ordinary statistical mechanics. Within this scenario there emerge new pairs of direct-inverse functions, i.e. generalized logarithms $\Lambda(x)$ and generalized exponentials $\Lambda^{-1}(x)$, defining coherent and self-consistent generalized statistical theories. Interestingly, all these theories preserve the main features of ordinary statistical mechanics and predict distribution functions with power-law tails. Furthermore, the obtained generalized entropies are both thermodynamically and Lesche stable.


💡 Research Summary

The paper revisits the foundational structure of statistical mechanics, focusing on the dual relationship that links entropy and the equilibrium probability distribution. In classical Boltzmann‑Shannon theory the entropy $S=-\sum_i p_i\ln p_i$ and the Maxwell‑Boltzmann distribution $p_i\propto e^{-\beta E_i}$ are connected in two ways: (i) variationally, through Jaynes' Maximum Entropy (MaxEnt) principle, and (ii) algebraically, because the same functional form—the logarithm and its inverse, the exponential—appears in both definitions. Existing generalized frameworks (e.g., Tsallis, κ‑statistics) usually abandon the second, algebraic link, allowing entropy and distribution to be expressed by different functions. Consequently, while power‑law tails emerge, the structural symmetry between entropy and distribution is lost.
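The twofold link in the ordinary theory can be made concrete numerically. The sketch below (with illustrative energies and inverse temperature, not values from the paper) builds the Maxwell-Boltzmann weights for a toy three-level system and checks the algebraic consequence of the shared exp/ln pair: since $p_i = e^{-\beta E_i}/Z$, the Boltzmann-Shannon entropy satisfies $S = \beta\langle E\rangle + \ln Z$.

```python
import math

# Toy three-level system: energies and inverse temperature (illustrative values)
E = [0.0, 1.0, 2.0]
beta = 0.7

# MaxEnt solution under a mean-energy constraint: Maxwell-Boltzmann weights
Z = sum(math.exp(-beta * e) for e in E)        # partition function
p = [math.exp(-beta * e) / Z for e in E]

# Boltzmann-Shannon entropy: the mean value of -ln(p_i)
S = -sum(pi * math.log(pi) for pi in p)

# Algebraic link: substituting p_i = exp(-beta*E_i)/Z into S gives
# S = beta*<E> + ln Z, so entropy and distribution share the exp/ln pair
mean_E = sum(pi * e for pi, e in zip(p, E))
identity = beta * mean_E + math.log(Z)
```

Here `S` and `identity` agree to machine precision, which is exactly the statement that the same function appears in direct form in the entropy and in inverse form in the distribution.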

The authors ask whether a generalized theory can preserve both links simultaneously. Their answer is affirmative: they introduce a pair of mutually inverse functions, a generalized logarithm $\Lambda(x)$ and its inverse generalized exponential $\Lambda^{-1}(x)$. The entropy is defined as

$$S = -\sum_i p_i\,\Lambda(p_i),$$

so that, as in the ordinary theory, the entropy is the mean value of $-\Lambda(p_i)$, while the MaxEnt distribution is expressed through the inverse function $\Lambda^{-1}$.
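As a concrete example of such a $\Lambda$, $\Lambda^{-1}$ pair, the sketch below uses the κ-deformed logarithm and exponential from κ-statistics (one of the generalized frameworks mentioned above); the parameter value `k = 0.3` is an illustrative choice, not taken from the paper. The pair is mutually inverse, reduces to the ordinary ln/exp as κ → 0, and exhibits the power-law tail $\exp_\kappa(-x) \sim (2\kappa x)^{-1/\kappa}$ for large $x$.

```python
import math

def ln_k(x, k=0.3):
    # kappa-deformed logarithm: a generalized logarithm Lambda(x);
    # recovers ln(x) in the limit k -> 0
    return (x**k - x**(-k)) / (2.0 * k)

def exp_k(x, k=0.3):
    # kappa-deformed exponential: the inverse function Lambda^{-1}(x)
    return (math.sqrt(1.0 + (k * x) ** 2) + k * x) ** (1.0 / k)

# Mutual inversion: exp_k(ln_k(x)) == x for x > 0
y = exp_k(ln_k(2.5))

# Power-law tail: for large x, exp_k(-x) behaves like (2*k*x)**(-1/k),
# in contrast with the ordinary exponential decay exp(-x)
x = 1000.0
tail_ratio = exp_k(-x) / (2 * 0.3 * x) ** (-1 / 0.3)
```

Any pair with these properties defines, per the paper's program, a self-consistent generalized statistical theory in which entropy and distribution again share one function in direct and inverse form.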

