Is There a Unique Physical Entropy? Micro versus Macro
Entropy in thermodynamics is an extensive quantity, whereas standard methods in statistical mechanics give rise to a non-extensive expression for the entropy. This discrepancy is often seen as a sign that basic formulas of statistical mechanics should be revised, either on the basis of quantum mechanics or on the basis of general and fundamental considerations about the (in)distinguishability of particles. In this article we argue against this response. We show that both the extensive thermodynamic and the non-extensive statistical entropy are perfectly alright within their own fields of application. Changes in the statistical formulas that remove the discrepancy must be seen as motivated by pragmatic reasons (conventions) rather than as justified by basic arguments about particle statistics.
💡 Research Summary
The paper tackles a long‑standing conceptual tension between the thermodynamic and statistical‑mechanical definitions of entropy. In classical thermodynamics, entropy is a state function that scales linearly with the size of the system; when a macroscopic system is divided into two independent subsystems, the total entropy is exactly the sum of the subsystems’ entropies. This extensive character is experimentally verified through the second law, Carnot cycles, and calorimetric measurements.
Statistical mechanics, on the other hand, derives entropy from the probability distribution of microscopic states via the Boltzmann–Gibbs formula $S = -k_{\mathrm{B}}\sum_i p_i \ln p_i$. The subtlety arises when one decides whether the constituent particles are treated as distinguishable or indistinguishable. If particles are treated as distinguishable, the state count is not divided by the combinatorial factor $N!$, and the resulting entropy carries an extra contribution of order $k_{\mathrm{B}} \ln N! \approx k_{\mathrm{B}} N(\ln N - 1)$, which breaks strict extensivity. If one divides the count by $N!$ (whether motivated by quantum-mechanical indistinguishability or introduced by hand), this term cancels to leading order in Stirling's approximation and linear scaling is restored.
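As a concrete illustration (ours, not the paper's), the classical ideal gas makes the extensivity question easy to check numerically. Setting Boltzmann's constant and the thermal wavelength to 1, the entropy without the $1/N!$ factor is $S = N\ln V + \tfrac{3}{2}N$, while the corrected, Sackur–Tetrode-style form is $S = N\ln(V/N) + \tfrac{5}{2}N$. A minimal Python sketch comparing the two under doubling of the system:

```python
from math import log, isclose

def S_distinguishable(N, V):
    """Ideal-gas entropy without the 1/N! factor (units: k_B = lambda = 1)."""
    return N * log(V) + 1.5 * N

def S_corrected(N, V):
    """Ideal-gas entropy with the 1/N! factor (Sackur-Tetrode form)."""
    return N * log(V / N) + 2.5 * N

N, V = 100, 50.0

# Doubling both N and V should exactly double an extensive entropy.
excess = S_distinguishable(2 * N, 2 * V) - 2 * S_distinguishable(N, V)
print("uncorrected excess on doubling:", excess)   # equals 2N ln 2, not 0
print("corrected excess on doubling:",
      S_corrected(2 * N, 2 * V) - 2 * S_corrected(N, V))  # vanishes

# The mismatch is precisely the mixing-term residue discussed in the text.
assert isclose(excess, 2 * N * log(2))
```

The non-zero excess for the uncorrected form is the quantitative content of the "non-extensive" entropy the summary describes; the corrected form removes it identically.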
The authors argue that this apparent discrepancy does not signal a failure of statistical mechanics nor a need to overhaul its foundations. Instead, the “non‑extensive” form is simply the correct expression for a model that counts each microscopic permutation as a distinct microstate. Thermodynamics deliberately ignores such microscopic redundancy because it deals only with macroscopic observables. Consequently, each formulation is internally consistent within its domain of applicability.
The paper reviews the various "fixes" that have been proposed, most notably dividing the state count by $N!$, i.e. subtracting $k_{\mathrm{B}}\ln N!$ from the entropy, to enforce extensivity. The authors contend that these adjustments are pragmatic conventions rather than derivations from deeper physical principles. Both the original Boltzmann–Gibbs entropy and the corrected, extensive version yield identical predictions for macroscopic thermodynamic quantities when the appropriate assumptions (distinguishability, quantum statistics, coarse-graining) are made explicit.
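The cancellation behind the $N!$ correction rests on Stirling's approximation, $\ln N! \approx N\ln N - N$, which is exactly the non-extensive excess in the uncorrected entropy. A short sketch (our illustration, not the paper's) of how accurate the leading-order approximation already is at modest $N$, using `math.lgamma` to evaluate $\ln N!$ exactly:

```python
from math import lgamma, log

def stirling(N):
    """Leading-order Stirling approximation to ln N!."""
    return N * log(N) - N

for N in (10, 100, 1000):
    exact = lgamma(N + 1)  # lgamma(N+1) = ln(N!)
    rel_err = abs(exact - stirling(N)) / exact
    print(f"N={N:5d}  ln N! = {exact:10.2f}  relative error = {rel_err:.2%}")
```

The relative error falls below one percent already at $N = 100$, which is why the correction can be treated as an exact subtraction of $k_{\mathrm{B}} N(\ln N - 1)$ at thermodynamic particle numbers.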
In the concluding section, the authors propose a unifying perspective: entropy is a single physical quantity, but its mathematical representation depends on the level of description. For macroscopic thermodynamics, an extensive entropy is natural; for microscopic statistical mechanics, a non‑extensive form correctly accounts for state counting before any redundancy is removed. The key is to be transparent about which definition is being used, why it is chosen, and what underlying assumptions are involved. By doing so, the apparent conflict disappears, and researchers can move forward without confusing conventions with fundamental physics. This clarification has practical implications for teaching, for interpreting experimental data, and for developing theories that bridge the macro‑micro divide.