Information-theoretic vs. thermodynamic entropy production in autonomous sensory networks


For sensory networks, we determine the rate at which they acquire information about changing external conditions. Comparing this rate with the thermodynamic entropy production that quantifies the cost of maintaining the network, we find that there is no universal bound restricting the rate of obtaining information to be less than this thermodynamic cost. These results are obtained within a general bipartite model consisting of a stochastically changing environment that affects the instantaneous transition rates within the system. Moreover, they are illustrated with a simple four-state model motivated by cellular sensing. On the technical level, we obtain an upper bound on the rate of mutual information analytically and calculate this rate with a numerical method that estimates the entropy of a time series generated by simulation.
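The "entropy of a time series" estimate mentioned in the abstract can be illustrated with a generic plug-in block-entropy estimator. This is a simplified sketch of the general idea, not the specific estimator used in the paper; the function name and the choice of block length `k` are illustrative assumptions.

```python
from collections import Counter
from math import log2

def block_entropy_rate(series, k):
    """Plug-in estimate of the entropy rate (bits per symbol) of a symbol
    sequence: H_rate ~ H(k) - H(k-1), where H(m) is the Shannon entropy
    of the empirical distribution of length-m blocks."""
    def block_entropy(m):
        # Count all overlapping blocks of length m.
        counts = Counter(tuple(series[i:i + m])
                         for i in range(len(series) - m + 1))
        n = sum(counts.values())
        return -sum(c / n * log2(c / n) for c in counts.values())
    return block_entropy(k) - block_entropy(k - 1)
```

In practice one would apply such an estimator to the binary state sequence produced by sampling the simulated sensor, increasing `k` until the estimate converges.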


💡 Research Summary

The paper investigates the relationship between the rate at which an autonomous sensory network acquires information about a fluctuating external environment and the thermodynamic entropy production required to sustain the network’s operation. Using a general bipartite Markov model, the authors treat the environment as a stochastic two‑state process \(X\) that switches with rates \(k_{+}\) and \(k_{-}\). The internal sensor \(Y\) also has two states (e.g., active/inactive), but its transition rates \(w_{01}^{(x)}\) and \(w_{10}^{(x)}\) depend explicitly on the current state of \(X\). Consequently, the joint system \((X,Y)\) forms a four‑state Markov chain with a bipartite structure: transitions in \(X\) never directly change \(Y\) and vice versa, yet the rates are coupled.
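The bipartite dynamics described above can be simulated with a standard Gillespie (continuous-time Monte Carlo) scheme, in which each jump changes either \(X\) or \(Y\) but never both. The rate values below are illustrative placeholders, not parameters from the paper.

```python
import random

# Illustrative rates (assumed values, not the paper's parameters).
K_PLUS, K_MINUS = 1.0, 1.0     # environment switching rates, x: 0 <-> 1
W01 = {0: 0.2, 1: 2.0}         # sensor activation rate y: 0 -> 1, given x
W10 = {0: 2.0, 1: 0.2}         # sensor deactivation rate y: 1 -> 0, given x

def gillespie_bipartite(t_max, seed=0):
    """Simulate the joint (X, Y) chain up to time t_max.

    Returns a list of (time, x, y) tuples; by the bipartite structure,
    exactly one of x, y changes at each recorded jump."""
    rng = random.Random(seed)
    t, x, y = 0.0, 0, 0
    traj = [(t, x, y)]
    while t < t_max:
        # Current escape rates: X jumps with its own rate, Y with a rate
        # that depends on the instantaneous environment state x.
        rate_x = K_PLUS if x == 0 else K_MINUS
        rate_y = W01[x] if y == 0 else W10[x]
        total = rate_x + rate_y
        t += rng.expovariate(total)            # exponential waiting time
        if rng.random() < rate_x / total:      # choose which variable jumps
            x = 1 - x
        else:
            y = 1 - y
        traj.append((t, x, y))
    return traj
```

From such a trajectory one can accumulate empirical occupation statistics of the four joint states and the jump counts needed for entropy-production estimates.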

Two central quantities are defined. The information‑theoretic measure is the mutual information rate
\