Updating Probabilities: A Complex Agent Based Example
It has been shown that one can accommodate data (Bayes) and constraints (MaxEnt) in one method, the method of Maximum (relative) Entropy (ME) (Giffin 2007). In this paper we show a complex agent-based example of inference with two different forms of information: moments and data. In this example, several agents each receive partial information about a system in the form of data. In addition, each agent agrees or is informed that certain global constraints on the system always hold. The agents are then asked to make inferences about the entire system. The system becomes more complex as we add agents and allow them to share information. This system can have a geometrical form, such as a crystal structure, and the shape may dictate how the agents are able to share information, for example with nearest neighbors only. This method can be used to model many systems in which the agents or cells have local or partial information but must adhere to some global rules. It could also illustrate how the agents evolve and illuminate emergent behavior of the system.
💡 Research Summary
The paper presents a unified inference framework that combines Bayesian updating with the Maximum Entropy (MaxEnt) principle, known as the method of Maximum (relative) Entropy (ME), and applies it to a complex agent‑based system. In this setting each agent receives only a fragment of the overall data—local observations—while all agents share a set of global constraints that are known to be true for the whole system (for example, moment constraints such as the mean and variance of some underlying variable). The central question is how agents can make coherent inferences about the entire system when their information is both partial and heterogeneous.
The authors begin by reviewing the theoretical underpinnings of ME. Starting from a prior distribution \(p(\theta)\) over the unknown system state \(\theta\), they impose global constraints in the form of expected values \(\langle f_i(\theta)\rangle = F_i\). These constraints are introduced via Lagrange multipliers \(\lambda_i\). When an agent \(a\) observes local data \(D_a\), the likelihood \(p(D_a|\theta)\) is multiplied by the prior and the exponential constraint factor, yielding the posterior:
\[
p(\theta \mid D_a) \;\propto\; p(\theta)\, p(D_a \mid \theta)\, \exp\!\Big(\sum_i \lambda_i f_i(\theta)\Big)
\]
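A minimal numerical sketch of this kind of update may make the structure concrete. The example below is an illustration under assumed details, not the paper's specific system: a single agent with a uniform prior over a discretized state \(\theta\), Bernoulli local data, one global moment constraint \(\langle f(\theta)\rangle = F\) with \(f(\theta)=\theta\), and a Lagrange multiplier solved by bisection so the posterior satisfies the constraint.

```python
import numpy as np

# Discretized unknown state theta and a uniform prior p(theta).
theta = np.linspace(0.0, 1.0, 101)
prior = np.ones_like(theta)
prior /= prior.sum()

# Local data seen by one agent: 7 successes in 10 Bernoulli trials
# (a hypothetical choice for illustration).
n, k = 10, 7
likelihood = theta**k * (1.0 - theta)**(n - k)   # p(D_a | theta)

# One global moment constraint <f(theta)> = F, with f(theta) = theta.
f = theta
F = 0.5

def posterior(lam):
    """Normalized p(theta) * p(D_a|theta) * exp(lam * f(theta))."""
    w = prior * likelihood * np.exp(lam * f)
    return w / w.sum()

# The constrained mean is monotone in lam, so bisection finds the
# multiplier that enforces <f(theta)> = F.
lo, hi = -50.0, 50.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if (posterior(mid) * f).sum() > F:
        hi = mid
    else:
        lo = mid
lam = 0.5 * (lo + hi)

post = posterior(lam)
print((post * f).sum())   # constrained posterior mean, approximately F
```

The data alone would pull the posterior mean toward 0.7; the multiplier (here negative) reweights the distribution until the global constraint is met, which is the interplay between local data and global constraints that the paper formalizes.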