Bayesian Decision Making in Groups is Hard
Jan Hązła (1,2), Ali Jadbabaie (1,3), Elchanan Mossel (1,2), M. Amin Rahimian (1)
(1) Institute for Data, Systems and Society; (2) Department of Mathematics; (3) Laboratory for Information and Decision Systems
Massachusetts Institute of Technology
{jhazla,jadbabai,elmos,rahimian}@mit.edu

Abstract. We study the computations that Bayesian agents undertake when exchanging opinions over a network. The agents act repeatedly on their private information and take myopic actions that maximize their expected utility according to a fully rational posterior belief. We show that such computations are NP-hard for two natural utility functions: one with binary actions, and another where agents reveal their posterior beliefs. In fact, we show that distinguishing between posteriors that are concentrated on different states of the world is NP-hard. Therefore, even approximating the Bayesian posterior beliefs is hard. We also describe a natural search algorithm to compute agents' actions, which we call elimination of impossible signals, and show that if the network is transitive, the algorithm can be modified to run in polynomial time.

Key words: Observational Learning, Bayesian Decision Theory, Computational Complexity, Group Decision-Making, Computational Social Choice, Inference over Graphs
MSC2000 subject classification: Primary: 91B06; secondary: 68Q25, 91A35, 62C10
OR/MS subject classification: Primary: Games/group decisions: Voting/committees; secondary: Organizational studies: Decision making: Effectiveness/performance: Information; Networks/graphs
JEL: D83, D85.
Working paper. July 30, 2019. Authors are listed in alphabetical order.

1. Introduction
Many decision-making problems involve interactions among individuals (agents) exchanging information with each other and striving to form rational opinions.
Such situations arise in jury deliberations, expert committees, medical diagnoses, etc. Given their many important applications, relevant models of decision making in groups have been extensively considered over the years.

The first interesting case concerns only two agents. A fundamental insight offered by Aumann (1976) indicates that having common priors and common knowledge of posterior beliefs implies agreement: Rational agents cannot agree to disagree. Later work by Geanakoplos and Polemarchakis (1982) demonstrates that such an agreement can be achieved in finite time, by broadcasting posterior beliefs back and forth. Banerjee (1992) and Bikhchandani et al. (1998) study the sequential interaction where each agent observes the decisions of everyone before her. Acemoglu et al. (2011) extend the sequential learning model to a network environment where agents only observe actions of their neighbors (rather than all preceding actions). Gale and Kariv (2003) consider repeated (rather than sequential) interactions over social networks where agents update their beliefs after observing actions of each other. Following on, a large body of literature studies different aspects of rational opinion exchange, in particular the quality of information aggregation and learning in the limit (cf. Acemoglu and Ozdaglar (2011), Mossel and Tamuz (2017) for two surveys of known results).

Another prominent approach to the study of social learning is to model non-Bayesian agents who use simpler, heuristic rules. One reason for considering non-Bayesian heuristics (so-called "bounded rationality") in place of fully rational updates is the seeming intractability of Bayesian calculations: Information from different neighbors can exhibit complex correlations, with no obvious way to account for, and remove, them.
For example, a Bayesian agent may have to account for the fact that her neighbors are influenced by the same source of information or even by her own past actions (Eyster and Rabin 2014, Krishnamurthy and Hoiles 2014). Even though hardness of Bayesian computations in networked learning models seems to be widely believed, we are not aware of any previous work making a rigorous argument for it. Our present work addresses this gap. We analyze the algorithmic and complexity-theoretic foundations of Bayesian social learning in two natural environments that are commonly studied in the literature. In one of them the actions broadcast by agents are coarse, in the sense that they are single bits. In the other one, we assume that the actions are rich, consisting of agents' full posterior beliefs. We show that the computations of the agents are intractable in both cases.

1.1. Our contributions
We analyze a fairly well-studied model of Bayesian social learning. In this model there is a random variable θ which represents the unknown state of the world and determines payoffs from different actions. A network of agents receives private signals which are independent conditioned on the value of θ. At every step t = 0, 1, 2, ..., each agent outputs an action a_{i,t} that maximizes her utility according to her current posterior distribution of θ. The action is chosen myopically, i.e., only utility at the current time is considered, and the posterior μ_{i,t} is computed using Bayes' rule. Agents learn the actions of their neighbors on the network and proceed to the next step with updated posteriors.

For our hardness results, we study two natural variants of this model. First, we consider the case of binary actions, where the state, signals and actions are all binary, and each agent outputs the guess for the state θ ∈ {0, 1} that is most likely according to her current belief.
This model can be thought of as repeated voting (e.g., during jury deliberations or the papal conclave in the Catholic Church). We are interested in the complexity of computations for producing the Bayesian posterior beliefs μ_{i,t} or actions a_{i,t}. We also study the revealed belief model, where the utilities induce agents to reveal their current posteriors, or beliefs.

Following the detailed model description in Section 2, we present our complexity results in Section 3. We show that it is NP-hard for the agents to compute their actions, both in the binary action and the revealed belief model. As a common tool in computational complexity theory, NP-hardness provides rigorous evidence of worst-case intractability. Note that we only prove the existence of intractable network structures and private signals, not that they are "common" or "likely to arise". Also, our reductions critically rely on the network structure: They do not apply to sequential models like the one in Banerjee (1992). One might suspect that the beliefs can be efficiently approximated, even if they are difficult to compute exactly. This is unfortunately not the case, and we further prove a hardness-of-approximation result: It is difficult even to distinguish between posterior beliefs that concentrate almost all of the probability on one state and those that are concentrated on another state. In Section 3 we discuss in more detail which substantive economic assumptions are important in deriving our complexity results and some ways in which those results can be extended.

In Section 4, we study algorithms for Bayesian decision making in groups and describe a natural search algorithm to compute agents' actions.
The Bayesian calculations are formalized as an algorithm for elimination of impossible signals (EIS), whereby the agent refines her knowledge by eliminating all profiles of private signals that are inconsistent with her observations. In Subsection 4.1, we present recursive and iterative implementations of this algorithm. While the search over the possible signal profiles using this algorithm runs in exponential time, these calculations simplify in certain network structures. In Subsections 4.2 and 4.3, we give examples of efficient algorithms for such cases. As a side result, we provide a partial answer to one of the questions raised by Mossel and Tamuz (2013), who provide an efficient algorithm for computing the Bayesian binary actions in a complete graph: We show that efficient computation is possible for other graphs that have a transitive structure when the action space is finite. In such transitive networks, every neighbor of a neighbor of an agent is also her neighbor, and therefore there are no indirect interactions to complicate the Bayesian inference.

1.2. Related work
Our results are related to the line of work that studies conditions for consensus and learning among rational agents (Mossel et al. 2018, Mueller-Frank 2013, Smith and Sørensen 2000). Consensus refers to all agents converging in their actions or beliefs (cf. Gale and Kariv (2003), Rosenberg et al. (2009) for consensus conditions in the network model that we study). Learning means that the consensus action is efficient, i.e., it represents the state of the world with high probability. For example, Mossel et al. (2014, 2015) consider the binary action model (for myopic and forward-looking agents, respectively) and provide sufficient conditions for learning.
These conditions are imposed on the network structure and consist of bounded out-degree and an "egalitarian" connectivity, whereby if an agent i observes agent j, there is a reverse path from j to i of bounded length (this condition is trivially satisfied for undirected networks).

On the other hand, positive computational results for Bayesian opinion exchange (including the analysis of short-run dynamics) are restricted to small networks (e.g., with three agents (Gale and Kariv 2003, Section 5); see also examples in Rosenberg et al. (2009)) or special cases. The case of jointly Gaussian signals and beliefs exhibits a linear-algebraic structure that allows for tractable computations (Mossel et al. (2016); see also DeMarzo et al. (2003)). Dasaratha et al. (2018) extend this setup to dynamic state spaces and private signals. There are also efficient algorithms for special network structures, e.g., complete graphs and trees (Kanoria and Tamuz 2013, Mossel and Tamuz 2013). Moreover, recursive techniques have been applied to analyze Bayesian decision problems with partial success (Harel et al. (2014), Kanoria and Tamuz (2013), Mossel et al. (2016, 2014)), and we also contribute to this literature by offering new cases where Bayesian decision making is tractable (cf. Subsections 4.2 and 4.3). This state of affairs might have to do with our computational hardness results.

Other ways to achieve positive computational results are through alternative communication strategies or using non-Bayesian information exchange protocols. For example, Acemoglu et al. (2014) analyze social learning among agents who directly communicate their entire information (represented as pairs of private signals and their sources). Since each piece of information is tagged, there is no confounding, and Bayesian updating is simple. On the other hand, the exchanged information has a significantly more complex form.
In contrast, we think of our model as relevant to situations where, as is often the case, it is not practical to exhaustively list all of one's evidence and reasoning instead of stating or summarizing one's opinion.

A popular approach to studying bounded rationality is to replace Bayesian actions with heuristic (non-Bayesian) rules (Arieli et al. 2019a,b, Bala and Goyal 1998, DeGroot 1974, Golub and Jackson 2010, Jadbabaie et al. 2012, Li and Tan 2018, Molavi et al. 2018, Mueller-Frank and Neri 2017). These rules are often rooted in empirically observed behavioral and cognitive biases. For example, Li and Tan (2018) consider a class of naive agents who take Bayesian actions but as if their local neighborhood were the entire network. This assumption removes the possibility of indirect interactions and, similar to the transitive structures (Subsection 4.2), simplifies Bayesian computations. Our work is orthogonal and complementary to these studies: We prove that Bayesian reasoning is otherwise, in general, computationally intractable (because of the difficulty of delineating confounded sources of information).

There are also works that focus on two agents estimating an arbitrary random variable (Aaronson 2005); this is in contrast to our model, where the state of the world is correlated with the private signals in a simple way. The computational result of Aaronson (2005) concerns a protocol where the two agents keep exchanging their Bayesian posteriors with a deliberately added noise term. One might question how "Bayesian" such a protocol is, since the agents are not maximizing a utility function. On the other hand, the error terms can be reinterpreted as transmission noise or computation errors of rational agents (where the agents have common knowledge of the noise distribution).
Aaronson (2005) shows that this protocol can be efficiently implemented (approximately and on average with respect to private signals) for any constant number of rounds. As far as we can see, the proof of Aaronson (2005) does not extend to many agents and networks. In Subsection 3.6, we show how to adapt our hardness reduction to this noisy action setting. Notwithstanding, we cannot logically exclude the possibility of a result like Aaronson (2005), since we show only worst-case hardness and the algorithm in Aaronson (2005) works on average. Therefore, we leave it as an interesting open problem: In the network model with noise, does there exist an average-case efficient algorithm, or are computations hard on average (at least with respect to private signal profiles)?

In fact, our results can also be interpreted in the context of other works pointing at computational reasons for why economic or sociological models fail to accurately reflect reality (cf., e.g., Arora et al. (2011) on the computational complexity of financial derivatives and Velupillai (2000) on the computable foundations of economics). On the one hand, a model cannot be considered plausible if it requires the participants or agents to perform computations that need a prohibitively long time. On the other hand, the predictions of such a model can be rendered inaccessible by the computational barriers.

The literature on computational hardness of Bayesian reasoning in social networks is nascent. There are some hardness results in the literature on Bayesian inference in graphical models (see Kwisthout (2011) and references therein), but these are quite different from the models considered in this work. Papadimitriou and Tsitsiklis (1987) consider partially observed Markov decision processes (POMDPs).
These Markovian processes are not directly comparable to our model, but they exhibit a similar flavor insofar as repeated interactions are concerned. Papadimitriou and Tsitsiklis (1987) prove that computing the optimal expected utility in a POMDP is PSPACE-hard, achieving a stronger notion of hardness than NP-hardness. However, their result does not extend to hardness of approximation, i.e., they only show that it is hard to decide if the optimal agent's strategy achieves positive (but possibly very small) expected utility. Moreover, the setup for Bayesian decision making in groups is different from (arguably less general, i.e., more challenging for a hardness proof, than) a POMDP. Consequently, we need different techniques for our purposes. We also point out a follow-up work by the authors of this paper (Hązła et al. 2019), where we use significantly more technical arguments to show that the computations in the binary action model are also (worst-case) PSPACE-hard to approximate. We believe the details of the latter work might be of interest to complexity theorists. Here, we focus on developing more general arguments to inform operations research and social learning applications.

2. The Bayesian Group Decision Model
We consider a finite group of agents whose interactions are represented by a fixed directed graph G. For each agent i in G, N_i denotes her neighborhood: the subset of agents whose actions are observed by agent i. Without loss of generality, we will assume that i ∈ N_i, i.e., an agent always observes herself. We model the topic of the discussion/group decision process by a state θ belonging to a finite set Θ. For example, in the course of a political debate, Θ can be the set of all political parties, with θ representing the party that is most likely to increase society's welfare.
The value of θ is not known to the agents, but they all start with a common prior belief about it, which is a distribution with probability mass function ν(·) : Θ → [0, 1]. Initially, each agent i receives a private signal s_i, correlated with the state θ. The private signal s_i belongs to a finite set S_i and its distribution conditioned on θ is denoted by P_{i,θ}(·), which is referred to as the signal structure of agent i. Conditioned on the state θ, the signals s_i are independent across agents, and we use P_θ(·) = ∏_i P_{i,θ}(·) to denote their joint product distribution.

After receiving the signals, the agents interact repeatedly, in discrete times t = 0, 1, 2, .... Associated with every agent i is an action space A_i that represents the choices available to her at any time t ∈ ℕ_0, and a utility u_i(·,·) : A_i × Θ → ℝ which represents her preferences with respect to combinations of actions and states. At every time t ∈ ℕ, agent i takes the action a_{i,t} that maximizes her expected utility based on her observation history h_{i,t}:

    a_{i,t} = argmax_{a_i ∈ A_i} E[ u_i(a_i, θ) | h_{i,t} ],    (1)

where the history h_{i,t} is defined as {s_i} ∪ {a_{j,τ} for all j ∈ N_i and τ < t}, i.e., agent i observes her private signal, as well as the actions of all her neighbors at times strictly less than t. The network, signal structures, action spaces and utilities, as well as the prior, are all common knowledge among the agents. We use the notation argmax_{a ∈ A} to include the following common-knowledge rule when the maximizer is not unique: We assume that the action spaces are (arbitrarily) ordered and an agent breaks ties by choosing the lowest-ranked action in her ordering. The specific tie-breaking rule is not important for our results. The agents' behavior is myopic in that it does not take into account strategic considerations about future rounds; cf. Subsection 3.2.
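As a minimal illustration, the maximization in (1) together with the lowest-ranked tie-breaking rule can be sketched as follows (the utility function and posterior below are toy values of ours, not from the paper):

```python
def myopic_action(ordered_actions, utility, belief):
    """Myopic choice per Eq. (1): maximize expected utility under the
    current posterior `belief` (a dict theta -> probability). Ties break
    to the lowest-ranked action, i.e., the earliest maximizer in the
    given ordering, which the strict '>' below implements."""
    best_a, best_u = None, float("-inf")
    for a in ordered_actions:
        eu = sum(p * utility(a, th) for th, p in belief.items())
        if eu > best_u:  # strict: an equal-value later action never wins
            best_a, best_u = a, eu
    return best_a

# Toy example: 0/1 reward for guessing the state; at a 50/50 posterior
# both actions give expected utility 0.5, so the tie breaks to action 0.
u = lambda a, th: 1.0 if a == th else 0.0
choice = myopic_action([0, 1], u, {0: 0.5, 1: 0.5})
```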
We denote the Bayesian posterior belief of agent i given her history of observations by its probability mass function μ_{i,t}(·) : Θ → [0, 1]. In this notation, the expectation in (1) is taken with respect to the Bayesian posterior belief μ_{i,t}.

To sum up, agent i at time t chooses an action a_{i,t} ∈ A_i, maximizing her expected utility conditioned on the observation history h_{i,t}. Then, she observes the most recent actions of her neighbors {a_{j,t} for all j ∈ N_i}, updates her action to a_{i,t+1} ∈ A_i, and so on. A decision flow diagram for an example of two interacting agents is provided in Figure 1.

Figure 1: The Decision Flow Diagram for Two Bayesian Agents

Our main focus in this paper is on the computational and algorithmic aspects of the group decision process. Specifically, we will be concerned with the following computational problem:

Problem 1 (GROUP-DECISION). At a time t, given the graph structure G, agent i and the observation history h_{i,t}, determine the Bayesian action a_{i,t}.

2.1. Natural Utility Functions: Binary Actions and Revealed Beliefs
A natural example of a utility function is based on the idea of repeated voting, for example, as an idealized model of jury deliberations or the papal conclave in the Catholic Church. In this model, the possible actions correspond to the states of the world, i.e., A_i = Θ, and the utilities are given by u_i(a, θ) = 1(a = θ). In other words, the agents receive a unit reward for guessing the state correctly and zero otherwise. The expected reward of agent i at time t is maximized by choosing the action that corresponds to the maximum probability in μ_{i,t}, i.e., the maximum a posteriori probability (MAP) estimate. In the case of a binary world Θ = {0, 1} with uniform prior and binary private signals S_i = {0, 1}, we call this example the binary action model.
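In principle, Problem 1 in the binary action model can be solved by brute force over signal profiles, in the spirit of the EIS algorithm of Section 4 (though in time exponential in the number of agents). The sketch below is our own one-shot illustration for an agent acting at t = 2, with a toy network and toy signal structures of our choosing; the paper's actual EIS formulation in Subsection 4.1 is recursive.

```python
from itertools import product
from math import log

# Uniform prior on theta in {0, 1}; P[j] = (P_{j,0}(1), P_{j,1}(1)) is the
# probability that agent j's signal equals 1 under each state. Toy network:
# agent 0 has an uninformative signal and watches agents 1 and 2, who watch
# each other; every agent also observes herself.
N = {0: {0, 1, 2}, 1: {1, 2}, 2: {1, 2}}
P = {0: (0.5, 0.5), 1: (0.3, 0.4), 2: (0.3, 0.4)}

def lam(j, s):
    """Signal log-likelihood ratio lambda_j = log(P_{j,1}(s) / P_{j,0}(s))."""
    p0, p1 = P[j]
    return log(p1 / p0) if s == 1 else log((1 - p1) / (1 - p0))

def simulate_actions(sig):
    """Myopic actions at t = 0 and t = 1 given the full signal profile.
    a_{j,0} is the MAP guess from j's own signal; at t = 1 the neighbors'
    t = 0 actions reveal their (informative) signals, so log-likelihood
    ratios add up. Ties break to action 0."""
    a = {}
    for j in N:
        a[(j, 0)] = 1 if lam(j, sig[j]) > 0 else 0
        a[(j, 1)] = 1 if sum(lam(k, sig[k]) for k in N[j]) > 0 else 0
    return a

def eis_posterior(i, s_i, observed):
    """Agent i's posterior P(theta = 1 | h_{i,2}): enumerate all signal
    profiles, eliminate those inconsistent with the observed neighbor
    actions, and apply Bayes' rule to the survivors.
    `observed` maps (j, tau) with tau < 2 to the action agent i saw."""
    agents = sorted(N)
    weight = [0.0, 0.0]  # unnormalized posterior mass for theta = 0, 1
    for profile in product((0, 1), repeat=len(agents)):
        sig = dict(zip(agents, profile))
        if sig[i] != s_i:
            continue  # inconsistent with i's own signal
        a = simulate_actions(sig)
        if any(a[key] != v for key, v in observed.items()):
            continue  # impossible signal profile: eliminated
        for th in (0, 1):
            pr = 1.0
            for j in agents:
                pr *= P[j][th] if sig[j] == 1 else 1 - P[j][th]
            weight[th] += pr
    return weight[1] / (weight[0] + weight[1])

# Agent 0 with signal 0 sees both neighbors play 1 at t = 0 and t = 1;
# the only surviving profile has s_1 = s_2 = 1.
post = eis_posterior(0, 0, {(1, 0): 1, (2, 0): 1, (1, 1): 1, (2, 1): 1})
```

Here the elimination step is what makes the search exponential in general: agent 0 must consider every profile of signals her neighbors' neighbors might hold.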
In another important example, which we call the revealed belief model, the agents reveal their complete posteriors, i.e., μ_{i,t}. Formally, let Θ := {θ_1, ..., θ_m} and let e_j ∈ ℝ^m be the column vector of all zeros except for its j-th element, which is equal to one. Furthermore, we relax the requirement that the action spaces A_i are finite sets; instead, for each agent i ∈ [n], let A_i be the m-dimensional probability simplex: A_i = {(x_1, ..., x_m)^T ∈ ℝ^m : Σ_{k=1}^m x_k = 1 and x_k ≥ 0 for all k}. If the utility assigned to an action a := (a_1, ..., a_m)^T ∈ A_i and a state θ_j ∈ Θ is the negative of the squared Euclidean distance between a and e_j, then it is optimal for agent i to reveal her belief a_{i,t} = (μ_{i,t}(θ_1), ..., μ_{i,t}(θ_m))^T. We can state a special case of the GROUP-DECISION problem in the revealed belief setting:

Problem 2 (GROUP-DECISION with revealed beliefs). At any time t, given the graph structure G, agent i and the observation history h_{i,t}, determine the Bayesian posterior belief μ_{i,t}.

2.2. Log-Likelihood Ratio and Log-Belief Ratio Notations
Consider a finite state space Θ = {θ_1, ..., θ_m} and for all 2 ≤ k ≤ m and s ∈ S_i, let:

    λ_i(s, θ_k) := log( P_{i,θ_k}(s) / P_{i,θ_1}(s) ),   φ_{i,t}(θ_k) := log( μ_{i,t}(θ_k) / μ_{i,t}(θ_1) ),   γ(θ_k) := log( ν(θ_k) / ν(θ_1) ).    (2)

We will also write λ_i(θ_k) := λ_i(s_i, θ_k). We will call λ_i the (signal) log-likelihood ratio and φ_{i,t} the log-belief ratio. If we assume that the agents start from uniform prior beliefs and the size of the state space is m = 2 (as will be the case for the hardness results in Section 3), we can employ a simpler notation. First, with uniform priors, we have γ(θ_k) = log( ν(θ_k)/ν(θ_1) ) = 0 for all k.
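The notation in (2) loses no information: a belief determines its log-belief ratios and vice versa. A small numeric companion (toy probabilities ours), with θ_1 as the reference state:

```python
from math import log, exp

def log_belief_ratios(mu):
    """phi(theta_k) = log(mu(theta_k) / mu(theta_1)) for k = 2..m,
    per Eq. (2); `mu` is a probability vector with mu[0] for theta_1."""
    return [log(mu[k] / mu[0]) for k in range(1, len(mu))]

def belief_from_ratios(phi):
    """Invert the log-belief ratios back to the probability vector:
    unnormalized weights (1, e^{phi_2}, ..., e^{phi_m}), renormalized."""
    w = [1.0] + [exp(x) for x in phi]
    z = sum(w)
    return [x / z for x in w]

# With a uniform prior (the setting of Section 3), all ratios vanish,
# matching gamma(theta_k) = 0 for all k.
uniform_ratios = log_belief_ratios([1.0 / 3, 1.0 / 3, 1.0 / 3])
```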
Moreover, with binary state space Θ = {0, 1} we only need to keep track of one log-belief and one log-likelihood ratio, λ_i := λ_i(1) = log( P_{i,1}(s_i) / P_{i,0}(s_i) ) and φ_{i,t} := φ_{i,t}(1) = log( μ_{i,t}(1) / μ_{i,t}(0) ). Henceforth, we use λ_i and φ_{i,t}, as there is no risk of confusion in dropping their arguments. Note that in the setting with binary state and signals (S_i = {0, 1}), there is a one-to-one correspondence between informative signal structures satisfying P_{i,0}(1) ≠ P_{i,1}(1) and log-likelihood ratios satisfying λ_i(0) · λ_i(1) < 0. Accordingly, we sometimes use log-likelihood ratios to specify signal structures.

Example 1 (Belief Exchange in the First Two Rounds). To give some intuition about our model and illustrate the usefulness of the log-likelihood ratio and log-belief ratio notations, we explain how the agents in the binary action model can compute their actions at t = 0 and t = 1. We consider informative binary private signals s_i ∈ {0, 1} with P_{i,1}(1) > P_{i,0}(1). We focus on computing the log-belief ratio φ_{i,t}, since a_{i,t} = 1 if, and only if, φ_{i,t} > 0. At time zero, the posterior and log-belief ratio of agent i are determined by her private signal, as follows:

    μ_{i,0}(1) = P_{i,1}(s_i) / ( P_{i,0}(s_i) + P_{i,1}(s_i) ),   φ_{i,0} = log( P_{i,1}(s_i) / P_{i,0}(s_i) ).

Therefore, we get a_{i,0} = s_i, since P_{i,1}(1) > P_{i,0}(1). At time one, agent i observes the actions, and therefore infers the private signals, of her neighbors. Since the private signals are conditionally independent, the respective log-likelihood ratios add up and we get the following expression (recall that i ∈ N_i):

    φ_{i,1} = Σ_{j ∈ N_i} φ_{j,0} = Σ_{j ∈ N_i} log( P_{j,1}(a_{j,0}) / P_{j,0}(a_{j,0}) ) = Σ_{j ∈ N_i} λ_j.    (3)

However, the computation becomes significantly more involved at later times.
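Before moving on, Example 1 can be replayed numerically. In this sketch the signal structures and the neighborhood are toy values of ours; each neighbor j is described by the pair (P_{j,0}(1), P_{j,1}(1)), and the time-one log-belief ratio is the sum of the revealed log-likelihood ratios, as in Eq. (3):

```python
from math import log

def llr(p0, p1, s):
    """lambda_j for an observed signal s, given P_{j,0}(1)=p0, P_{j,1}(1)=p1."""
    return log(p1 / p0) if s == 1 else log((1 - p1) / (1 - p0))

def phi_at_time_one(structures, signals):
    """phi_{i,1} = sum of the neighbors' log-likelihood ratios (Eq. (3));
    both lists are indexed by the neighbors in N_i (including i herself)."""
    return sum(llr(p0, p1, s) for (p0, p1), s in zip(structures, signals))

# Toy neighborhood of three agents, all with P_{j,1}(1) = 0.7 > 0.4 = P_{j,0}(1),
# whose time-zero actions reveal signals 1, 1, 0:
structures = [(0.4, 0.7)] * 3
phi = phi_at_time_one(structures, [1, 1, 0])
action = 1 if phi > 0 else 0  # a_{i,1} = 1 iff phi_{i,1} > 0
```

Two positive signals outweigh one negative here (log(0.7/0.4) > |log(0.3/0.6)| does not hold termwise, but the sum is positive), so the agent plays 1.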
The difficulty is that one needs to account for dependencies and redundancies in the agents' information and the resulting actions.

Figure 2: (A) Illustration of the VERTEX-COVER reduction (Theorem 1); every edge ε_j is connected to its two vertices, and every vertex is connected to all its incident edges. (B) Illustration of the EXACT-COVER reduction (Theorem 2); every element ε_j belongs to exactly three sets and every set τ_j contains exactly three elements.

3. Hardness of Bayesian Decisions
Our hardness results use a standard approach from complexity theory; cf., e.g., Arora and Barak (2009). We establish NP-hardness of computations in both the binary action and revealed belief models. We do so by exhibiting reductions from problems that are known to be NP-hard. As shown below, two covering problems, vertex cover and set cover, turn out to be convenient starting points for our reductions. We now present our main hardness results.

Theorem 1 (Binary Action Model). The GROUP-DECISION problem in the binary action model is NP-hard at t = 2. Furthermore, for a network of n Bayesian agents in the binary action model, it is NP-hard to distinguish between posterior beliefs μ_{i,2}(0) < exp(−Ω(n)) and μ_{i,2}(1) < exp(−Ω(n)).

Proof sketch of Theorem 1. Appendix A contains a detailed proof. Our reduction is from an NP-hard problem of approximating vertex cover (VERTEX-COVER). A vertex cover of an undirected graph Ĝ_{m,n} with n vertices and m edges is a subset of vertices (denoted by Σ̂) such that each edge touches at least one vertex in Σ̂. We consider the approximation version of VERTEX-COVER, where every input graph belongs to one of two cases: (i) the YES case, where it has at least one small vertex cover (say, smaller than 0.85n); (ii) the NO case, where all its vertex covers are large (say, larger than 0.999n).
It is NP-hard to distinguish between these two cases. We show an efficient reduction that maps a graph Ĝ_{m,n} to an instance of GROUP-DECISION in the binary action model. We encode the structure of Ĝ_{m,n} by a two-layer network, where the first layer is comprised of "vertex agents", which are connected to "edge agents" in the second layer based on the incidence relations in Ĝ_{m,n} (see Figure 2A). We let the vertex agents τ_1, ..., τ_n receive Bernoulli private signals with signal structure given by p := P_{τ_i,1}(1) = 0.4 and p̄ := P_{τ_i,0}(1) = 0.3. Each edge agent ε_j observes the two vertex agents corresponding to its incident vertices in Ĝ_{m,n}. The private signals of edge agents are uninformative. We can verify that since p(1 − p) = 0.24 > 0.21 = p̄(1 − p̄), an edge agent ε_j takes action one at time one (a_{ε_j,1} = 1) if, and only if, at least one of the two neighboring vertex agents τ_i receives the private signal s_{τ_i} = 1. Agent i (whose decision we show to be NP-hard) receives an uninformative private signal and observes all edge agents as well (see Figure 2A). To complete the reduction, we need to specify the observation history of agent i, and we do so by saying that all edge agents announce action one at time one, a_{ε_j,1} = 1. By our previous observation, this is equivalent to saying that the private signals of the vertex agents form a vertex cover of Ĝ_{m,n}. The crux of the proof is in showing the following property:
• If every vertex cover of Ĝ_{m,n} has size at least 0.999n, then agent i concludes that at least 0.999n of the vertex agents' private signals are ones.
• On the other hand, if Ĝ_{m,n} has a vertex cover of size at most 0.85n, then agent i concludes that, almost certainly, at most 0.998n of the private signals are ones.
The first statement is clear.
However, if there exists a vertex cover of size 0.85n, the private signals might come from this small vertex cover just as well as from any of the larger covers. Since p̄ = 0.3 and p = 0.4, the size of any vertex cover is much larger than the expected number of ones among the private signals, regardless of the state θ. One could hope that the concentration of measure would imply that seeing a smaller vertex cover is relatively much more likely, even if there is a significantly greater total number of large vertex covers. In Appendix A, we use a Chernoff bound to conclude that this is indeed the case, and agent i can infer that, almost certainly, the private signals form a vertex cover of size at most 0.998n.

After establishing that it is NP-hard to distinguish between at least 0.999n ones and at most 0.998n ones among the private signals, our construction concludes with a simple trick. We will explain the idea assuming a gap between 0.8n and 0.6n instead of the inconveniently close 0.999n and 0.998n. The complete details are provided in Appendix A. Assume that agent i additionally observes another agent κ. Agent κ does not observe anyone and reveals to agent i a very strong, independent private signal equivalent to n signals of vertex agents, all of them with value zero. If agent i is in the case where at least 0.8n vertex signals are ones, then her total observed signal strength is equal to at least 0.8n ones out of 2n total, i.e., at least 40% of all signals are ones. Given that p = 0.4 = 40%, agent i concludes that almost certainly θ = 1, i.e., μ_{i,2}(0) ≈ 0. On the other hand, in the case where (almost certainly) at most 0.6n vertex signals are ones, the total signal strength is at most 0.6n out of 2n, i.e., 30% of possible signals, and, recalling p̄ = 0.3 = 30%, agent i concludes that μ_{i,2}(1) ≈ 0.
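The arithmetic behind this construction is easy to check numerically. The following sketch is our own verification, not code from the paper: it confirms the edge-agent rule (action one iff at least one incident signal is one, since p(1 − p) > p̄(1 − p̄)) and the sign of agent i's total log-likelihood once κ's n zero-equivalents are added.

```python
from math import log

p, p_bar = 0.4, 0.3  # P(signal = 1 | theta = 1), P(signal = 1 | theta = 0)
lam1 = log(p / p_bar)              # log-likelihood ratio of a one-signal
lam0 = log((1 - p) / (1 - p_bar))  # log-likelihood ratio of a zero-signal

def edge_action(s1, s2):
    """Edge agent at t = 1: action one iff the summed log-likelihood ratio
    of its two incident vertex signals is positive."""
    total = (lam1 if s1 else lam0) + (lam1 if s2 else lam0)
    return 1 if total > 0 else 0

# Action one iff at least one incident signal is one:
assert [edge_action(a, b) for a, b in ((0, 0), (0, 1), (1, 0), (1, 1))] == [0, 1, 1, 1]
assert p * (1 - p) > p_bar * (1 - p_bar)  # 0.24 > 0.21, as in the proof sketch

def total_log_odds(k, n):
    """Agent i's evidence: k ones among the n vertex signals, plus the n
    zero-equivalents revealed by kappa (2n conditionally i.i.d. signals)."""
    return k * lam1 + (2 * n - k) * lam0

n = 1000
assert total_log_odds(int(0.8 * n), n) > 0  # >= 40% ones: theta = 1 w.h.p.
assert total_log_odds(int(0.6 * n), n) < 0  # <= 30% ones: theta = 0 w.h.p.
```

The two final assertions grow linearly in n (roughly +0.045n and −0.043n), which matches the exponentially concentrated posteriors in the statement of Theorem 1.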
Remark 1. A priori, one might suspect that the difficulty of distinguishing between a_{i,2} = 0 and a_{i,2} = 1 arises only if the belief of agent i is very close to the threshold μ_{i,2} ≈ 1/2. However, in our reduction the opposite is true: For a computationally bounded agent, it is hopeless to distinguish between worlds where θ = 0 with high probability (w.h.p.) and θ = 1 w.h.p. This can be thought of as a strong hardness-of-approximation result.

We also have a matching result for the revealed belief model:

Theorem 2 (Approximating Beliefs). The GROUP-DECISION problem is NP-hard in the revealed belief model with uniform priors, binary states Θ = {0, 1}, and binary private signals s_i ∈ {0, 1}. In particular, for a network of n Bayesian agents at t = 2, it is NP-hard to distinguish between beliefs μ_{i,2}(0) ≤ exp(−Ω(n)) and μ_{i,2}(1) ≤ exp(−Ω(n)).

Proof sketch of Theorem 2. Appendix B contains a detailed proof. Our reduction is from a variant of the NP-complete problem EXACT-COVER. Let n be a multiple of three and consider a set of n elements Ê_n = {ε_1, ..., ε_n} and a family of n subsets of Ê_n denoted by T̂_n = {τ_1, ..., τ_n}, with τ_j ⊂ Ê_n for all j ∈ [n]. EXACT-COVER is the problem of deciding if there exists a collection T̂ ⊆ T̂_n that exactly covers Ê_n, that is, such that each element ε_i belongs to exactly one set in T̂. We use a restriction of EXACT-COVER where each set has size three and each element appears in exactly three sets; hence, if an exact cover exists, then it consists of n/3 sets. We use a two-layer network to encode the inclusion relations between the elements Ê_n and the subsets T̂_n. There are n agents τ_1, ..., τ_n in the first layer to encode the subsets and n agents ε_1, ..., ε_n in the second layer to encode the elements.
Each “element agent” observes the three “subset agents” corresponding to the subsets to which the element belongs (see Figure 2B). Agent i (whose decision we show to be NP-hard) observes the reported beliefs of all element agents. There is also one auxiliary agent κ that is observed by all element agents. The private signals of agent i and the element agents are non-informative. The subset agents observe i.i.d. binary signals, and the auxiliary agent κ observes another independent binary signal, but with a different distribution. We set up the signal structures and the beliefs transmitted by the element agents to agent i such that there are two possible outcomes: either s_κ = 0 and all subset agents received positive signals s_{τ_i} = 1; or s_κ = 1 and the private signals of the subset agents form an exact cover of the elements. Of course, the second alternative is possible only if an exact cover exists. The first alternative implies that all subset agents received ones as private signals, and therefore θ = 1 with high probability. In the case of the second alternative, we show that almost certainly only one-third of the subset agents received ones, and therefore θ = 0 with high probability. Therefore, if there is no exact cover, agent i should compute µ_{i,2}(0) ≈ 0, and otherwise µ_{i,2}(1) ≈ 0.

We conclude this section by discussing some aspects and limitations of our proof. We also examine the economic assumptions behind our results and discuss what happens when these assumptions are relaxed.

3.1. Worst-Case and Average-Case Reductions
Our reductions are worst-case, both with respect to networks and signal profiles. That is, we show hardness only for a specific class of networks, and for signal profiles in those networks that arise with exponentially small probability.
We cannot exclude the existence of an efficient algorithm that computes Bayesian beliefs for all network structures with high probability over signal profiles. Notwithstanding, any such purported algorithm must have a good reason to fail on our hard instances. This reflects a general phenomenon in computational complexity, where average-case hardness, even when suspected to hold, seems to be significantly more difficult to rigorously demonstrate (see Bogdanov et al. (2006) for one survey). We leave as a fascinating open problem whether our results can be improved, for example, to worst-case networks and average-case signal profiles. One thing to note in this regard is that our reductions encode the witnesses to NP problems (vertex and set covers) as signal profiles. That necessarily means that for hard positive instances (e.g., graphs with a small vertex cover) the relevant signal profiles arise only with tiny probability: otherwise these instances would be easy to solve by sampling a potential witness at random. Significant new ideas might be needed to overcome this problem. On the positive side, the worst-case nature of our hard instances makes it potentially easier to embed them in more general or modified settings. We discuss several concrete cases below.

3.2. Forward-looking Agents
Our results are restricted to myopic agents. In the general framework of forward-looking utility maximizers with discount factor δ, myopic agents are obtained as a special case by completely discounting the future pay-offs (δ → 0). The computational difficulties for strategic agents seem to be at least as large as for myopic agents, but we do not offer any formal results. Due to the multiplicity of equilibria suggested by the folk theorem (Fudenberg and Maskin 1986) — see also examples in Rosenberg et al. (2009) and Mossel et al.
(2015) — it is unclear to us how to make the computational problem well-posed. On the other hand, since in the limit t → ∞ the agents in any equilibrium act myopically (Rosenberg et al. 2009), it seems plausible to expect that their computations will be similarly hard as in our analysis.

Figure 3. (A) We can cancel out the effect of a distinct, non-uniform prior of an agent j by adding two auxiliary agents (κ_j and κ′_j) and having agent j observe only one of them. Both added agents are observed by every other agent. (B) We can replace the auxiliary agent κ in the VERTEX-COVER reduction by n_i agents ι_1, …, ι_{n_i} with zero signals drawn from the same i.i.d. distribution as the vertex agents. (C) We can replace the auxiliary agent κ in the EXACT-COVER reduction by five agents κ_1, …, κ_5 with i.i.d. signals and set up their received signals and the observation structure such that the signals of κ_1 and κ_3 necessarily agree. (D) We can modify the VERTEX-COVER reduction to work with noisy binary actions. Here each pair of vertex agents is observed by a collection of edge agents ε^(1), …, ε^(k), who all report the same noisy actions a′_{ε^(1),1} = … = a′_{ε^(k),1} = 1.

3.3. Directed Links
In both our reductions, we use directed acyclic graphs. This is arguably a simpler case from an inference viewpoint, since in networks containing cycles (including those with bidirectional links) an agent needs to take into account her own, possibly indirect, influence on her neighbors. Therefore, our hardness results hold true in spite of the (simpler) acyclic structure of our hard examples. In effect, our hardness results are applicable to undirected (bidirectional) networks without loss of generality.
The reason is that replacing directed links with bidirectional ones does not affect any relevant inferences in our reductions. In particular, our results apply to networks that exhibit agreement and learning, cf. Mossel et al. (2014). It is worth noting that since our results are achieved in a basic model with binary state and private signals, they can be easily embedded in richer settings, e.g., with signal structures given by continuous distributions.

3.4. Common Priors
The common prior assumption simplifies the belief calculations in our hard examples, but it does not play a critical role otherwise. In fact, we can argue that, similar to the directed links, the imposition of common priors on the agents simplifies their inference tasks. This is consistent with the fact that common priors are crucial for reaching agreement (Aumann 1976). We note that in the binary action model the computations of agents with arbitrary priors can be reduced to computations with uniform priors. One way to achieve this is as follows: for each agent j with a non-uniform prior ν_j we introduce two auxiliary agents κ_j and κ′_j with uniform priors. Agent κ_j is observed by everyone, including agent j, while agent κ′_j is observed by everyone except agent j (see Figure 3A). We then set the signal structures of agents κ_j and κ′_j such that (cf. (2)) λ_{κ_j}(1) = γ_j = −λ_{κ′_j}(0), and specify private signals s_{κ_j} = 1 and s_{κ′_j} = 0. One can verify that: (i) the signal of agent κ_j effectively shifts the prior of agent j to ν_j; (ii) since everyone observes κ_j, the fact that the prior of agent j has been shifted becomes common knowledge; (iii) no agent other than j shifts their belief after observing both κ_j and κ′_j.

3.5. I.I.D. Signals
Assuming that the private signals are (conditionally) i.i.d. is common in the social learning literature.
It often simplifies the analysis and provides a useful approximation for studying homogeneous populations. The signals in our reductions are not i.i.d., but this is only for convenience. In Appendix C, we explain how to modify our proofs to work with i.i.d. signals. To give a general idea, in each reduction there are two issues to deal with. First, the auxiliary agent κ receives a special private signal with a distribution that is different from any other agent's. In VERTEX-COVER, agent κ receives a very strong private signal that induces a log-belief ratio shift equivalent to n_i = cn zero vertex agent signals for some constant c > 0. Therefore, it is not surprising that we can replace κ by n_i agents with the signal structure of vertex agents, all reporting zero private signals (cf. Figure 3B). In EXACT-COVER, the auxiliary agent κ receives a special signal that is twice as strong as the subset agents' (its log-likelihood ratio is twice the subset agent signals'). We can use two i.i.d. signals to achieve the same effect, except that we need a mechanism to ensure that their signals agree (they are both zero, or both one). We can achieve this using five auxiliary agents, as shown in Figure 3C. The second issue is that agent i, as well as the edge agents in VERTEX-COVER and the element agents in EXACT-COVER, do not receive private signals. This can be remedied by an idea similar to the one presented in Subsection 3.4: we allow the agents to receive private signals which are then countervailed by matching opposite signals coming from auxiliary agents.

3.6. Noisy Actions
As discussed in Subsection 1.2, Aaronson (2005) shows that if two agents decide to add noise to their exchanged opinions, their rational beliefs can be approximated efficiently. This is an interesting model in its own right: a typical approach to bounded rationality needs to choose a rule for updating
beliefs, and any such choice is, to an extent, arbitrary. If, instead, it could be shown that “noisy” Bayesian updates are efficient, it would provide an interesting alternative. Notwithstanding, we show that adding noise does not change our hardness results for network models. For concreteness, we focus on a particular modification of the binary action model. However, we believe our ideas should work with most other natural variants. More precisely, we consider the binary action model with an additional parameter 0 < δ < 1/2. All the rules are the same, except that every time an agent broadcasts her opinion to the world, a glitch (bit flip) occurs with probability δ. In other words, every time agent i computes an action a_{i,t} = 1(µ_{i,t} > 1/2), its announced value a′_{i,t} is flipped to 1 − a_{i,t} independently with probability δ. We assume that all neighbors of i observe the same action (as opposed to flipping with probability δ independently for each neighbor). Since the networks that we consider are acyclic, it does not matter whether the agents observe their own actions, i.e., whether they learn that their actions were flipped. As before, all these rules are common knowledge and the agents estimate their beliefs µ_{i,t} using the Bayes rule. In Appendix D we show that estimating beliefs in this model is still NP-hard. The main idea is that an agent in the noiseless binary action model can be replaced with multiple copies of noisy agents broadcasting the same action in such a way that the probability of transmission error is negligible compared to the other probabilities that determine the computed beliefs (see Figure 3D).

4.
Algorithms for Bayesian Choice
Refinement of information partitions with increasing observations is a key feature of rational learning problems, and it is fundamental to major classical results that establish agreement (Geanakoplos and Polemarchakis (1982)) or learning (Blackwell and Dubins (1962), Lehrer and Smorodinsky (1996)) among rational agents. In the group decision setting, the list of possible signal profiles is regarded as the information set representing the current understanding of the agent about her environment, and additional observations are informative in that they trim the current information set, reducing the ambiguity about the set of initial signals that have caused the agent's history of past observations. Thereby, one can conceive a natural method of computing agents' actions based on elimination of impossible signals. By successively eliminating signals that are inconsistent with new observations, we refine the partitions of the space of private signals and, at the same time, keep track of the current information set that is consistent with the observations. As such, we refer to this approach as “Elimination of Impossible Signals” or EIS. We begin by presenting a recursive version (REIS), and study its iterative implementations (IEIS) afterwards.

To proceed, let s = (s_1, …, s_n) ∈ S_1 × … × S_n be any profile of initial signals, and denote the set of all private signal profiles that agent i regards as possible at time t, i.e., her information set at time t, by I_{i,t} ⊂ S_1 × … × S_n; this random set is a function of the observed history h_{i,t} and is fully determined by the random profile of all private signals s := (s_1, …, s_n). Recall that the observation history h_{i,t} is defined as {s_i} ∪ {a_{j,τ} for all j ∈ N_i and τ < t}.
Hence, I_{i,t} takes into account the neighboring actions at all times strictly less than t. Starting from I_{i,0} = {s_i} × ∏_{j≠i} S_j, at every step t > 0 agent i removes those signal profiles in I_{i,t−1} that are inconsistent with her observation history h_{i,t}, and constructs a censored set of signal profiles I_{i,t} ⊂ I_{i,t−1}. Recall that P_θ(·) is the joint distribution of the private signals of all agents. For each i and t, the set of possible signals I_{i,t} is mapped to a Bayesian posterior µ_{i,t} as follows:

  µ_{i,t}(θ) = [ Σ_{s ∈ I_{i,t}} P_θ(s) ν(θ) ] / [ Σ_{θ′ ∈ Θ} Σ_{s ∈ I_{i,t}} P_{θ′}(s) ν(θ′) ].   (4)

The posterior belief, in turn, enables the agent to choose an optimal (myopic) action given her observations:

  a_{i,t} = arg max_{a_i ∈ A_i} Σ_{θ′ ∈ Θ} u_i(a_i, θ′) µ_{i,t}(θ′).   (5)

It is convenient to define a function A_i that, given a set of possible signal profiles I ⊂ ∏_{j=1}^n S_j, outputs the optimal action of agent i as follows:

  A_i(I) = arg max_{a ∈ A_i} Σ_{θ′ ∈ Θ} u_i(a, θ′) [ Σ_{s′ ∈ I} P_{θ′}(s′) ν(θ′) ] / [ Σ_{θ″ ∈ Θ} Σ_{s′ ∈ I} P_{θ″}(s′) ν(θ″) ].   (6)

Crucially, in addition to her own possible set I_{i,t}, agent i keeps track of other agents' possible sets as well. Therefore, it is useful to consider the function I(j, t, s) that outputs the set of signal profiles that agent j considers possible at time t if the initial private signals are s. Subsequently, the action that agent j takes if the initial private signals are s is given by A_j(I(j, t, s)). Note that in the above notation, ∪_{s ∈ I_{i,t}} I(j, t, s) is the set of all signal profiles that agent i cannot yet conclude are rejected by agent j. Similarly, ∪_{s ∈ I_{i,t}} A_j(I(j, t, s)) is the list of all possible actions that agent j may currently take, from the viewpoint of agent i (consistent with agent i's observations so far).
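The mapping from an information set to a posterior belief and a myopic action, as in (4)-(6), can be sketched as follows. This is our own minimal illustration with a hypothetical two-agent instance (uniform prior, i.i.d. binary signals, and a simple state-matching utility); the paper's model allows arbitrary finite spaces.

```python
from itertools import product

states = [0, 1]
prior = {0: 0.5, 1: 0.5}       # ν(θ), uniform
p_one = {0: 0.3, 1: 0.4}       # P(s_j = 1 | θ), i.i.d. across agents

def profile_prob(theta, s):
    """Joint likelihood P_θ(s) of an i.i.d. binary signal profile s."""
    prob = 1.0
    for sig in s:
        prob *= p_one[theta] if sig == 1 else 1 - p_one[theta]
    return prob

def posterior(info_set):
    """Equation (4): posterior over states given the information set I."""
    weights = {t: prior[t] * sum(profile_prob(t, s) for s in info_set)
               for t in states}
    total = sum(weights.values())
    return {t: w / total for t, w in weights.items()}

def best_action(info_set, utility):
    """Equations (5)-(6): myopic action maximizing expected utility."""
    mu = posterior(info_set)
    return max([0, 1], key=lambda a: sum(utility(a, t) * mu[t] for t in states))

# Agent knows her own signal is 1; the other agent's signal is unresolved.
info = [s for s in product([0, 1], repeat=2) if s[0] == 1]
match_utility = lambda a, t: 1.0 if a == t else 0.0   # rewarded for guessing θ
print(posterior(info), best_action(info, match_utility))
```

With this utility the optimal action is simply the state with the larger posterior mass, which is how the binary action model's threshold rule a_{i,t} = 1(µ_{i,t} > 1/2) arises.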
Given A_j(I(j, t, s)) for all s ∈ I_{i,t−1} and every j ∈ N_i, agent i can reject any s for which the observed neighboring action a_{j,t} does not agree with the simulated action: reject any s such that a_{j,t} ≠ A_j(I(j, t, s)) for some j ∈ N_i. The function I(i, t, s) can be defined recursively by listing all signal profiles s′ that are consistent with s, producing the same observations for agent i up until time t. To check such consistencies, one needs to make additional function calls of the form I(j, τ, s′) for j ∈ N_i and τ < t. We formalize this idea in Algorithm 0 by offering a recursive implementation of the elimination of impossible signals to compute Bayesian actions (cf. Table 1 for a summary of the notation).

Table 1: Notation for Bayesian group decision computations (Elimination of Impossible Signals)
  s = (s_1, s_2, …, s_n) — a profile of initial private signals.
  I_{i,t} — the set of all signal profiles that are deemed possible by agent i, given her observations up until time t.
  I(j, t, s) — the set of all signal profiles that are deemed possible by agent j at time t, if the initial signals of all agents are prescribed according to s.
  A_j(I(j, t, s)) — the computed action of agent j at time t, if the initial signals of all agents are prescribed according to s.

Algorithm 0: RECURSIVE-EIS(i, t)
Input: graph G, set of possible signal profiles I_{i,t}, and neighboring actions a_{j,t}, j ∈ N_i.
Output: Bayesian action a_{i,t+1}.
1. Initialize I_{i,t+1} = I_{i,t}.
2. For all s ∈ I_{i,t+1}, do:
   • For all j ∈ N_i, if a_{j,t} ≠ A_j(I(j, t, s)), then set I_{i,t+1} = I_{i,t+1} \ {s}.
3. a_{i,t+1} = A_i(I_{i,t+1}).

Function I(i, t, s):
• If t = 0, then set I = {s_i} × ∏_{j≠i} S_j;
• else if t > 0:
  1. Initialize I = ∅.
  2. For all s′ ∈ S_1 × …
× S_n, do:
     — If Consistent(i, t, s, s′), then set I = I ∪ {s′}.
  return I.

Function Consistent(i, t, s, s′):
1. Initialize is_consistent = True.
2. For all τ < t and j ∈ N_i, do:
   • If A_j(I(j, τ, s)) ≠ A_j(I(j, τ, s′)), then is_consistent = False.
return is_consistent.

In Subsection 4.1, we describe an iterative implementation of elimination of impossible signals (IEIS). The IEIS calculations scale exponentially with the network size; this is true in general, with the exception of some densely connected networks where agents have direct access to all the observations of their neighbors. We expand on this special case (called transitive networks) in Subsection 4.2. Finally, in Subsection 4.3 we discuss the revealed beliefs case and identify additional network structures for which Bayesian calculations simplify, allowing for efficient Bayesian belief exchange.

4.1. Iterative Elimination of Impossible Signals (IEIS)
To proceed, we denote by N^τ_i the τ-th order neighborhood of agent i, comprising exactly those agents who are at distance τ from agent i; in particular, N^1_i = N_i, and we use the convention N^0_i = {i}. We further denote N̄^t_i := ∪_{τ=0}^{t} N^τ_i as the set of all agents who are within distance t of agent i; we sometimes refer to N̄^t_i as her ego-net of radius t. At time zero, agent i initializes her list of possible signals I_{i,0} = {s_i} × ∏_{j≠i} S_j. At time t, she has access to I_{i,t}, the list of possible signal profiles that are consistent with her observations so far, as well as all signal profiles that she thinks each of the other agents would regard as possible conditioned on any profile of initial signals: I(j, t − τ, s) for s ∈ S_1 × … × S_n, j ∈ N^τ_i, and τ ∈ [t] := {1, 2, …, t}.
Given the newly obtained information, which constitutes her observations of the most recent neighboring actions a_{j,t}, j ∈ N_i, she refines I_{i,t} to I_{i,t+1} and updates her belief and action accordingly, cf. (4) and (5). This is achieved as follows (we use dist(j, i) to denote the length of the shortest path connecting j to i):

Algorithm 1: IEIS(i, t)
Input: graph G, set of possible signal profiles I_{i,t}; I(j, τ, s) for all s, τ ∈ [t − dist(j, i)], j ∈ N̄^t_i; and neighboring actions a_{j,t}, j ∈ N_i.
Output: Bayesian action a_{i,t+1}.
• SIMULATE: for all s := (s_1, …, s_n) ∈ S_1 × … × S_n, do:
  1. For j ∈ N^{t+1}_i, initialize I(j, 0, s) = {s_j} × ∏_{k≠j} S_k.
  2. For τ = t, t − 1, …, 1, do:
     (a) For j ∈ N^τ_i, do:
         i. Initialize I(j, t + 1 − τ, s) = I(j, t − τ, s).
         ii. For s′ ∈ I(j, t + 1 − τ, s), do:
             — For all k ∈ N_j, if A_k(I(k, t − τ, s′)) ≠ A_k(I(k, t − τ, s)), then set I(j, t + 1 − τ, s) = I(j, t + 1 − τ, s) \ {s′}.
• UPDATE:
  1. Initialize I_{i,t+1} = I_{i,t}.
  2. For all s ∈ I_{i,t+1}, do:
     — For all j ∈ N_i, if a_{j,t} ≠ A_j(I(j, t, s)), then set I_{i,t+1} = I_{i,t+1} \ {s}.
  3. Set a_{i,t+1} = A_i(I_{i,t+1}).

Note that in the SIMULATE part of the IEIS algorithm, we make no use of the observations of agent i. This step amounts to simulating the network at all signal profiles. It is implemented such that the computations at time t are based on what was computed for making decisions prior to time t. In the UPDATE part, we compare the most recently observed actions of neighbors with their simulated actions for each signal profile in I_{i,t} to detect and eliminate the impossible ones. To evaluate the possibility of a signal profile using IEIS, agent i may need to consider actions that other agents could have taken in signal profiles that she has already rejected.
In particular, simulating the network at all possible profiles of agent i at time t, i.e., at all s ∈ I_{i,t}, is not enough to evaluate the condition A_k(I(k, t − τ, s′)) ≠ A_k(I(k, t − τ, s)) at step 2(a)ii of Algorithm 1 (SIMULATE), since s′ may not be included in I_{i,t}.

Table 2: Notation for Computations in Transitive Networks
  S_{i,t} — the list of all private signals that are deemed possible for agent i at time t, by an agent who has observed her actions in a transitive network structure up until time t.
  I_{i,t}(s_i) = {s_i} × ∏_{j ∈ N_i} S_{j,t} — the list of neighboring signal profiles that are deemed possible by agent i, given her observations of their actions up until time t, conditioned on her own private signal being s_i.

In Appendix E we describe the complexity of the computations that the agent should undertake using IEIS at any time t in order to calculate her posterior probability µ_{i,t+1} and Bayesian decision a_{i,t+1} given all her observations up to time t. Subsequently, we prove that:

Theorem 3 (Complexity of IEIS). Consider a network of size n with m states, and let M and A denote the maximum cardinality of the signal and action spaces (m := card(Θ), M = max_{k ∈ [n]} card(S_k), and A = max_{k ∈ [n]} card(A_k)). The IEIS algorithm has O(n² M^{2n−1} m A) running time and, given the private signal of agent i and the previous actions of her neighbors {a_{j,τ} : j ∈ N_i, τ < t} in any network structure, outputs a_{i,t}, the Bayesian action of agent i at a fixed time t.

4.2. IEIS over Transitive Structures
We now shift focus to the special case of transitive networks, defined below.

Definition 1 (Transitive Networks). We call a network structure transitive if the directed neighborhood relationship between its nodes satisfies the reflexive and transitive properties.
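As a quick sketch (our own illustration, not part of the paper's algorithms), the two properties of Definition 1 can be checked directly on an observation structure, where neighbors[i] is the set of agents whose actions agent i observes:

```python
def is_transitive(neighbors: dict) -> bool:
    """Check Definition 1: every agent observes herself (reflexive), and
    every neighbor of a neighbor is also a direct neighbor (transitive)."""
    for i, n_i in neighbors.items():
        if i not in n_i:
            return False
        for j in n_i:
            if not neighbors[j] <= n_i:   # N_j must be contained in N_i
                return False
    return True

# A directed path 3 -> 2 -> 1 where each agent also observes all successors.
transitive = {1: {1}, 2: {1, 2}, 3: {1, 2, 3}}
# Here agent 1 influences agent 3 only indirectly through agent 2.
not_transitive = {1: {1}, 2: {1, 2}, 3: {2, 3}}
assert is_transitive(transitive) and not is_transitive(not_transitive)
```

In the second structure, agent 3 must reason about agent 1's hidden action through agent 2, which is exactly the hidden-observation issue that transitivity removes.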
In particular, the transitive property implies that any agent whose actions indirectly influence the observations of agent i is also directly observed by her, i.e., any neighbor of a neighbor of agent i is a neighbor of agent i as well. This special structure of transitive networks mitigates the issue of hidden observations, and as a result, Bayesian inference in a transitive structure is significantly less complex. After initializing S_{j,0} = S_j and I_{i,0} = {s_i} × ∏_{j ∈ N_i} S_{j,0}, agent i needs only to keep track of S_{j,t} ⊆ S_j for all j ∈ N_i (cf. Table 2). This is because, in transitive structures, the list of possible signal profiles decomposes: I_{i,t} = {s_i} × ∏_{j ∈ N_i} S_{j,t}. Updating in transitive structures is achieved by incorporating a_{j,t} for each j ∈ N_i individually, and transforming the respective S_{j,t} into S_{j,t+1}. This updating procedure is formalized in Algorithm 2.

Algorithm 2: IEIS-TRANSITIVE(i, t)
Input: transitive graph G, sets of possible signals S_{j,t} for all j ∈ N_i, and neighboring actions a_{j,t}, j ∈ N_i.
Output: Bayesian action a_{i,t+1}.
1. For all j ∈ N_i, do:
   (a) Initialize S_{j,t+1} = S_{j,t}.
   (b) For all s_j ∈ S_{j,t+1}, do:
       i. Set I_{j,t}(s_j) = {s_j} × ∏_{k ∈ N_j} S_{k,t}.
       ii. If a_{j,t} ≠ A_j(I_{j,t}(s_j)), then set S_{j,t+1} = S_{j,t+1} \ {s_j}.
2. Update I_{i,t+1} = {s_i} × ∏_{j ∈ N_i} S_{j,t+1}.
3. Set a_{i,t+1} = A_i(I_{i,t+1}).

In Appendix F, we determine the computational complexity of the IEIS-TRANSITIVE algorithm as follows:

Theorem 4 (Efficient Bayesian group decisions in transitive structures).
Consider a network of size n with m states, and let M and A denote the maximum cardinality of the signal and action spaces (m := card(Θ), M = max_{k ∈ [n]} card(S_k), and A = max_{k ∈ [n]} card(A_k)). There exists an algorithm with running time O(A m n² M²) which, given the private signal of agent i and the previous actions of her neighbors {a_{j,τ} : j ∈ N_i, τ < t} in any transitive network, outputs a_{i,t}, the Bayesian action of agent i at time t.

4.3. Algorithms for Beliefs
In general, GROUP-DECISION with revealed beliefs is a hard problem per Theorem 2. Here, we introduce a structural property of networks, called “transparency”, which leads to efficient belief calculations in the revealed belief model. Recall that the t-radius ego-net of agent i, N̄^t_i, is the set of all agents who are within distance t of agent i. In a transparent network, the belief of every agent at time t aggregates the likelihoods of all private signals in their t-radius ego-net:

Definition 2 (Transparency). The graph structure G is transparent if for all agents i ∈ [n] and all times t we have φ_{i,t} = Σ_{j ∈ N̄^t_i} λ_j, for any choice of signal structures and all possible initial signals. Moreover, we call G transparent to agent i at time t if for all j ∈ N_i and every τ ≤ t − 1 we have φ_{j,τ} = Σ_{k ∈ N̄^τ_j} λ_k, for any choice of signal structures and all possible initial signals.

In any graph structure, the initial belief exchange between the agents reveals the likelihoods of the private signals of the neighboring agents (see Example 1 and equation (3) therein). Hence, from her observations of the beliefs of her neighbors at time zero, agent i learns all that she needs to know regarding their private signals:

Corollary 1 (Transparency at time one). All graphs are transparent at time one.
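To make the log-belief ratio bookkeeping behind Corollary 1 concrete, here is a small sketch (our own illustration, with hypothetical λ values and a hypothetical two-neighbor network) of how time-one beliefs aggregate the log-likelihood ratios in the 1-radius ego-net, φ_{i,1} = λ_i + Σ_{j ∈ N_i} λ_j:

```python
# Each agent reports her log-belief ratio φ. At time zero this is her own
# log-likelihood ratio λ; at time one she adds her neighbors' time-zero
# reports, so φ_{i,1} sums λ over her 1-radius ego-net.
def phi_time_one(lam: dict, neighbors: dict) -> dict:
    return {i: lam[i] + sum(lam[j] for j in neighbors[i]) for i in lam}

lam = {"i": 0.4, "j1": -0.2, "j2": 0.7}              # hypothetical λ values
neighbors = {"i": ["j1", "j2"], "j1": [], "j2": []}  # i observes j1 and j2
phi1 = phi_time_one(lam, neighbors)
assert abs(phi1["i"] - (0.4 - 0.2 + 0.7)) < 1e-12    # λ_i + λ_j1 + λ_j2
```

Since the time-zero reports reveal each neighbor's λ exactly, this aggregation is always achievable at time one, which is the content of Corollary 1; the difficulty begins at time two.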
However, the future neighboring beliefs (at time two and beyond) are “less transparent” when it comes to reflecting the neighbors' knowledge of other private signals received throughout the network. In particular, the time-one beliefs of the neighbors φ_{j,1}, j ∈ N_i, are given by φ_{j,1} = Σ_{k ∈ N̄^1_j} λ_k; hence, from observing the time-one belief of a neighbor, agent i only gets to know Σ_{k ∈ N_j} λ_k, rather than the individual values of λ_k for each k ∈ N_j.

Remark 2 (Transparency, statistical efficiency, and impartial inference). Agents j whose beliefs satisfy the equation in Definition 2 at some time τ are said to hold a transparent or efficient belief; the latter signifies the fact that such a belief coincides with the Bayesian posterior that agent j would hold if she were given direct access to the private signals of every agent in N̄^τ_j. This is indeed the best possible (or statistically efficient) belief that agent j can hope to form given the information available to her at time τ. The same connection to statistically efficient beliefs arises in the work of Eyster and Rabin (2014), who formulate the closely related concept of “impartial inference” in a model of sequential decisions by different players in successive rounds; accordingly, impartial inference ensures that the full informational content of all signals that influence a player's beliefs can be extracted, and players can fully (rather than partially) infer their predecessors' signals. In other words, under impartial inference, players' immediate predecessors provide “sufficient statistics” for earlier movers that are indirectly observed (Eyster and Rabin 2014, Section 3).
Last but not least, it is worth noting that statistical efficiency and impartial inference are properties of the posterior beliefs, and as such the signal structures may be designed so that statistical efficiency or impartial inference hold true for a particular problem setting; on the other hand, transparency is a structural property of the network and holds true for any choice of signal structures and all possible initial signals. Our next example helps clarify the concept of transparency as a structural graph property, and its relation to Bayesian belief computations.

Figure 4: Structures (B)-(D) are transparent, but (A) is not.

Example 2 (Transparent Structures). Figure 4 illustrates cases of transparent and non-transparent structures. All structures except (A) are transparent. To see how transparency is violated in (A), consider the beliefs of agent i: φ_{i,0} = λ_i and φ_{i,1} = λ_i + λ_{j_1} + λ_{j_2}. At time two, agent i observes the following reports: φ_{j_1,1} = λ_{j_1} + λ_{κ_1} + λ_{κ_2} and φ_{j_2,1} = λ_{j_2} + λ_{κ_2} + λ_{κ_3}. Knowing φ_{j_1,0} = λ_{j_1} and φ_{j_2,0} = λ_{j_2}, she can infer the values of the two sub-sums λ_{κ_1} + λ_{κ_2} and λ_{κ_2} + λ_{κ_3}, but there is no way for her to infer their total sum λ_{j_1} + λ_{j_2} + λ_{κ_1} + λ_{κ_2} + λ_{κ_3}. Agent i cannot hold a belief that efficiently aggregates all private signals at time two; hence, the first structure is not transparent. Here, it is instructive to exactly characterize the non-transparent Bayesian posterior belief of agent i at time two. At time two, agent i can determine the sub-sum λ_i + λ_{j_1} + λ_{j_2}, and her belief involves a search only over the profile of the signals of the remaining agents (s_{κ_1}, s_{κ_2}, s_{κ_3}). At time two, she finds all (s_{κ_1}, s_{κ_2}, s_{κ_3}) that agree with the additionally inferred sub-sums λ_{κ_1} + λ_{κ_2} and λ_{κ_2} + λ_{κ_3}.
If we use I_{i,2} to denote the set of all such triplets of feasible signals (s_{κ_1}, s_{κ_2}, s_{κ_3}), then we can express φ_{i,2} as follows:

  φ_{i,2} = λ_i + λ_{j_1} + λ_{j_2} + log [ Σ_{(s_{κ_1},s_{κ_2},s_{κ_3}) ∈ I_{i,2}} P_{κ_1,θ_2}(s_{κ_1}) P_{κ_2,θ_2}(s_{κ_2}) P_{κ_3,θ_2}(s_{κ_3}) / Σ_{(s_{κ_1},s_{κ_2},s_{κ_3}) ∈ I_{i,2}} P_{κ_1,θ_1}(s_{κ_1}) P_{κ_2,θ_1}(s_{κ_2}) P_{κ_3,θ_1}(s_{κ_3}) ],   (7)

where I_{i,2} = { (s_{κ_1}, s_{κ_2}, s_{κ_3}) : log[P_{κ_1,θ_2}(s_{κ_1}) / P_{κ_1,θ_1}(s_{κ_1})] + log[P_{κ_2,θ_2}(s_{κ_2}) / P_{κ_2,θ_1}(s_{κ_2})] = λ_{κ_1} + λ_{κ_2}, and log[P_{κ_2,θ_2}(s_{κ_2}) / P_{κ_2,θ_1}(s_{κ_2})] + log[P_{κ_3,θ_2}(s_{κ_3}) / P_{κ_3,θ_1}(s_{κ_3})] = λ_{κ_2} + λ_{κ_3} }.

We now move to the next structure (B). The ambiguity in determining λ_{κ_1} + λ_{κ_2} + λ_{κ_3} is resolved in (B) by simply adding a direct link so that agent κ_2 is directly observed by agent i. Subsequently, agent i holds an efficient posterior belief at time two: φ_{i,2} = λ_i + λ_{j_1} + λ_{j_2} + λ_{κ_1} + λ_{κ_2} + λ_{κ_3}. In (C), agent i observes the following reports of her neighbors: φ_{j_1,0} = λ_{j_1}, φ_{j_2,0} = λ_{j_2}, and φ_{j_1,1} = λ_{j_1} + λ_{κ_1} + λ_{κ_2}, and can use these observations at time two to solve for the sum of log-likelihood ratios of everybody's private signals: φ_{i,2} = λ_i + φ_{j_1,1} + φ_{j_2,0} = λ_i + λ_{j_1} + λ_{j_2} + λ_{κ_1} + λ_{κ_2}. Structure (D) is also transparent. At time two, agent i observes φ_{j_1,1} = λ_{j_1} + λ_{κ_1} + λ_{κ_2} and φ_{j_2,1} = λ_{j_2} + λ_{κ_3} + λ_{κ_4}, in addition to her own private signal λ_i. Her belief at time two is given by: φ_{i,2} = λ_i + φ_{j_1,1} + φ_{j_2,1} = λ_i + λ_{j_1} + λ_{j_2} + λ_{κ_1} + λ_{κ_2} + λ_{κ_3} + λ_{κ_4}.
At time three, agent $i$ adds $\phi_{j_1,2} = \phi_{j_1,1} + \lambda_l = \lambda_{j_1} + \lambda_{\kappa_1} + \lambda_{\kappa_2} + \lambda_l$ to her observations, and her belief at time three is given by:
$$\phi_{i,3} = \lambda_i + \phi_{j_1,1} + \phi_{j_2,1} + (\phi_{j_1,2} - \phi_{j_1,1}) = \lambda_i + \lambda_{j_1} + \lambda_{j_2} + \lambda_{\kappa_1} + \lambda_{\kappa_2} + \lambda_{\kappa_3} + \lambda_{\kappa_4} + \lambda_l.$$
This example illustrates a case where an agent learns the sum of log-likelihood ratios of signals of agents in her higher-order neighborhoods even though she cannot determine each log-likelihood ratio individually. In structure (D), agent $i$ learns $\{\lambda_i, \lambda_{j_1}, \lambda_{j_2}, \lambda_{\kappa_1} + \lambda_{\kappa_2}, \lambda_{\kappa_3} + \lambda_{\kappa_4}, \lambda_l\}$; in particular, she can determine the total sum of log-likelihood ratios of all of the signals in her extended neighborhood, but she never learns the values of the individual log-likelihood ratios $\{\lambda_{\kappa_1}, \lambda_{\kappa_2}, \lambda_{\kappa_3}, \lambda_{\kappa_4}\}$.

The following is a sufficient graphical condition for agent $i$ to hold an efficient (transparent) belief at time $t$: there is no agent $k \in \bar{N}_i^t$ with multiple paths to agent $i$, unless $k$ is among her neighbors (i.e., agent $k$ is directly observed by agent $i$).

Proposition 1 (Graphical Condition for Transparency). Agent $i$ will hold a transparent (efficient) Bayesian posterior belief at time $t$ if for any $k \in \bar{N}_i^t \setminus N_i$ there is a unique path from $k$ to $i$.

The graphical condition proposed above is only sufficient. For example, structures (C) and (D) in Example 2 violate this condition, despite both being transparent. We present the proof of Proposition 1 in Appendix G. We provide a constructive proof, showing how to compute the Bayesian posterior by aggregating the changes (innovations) in the updated beliefs of the neighbors and using the information about beliefs of agents with multiple paths to correct for redundancies. Accordingly, for structures that satisfy the sufficient condition for transparency, we obtain a simple (and efficient) algorithm for updating beliefs, by setting the total innovation at every step equal to the sum of the most recent innovations observed at each of the neighbors, correcting for those neighbors that are being double-counted. We define innovations as the change in the observed log-belief ratio of an agent between two consecutive steps, $\hat{\phi}_{i,t} := \phi_{i,t} - \phi_{i,t-1}$, and initialize them with $\hat{\phi}_{i,0} := \phi_{i,0} = \lambda_i$.

Algorithm 3: CORRECTED-INNOVATIONS$(i, t)$
Input: Graph $\mathcal{G}$ satisfying Proposition 1, $\phi_{i,t}$, and $\hat{\phi}_{j,t}$, $j \in N_i$.
Output: Posterior log-belief ratio $\phi_{i,t+1}$.
1. AGGREGATE: $\hat{\phi}_{i,t+1} = \sum_{j \in N_i} \big[\, \hat{\phi}_{j,t} - \sum_{k \in N_i \cap N_j^t} \phi_{k,0} \big]$,
2. UPDATE: $\phi_{i,t+1} = \phi_{i,t} + \hat{\phi}_{i,t+1}$.

Note that the transitive networks introduced in Subsection 4.2, by definition, satisfy the sufficient condition of Proposition 1. Our next corollary summarizes this observation.

Corollary 2 (Transitivity is sufficient for transparency). All transitive networks are transparent.

Complete graphs are transitive, and therefore transparent. Directed paths and rooted trees are other classes where Bayesian belief exchange is efficient, since they satisfy the sufficient condition of Proposition 1. These special cases are explained next.

Example 3 (Complete graphs, directed paths, and rooted trees). Complete graphs are a special case where every agent gets to know the likelihoods of the private signals of all other agents at time one. Subsequently, every agent in a complete graph holds an efficient belief at time two. Directed paths and rooted (directed) trees are other classes of transparent structures which satisfy the sufficient structural condition of Proposition 1. Indeed, in the case of a rooted tree, for any agent $k$ that is indirectly observed by agent $i$ there is a unique path connecting $k$ to $i$.
As such, the correction terms for the sum of innovations in Algorithm 3 are always zero. Hence, for rooted trees we have $\hat{\phi}_{i,t+1} = \sum_{j \in N_i} \hat{\phi}_{j,t}$: the innovation at each step is equal to the total of the innovations observed in all the neighbors.

4.3.1. Efficient Belief Calculations in Transparent Structures Here we describe the calculations of a Bayesian agent in a transparent structure. Since the network is transparent to agent $i$, she has access to the following information from the beliefs that she has observed in her neighbors at times $\tau \le t$, before deciding her belief for time $t+1$:
• Her own signal $s_i$ and its log-likelihood ratio $\lambda_i$.
• Her observations of the neighboring beliefs: $\{\mu_{j,\tau} : j \in N_i, \tau \le t\}$.
Due to transparency, the neighboring beliefs reveal the following information about sums of log-likelihood ratios of private signals of subsets of other agents in the network: $\sum_{k \in \bar{N}_j^\tau} \lambda_k = \phi_{j,\tau}$, for all $\tau \le t$ and any $j \in N_i$. To decide her belief, agent $i$ constructs the following system of linear equations in $\mathrm{card}(\bar{N}_i^{t+1}) + 1$ unknowns, $\{\lambda_j : j \in \bar{N}_i^{t+1}\}$ and $\phi^\star$, where $\phi^\star = \sum_{j \in \bar{N}_i^{t+1}} \lambda_j$ is the best possible (statistically efficient) belief for agent $i$ at time $t+1$:
$$\begin{cases} \sum_{k \in \bar{N}_j^\tau} \lambda_k = \phi_{j,\tau}, & \text{for all } \tau \le t \text{ and any } j \in N_i, \\ \sum_{j \in \bar{N}_i^{t+1}} \lambda_j - \phi^\star = 0. \end{cases} \qquad (8)$$
Note that (8) lists all the information available to agent $i$ when forming her belief in a transparent structure. Hence, transparency is in fact a statement about the linear system of equations in (8): in transparent structures $\phi^\star$ can be determined uniquely by solving the linear system (8). Hence, $\phi_{i,t+1} = \phi^\star$ is not only statistically efficient but also computationally efficient.
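As a concrete illustration, the linear system (8) can be set up and solved numerically. The sketch below encodes structure (C) of Example 2 at time two, with unknowns $(\lambda_{j_1}, \lambda_{j_2}, \lambda_{\kappa_1}, \lambda_{\kappa_2}, \phi^\star)$ and agent $i$'s own known $\lambda_i$ moved to the right-hand side; the numerical log-likelihood ratios are illustrative values of our own, not from the paper. Even though $\lambda_{\kappa_1}$ and $\lambda_{\kappa_2}$ are individually underdetermined, every solution of the system shares the same $\phi^\star$, so a least-squares solve recovers it:

```python
import numpy as np

# Unknowns: x = (lambda_j1, lambda_j2, lambda_k1, lambda_k2, phi_star).
# Hypothetical log-likelihood ratios (illustrative values only):
lam_i, lam_j1, lam_j2, lam_k1, lam_k2 = 0.5, 0.2, -0.1, 0.3, 0.4

# Reports observed by agent i in structure (C) of Example 2:
phi_j1_0 = lam_j1                    # = 0.2
phi_j2_0 = lam_j2                    # = -0.1
phi_j1_1 = lam_j1 + lam_k1 + lam_k2  # = 0.9

# Rows of the system (8); the last row encodes
# lam_j1 + lam_j2 + lam_k1 + lam_k2 - phi_star = -lam_i.
A = np.array([
    [1, 0, 0, 0, 0],    # phi_{j1,0} = lam_j1
    [0, 1, 0, 0, 0],    # phi_{j2,0} = lam_j2
    [1, 0, 1, 1, 0],    # phi_{j1,1} = lam_j1 + lam_k1 + lam_k2
    [1, 1, 1, 1, -1],   # sum of all lambdas minus phi_star = -lam_i
], dtype=float)
b = np.array([phi_j1_0, phi_j2_0, phi_j1_1, -lam_i])

# The system is underdetermined in (lam_k1, lam_k2), but phi_star is
# pinned down: the null space has a zero component in the phi_star
# coordinate, so every solution (here the minimum-norm one) agrees on it.
x, *_ = np.linalg.lstsq(A, b, rcond=None)
phi_star = x[-1]  # equals lam_i + lam_j1 + lam_j2 + lam_k1 + lam_k2 = 1.3
```

Gauss-Jordan elimination over the report equations works equally well; the point is only that in a transparent structure everything agent $i$ knows is linear in the unknown log-likelihood ratios.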
For a transparent structure, the complexity of determining the Bayesian posterior belief at time $t+1$ is the same as the complexity of performing Gauss-Jordan elimination, which is $O(n^3)$, for solving the $t \cdot \mathrm{card}(N_i)$ equations in $\mathrm{card}(\bar{N}_i^{t+1})$ unknowns. Note that here we make no attempt to optimize these computations beyond the fact that their growth is polynomial in $n$.

Corollary 3 (Efficient Computation of Transparent Beliefs). Consider the revealed belief model of opinion exchange in transparent structures. There is an algorithm that runs in polynomial time and computes the Bayesian posteriors in transparent structures.

In general non-transparent cases, the neighboring beliefs are highly non-linear functions of the log-likelihood ratios (see, e.g., (7)), and the above forward-reasoning approach can no longer be applied. Indeed, when transparency is violated, beliefs represent which signal profiles agents regard as possible, rather than what the agents know about the log-likelihood ratios of the signals of others whom they have directly or indirectly observed. In particular, the agent cannot use the reported beliefs of her neighbors directly to make inferences about the original causes of those reports, which are the private signals. Instead, to keep track of the possible signal profiles that are consistent with her observations, the agent employs a version of the IEIS algorithm of Subsection 4.1 that is tailored to the case of revealed beliefs.

5. Conclusions, Open Problems, and Future Directions

We proved hardness results for computing Bayesian actions and approximating posterior beliefs in a model of decision making in groups (Theorems 1 and 2). We also discussed a few generalizations and limitations of those results.
We further augmented these hardness results by offering special cases where Bayesian calculations simplify and efficient computation of Bayesian actions and posterior beliefs is possible (transitive and transparent networks).

A potentially challenging research direction is to develop a satisfactory theory of rational information exchange in light of computational constraints. It would be interesting to reconcile more fully our negative results with the more positive picture presented by Aaronson (2005). Less ambitiously, a more exact characterization of computational hardness for different network and utility structures is certainly possible. Development of an average-case complexity result would be particularly interesting and relevant.

Another major direction is to investigate other configurations and structures for which the computation of Bayesian actions is achievable in polynomial time; in particular, to develop tight conditions on the network structure that are both necessary and sufficient for transparency. It is also of interest to know the quality of information aggregation, i.e., under what conditions on the signal structure and network topology Bayesian actions coincide with the best action given the aggregate information of all agents.

Appendix A: Proof of Theorem 1 (VERTEX-COVER Reduction)

Our reduction is from hardness of approximation for the vertex cover problem.

Definition 3 (Vertex Cover of a Graph). Given a graph $\hat{G}_{m,n} = (\hat{V}, \hat{E})$ with $|\hat{E}| = m$ edges and $|\hat{V}| = n$ vertices, a vertex cover $\hat{\Sigma}$ is a subset of vertices such that every edge of $\hat{G}_{m,n}$ is incident to at least one vertex in $\hat{\Sigma}$. Let $\hat{\Xi}$ denote the set of all vertex covers of $\hat{G}_{m,n}$.

Theorem 5 (Hardness of approximation of VERTEX-COVER, Khot et al. (2018)). Given a simple graph $\hat{G}_{m,n}$ with $n$ vertices and $m$ edges, it is NP-hard to distinguish between:
• YES case: there exists a vertex cover $\hat{\Sigma}$ of size $|\hat{\Sigma}| \le 0.85n$.
• NO case: each vertex cover $\hat{\Sigma}$ has size $|\hat{\Sigma}| > 0.999n$.

Theorem 5 follows from recent works on the two-to-two conjecture, culminating in Khot et al. (2018). For completeness, we note that for every $\varepsilon > 0$ the constants can be improved to $\sqrt{2}/2 + \varepsilon$ in the YES case and $1 - \varepsilon$ in the NO case. We now restate Theorem 1 more formally:

Theorem 6. There exists a polynomial-time reduction that maps a graph $\hat{G}_{m,n}$ onto an instance of GROUP-DECISION in the binary action model where:
• There are $n + m + 2$ agents and the time is set to $t = 2$.
• For every agent $j$, her signal structure consists of efficiently computable numbers that satisfy the following:
$$\exp(-O(n)) < P_{j,\theta}(1) < 1 - \exp(-O(n)). \qquad (9)$$
Furthermore, letting $i$ be the agent specified in the reduction:
• If $\hat{G}_{m,n}$ has a vertex cover of size at most $0.85n$, then the belief of $i$ at time two satisfies $\mu_{i,2}(1) < \exp(-\Omega(n))$.
• If all vertex covers of $\hat{G}_{m,n}$ have size at least $0.999n$, then the belief of $i$ satisfies $\mu_{i,2}(0) < \exp(-\Omega(n))$.

Consider a graph input to the vertex cover problem, $\hat{G}_{m,n}$, with $m$ edges and $n$ vertices. We encode the structure of $\hat{G}_{m,n}$ by a two-layer network, with $n$ vertex agents $\tau_1, \ldots, \tau_n$ and $m$ edge agents $\varepsilon_1, \ldots, \varepsilon_m$. Each edge agent observes the two vertex agents corresponding to its incident vertices in $\hat{G}_{m,n}$ (see Figure 2A). Each vertex agent $\tau$ receives a private binary signal $s_\tau$ such that:
$$P_{\tau,1}(1) = P_1\{s_\tau = 1\} = 0.4 =: \overline{p}, \qquad P_{\tau,0}(1) = 0.3 =: \underline{p},$$
where we use the notation $P_\theta\{\cdots\}$ to denote the probability of an event conditioned on the value of the state $\theta$. The network also contains two more agents that we call $i$ and $\kappa$.
Agent $\kappa$ does not observe any other agents, while agent $i$ observes $\kappa$ and all edge agents. We analyze the decision problem of agent $i$ at time $t = 2$. We assume that agent $i$ and the edge agents $\varepsilon_1, \ldots, \varepsilon_m$ receive non-informative private signals. The signal structure of agent $\kappa$ will be specified later. We give the observation history of agent $i$ as follows: all edge agents claim $a_{\varepsilon_j,1} = 1$ and $\kappa$ claims $a_{\kappa,0} = 0$. That concludes the description of the reduction. Clearly, the reduction is computable in polynomial time, and the signal structures satisfy (9), except for agent $\kappa$, which we will check soon. In the rest of the proof, we show that graphs with small vertex covers map onto networks where agent $i$ puts a tiny belief on state one, and graphs with only large vertex covers map onto networks where agent $i$ concentrates her belief on state one.

Consider any edge agent $\varepsilon$ and let $\varepsilon(1)$ and $\varepsilon(2)$ be the vertex agents whose actions are observed by $\varepsilon$. Recalling Example 1, we know that for any vertex agent $\tau$ her log-belief ratio at time zero is determined by her private signal, $\phi_{\tau,0} = \lambda_\tau$, and consequently $a_{\tau,0} = s_\tau$. Furthermore, by (3), the belief $\mu_{\varepsilon,1}$ and log-belief ratio $\phi_{\varepsilon,1}$ are determined by the neighboring actions (and private signals) $a_{\varepsilon(1),0} = s_{\varepsilon(1)}$ and $a_{\varepsilon(2),0} = s_{\varepsilon(2)}$. Clearly, if $s_{\varepsilon(1)} = s_{\varepsilon(2)}$, then $\varepsilon$ broadcasts a matching action $a_{\varepsilon,1} = s_{\varepsilon(1)} = s_{\varepsilon(2)}$. On the other hand, if $s_{\varepsilon(1)} \neq s_{\varepsilon(2)}$, then the belief of $\varepsilon$ is given by:
$$\mu_{\varepsilon,1}(1) = \frac{\overline{p}(1-\overline{p})}{\overline{p}(1-\overline{p}) + \underline{p}(1-\underline{p})} = \frac{(0.4)(0.6)}{(0.4)(0.6) + (0.3)(0.7)} > \frac{1}{2},$$
and therefore $a_{\varepsilon,1} = 1$ whenever $a_{\varepsilon(1),0} \neq a_{\varepsilon(2),0}$. To sum up, we have:

Fact 1. $a_{\varepsilon,1} = \mathbf{1}\{s_{\varepsilon(1)} = 1 \text{ or } s_{\varepsilon(2)} = 1\}$.

The following observation immediately follows from Fact 1 and relates our GROUP-DECISION instance to vertex covers of $\hat{G}_{m,n}$:

Fact 2. Define a random variable $\Sigma$ as $\Sigma := \{\tau \in \hat{V} : s_\tau = 1\}$. Then, $\Sigma$ is a vertex cover of the graph $\hat{G}_{m,n} = (\hat{V}, \hat{E})$ if, and only if, $a_{\varepsilon,1} = 1$ for all $\varepsilon \in \hat{E}$.

Recall that we are interested in the decision problem of agent $i$ at time two, given that she has observed $a_{\varepsilon,1} = 1$ for all $\varepsilon \in \hat{E}$, i.e., she has learned that the private signals of the vertex agents form a vertex cover of $\hat{G}_{m,n}$. Given a particular vertex cover $\hat{\Sigma}$, let us denote its size by $|\hat{\Sigma}| = \alpha n$ for some $\alpha = \alpha(\hat{\Sigma}) \in \{\frac{1}{n}, \frac{2}{n}, \ldots, \frac{n-1}{n}, 1\}$. Then, we can write:
$$P_1\{\Sigma = \hat{\Sigma}\} = \left(\overline{p}^{\,\alpha}(1-\overline{p})^{1-\alpha}\right)^n =: \overline{q}(\alpha)^n, \qquad (10)$$
$$P_0\{\Sigma = \hat{\Sigma}\} = \left(\underline{p}^{\,\alpha}(1-\underline{p})^{1-\alpha}\right)^n =: \underline{q}(\alpha)^n, \qquad (11)$$
where $\overline{q}(\alpha) = \overline{p}^{\,\alpha}(1-\overline{p})^{1-\alpha}$ and $\underline{q}(\alpha) = \underline{p}^{\,\alpha}(1-\underline{p})^{1-\alpha}$. We are now ready to consider the Bayesian posterior belief of agent $i$ at time two. It is more convenient to work with the log-belief ratio $\phi_{i,2}$:
$$\phi_{i,2} = \log\frac{\mu_{i,2}(1)}{\mu_{i,2}(0)} = \log\left(\frac{P\{\theta = 1 \mid a_{\varepsilon,1} = 1 \text{ for all } \varepsilon \in \hat{E} \text{ and } a_{\kappa,0} = 0\}}{P\{\theta = 0 \mid a_{\varepsilon,1} = 1 \text{ for all } \varepsilon \in \hat{E} \text{ and } a_{\kappa,0} = 0\}}\right) = \log\frac{P_1\{\Sigma \text{ is a vertex cover and } a_{\kappa,0} = 0\}}{P_0\{\Sigma \text{ is a vertex cover and } a_{\kappa,0} = 0\}}$$
$$= \log\frac{\sum_{\hat{\Sigma} \in \hat{\Xi}} P_1\{\Sigma = \hat{\Sigma}\}\, P_{\kappa,1}(0)}{\sum_{\hat{\Sigma} \in \hat{\Xi}} P_0\{\Sigma = \hat{\Sigma}\}\, P_{\kappa,0}(0)} = \log\frac{\sum_{\hat{\Sigma} \in \hat{\Xi}} \overline{q}(\alpha)^n}{\sum_{\hat{\Sigma} \in \hat{\Xi}} \underline{q}(\alpha)^n} + \lambda_\kappa(0), \qquad (12)$$
where along the way we invoked the uniform prior, Fact 2, as well as (10) and (11); recall that $\hat{\Xi}$ denotes the set of all vertex covers of $\hat{G}_{m,n}$. We now investigate this Bayesian posterior in the YES and NO cases of VERTEX-COVER. At this point we can also reveal the signal structure of agent $\kappa$: letting $q(\alpha) := \log(\overline{q}(\alpha)/\underline{q}(\alpha))$, we choose it such that $\lambda_\kappa(0) = -(n/2)(q(0.998) + q(0.999))$ holds (with an arbitrary value for $\lambda_\kappa(1)$).

A.1. Bayesian Posterior in the NO Case: If we are in the NO case, then all vertex covers have large size $|\hat{\Sigma}| > 0.999n$.
Since $q(\alpha) = \log(\overline{q}(\alpha)/\underline{q}(\alpha)) = \alpha \log(14/9) - \log(7/6)$ is a strictly increasing function of $\alpha$, the expression $(\overline{q}(\alpha)/\underline{q}(\alpha))^n$ also increases as $\alpha$ increases. Therefore, (12) can be lower-bounded as follows:
$$\phi_{i,2} = \log\frac{\sum_{\hat{\Sigma} \in \hat{\Xi}} \overline{q}(\alpha)^n}{\sum_{\hat{\Sigma} \in \hat{\Xi}} \underline{q}(\alpha)^n} + \lambda_\kappa(0) > \log\frac{\sum_{\hat{\Sigma} \in \hat{\Xi}} \left(\overline{q}(0.999)/\underline{q}(0.999)\right)^n \underline{q}(\alpha)^n}{\sum_{\hat{\Sigma} \in \hat{\Xi}} \underline{q}(\alpha)^n} + \lambda_\kappa(0)$$
$$= n\,q(0.999) - \frac{n}{2}\left(q(0.998) + q(0.999)\right) = \frac{n}{2}\left(q(0.999) - q(0.998)\right), \qquad (13)$$
establishing $\phi_{i,2} = \Omega(n)$ and $\mu_{i,2}(0) = 1/(1 + e^{\phi_{i,2}}) < \exp(-\Omega(n))$, so that $a_{i,2} = 1$.

A.2. Bayesian Posterior in the YES Case: If we are in the YES case, then there exists a small vertex cover $\Sigma^\star$ with $|\Sigma^\star| = \alpha^\star n \le 0.85n$. We will show that the total contribution from all large vertex covers with $|\hat{\Sigma}| \ge 0.998n$ is dominated by the likelihood of this small vertex cover. To this end, we use the following tail bound for the sum of i.i.d. Bernoulli random variables:

Theorem 7. Let $s_1, \ldots, s_n \in \{0,1\}$ be i.i.d. binary random variables with $P\{s_i = 1\} = p$ and let $p \le \alpha \le 1$. Then,
$$\Pr\left\{\sum_{k=1}^n s_k \ge \alpha n\right\} \le \exp\left(-n\,D_{KL}(\alpha \,\|\, p)\right), \qquad (14)$$
where the Kullback-Leibler divergence $D_{KL}(\cdot)$ is given by:
$$D_{KL}(\alpha \,\|\, p) = \alpha \log\frac{\alpha}{p} + (1-\alpha)\log\frac{1-\alpha}{1-p}. \qquad (15)$$

The important feature of formula (15) is that $D_{KL}(\alpha \,\|\, p)$ goes to $\log(1/p)$ as $\alpha$ goes to one. Hence, for every $\delta > 0$ we can choose $\alpha < 1$ such that the right-hand side of (14) is equal to $(p + \delta)^n$. In particular, we can upper-bound the likelihood of large vertex covers with $\theta = 1$ as follows:
$$P_1\left\{|\Sigma| = \sum_{k=1}^n s_k > 0.998n\right\} \le \exp\left(-n\,D_{KL}(0.998 \,\|\, 0.4)\right) = (0.4061\ldots)^n < 0.41^n. \qquad (16)$$
On the other hand, since $|\Sigma^\star| = \alpha^\star n \le 0.85n$, we have:
$$P_1\{\Sigma = \Sigma^\star\} = \left(\overline{p}^{\,\alpha^\star}(1-\overline{p})^{1-\alpha^\star}\right)^n \ge \left(\overline{p}^{\,0.85}(1-\overline{p})^{0.15}\right)^n = \overline{q}(0.85)^n = (0.425\ldots)^n > 0.42^n. \qquad (17)$$
Therefore, in the YES case, conditioned on $\theta = 1$ and $\Sigma$ being a vertex cover, the probability of having a large vertex cover ($|\Sigma| > 0.998n$) is exponentially small. We are now ready to upper-bound the log-belief ratio in the YES case. Starting again from (12), we get:
$$\phi_{i,2} = \log\frac{\sum_{\hat{\Sigma} \in \hat{\Xi}} \overline{q}(\alpha)^n}{\sum_{\hat{\Sigma} \in \hat{\Xi}} \underline{q}(\alpha)^n} + \lambda_\kappa(0) < \log\left(\frac{\sum_{\hat{\Sigma} \in \hat{\Xi}:\,\alpha \le 0.998} \overline{q}(\alpha)^n + P_1\{|\Sigma| > 0.998n\}}{\sum_{\hat{\Sigma} \in \hat{\Xi}:\,\alpha \le 0.998} \underline{q}(\alpha)^n}\right) + \lambda_\kappa(0)$$
$$< \log\frac{\sum_{\hat{\Sigma} \in \hat{\Xi}:\,\alpha \le 0.998} \left(\overline{q}(0.998)/\underline{q}(0.998)\right)^n \underline{q}(\alpha)^n \left(1 + (0.41/0.42)^n\right)}{\sum_{\hat{\Sigma} \in \hat{\Xi}:\,\alpha \le 0.998} \underline{q}(\alpha)^n} + \lambda_\kappa(0) \qquad (18)$$
$$= n\,q(0.998) + \log\left(1 + (0.41/0.42)^n\right) - \frac{n}{2}\left(q(0.999) + q(0.998)\right) < -\Omega(n),$$
where we used (16) and (17) to establish (18). This implies that $\mu_{i,2}(1) = e^{\phi_{i,2}}/(1 + e^{\phi_{i,2}}) < \exp(-\Omega(n))$, and $a_{i,2} = 0$.

From A.1 and A.2 we conclude that agent $i$ cannot determine her binary action at time two unless she can solve the NP-hard approximation of the VERTEX-COVER problem.

Appendix B: Proof of Theorem 2 (EXACT-COVER Reduction)

Our reduction is from a variant of the classical EXACT-COVER problem. An instance of EXACT-COVER consists of a set of $n$ elements and a collection of sets over those elements. The computational problem is to decide if there exists a subcollection that exactly covers the elements, i.e., each element belongs to exactly one set in the subcollection. We use a restricted version of EXACT-COVER known as "Restricted Exact Cover by Three Sets" (RXC3).

Problem 3 (RXC3). Consider a set of $n$ elements $\hat{E}_n$. Consider also a set $\hat{T}_n$ of $n$ subsets of $\hat{E}_n$, each of them of size three. Furthermore, assume that each element of $\hat{E}_n$ belongs to exactly three sets in $\hat{T}_n$.
The RXC3 problem is to decide if there exists a subset $\hat{T} \subseteq \hat{T}_n$ of size $|\hat{T}| = n/3$ such that it constitutes an exact cover for $\hat{E}_n$, i.e., $\biguplus_{\tau \in \hat{T}} \tau = \hat{E}_n$. We refer to instances with and without such an exact cover as YES and NO cases, respectively. Note that we make the implicit assumption that $n$ is divisible by three. It is known that RXC3 is NP-complete.

Theorem 8 (Section 3 and Appendix A in Gonzalez (1985)). RXC3 is NP-complete.

Let $\hat{G}_n := (\hat{E}_n, \hat{T}_n)$ be an instance of RXC3. We encode the structure of $\hat{G}_n$ by a two-layer network (cf. Figure 2B). The first layer is comprised of $n$ agents labeled by the subsets $\tau \in \hat{T}_n$. The second layer is comprised of $n$ agents labeled by the elements $\varepsilon \in \hat{E}_n$. Each element agent $\varepsilon$ observes the beliefs of three subset agents, corresponding to the subsets that contain it. We denote these three subset agents by $\varepsilon(1)$, $\varepsilon(2)$, and $\varepsilon(3)$. The element agents receive non-informative private signals. Each subset agent $\tau \in \hat{T}_n$ receives a binary private signal $s_\tau \in \{0,1\}$ with the following signal structure:
$$P_{\tau,1}(1) = 1/2 =: \overline{p}, \qquad P_{\tau,0}(1) = 1/3 =: \underline{p}.$$
Recall our log-likelihood ratio notation from Subsection 2.2. Let us define the respective log-likelihood ratios of the one and zero signals as follows:
$$\ell_1 := \log(\overline{p}/\underline{p}) = \log(3/2), \qquad \ell_0 := \log\left((1-\overline{p})/(1-\underline{p})\right) = \log(3/4).$$
Under the above definitions, for each subset agent $\tau \in \hat{T}_n$ we have $\lambda_\tau = s_\tau(\ell_1 - \ell_0) + \ell_0$.

The network contains two more agents, called $\kappa$ and $i$. Agent $\kappa$ is observed by all element agents. She receives a binary private signal $s_\kappa$ with the following signal structure:
$$\overline{p}^\star := P_{\kappa,1}(1), \quad \ell^\star_1 := \log(\overline{p}^\star/\underline{p}^\star), \quad \underline{p}^\star := P_{\kappa,0}(1), \quad \ell^\star_0 := \log\left((1-\overline{p}^\star)/(1-\underline{p}^\star)\right).$$
We choose the signal structures such that $\ell^\star_1 - \ell^\star_0 = 2(\ell_1 - \ell_0)$. For concreteness, let $\ell^\star_0 := 2\ell_0$ and $\ell^\star_1 := 2\ell_1$. Finally, agent $i$ does not receive a private signal but observes all element agents (see Figure 2B). We are interested in the decision problem of agent $i$ at time two with the following observation history: every element agent $\varepsilon \in \hat{E}_n$ reports the same log-belief ratio at time one:
$$\phi_{\varepsilon,1} = 3\ell_1 + 2\ell_0. \qquad (19)$$
Consider the belief of an element agent $\varepsilon$ at time one, given her observations of the subset agents $\varepsilon(1), \varepsilon(2), \varepsilon(3)$ and agent $\kappa$. By (3), and using $\ell^\star_b = 2\ell_b$, $b \in \{0,1\}$, we can compute the log-belief ratio of $\varepsilon$ at time one:
$$\phi_{\varepsilon,1} = \lambda_\kappa + \sum_{j=1}^3 \lambda_{\varepsilon(j)} = s_\kappa(\ell^\star_1 - \ell^\star_0) + \ell^\star_0 + 3\ell_0 + \sum_{j=1}^3 s_{\varepsilon(j)}(\ell_1 - \ell_0) = (\ell_1 - \ell_0)\left(2s_\kappa + \sum_{j=1}^3 s_{\varepsilon(j)}\right) + 5\ell_0.$$
From her observations at time one, given by (19), agent $i$ learns that the signals in the neighborhood of each element agent satisfy the following:
$$2s_\kappa + \sum_{j=1}^3 s_{\varepsilon(j)} = 3. \qquad (20)$$
We denote the set of all signal profiles that satisfy (20) by:
$$\Sigma = \left\{ (s_\kappa, s_{\tau_1}, \ldots, s_{\tau_n}) \in \{0,1\}^{n+1} : 2s_\kappa + \sum_{j=1}^3 s_{\varepsilon(j)} = 3, \text{ for all } \varepsilon \in \hat{E}_n \right\}.$$
Consequently, the log-belief ratio of agent $i$ at time two is given by:
$$\phi_{i,2} = \log\frac{\sum_{(s_\kappa, s_{\tau_1}, \ldots, s_{\tau_n}) \in \Sigma} (\overline{p}^\star)^{s_\kappa}(1-\overline{p}^\star)^{1-s_\kappa}\, \overline{p}^{\,\sum_{j=1}^n s_{\tau_j}}\,(1-\overline{p})^{n - \sum_{j=1}^n s_{\tau_j}}}{\sum_{(s_\kappa, s_{\tau_1}, \ldots, s_{\tau_n}) \in \Sigma} (\underline{p}^\star)^{s_\kappa}(1-\underline{p}^\star)^{1-s_\kappa}\, \underline{p}^{\,\sum_{j=1}^n s_{\tau_j}}\,(1-\underline{p})^{n - \sum_{j=1}^n s_{\tau_j}}}. \qquad (21)$$
We now proceed to characterize the solution set $\Sigma$, which determines the posterior ratio per (21). One possibility is to set $s_\kappa = 0$; then (20) implies that $s_{\varepsilon(j)} = 1$ for all $\varepsilon$ and $j = 1,2,3$. This is equivalent to having $s_\tau = 1$ for all subset agents $\tau$. Therefore, $(0,1,1,\ldots,1) \in \Sigma$. On the other hand, if $s_\kappa = 1$, then (20) implies that:
$$\sum_{j=1}^3 s_{\varepsilon(j)} = 1, \quad \text{for every agent } \varepsilon \in \hat{E}_n.$$
In other words, the signal profile of the subset agents $(s_{\tau_1}, s_{\tau_2}, \ldots, s_{\tau_n})$ specifies an exact set cover of $\hat{E}_n$. We now investigate the Bayesian posterior of agent $i$ depending on the existence of an exact set cover.

B.1. Bayesian Posterior in the NO Case: If we are in a NO case of the RXC3 problem, then the instance $\hat{G}_n = (\hat{E}_n, \hat{T}_n)$ does not have an exact set cover. Therefore, the solution set $\Sigma$ is a singleton, $\Sigma = \{(0,1,\ldots,1)\}$, and (21) becomes:
$$\phi_{i,2} = \log\left(\frac{(1-\overline{p}^\star)\,\overline{p}^{\,n}}{(1-\underline{p}^\star)\,\underline{p}^{\,n}}\right) = 2\ell_0 + n\ell_1 = 2\ell_0 + n\log(3/2) > \Omega(n),$$
and consequently $\mu_{i,2}(0) = 1/(1 + e^{\phi_{i,2}}) < \exp(-\Omega(n))$.

B.2. Bayesian Posterior in the YES Case: If we are in a YES case of the RXC3 problem, then there exists at least one exact cover of $\hat{E}_n$ consisting of $n/3$ sets from $\hat{T}_n$. Let $\bar{s}_0 = (s_\kappa, s_{\tau_1}, \ldots, s_{\tau_n}) \in \Sigma$ be a signal configuration corresponding to such an exact cover. Let us also denote the corresponding random profile of private signals by $\bar{s} = (s_\kappa, s_{\tau_1}, \ldots, s_{\tau_n})$. The contribution of $\bar{s}_0$ to the Bayesian posterior of agent $i$ can be calculated as:
$$P_1\{\bar{s} = \bar{s}_0\} = \overline{p}^\star\, \overline{p}^{\,n/3}(1-\overline{p})^{2n/3} = \overline{p}^\star\, \overline{q}^{\,n}, \qquad P_0\{\bar{s} = \bar{s}_0\} = \underline{p}^\star\, \underline{p}^{\,n/3}(1-\underline{p})^{2n/3} = \underline{p}^\star\, \underline{q}^{\,n},$$
where $\overline{q} := \overline{p}^{\,1/3}(1-\overline{p})^{2/3} = 1/2$ and $\underline{q} := \underline{p}^{\,1/3}(1-\underline{p})^{2/3} \approx 0.529134$. Let $\hat{N} \ge 1$ be the number of exact covers of $\hat{G}_n$. Then, we can use $\overline{p} = \overline{q}$ to compute:
$$\phi_{i,2} = \log\left(\frac{\hat{N} \cdot \overline{p}^\star \cdot \overline{q}^{\,n} + (1-\overline{p}^\star) \cdot \overline{p}^{\,n}}{\hat{N} \cdot \underline{p}^\star \cdot \underline{q}^{\,n} + (1-\underline{p}^\star) \cdot \underline{p}^{\,n}}\right) < \log\left(\frac{O(\hat{N} \cdot \overline{p}^\star \cdot \overline{q}^{\,n})}{\hat{N} \cdot \underline{p}^\star \cdot \underline{q}^{\,n}}\right) \le n\hat{\ell} + O(1) \le -\Omega(n),$$
where $\hat{\ell} := \log(\overline{q}/\underline{q}) < 0$. Consequently, $\mu_{i,2}(1) = e^{\phi_{i,2}}/(1 + e^{\phi_{i,2}}) < \exp(-\Omega(n))$.

All in all, from B.1 and B.2 we conclude that agent $i$ cannot determine whether her Bayesian posterior concentrates on state zero or state one unless she can solve the NP-hard RXC3 EXACT-COVER variant.

Appendix C: I.I.D. Signals

Following Subsection 3.5, we explain how to modify our two reductions (VERTEX-COVER and EXACT-COVER) in Appendices A and B to work with i.i.d. private signals for all agents.

C.1. VERTEX-COVER Recall that we need to modify our construction such that all agents have the signal structure of the vertex agents, with $\overline{p} = 0.4$, $\underline{p} = 0.3$ and respective log-likelihood ratios $\ell_1 = \log(\overline{p}/\underline{p}) = \log(4/3)$ and $\ell_0 = \log((1-\overline{p})/(1-\underline{p})) = \log(6/7)$. The reduction relies on an auxiliary agent $\kappa$ whose signal log-likelihood ratio is given by $\lambda_\kappa(0) = -cn$ for some constant $c > 0$. Since agent $\kappa$ is directly observed by agent $i$, all we need to do is to replace $\kappa$ with a number of i.i.d. agents with the vertex-agent signal structure providing a similar total contribution to the log-belief ratio. Clearly, this is achieved by taking $n_i = \lfloor cn/|\ell_0| \rfloor$ agents, all of them broadcasting action zero (see Figure 3B).

We also need to explain how to handle agents with non-informative signals, i.e., the edge agents and agent $i$. We will leave all those agents in place and equip them with the vertex-agent signal structure. We will also indicate in the observation history that their actions at time zero (and therefore also their private signals) were all ones: $a_{i,0} = a_{\varepsilon_1,0} = \cdots = a_{\varepsilon_m,0} = 1$. We next add $m$ auxiliary agents $\kappa_1, \ldots, \kappa_m$ such that each $\kappa_j$ is observed by its corresponding edge agent $\varepsilon_j$, as well as by agent $i$. Again, we let each $\kappa_j$ have the vertex-agent signal structure and we indicate that $s_{\kappa_j} = a_{\kappa_j,0} = 0$.

We next verify that Fact 1 continues to hold. Suppose that an edge agent (called $\varepsilon$) observes opposite actions in her vertex agents (i.e., $\{a_{\varepsilon(1),0}, a_{\varepsilon(2),0}\} = \{0,1\}$).
Then the belief of agent $\varepsilon$ at time one aggregates her private signal (a one signal), the action of her auxiliary agent (a zero signal), as well as the two opposing signals of her vertex agents. The resulting belief of $\varepsilon$ at time one puts more weight on state one:
$$\mu_{\varepsilon,1}(1) = \frac{\overline{p}^2(1-\overline{p})^2}{\overline{p}^2(1-\overline{p})^2 + \underline{p}^2(1-\underline{p})^2} = \frac{(0.4)^2(0.6)^2}{(0.4)^2(0.6)^2 + (0.3)^2(0.7)^2} > \frac{1}{2}.$$
Similarly, if both vertex agents report zero signals, then we see that aggregating three zero signals and one one signal results in a belief that puts less weight on state one:
$$\mu_{\varepsilon,1}(1) = \frac{\overline{p}(1-\overline{p})^3}{\overline{p}(1-\overline{p})^3 + \underline{p}(1-\underline{p})^3} = \frac{(0.4)(0.6)^3}{(0.4)(0.6)^3 + (0.3)(0.7)^3} < \frac{1}{2}.$$
Therefore, Fact 1 still holds. The remaining steps of the reduction carry through as before, except that we need to account for the effect of the new signals of the agents $\varepsilon_j$ and $\kappa_j$, as well as agent $i$'s own private signal. Since these signals amount to $m+1$ ones and $m$ zeros, their total effect in terms of log-likelihood ratio is equal to $\ell_1 + m(\ell_1 + \ell_0) > 0$. We can cancel out this net effect asymptotically by including $n_i = \lfloor (\ell_1 + m(\ell_1 + \ell_0))/|\ell_0| \rfloor$ additional agents that are observed only by agent $i$, each receiving a zero private signal (similar to Figure 3B).

C.2. EXACT-COVER In the EXACT-COVER reduction we use an auxiliary agent $\kappa$ whose signal structure is different from those of the subset agents: more precisely, the log-likelihood ratios of the subset agents are $\ell_0$ and $\ell_1$, while for agent $\kappa$ they are $\ell^\star_0 = 2\ell_0$ and $\ell^\star_1 = 2\ell_1$. Intuitively, we would like to replace agent $\kappa$ with two agents who have the signal structure of the subset agents, and also ensure that the signals of these two agents agree. To achieve this, we use five auxiliary agents $\kappa_1, \kappa_2, \kappa_3, \kappa_4$, and $\kappa_5$ with the signal structure of the subset agents.
Suppose every element agent $\varepsilon_j$, instead of observing agent $\kappa$, observes the two agents $\kappa_1$ and $\kappa_3$. Suppose further that $\kappa_4$ observes $\kappa_1$ and $\kappa_2$, and $\kappa_5$ observes $\kappa_2$ and $\kappa_3$. Finally, let agent $i$, whose decision is NP-hard, observe $\kappa_4$ and $\kappa_5$ (see Figure 3C). The private signals of $\kappa_4$ and $\kappa_5$ can be set arbitrarily. Suppose that the belief reports of $\kappa_4$ and $\kappa_5$ at time two imply the following log-belief ratios:
$$\phi_{\kappa_4,2} = \lambda_{\kappa_4} + \ell_0 + \ell_1, \qquad \phi_{\kappa_5,2} = \lambda_{\kappa_5} + \ell_0 + \ell_1.$$
From observing $\kappa_4$, agent $i$ learns that the sum of the log-likelihood ratios of the signals of $\kappa_1$ and $\kappa_2$ is $\ell_0 + \ell_1$; equivalently, $\kappa_1$ and $\kappa_2$ have received opposite signals. Similarly, from observing $\kappa_5$, agent $i$ learns that $\kappa_2$ and $\kappa_3$ have received opposite signals. Therefore, the signals of $\kappa_1$ and $\kappa_3$ must agree. Observing $\kappa_1$ and $\kappa_3$ has the same effect on the beliefs of the element agents as observing the single auxiliary agent $\kappa$ with twice the signal strength. Note that agent $i$ is influenced by what she learned about the signals of $\kappa_2$, $\kappa_4$, and $\kappa_5$, but this influence is of the order $O(1)$ and therefore does not affect our analysis of the Bayesian posteriors in Appendix B.

As for the agents without private signals, i.e., agent $i$ and the element agents, the modifications are quite simple. Again, we assume that all these agents have the same signal structure as the subset agents, and that they report beliefs at time zero consistent with zero private signals. This introduces a negative shift equal to $(n+1)\ell_0$ in the log-belief ratio of agent $i$, which can be asymptotically canceled by adding $n_i = \lfloor (n+1)|\ell_0|/\ell_1 \rfloor$ more auxiliary agents that report ones as their signals and are observed only by agent $i$ (similar to Figure 3B).
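A quick sanity check of this five-agent gadget, using the subset-agent signal structure $\overline{p} = 1/2$, $\underline{p} = 1/3$ from Appendix B (the enumeration itself is our own sketch, not from the paper): listing all signal triples $(s_{\kappa_1}, s_{\kappa_2}, s_{\kappa_3})$ consistent with the two reports confirms that $s_{\kappa_1} = s_{\kappa_3}$ in every feasible profile.

```python
import math
from itertools import product

# Subset-agent signal structure from Appendix B.
p_hi, p_lo = 1 / 2, 1 / 3  # P_{tau,1}(1) and P_{tau,0}(1)
l1 = math.log(p_hi / p_lo)              # LLR of a one signal:  log(3/2)
l0 = math.log((1 - p_hi) / (1 - p_lo))  # LLR of a zero signal: log(3/4)

def llr(s):
    return l1 if s == 1 else l0

# Reports of kappa_4 and kappa_5 reveal these two sums of LLRs,
# each equal to l0 + l1, i.e. one zero signal and one one signal.
sum12 = l0 + l1  # llr(s_k1) + llr(s_k2)
sum23 = l0 + l1  # llr(s_k2) + llr(s_k3)

feasible = [(s1, s2, s3) for s1, s2, s3 in product([0, 1], repeat=3)
            if math.isclose(llr(s1) + llr(s2), sum12)
            and math.isclose(llr(s2) + llr(s3), sum23)]

# In every feasible profile the signals of kappa_1 and kappa_3 agree,
# so the pair (kappa_1, kappa_3) acts like one agent of double strength.
agree = all(s1 == s3 for s1, _, s3 in feasible)
```

Only the profiles $(0,1,0)$ and $(1,0,1)$ survive, matching the argument above: the individual signals remain ambiguous, but the agreement of $\kappa_1$ and $\kappa_3$ is certain.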
Appendix D: Noisy Actions

As described in Subsection 3.6, let us consider the following noisy variant of the binary action model: for each computed action a_{i,t}, the neighboring agents observe the same action a′_{i,t} = a_{i,t} with probability 1 − δ and the flipped action a′_{i,t} = 1 − a_{i,t} with probability δ, for some 0 < δ < 1/2. We want to show that Theorem 1 still holds in this model, possibly with the constants in the size of the reduction and the exp(−Ω(n)) belief approximation factor depending on δ. To this end, we use the same VERTEX-COVER problem and the same high-level idea as in the proof of Theorem 1.

Let us start with a general examination of the effect of noise on an agent τ that receives a private signal with signal structure P_{τ,1}(1) = p and P_{τ,0}(0) = p̄. From the perspective of an agent that observes τ, separating the private signal of τ from the error in its action does not matter: all that matters is the likelihood that can be inferred from observing a′_{τ,0}. The likelihoods of the two possible observations are as follows:

  p′ := P_1{a′_{τ,0} = 1} = P_1{a_{τ,0} = a′_{τ,0} = 1 or a_{τ,0} = 1 − a′_{τ,0} = 0} = p(1 − δ) + (1 − p)δ,
  p̄′ := P_0{a′_{τ,0} = 0} = p̄(1 − δ) + (1 − p̄)δ.    (22)

From (22), we see that the "after-noise" signal structures are restricted to δ ≤ p′, p̄′ ≤ 1 − δ, rather than having the full range between 0 and 1. Accordingly, we start the VERTEX-COVER reduction by specifying the after-noise signal structures of the vertex agents as p′ = 1/4 + δ/2 and p̄′ = δ. It is easy to check that, since p̄′ < p′ < 1/2 and since p̄′(1 − p̄′) < p′(1 − p′), an edge agent ε observing two vertex agents ε(1) and ε(2) still satisfies the following version of Fact 1:

  a_{ε,1} = 1 if, and only if, a′_{ε(1),0} = 1 or a′_{ε(2),0} = 1.    (23)

Note that a_{ε,1} on the left-hand side is before-noise, but a′_{ε(1),0} and a′_{ε(2),0} on the right-hand side are after-noise.

We now proceed with the analysis of the reduction, encoding vertex covers of the input graph Ĝ_{n,m} in the after-noise actions a′_{τ,0} of the vertex agents. Previously, for each edge of Ĝ_{n,m} we placed an edge agent ε observing the two vertex agents ε(1) and ε(2) corresponding to its incident vertices in Ĝ_{n,m}. This time, for each edge we place k := k(n, m, δ) agents ε^(1), …, ε^(k), each of them observing the same two vertex agents and reporting the same noisy actions a′_{ε^(j),1} = 1, j = 1, …, k, to agent i (see Figure 3D). Since by (23) the before-noise actions a_{ε^(j),1} have all been the same, after observing ε^(1), …, ε^(k) agent i concludes that exactly one of the following is true:
• Nobody among ε^(1), …, ε^(k) has flipped her action: a_{ε^(j),1} = a′_{ε^(j),1} = 1 for j = 1, …, k.
• Everybody among ε^(1), …, ε^(k) has flipped her action: a_{ε^(j),1} = 1 − a′_{ε^(j),1} = 0 for j = 1, …, k.
In the second case, we say that "an error has occurred in edge ε". We now proceed to show that for k large enough the probability of an error occurring in some edge is so small that the analysis of the noisy model essentially reduces back to that of Appendix A. To this end, consider the log-belief ratio of agent i at time two, as described in (12), neglecting for the moment the influence of agent κ. Let Σ := {τ ∈ V̂ : a′_{τ,0} = 1}. Following Appendix A, agent i concludes that either Σ forms a vertex cover of Ĝ_{n,m} or an error has occurred in at least one edge. Let E denote the event that an error has occurred in at least one edge, and let ¬E denote its complement.
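The after-noise likelihoods of (22), the parameter choices for the vertex agents, and the edge-error estimate can be checked numerically. In the sketch below (not the paper's code; the instance sizes n, m and the value of δ are assumed for illustration), q denotes the before-noise probability of an action value and after_noise(q, δ) the probability that the noisy report shows that same value:

```python
# Numerical sanity check of the after-noise signal structure (22) and of the
# union-bound estimate for an edge error (all k copies flipping at once).

def after_noise(q, delta):
    """P{noisy report shows the value} when its before-noise probability is q."""
    return q * (1 - delta) + (1 - q) * delta

delta = 0.1
p, pbar = 1 / 4, 0.0                   # before-noise values hitting the targets
p_prime = after_noise(p, delta)        # = 1/4 + delta/2
pbar_prime = after_noise(pbar, delta)  # = delta

assert abs(p_prime - (1 / 4 + delta / 2)) < 1e-12
assert abs(pbar_prime - delta) < 1e-12
# After-noise likelihoods are confined to [delta, 1 - delta] ...
assert delta <= p_prime <= 1 - delta and delta <= pbar_prime <= 1 - delta
# ... and satisfy the ordering behind the noisy version of Fact 1.
assert pbar_prime < p_prime < 1 / 2
assert pbar_prime * (1 - pbar_prime) < p_prime * (1 - p_prime)

# Union bound over the m edges for "some edge has all k of its agents flip",
# conditioned on each edge showing either 0 or k flips.
n, m = 20, 40  # assumed sizes of the VERTEX-COVER instance

def error_bound(k):
    return m * delta**k * (delta**k + (1 - delta)**k) ** (m - 1)

# A polynomially large k already makes the bound exponentially small in n.
k = 1
while m * 2**m * (delta / (1 - delta)) ** k > (1 / 8 + delta / 4) ** n:
    k += 1
assert error_bound(k) <= (1 / 8 + delta / 4) ** n * (1 - delta) ** (k * m)
```

The final assertion mirrors the chain of inequalities in (25): the raw union bound is dominated by (1/8 + δ/4)^n (1 − δ)^{km} once k is chosen large enough.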
If we want to, for example, upper-bound φ_{i,2} in the NO case, we can write:

  φ_{i,2} = log( μ_{i,2}(1) / μ_{i,2}(0) ) ≤ log [ ( P_1{Σ is a vertex cover ∧ ¬E} + P_1{E | h_{i,2}} ) / P_0{Σ is a vertex cover ∧ ¬E} ],    (24)

where we drop the error probability term P_0{E | h_{i,2}} in the denominator to obtain an upper bound. We now note that, by the union bound and other elementary considerations, the error probability can be bounded by:

  P_1{E | h_{i,2}} ≤ m δ^k (δ^k + (1 − δ)^k)^{m−1}
    = m ( (δ^k + (1 − δ)^k) / (1 − δ)^k )^{m−1} ( δ/(1 − δ) )^k (1 − δ)^{km}
    ≤ m 2^m ( δ/(1 − δ) )^k (1 − δ)^{km}
    ≤ (1/8 + δ/4)^n (1 − δ)^{km}
    = (p′/2)^n (1 − δ)^{km}
    = (1/2)^n P_1{∀τ : a′_{τ,0} = 1 ∧ ¬E}
    ≤ (1/2)^n P_1{Σ is a vertex cover ∧ ¬E},    (25)

where in the first line we use the fact that, conditioned on the observation history h_{i,2}, for each edge either 0 or k flips have occurred. In the second line, we make m 2^m (δ/(1 − δ))^k ≤ (1/8 + δ/4)^n by choosing k to be (polynomially) large enough; recall that δ/(1 − δ) < 1.

Taken together, (24) and (25) imply that, up to a tiny exp(−Ω(n)) factor, the value of φ_{i,2} is almost the same as that computed in (12). Therefore, we can use the same computation as in (13) to establish a linear lower bound on φ_{i,2}. The YES case is handled very similarly.

Finally, it remains to account for agent κ. This is done in basically the same way as in Subsection 3.5: we replace agent κ, with its strong log-likelihood ratio λ_κ(0) = −cn, by cn/δ′ agents with the after-noise log-likelihood ratio λ′_{κ_j}(0) = −δ′ for an appropriately small δ′ = δ′(δ), all reporting action zero at time zero. This concludes the description of our modification in the noisy model.

Appendix E: Complexity of Bayesian Decisions Using Algorithm 1: IEIS

Suppose that agent i has reached her t-th decision in a general network structure.
Given her information at time t, for all s = (s_1, …, s_n) ∈ S_1 × … × S_n and any j ∈ N^τ_i, τ = t + 1, t, …, 1, she has to update I(j, t − τ, s) into I(j, t + 1 − τ, s) ⊂ I(j, t − τ, s). If τ = t + 1, then agent j ∈ N^τ_i is being considered for the first time at the t-th step, and I(j, 0, s) = {s_j} × ∏_{k≠j} S_k is initialized without any calculations. However, if τ ≤ t, then I(j, t − τ, s) can be updated into I(j, t + 1 − τ, s) ⊂ I(j, t − τ, s) only by verifying the condition a_{k,t−τ}(s′) = a_{k,t−τ}(s) for every s′ ∈ I(j, t − τ, s) and k ∈ N_j: any s′ ∈ I(j, t − τ, s) that violates this condition for some k ∈ N_j is eliminated, and I(j, t + 1 − τ, s) is thus obtained by pruning I(j, t − τ, s). Verification of a_{k,t−τ}(s′) = a_{k,t−τ}(s) involves calculations of a_{k,t−τ}(s′) and a_{k,t−τ}(s) according to (6). The latter requires the addition of card(I(k, t − τ, s)) product terms

  u_k(a_k, θ′) P_{θ′}(s′) ν(θ′) = u_k(a_k, θ′) P_{1,θ′}(s′_1) … P_{n,θ′}(s′_n) ν(θ′)

for each s′ ∈ I(k, t − τ, s), θ′ ∈ Θ, and a_k ∈ A_k to evaluate the left-hand side of (6). Hence, we can estimate the total number of additions and multiplications required for the calculation of each conditional action a_{k,t−τ}(s) as

  A · (n + 2) · m · card(I(k, t − τ, s)),

where m := card(Θ) and A := max_{k∈[n]} card(A_k). Hence, the total number of additions and multiplications undertaken by agent i at time t for determining the actions a_{k,t−τ}(s) can be estimated as follows:

  C_1 := A · (n + 2) · card(Θ) · Σ_{j∈N̄^t_i} Σ_{k∈N_j} card(I(k, t − dist(j, i), s)) ≤ A · (n + 2) · n · M^{n−1} · m,    (26)

where we upper-bound the cardinality of the union of the higher-order neighborhoods of agent i by the total number of agents, card(N̄^{t+1}_i) ≤ n, and use the inclusion relationship I(k, t − dist(j, i), s) ⊂ I(k, 0, s) = {s_k} × ∏_{j≠k} S_j to upper-bound card(I(k, t − dist(j, i), s)) by M^{n−1}, where M is the largest cardinality of the finite signal spaces S_j, j ∈ [n]. As the above calculations are performed at every signal profile s ∈ S_1 × … × S_n, the total number of calculations (additions and multiplications) required for the Bayesian decision at time t, denoted by C_t, can be bounded as follows:

  A · M^n ≤ C_t ≤ A · (n + 2) · n · M^{2n−1} · m,    (27)

where on the right-hand side we apply (26) for each of the M^n signal profiles. In particular, the calculations grow exponentially in the number of agents n. Once agent i calculates the actions a_{k,t−τ}(s) for all k ∈ N_j, she can then update the possible signal profiles I(j, t − τ, s), following step 2(a)ii of Algorithm 1, to obtain I(j, t + 1 − τ, s) for all j ∈ N̄^t_i and any s ∈ S_1 × … × S_n. This in turn enables her to calculate the conditional actions of her neighbors, a_{j,t}(s), at every signal profile, and to eliminate any s at which the conditional action a_{j,t}(s) does not agree with the observed action a_{j,t} for some j ∈ N_i. She can thus update her list of possible signal profiles from I_{i,t} to I_{i,t+1} and adopt the corresponding Bayesian belief μ_{i,t+1} and action a_{i,t+1}. The latter involves an additional (n + 2) m A · card(I_{i,t+1}) additions and multiplications, which are, nonetheless, dominated by the number of calculations required in (27) for the simulation of other agents' actions at every signal profile.
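The elimination step above can be illustrated on a toy instantiation (an assumption-laden sketch, not the paper's model: binary state, binary signals, uniform prior, and a myopic MAP action in place of the general utility in (6)). Agent i enumerates all M^n signal profiles and keeps only those consistent with the observed actions; the exponential size of this enumeration is exactly the source of the bound in (27):

```python
# One round of "elimination of impossible signals" for an observing agent.
from itertools import product

THETAS = (0, 1)
# P[theta][s]: assumed likelihood of signal s given state theta.
P = {0: {0: 0.7, 1: 0.3}, 1: {0: 0.2, 1: 0.8}}

def map_action(signal):
    # Myopic time-zero action: most likely state given one signal (uniform prior).
    return max(THETAS, key=lambda th: P[th][signal])

n = 4                        # agents 0..3; agent i = 0 observes agents 1, 2, 3
neighbors_of_i = [1, 2, 3]
true_signals = (1, 0, 1, 1)
observed = {j: map_action(true_signals[j]) for j in neighbors_of_i}

# Enumerate all 2^n profiles; keep those matching agent i's own signal and
# reproducing every observed neighbor action (the pruning condition).
possible = [
    s for s in product([0, 1], repeat=n)
    if s[0] == true_signals[0]
    and all(map_action(s[j]) == observed[j] for j in neighbors_of_i)
]

# Posterior over the state, summed over the surviving profiles.
def joint(theta, s):
    prob = 1.0
    for sig in s:
        prob *= P[theta][sig]
    return prob

weights = {th: sum(joint(th, s) for s in possible) for th in THETAS}
total = sum(weights.values())
posterior = {th: weights[th] / total for th in THETAS}
assert posterior[1] > posterior[0]  # three "1"-leaning observations dominate
```

In this toy case the MAP action reveals each signal, so a single profile survives; with richer signal and action spaces many profiles remain possible, and the pruning must still touch all M^n of them.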
Appendix F: Computational Complexity of Algorithm 2: IEIS-TRANSITIVE

According to (I2), in a transitive structure at time t agent i has access to the list of possible private signals for each of her neighbors, S_{j,t}, j ∈ N_i, given their observations up until that point in time. The possible signal set S_{j,t} for each agent j ∈ N_i is calculated based on the actions taken by others and observed by agent j until time t − 1, together with the possible private signals that can explain her history of choices: a_{j,0}, a_{j,1}, and so on up until her most recent choice, which is a_{j,t}. At time t, agent i has access to all the observations of every agent in her neighborhood and can vet their most recent choices a_{j,t} against their observations to eliminate the incompatible private signals from the possible set S_{j,t}, obtaining an updated list of possible signals S_{j,t+1} for each of her neighbors j ∈ N_i. This pruning is achieved by calculating a_{j,t}(s_j) given I_{j,t}(s_j) = {s_j} × ∏_{k∈N_j} S_{k,t} for each s_j ∈ S_{j,t} and removing any incompatible s_j that violates the condition a_{j,t} = a_{j,t}(s_j), thus obtaining the pruned set S_{j,t+1}. The calculation of a_{j,t}(s_j) given I_{j,t}(s_j) = {s_j} × ∏_{k∈N_j} S_{k,t} is performed according to (6), but the decomposition of the possible signal profiles based on the relation I_{j,t}(s_j) = {s_j} × ∏_{k∈N_j} S_{k,t}, together with the independence of private signals across different agents, helps reduce the number of additions and multiplications involved, as follows:

  A_j(I_{j,t}(s_j)) = argmax_{a_j∈A_j} Σ_{θ′∈Θ} u_j(a_j, θ′) [ Σ_{s′∈I_{j,t}(s_j)} P_{θ′}(s′) ν(θ′) ] / [ Σ_{θ″∈Θ} Σ_{s′∈I_{j,t}(s_j)} P_{θ″}(s′) ν(θ″) ]
    = argmax_{a_j∈A_j} Σ_{θ′∈Θ} u_j(a_j, θ′) [ P_{θ′}(s_j) ∏_{k∈N_j} Σ_{s_k∈S_{k,t}} P_{θ′}(s_k) ν(θ′) ] / [ Σ_{θ″∈Θ} P_{θ″}(s_j) ∏_{k∈N_j} Σ_{s_k∈S_{k,t}} P_{θ″}(s_k) ν(θ″) ].
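The factorization in the last display can be verified on toy numbers (an illustrative sketch with assumed likelihoods, not the paper's code): because I_{j,t}(s_j) is a product set and signals are independent given the state, the sum over all profiles equals a product of per-neighbor sums, replacing ∏_k card(S_{k,t}) terms with Σ_k card(S_{k,t}) terms:

```python
# Check that summing P_theta over a product set of signal profiles factors
# into a product of per-neighbor sums (the key saving in Algorithm 2).
from itertools import product

P = {0: {0: 0.7, 1: 0.3}, 1: {0: 0.2, 1: 0.8}}  # assumed likelihoods P_theta(s)
s_j = 1
S_neighbors = [[0, 1], [1], [0, 1]]  # possible signal sets S_{k,t} of 3 neighbors

for theta in (0, 1):
    # Brute force: sum the joint likelihood over the full product set.
    brute = sum(
        P[theta][s_j] * P[theta][a] * P[theta][b] * P[theta][c]
        for a, b, c in product(*S_neighbors)
    )
    # Factorized: P_theta(s_j) times a product of per-neighbor sums.
    factored = P[theta][s_j]
    for S_k in S_neighbors:
        factored *= sum(P[theta][s] for s in S_k)
    assert abs(brute - factored) < 1e-12
```

The same identity applied per state θ is what turns the exponential count of Appendix E into the polynomial count (28) below.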
Hence, the calculation of the conditionally feasible action a_{j,t}(s_j) for each s_j ∈ S_{j,t} can be achieved through card(Θ) A Σ_{k∈N_j} card(S_{k,t}) additions and card(Θ) (card(N_j) + 2) A multiplications; subsequently, the total number of additions and multiplications required for agent i to update the possible private signals of each of her neighbors can be estimated as follows:

  A Σ_{j∈N_i} card(Θ) card(S_{j,t}) ( Σ_{k∈N_j} card(S_{k,t}) + card(N_j) + 2 ) ≤ A n² M² m + A n² M m + 2 n M m A,    (28)

where M, n, m and A are as in (27). After updating her lists of possible signal profiles for all her neighbors, the agent can refine her list of possible signal profiles, I_{i,t+1} = {s_i} × ∏_{j∈N_i} S_{j,t+1}, and determine her belief μ_{i,t+1} and refined choice a_{i,t+1}. The latter is achieved through an extra card(Θ) A Σ_{j∈N_i} card(S_{j,t+1}) additions and card(Θ) A (card(N_i) + 2) multiplications, which are dominated by the calculations required in (28). Most notably, the computations required of the agent for determining her Bayesian choices in a transitive network increase polynomially in the number of agents n, whereas in a general network structure using IEIS these computations increase exponentially fast in the number of agents n.

Appendix G: Proof of Proposition 1 (Graphical Condition for Transparency)

The proof follows by induction on t, i.e., by considering the agents whose information reaches agent i for the first time at t. The claim is trivially true at time one, since agent i can always infer the likelihoods of the private signals of each of her neighbors by observing their beliefs at time one. Now consider the belief of agent i at time t. The induction hypothesis implies that φ_{i,t−1} = Σ_{k∈N̄^{t−1}_i} λ_k, as well as φ_{j,t−1} = Σ_{k∈N̄^{t−1}_j} λ_k and φ_{j,t−2} = Σ_{k∈N̄^{t−2}_j} λ_k for all j ∈ N_i.
To form her belief at time t (or, equivalently, its log-belief ratio φ_{i,t}), agent i should consider her most recent information, {φ_{j,t−1} = Σ_{k∈N̄^{t−1}_j} λ_k : j ∈ N_i}, and use it to update her current belief φ_{i,t−1} = Σ_{k∈N̄^{t−1}_i} λ_k. To prove the induction claim, it suffices to show that agent i has enough information to calculate the sum of the log-likelihood ratios of all signals in her t-radius ego-net N̄^t_i, i.e., to form φ_{i,t} = Σ_{k∈N̄^t_i} λ_k. This is the best possible belief that she can hope to achieve at time t, and it is the same as her Bayesian posterior had she direct access to the private signals of all agents in her t-radius ego-net. To this end, using her knowledge of φ_{j,t−1} and φ_{j,t−2} she can form:

  φ̂_{j,t−1} = φ_{j,t−1} − φ_{j,t−2} = Σ_{k∈N^{t−1}_j} λ_k, for all j ∈ N_i.

Since φ_{i,t−1} = Σ_{k∈N̄^{t−1}_i} λ_k by the induction hypothesis, the efficient belief φ_{i,t} = Σ_{k∈N̄^t_i} λ_k can be calculated if, and only if,

  φ̂_{i,t} = φ_{i,t} − φ_{i,t−1} = Σ_{k∈N^t_i} λ_k    (29)

can be computed. In the above formulation, φ̂_{i,t} is an innovation term, representing the information that agent i learns from her most recent observations at time t. We now show that, under the assumption that any agent with multiple paths to agent i is directly observed by her, the innovation term in (29) can be constructed from the knowledge of φ_{j,t−1} = Σ_{k∈N̄^{t−1}_j} λ_k and φ_{j,t−2} = Σ_{k∈N̄^{t−2}_j} λ_k for all j ∈ N_i; indeed, we show that:

  φ̂_{i,t} = Σ_{j∈N_i} ( φ̂_{j,t−1} − Σ_{k∈N_i∩N^{t−1}_j} φ_{k,0} ), for all t > 1.    (30)

Consider any k ∈ N^t_i; these are all the agents that are at distance exactly t, t > 1, from agent i, and no closer to her.
No such k ∈ N^t_i is a direct neighbor of agent i, and the structural assumption therefore implies that there is a unique neighbor of agent i, call it j_k ∈ N_i, satisfying k ∈ N^{t−1}_{j_k}. On the other hand, consider any j ∈ N_i and some k ∈ N^{t−1}_j, contributing λ_k to φ̂_{j,t−1}. Such an agent k is either a neighbor of i, or else at distance exactly t > 1 from agent i, in which case k ∈ N^t_i and j is the unique neighbor j_k ∈ N_i satisfying k ∈ N^{t−1}_{j_k}. Subsequently, using the notation ⊎ for disjoint unions, we can partition N^t_i as follows:

  N^t_i = ⊎_{j∈N_i} N̄^{t−1}_j \ (N̄^{t−2}_j ∪ N_i),

and therefore we can rewrite the left-hand side of (29) as follows:

  φ̂_{i,t} = Σ_{k∈N^t_i} λ_k = Σ_{k ∈ ⊎_{j∈N_i} N̄^{t−1}_j \ (N̄^{t−2}_j ∪ N_i)} λ_k = Σ_{j∈N_i} Σ_{k∈N̄^{t−1}_j \ (N̄^{t−2}_j ∪ N_i)} λ_k
    = Σ_{j∈N_i} ( Σ_{k∈N^{t−1}_j} λ_k − Σ_{k∈N_i∩N^{t−1}_j} λ_k ) = Σ_{j∈N_i} ( φ̂_{j,t−1} − Σ_{k∈N_i∩N^{t−1}_j} φ_{k,0} ),

as claimed in (30), completing the proof.

Notes

1. Technically, we showed coNP-hardness, i.e., our reduction mapped instances with a small vertex cover onto GROUP-DECISION instances with θ = 0 and instances with only large vertex covers onto GROUP-DECISION instances with θ = 1. However, due to the symmetric nature of GROUP-DECISION, NP-hardness is immediately obtained by inverting the meanings of the 0 and 1 labels of states and private signals. In particular, since GROUP-DECISION at t = 2 is both NP-hard and coNP-hard, it is likely to be strictly harder than NP-complete problems (see Arora and Barak (2009)).

2. In the context of Subsection 3.1, one might argue that in the absence of a common prior there is no fixed distribution of signals over which to obtain an average-case hardness result. Notwithstanding, the worst-case issue remains relevant because the observation history is now exponentially unlikely according to each agent's own prior.
3. We note that Algorithm 0 can be implemented to use space that is polynomial in the number of agents and the time t (assuming fixed state set Θ, signal sets S_i, and action sets A_i). In the binary action model this matches our PSPACE-hardness results obtained in the follow-up paper Hązła et al. (2019).

4. This is a fundamental aspect of inference problems in observational learning (in learning from other actors): similar to responsiveness, which Ali (2018) defines as a property of the utility functions that determines whether players' beliefs can be inferred from their actions, transparency in our belief exchange setup is defined as a property of the graph structure (see Remark 2 on why transparency is a structural property) which determines to what extent other players' private signals can be inferred from observing the neighboring beliefs.

Acknowledgments

This work was partially supported by awards ONR N00014-16-1-2227, NSF CCF-1665252, ARO MURIs W911NF-12-1-0509 and W911NF-19-0217, and by a Vannevar Bush Fellowship.

References

Aaronson S (2005) The complexity of agreement. Proceedings of the Thirty-Seventh Annual ACM Symposium on Theory of Computing, 634–643 (ACM).
Acemoglu D, Bimpikis K, Ozdaglar A (2014) Dynamics of information exchange in endogenous social networks. Theoretical Economics 9(1):41–97.
Acemoglu D, Dahleh MA, Lobel I, Ozdaglar A (2011) Bayesian learning in social networks. The Review of Economic Studies 78(4):1201–1236.
Acemoglu D, Ozdaglar A (2011) Opinion dynamics and learning in social networks. Dynamic Games and Applications 1(1):3–49.
Ali SN (2018) Herding with costly information. Journal of Economic Theory 175:713–729.
Arieli I, Babichenko Y, Mueller-Frank M (2019a) Naive learning through probability matching. Conference on Economics and Computation, 553 (ACM).
Arieli I, Babichenko Y, Shlomov S (2019b) Robust non-Bayesian social learning. Conference on Economics and Computation, 549–550 (ACM).
Arora S, Barak B (2009) Computational Complexity: A Modern Approach (New York, NY: Cambridge University Press), 1st edition.
Arora S, Barak B, Brunnermeier M, Ge R (2011) Computational complexity and information asymmetry in financial products. Communications of the ACM 54(5):101–107.
Aumann RJ (1976) Agreeing to disagree. The Annals of Statistics 4(6):1236–1239.
Bala V, Goyal S (1998) Learning from neighbours. The Review of Economic Studies 65(3):595–621.
Banerjee AV (1992) A simple model of herd behavior. The Quarterly Journal of Economics 107(3):797–817.
Bikhchandani S, Hirshleifer D, Welch I (1998) Learning from the behavior of others: Conformity, fads, and informational cascades. The Journal of Economic Perspectives 12(3):151–170.
Blackwell D, Dubins L (1962) Merging of opinions with increasing information. The Annals of Mathematical Statistics 33:882–886.
Bogdanov A, Trevisan L (2006) Average-case complexity. Foundations and Trends in Theoretical Computer Science 2(1):1–106.
Dasaratha K, Golub B, Hak N (2018) Social learning in a dynamic environment. arXiv preprint arXiv:1801.02042.
DeGroot MH (1974) Reaching a consensus. Journal of the American Statistical Association 69:118–121.
DeMarzo PM, Vayanos D, Zwiebel J (2003) Persuasion bias, social influence, and unidimensional opinions. The Quarterly Journal of Economics 118:909–968.
Eyster E, Rabin M (2014) Extensive imitation is irrational and harmful. The Quarterly Journal of Economics 129(4):1861–1898.
Fudenberg D, Maskin E (1986) The folk theorem in repeated games with discounting or with incomplete information. Econometrica 54(3):533–554.
Gale D, Kariv S (2003) Bayesian learning in social networks. Games and Economic Behavior 45:329–346.
Geanakoplos JD, Polemarchakis HM (1982) We can't disagree forever. Journal of Economic Theory 28(1):192–200.
Golub B, Jackson MO (2010) Naïve learning in social networks and the wisdom of crowds. American Economic Journal: Microeconomics 2(1):112–149.
Gonzalez TF (1985) Clustering to minimize the maximum intercluster distance. Theoretical Computer Science 38:293–306.
Harel M, Mossel E, Strack P, Tamuz O (2014) When more information reduces the speed of learning. Working paper.
Hązła J, Jadbabaie A, Mossel E, Rahimian MA (2019) Reasoning in Bayesian opinion exchange networks is PSPACE-hard. Conference on Learning Theory, volume 99 of Proceedings of Machine Learning Research, 1614–1648 (PMLR).
Jadbabaie A, Molavi P, Sandroni A, Tahbaz-Salehi A (2012) Non-Bayesian social learning. Games and Economic Behavior 76(1):210–225.
Kanoria Y, Tamuz O (2013) Tractable Bayesian social learning on trees. IEEE Journal on Selected Areas in Communications 31(4):756–765.
Khot S, Minzer D, Safra M (2018) Pseudorandom sets in Grassmann graph have near-perfect expansion. ECCC Technical Report TR18-078.
Krishnamurthy V, Hoiles W (2014) Online reputation and polling systems: Data incest, social learning, and revealed preferences. IEEE Transactions on Computational Social Systems 1(3):164–179.
Kwisthout J (2011) The Computational Complexity of Probabilistic Inference. Technical Report ICIS-R11003, Radboud University Nijmegen.
Lehrer E, Smorodinsky R (1996) Merging and learning. Lecture Notes-Monograph Series 147–168.
Li W, Tan X (2018) Locally Bayesian learning in networks. Working paper.
Molavi P, Tahbaz-Salehi A, Jadbabaie A (2018) A theory of non-Bayesian social learning. Econometrica 86(2):445–490.
Mossel E, Mueller-Frank M, Sly A, Tamuz O (2018) Social learning equilibria. Proceedings of the 2018 ACM Conference on Economics and Computation, 639 (ACM).
Mossel E, Olsman N, Tamuz O (2016) Efficient Bayesian learning in social networks with Gaussian estimators. 2016 54th Annual Allerton Conference on Communication, Control, and Computing (Allerton), 425–432 (IEEE).
Mossel E, Sly A, Tamuz O (2014) Asymptotic learning on Bayesian social networks. Probability Theory and Related Fields 158(1-2):127–157.
Mossel E, Sly A, Tamuz O (2015) Strategic learning and the topology of social networks. Econometrica 83(5):1755–1794.
Mossel E, Tamuz O (2013) Making consensus tractable. ACM Transactions on Economics and Computation 1(4):20.
Mossel E, Tamuz O (2017) Opinion exchange dynamics. Probability Surveys 14:155–204.
Mueller-Frank M (2013) A general framework for rational learning in social networks. Theoretical Economics 8(1):1–40.
Mueller-Frank M, Neri C (2017) A general analysis of boundedly rational learning in social networks. Available at SSRN 2933411.
Papadimitriou CH, Tsitsiklis JN (1987) The complexity of Markov decision processes. Mathematics of Operations Research 12(3):441–450.
Rosenberg D, Solan E, Vieille N (2009) Informational externalities and emergence of consensus. Games and Economic Behavior 66(2):979–994.
Smith L, Sørensen P (2000) Pathological outcomes of observational learning. Econometrica 68(2):371–398.
Velupillai K (2000) Computable Economics: The Arne Ryde Memorial Lectures, volume 5 (Oxford University Press on Demand).