Merging operators aim at defining the beliefs/goals of a group of agents from the beliefs/goals of each member of the group. Whenever an agent of the group has preferences over the possible results of the merging process (i.e., the possible merged bases), she can try to rig the merging process by lying about her true beliefs/goals if doing so leads to a merged base that is better from her point of view. Obviously, strategy-proof operators are highly desirable in order to guarantee equity among agents, even when some of them are not sincere. In this paper, we draw the strategy-proofness landscape for many merging operators from the literature, including model-based ones and formula-based ones. Both the general case and several restrictions on the merging process are considered.
Merging operators aim at defining the beliefs/goals of a group of agents from the beliefs/goals of each member of the group. Though beliefs and goals are distinct notions, merging operators can typically be used for merging either beliefs or goals. Thus, most of the logical properties from the literature (Revesz, 1993, 1997; Konieczny & Pino Pérez, 1998, 2002) for characterizing rational belief merging operators can be used for characterizing rational goal merging operators as well.
Whether beliefs or goals are to be merged, there are numerous situations where agents have preferences over the possible results of the merging process (i.e., the merged bases). As far as goals are concerned, an agent is surely satisfied when her individual goals are chosen as the goals of the group. In the case of belief merging, an agent can be interested in imposing her beliefs on the group (i.e., “convincing” the other agents), especially because the result of a further decision stage at the group level may depend on the beliefs of the group.
So, as soon as an agent participates in a merging process, the strategy-proofness problem has to be considered. The question is: is it possible for a given agent to improve the result of the merging process with respect to her own point of view by lying about her true beliefs/goals, given that she knows (or at least assumes) the beliefs/goals of each agent of the group and the way beliefs/goals are merged?
As an illustration, let us consider the following scenario of goal merging (that will be used as a running example in the rest of the paper):
Example 1 Three friends, Marie, Alain and Pierre, want to spend their summer holidays together. They have to determine whether they will go to the seaside and/or to the mountains, or stay at home, and also whether they will take a long period of vacation or not. The goals of Marie are to go to the seaside and to the mountains if it is for a long period; otherwise she wants to go only to the mountains, or to stay at home. The goals of Alain are to go to the seaside if it is for a long period, or to go to the mountains if it is for a short period. Finally, Pierre is only interested in going to the seaside for a long period; otherwise he prefers to stay at home. If one uses a common merging operator for defining the choice of the group,¹ then the goals of the group will be either to go to the seaside for a long period, or to go to the mountains or to stay at home for a short period. Accordingly, the group may choose to go only to the seaside, for a long period, which is not among the goals of Marie. However, if Marie lies and claims that, for a short period, she only wants to go to the mountains or to stay at home, then the result of the merging process will be different. Indeed, in this case, the goals of the group will be to go to the mountains for a short period, or to stay at home, which corresponds to the goals of Marie.
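To make the arithmetic behind Example 1 concrete, here is a minimal Python sketch of one common distance-based merging operator, the sum of Hamming distances (often written Δ^{d_H,Σ}). Both the choice of this particular operator and the propositional encoding of the three goal bases (variables s, m, l for seaside, mountains, long period) are assumptions made for illustration only; they are not taken from the example's statement.

```python
# Illustrative sketch (not the paper's code): a sum-of-Hamming-distances
# merging operator applied to one possible encoding of Example 1.
# An interpretation is a triple of 0/1 values for (s, m, l).
from itertools import product

def hamming(w1, w2):
    """Number of variables on which two interpretations differ."""
    return sum(a != b for a, b in zip(w1, w2))

def distance_to_base(w, base_models):
    """Distance from an interpretation to a base: min distance to its models."""
    return min(hamming(w, m) for m in base_models)

def merge(profile, nb_vars=3):
    """Models of the merged base: interpretations minimizing the summed distance."""
    scores = {w: sum(distance_to_base(w, base) for base in profile)
              for w in product((0, 1), repeat=nb_vars)}
    best = min(scores.values())
    return sorted(w for w, score in scores.items() if score == best)

# Assumed encodings of the goals, as sets of models (s, m, l):
marie  = [(1, 1, 1), (0, 1, 0), (0, 0, 0)]  # seaside+mountains if long; else mountains only or home
alain  = [(1, 0, 1), (0, 1, 0)]             # seaside if long; mountains if short
pierre = [(1, 0, 1), (0, 0, 0)]             # seaside for a long period; otherwise stay at home

print(merge([marie, alain, pierre]))
# [(0, 0, 0), (0, 1, 0), (1, 0, 1)] -> home (short), mountains (short), seaside (long)

# Marie's lie: she reports only her short-period goals.
marie_lie = [(0, 1, 0), (0, 0, 0)]
print(merge([marie_lie, alain, pierre]))
# [(0, 0, 0), (0, 1, 0)] -> every model of the merged base now satisfies Marie's true goals
```

Under these assumptions, the computed merged bases coincide with the outcomes described in the example, both before and after Marie's manipulation.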
Similarly, the strategy-proofness issue has to be considered in many belief merging scenarios, just because rational decision making typically takes into account the “true” state of the world. When agents have conflicting beliefs about it, belief merging can be used to determine what the “true” state of the world is for the group; manipulating the belief merging process is a way for an agent to change the resulting beliefs at the group level so as to make them close to her own beliefs. As a consequence, the decisions made by the group may also change and become closer to those the agent would make alone. For instance, assume that the three friends agree that the mountains must be avoided when the weather is bad. If the belief of the group is that the weather is bad, then the decision to go to the mountains will be given up. So, if Pierre believes that the weather is bad, he may be tempted to get the belief that the weather is bad accepted at the group level; the collective decision would then be not to go to the mountains.
There are several multi-agent settings in which agents exchange information and must make individual decisions based on their beliefs. In many scenarios, agents are tempted to gain an informational advantage over other agents, which can be achieved by gathering as much information as possible and by hiding their own information. Indeed, being better informed may help an agent make better decisions than the other agents of the group. For instance, Shoham and Tennenholtz (2005) investigate non-cooperative computation: each agent delivers some piece of information (truthfully or not), all such pieces are used to compute the value of a (commonly-known) function, and this value is given back to the agents; the aim of each agent is to get the true value of the function, and, if possible, to be the only one to get it. In the work of Shoham and Tennenholtz (2005), information is considered at an abstract level; assuming that the pieces of information represent beliefs and the function is a belief merging operator, such settings are closely related to the strategy-proofness issue studied here.