DISARM: A Social Distributed Agent Reputation Model based on Defeasible Logic


Intelligent agents act in open and thus risky environments, so deciding whom to trust before interacting can be a challenging process. As intelligent agents are gradually enriched with Semantic Web technology and act on behalf of their users with limited or no human intervention, their ability to perform assigned tasks comes under scrutiny. Hence, trust and reputation models, based on interaction trust or witness reputation, have been proposed, yet they often presuppose a centralized authority. Although such mechanisms are the more popular, they are often met with skepticism, since users may question the trustworthiness and robustness of a central authority. Distributed models, on the other hand, are more complex, but they provide personalized estimations based on each agent’s interests and preferences. To this end, this article proposes DISARM, a novel distributed reputation model. DISARM treats MASs as social networks, enabling agents to establish and maintain relationships and thereby limiting the disadvantages of common distributed approaches. Additionally, it is based on defeasible logic, modeling the way intelligent agents, like humans, draw reasonable conclusions from incomplete and possibly conflicting (thus inconclusive) information. Finally, we provide an evaluation that illustrates the usability of the proposed model.


💡 Research Summary

The paper introduces DISARM, a novel reputation model for multi‑agent systems (MAS) that treats agents as nodes in a social network and employs defeasible logic to reason about trust from incomplete and possibly contradictory evidence. Traditional reputation mechanisms fall into two categories: centralized approaches that aggregate all feedback in a single authority, and fully distributed approaches where each agent computes its own trust scores. While centralized schemes are simple, they suffer from a single point of failure, lack of transparency, and privacy concerns. Purely distributed methods avoid these drawbacks but face challenges such as information sparsity, conflicting reports, and scalability.

DISARM bridges this gap by (1) explicitly modeling social relationships among agents and (2) using defeasible logic to handle conflicting information. Relationship strength is quantified through three dimensions: direct interaction history, co‑participation in tasks or projects, and third‑party (witness) feedback. Each dimension receives a weight that can be dynamically adjusted, and a time‑decay function reduces the influence of older interactions, ensuring that the social graph reflects current relevance. This graph defines a personalized “friend set” for each agent, limiting the propagation of reputation data to trusted neighbors and thereby reducing network overhead and exposure to malicious manipulation.

The core inference engine is built on defeasible logic, a non‑monotonic reasoning formalism that allows rules to have priorities and exceptions. A typical rule in DISARM might state: “If agent A receives a majority of positive feedback from its friends, then A is trustworthy.” An exception rule could be: “If A has recently committed a severe violation, then the trust in A is reduced,” and this exception is given higher priority. Priorities are derived from meta‑attributes such as recency, severity, and the number of witnesses. When contradictory evidence is present, the higher‑priority rule defeats the lower‑priority one, yielding a rational conclusion that mirrors human reasoning under uncertainty.
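The priority-based defeat between the default rule and its exception can be illustrated with a small skeptical-resolution sketch. This is not DISARM's actual rule language; the `Rule` structure, the integer priority scheme, and the tie-handling policy (no conclusion on conflicting ties) are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Rule:
    name: str
    conclusion: str  # e.g., "trust" or "distrust"
    priority: int    # higher value = stronger rule
    fires: bool      # whether the rule's antecedent currently holds

def conclude(rules):
    """Among the rules that fire, the highest-priority conclusion wins.
    If the top-priority rules conflict, conclude nothing (skeptical)."""
    fired = [r for r in rules if r.fires]
    if not fired:
        return None
    top = max(r.priority for r in fired)
    winners = {r.conclusion for r in fired if r.priority == top}
    return winners.pop() if len(winners) == 1 else None

rules = [
    Rule("majority_positive_feedback", "trust", priority=1, fires=True),
    Rule("recent_severe_violation", "distrust", priority=2, fires=True),
]
print(conclude(rules))  # the exception defeats the default -> "distrust"
```

If the violation rule does not fire, the default rule prevails and the agent is concluded trustworthy; when new evidence arrives, re-running `conclude` automatically retracts the defeated conclusion, which is the non-monotonic behavior the summary describes.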

Reputation computation proceeds locally: each agent maintains a reputation table that is periodically exchanged with its neighbors. The exchange uses a weighted routing scheme—high‑trust neighbors receive detailed updates, while low‑trust neighbors receive only summary information. This selective dissemination curtails bandwidth consumption and hampers attempts by malicious agents to flood the network with false reputations. Upon receipt of new evidence, the defeasible logic engine re‑evaluates the relevant rules, automatically invalidating (defeating) outdated conclusions.
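The selective dissemination step can be sketched as below. The trust threshold, the summary format (one averaged score per subject), and the function names are assumptions for illustration, not details specified by the paper.

```python
def summarize(table):
    """Collapse a detailed reputation table into one average per subject."""
    return {agent: round(sum(scores) / len(scores), 2)
            for agent, scores in table.items()}

def disseminate(table, neighbors, threshold=0.6):
    """Build the payload each neighbor receives: high-trust neighbors get
    the full table, low-trust neighbors only the compact summary."""
    payloads = {}
    for neighbor, trust in neighbors.items():
        payloads[neighbor] = table if trust >= threshold else summarize(table)
    return payloads

# Example: per-subject score lists, and neighbors with trust values.
table = {"A": [0.8, 0.6], "B": [0.4]}
neighbors = {"n1": 0.9, "n2": 0.2}
payloads = disseminate(table, neighbors)
```

Here `n1` (trust 0.9) receives the detailed table while `n2` (trust 0.2) receives only per-subject averages, which is the bandwidth-saving behavior the summary attributes to the weighted routing scheme.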

The authors evaluate DISARM through large‑scale simulations involving 1,000 agents, with an 80/20 split between honest and malicious participants. Four metrics are measured: (i) average reputation accuracy, (ii) detection rate of malicious agents, (iii) network traffic volume, and (iv) recovery time after node failures. Results show that DISARM improves malicious‑agent detection by roughly 15 % compared with a baseline centralized reputation system, reduces overall traffic by about 30 %, and demonstrates resilience by restoring full functionality within five minutes after a subset of nodes crashes.

Beyond the experimental validation, the paper outlines several avenues for future work. First, it proposes learning the defeasible rule set automatically from data, enabling the system to adapt to evolving environments. Second, it suggests integrating blockchain‑based zero‑knowledge proofs to guarantee the integrity and privacy of reputation records without revealing raw feedback. Third, it discusses the development of APIs and performance optimizations for real‑world deployments such as IoT marketplaces, smart‑city platforms, and autonomous vehicle coordination.

In summary, DISARM contributes a socially‑aware, defeasibly‑reasoned, and empirically validated reputation framework that offers personalized trust assessments without relying on a central authority. By combining relationship‑driven information filtering with a robust non‑monotonic reasoning engine, it advances the state of the art in trustworthy multi‑agent interactions and opens new possibilities for decentralized, privacy‑preserving trust management in complex open environments.