LDP: An Identity-Aware Protocol for Multi-Agent LLM Systems

Notice: This research summary and analysis were automatically generated using AI technology. For accuracy, please refer to the original arXiv source.

As multi-agent AI systems grow in complexity, the protocols connecting them constrain their capabilities. Current protocols such as A2A and MCP do not expose model-level properties as first-class primitives, ignoring properties fundamental to effective delegation: model identity, reasoning profile, quality calibration, and cost characteristics. We present the LLM Delegate Protocol (LDP), an AI-native communication protocol introducing five mechanisms: (1) rich delegate identity cards with quality hints and reasoning profiles; (2) progressive payload modes with negotiation and fallback; (3) governed sessions with persistent context; (4) structured provenance tracking confidence and verification status; (5) trust domains enforcing security boundaries at the protocol level. We implement LDP as a plugin for the JamJet agent runtime and evaluate against A2A and random baselines using local Ollama models and LLM-as-judge evaluation. Identity-aware routing achieves ~12x lower latency on easy tasks through delegate specialization, though it does not improve aggregate quality in our small delegate pool; semantic frame payloads reduce token count by 37% (p=0.031) with no observed quality loss; governed sessions eliminate 39% token overhead at 10 rounds; and noisy provenance degrades synthesis quality below the no-provenance baseline, arguing that confidence metadata is harmful without verification. Simulated analyses show architectural advantages in attack detection (96% vs. 6%) and failure recovery (100% vs. 35% completion). This paper contributes a protocol design, reference implementation, and initial evidence that AI-native protocol primitives enable more efficient and governable delegation.


💡 Research Summary

The paper introduces the LLM Delegate Protocol (LDP), a novel communication protocol for multi‑agent large language model (LLM) systems that makes model‑level properties first‑class protocol primitives. Existing protocols such as Google’s Agent‑to‑Agent (A2A) and Anthropic’s Model Context Protocol (MCP) expose only high‑level agent descriptors (name, description, skill list) and lack metadata about model identity, reasoning style, quality calibration, and cost. LDP addresses this gap by defining five core mechanisms: (1) rich Delegate Identity Cards containing over twenty fields (model family, version, parameter count, context window, latency and cost hints, reasoning profile, etc.); (2) progressive payload modes ranging from plain text to semantic frames, embedding hints, semantic graphs, latent capsules, and cache slices, with automatic negotiation and fallback; (3) governed sessions that negotiate persistent context, budget, priority, and audit level, eliminating the need to resend full conversation history on each round; (4) structured provenance metadata attached to every result (producer, model version, payload mode, self‑reported confidence, verification status); and (5) trust domains that enforce per‑message signatures, domain compatibility checks, and policy validation (cost limits, jurisdiction, data‑handling rules).
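
To make the identity-card and payload-negotiation mechanisms concrete, here is a minimal Python sketch. The field names, mode ordering, and negotiation rule below follow the summary's description but are illustrative assumptions, not the paper's actual wire schema.

```python
from dataclasses import dataclass, field

# Payload modes in increasing order of richness, per the summary; using
# this ordering for negotiation is an assumption for illustration.
PAYLOAD_MODES = [
    "plain_text", "semantic_frame", "embedding_hint",
    "semantic_graph", "latent_capsule", "cache_slice",
]

@dataclass
class DelegateIdentityCard:
    """A few of the 20+ identity fields described in the paper
    (field names here are hypothetical)."""
    model_family: str
    model_version: str
    parameter_count_b: float   # billions of parameters
    context_window: int        # tokens
    latency_hint_ms: int       # expected per-call latency hint
    cost_hint: float           # relative cost per 1K tokens
    reasoning_profile: str     # e.g. "chain-of-thought", "direct"
    supported_modes: list = field(default_factory=lambda: ["plain_text"])

def negotiate_mode(requester_modes, card):
    """Pick the richest payload mode both sides support, falling back
    toward plain text (the fallback behavior LDP guarantees)."""
    common = set(requester_modes) & set(card.supported_modes)
    for mode in reversed(PAYLOAD_MODES):   # richest first
        if mode in common:
            return mode
    return "plain_text"                    # guaranteed baseline

coder = DelegateIdentityCard(
    model_family="qwen2.5-coder", model_version="7b",
    parameter_count_b=7, context_window=32768,
    latency_hint_ms=900, cost_hint=0.2,
    reasoning_profile="direct",
    supported_modes=["plain_text", "semantic_frame"],
)

print(negotiate_mode(["plain_text", "semantic_frame", "semantic_graph"], coder))
# semantic_frame
```

Note how negotiation degrades gracefully: a requester offering only higher-order modes the delegate lacks still gets a usable plain-text channel rather than a hard failure.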

Implementation is realized as an external plugin for the Rust‑based JamJet agent runtime. The plugin registers as a ProtocolAdapter, reusing JamJet’s existing discovery and invocation infrastructure while extending the AgentCard structure with “ldp.*” labels to store identity fields. Session lifecycle is managed internally, providing a stateless façade to the host while maintaining server‑side context for multi‑round interactions. Two model adapters were added: an Ollama adapter for local inference (supporting Qwen, Llama, Gemma, Phi families) and a Google Gemini adapter for the LLM‑as‑judge evaluation.
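
The summary notes that identity fields are stored as "ldp.*" labels on JamJet's existing AgentCard structure. A rough Python sketch of that flattening is shown below; the label keys and the card-as-dict shape are guesses for illustration, not JamJet's real API.

```python
def to_ldp_labels(identity: dict) -> dict:
    """Flatten identity fields into 'ldp.'-prefixed string labels so an
    unmodified AgentCard (modeled as a plain label map) can carry them."""
    return {f"ldp.{key}": str(value) for key, value in identity.items()}

def from_ldp_labels(labels: dict) -> dict:
    """Recover identity fields from a card's label map, ignoring
    any non-LDP labels the host runtime already uses."""
    prefix = "ldp."
    return {k[len(prefix):]: v for k, v in labels.items() if k.startswith(prefix)}

agent_card = {
    "name": "qwen3-8b-delegate",
    "description": "General-purpose local delegate",
    "labels": to_ldp_labels({
        "model_family": "qwen3",
        "parameter_count": "8b",
        "context_window": "32768",
        "reasoning_profile": "chain-of-thought",
    }),
}

print(from_ldp_labels(agent_card["labels"])["model_family"])  # qwen3
```

Reusing the label map keeps LDP backward-compatible: hosts that do not understand the "ldp.*" keys simply ignore them, while LDP-aware peers can reconstruct the full identity card.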

The authors evaluate LDP against A2A and a random‑selection baseline across six research questions (RQ1‑RQ6) using three local Ollama delegates (qwen3‑8b, qwen2.5‑coder, llama3.2‑3b) and Gemini 2.5 Flash as the judge. Key findings:

  • RQ1 – Routing Quality: Identity‑aware routing dramatically reduces latency on easy tasks (≈12× faster) by sending them to lightweight, low‑cost models, though overall quality does not improve in the small three‑model pool, indicating that benefits scale with delegate diversity.
  • RQ2 – Payload Efficiency: Switching from raw text to semantic‑frame payloads cuts token usage by 37% (p = 0.031, d = −0.7) without measurable quality loss, confirming that structured inputs are more compact yet equally expressive for current LLMs.
  • RQ3 – Provenance Value: Adding noisy provenance (self‑reported confidence without verification) actually degrades synthesis quality, suggesting that provenance metadata must be coupled with a verification step to be useful.
  • RQ4 – Session Efficiency: Governed sessions eliminate 39% of token overhead in a 10‑round delegation scenario compared with stateless re‑invocation, demonstrating substantial cost savings for iterative workflows.
  • RQ5 – Security Boundaries (Simulation): Trust‑domain enforcement detects unauthorized delegation attempts with 96% success versus 6% for bearer‑token‑only authentication, highlighting the protocol’s ability to embed fine‑grained policy controls.
  • RQ6 – Fallback Reliability (Simulation): Automatic payload‑mode fallback ensures 100% task completion under simulated communication failures, compared with 35% for a non‑fallback baseline.
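
As a toy illustration of the RQ1 routing result, the sketch below routes easy tasks to the lowest-latency delegate and hard tasks to the largest model, using identity-card hints. The difficulty labels, selection rule, and latency numbers (chosen to echo the ~12× figure) are assumptions for illustration, not the paper's actual router.

```python
# Each delegate summarized as (name, parameter_count_b, latency_hint_ms),
# mirroring the three-model evaluation pool; latency hints are invented.
DELEGATES = [
    ("qwen3-8b",      8.0, 2400),
    ("qwen2.5-coder", 7.0, 2000),
    ("llama3.2-3b",   3.0,  200),
]

def route(task_difficulty: str) -> str:
    """Identity-aware routing: easy tasks go to the lowest-latency
    delegate; everything else goes to the largest model in the pool."""
    if task_difficulty == "easy":
        return min(DELEGATES, key=lambda d: d[2])[0]
    return max(DELEGATES, key=lambda d: d[1])[0]

print(route("easy"))  # llama3.2-3b (12x lower latency hint than the 8B model)
print(route("hard"))  # qwen3-8b
```

Even this trivial rule captures the paper's point: latency and cost hints exposed at the protocol level let a router avoid sending easy work to heavyweight models, and the benefit should grow with a more diverse delegate pool.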

The paper concludes that exposing model‑level identity and negotiation primitives at the protocol layer yields measurable gains in efficiency, governance, and security for multi‑agent LLM systems. While the current empirical work is limited to a modest delegate pool and only the first two payload modes, the authors outline future directions: evaluating higher‑order payload modes (semantic graphs, latent capsules), scaling to larger, more heterogeneous model fleets, testing multi‑party room coordination, and conducting real‑world adversarial security experiments. LDP thus represents a promising step toward AI‑native, interoperable agent ecosystems where delegation decisions are informed, cost‑aware, and securely governed.

