All Vehicles Can Lie: Efficient Adversarial Defense in Fully Untrusted-Vehicle Collaborative Perception via Pseudo-Random Bayesian Inference


Collaborative perception (CP) enables multiple vehicles to augment their individual perception capabilities through the exchange of feature-level sensory data. However, this fusion mechanism is inherently vulnerable to adversarial attacks, especially in fully untrusted-vehicle environments. Existing defense approaches often assume a trusted ego vehicle as a reference or incorporate additional binary classifiers. These assumptions limit their practicality in real-world deployments due to the questionable trustworthiness of ego vehicles, the requirement for real-time detection, and the need for generalizability across diverse scenarios. To address these challenges, we propose a novel Pseudo-Random Bayesian Inference (PRBI) framework, the first efficient defense method tailored for fully untrusted-vehicle CP. PRBI detects adversarial behavior by leveraging temporal perceptual discrepancies, using the reliable perception from the preceding frame as a dynamic reference. Additionally, it employs a pseudo-random grouping strategy that requires only two verifications per frame, while applying Bayesian inference to estimate both the number and identities of malicious vehicles. Theoretical analysis proves the convergence and stability of the proposed PRBI framework. Extensive experiments show that PRBI requires only 2.5 verifications per frame on average, outperforming existing methods significantly, and restores detection precision to between 79.4% and 86.9% of pre-attack levels.


💡 Research Summary

The paper addresses a critical security gap in vehicle‑to‑vehicle (V2V) collaborative perception (CP) where every participant, including the ego vehicle, may be compromised. Existing defenses either assume a trustworthy ego or require a number of verifications that grows linearly with fleet size, making them unsuitable for real‑time, large‑scale deployments. To overcome these limitations, the authors propose the Pseudo‑Random Bayesian Inference (PRBI) framework, an efficient, assumption‑free defense method that requires only two verifications per frame regardless of the number of vehicles.

The key insight behind PRBI is the observation that LiDAR‑based perception outputs exhibit high temporal consistency in benign scenarios (average Jaccard similarity ≈ 0.8 between consecutive frames) but drop sharply under adversarial attacks (≤ 0.3). This stable gap allows the system to treat the previous frame’s perception as a dynamic, self‑referential reference without any external trust.
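The consistency check underlying this insight can be sketched in a few lines. The snippet below is our illustration, not the paper's code: it represents a frame's perception output as a set of occupied grid cells (a simplification of the actual LiDAR feature maps) and compares consecutive frames with Jaccard similarity.

```python
def jaccard(a: set, b: set) -> float:
    """Jaccard similarity |a ∩ b| / |a ∪ b|; defined as 1.0 for two empty sets."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

# Hypothetical detections encoded as occupied grid cells.
prev_frame = {(3, 4), (3, 5), (7, 2), (8, 2)}    # benign reference (previous frame)
curr_benign = {(3, 5), (3, 6), (7, 2), (8, 2)}   # small temporal drift
curr_attacked = {(0, 0), (1, 9), (7, 2)}         # objects spoofed/removed by an attack

print(jaccard(prev_frame, curr_benign))    # high (0.6 here; the paper reports ≈ 0.8 on average)
print(jaccard(prev_frame, curr_attacked))  # low (≈ 0.17), flagged if below threshold ε
```

A frame whose similarity to the reference falls below ε would be treated as abnormal; the gap between the benign (≈ 0.8) and attacked (≤ 0.3) regimes is what makes a single fixed threshold workable.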

PRBI operates in four steps:

  1. Initialization – Two n‑dimensional counters (normal and abnormal) are set to zero for all n vehicles.

  2. Soft Sampling & Consistency Validation – For each new frame, the received feature maps are randomly split into two groups (pseudo‑random sampling). Each group’s fused output is compared to the reference (previous frame) using Jaccard similarity. If the similarity falls below a pre‑defined threshold ε, the group is marked “abnormal” and the abnormal counter for each member is incremented; otherwise the normal counter is incremented. This process yields exactly two group‑level verifications per frame.

  3. Attacker Evaluation – The accumulated counters are used to estimate the number of malicious vehicles k and the posterior benign probability p_i for each vehicle i. The probability of drawing an all‑benign group under random sampling is analytically approximated as P′_ideal ≈ 2^{n−k} / 2^{n} = 2^{−k}. By measuring the empirical all‑benign rate η and setting η ≈ 2^{−k}, the framework solves k ≈ log₂(1/η). Bayesian inference then combines the estimated k with per‑vehicle counts to compute p_i, and the m vehicles with the lowest p_i are selected as suspected attackers.

  4. Hypothesis Testing & Defense Perception – A T‑test checks whether the estimates have converged. If convergence is confirmed, the identified malicious set M is excluded from subsequent fusion; otherwise the process repeats, using the current benign vehicles as collaborators for the next frame.
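Steps 1–3 above can be condensed into a toy simulation. This is a minimal sketch under our own assumptions, not the paper's implementation: the `any(v in malicious ...)` oracle is an idealized stand-in for the Jaccard-vs-reference check, the benign score p_i is a simple frequency ratio rather than the paper's full Bayesian posterior, and all names are ours.

```python
import math
import random

def run_prbi(vehicles, malicious, frames=500, seed=0):
    """Toy PRBI loop: pseudo-random halving, two group checks per frame,
    counter updates, then k ≈ log2(1/η) and a simple benign score p_i."""
    rng = random.Random(seed)
    normal = {v: 0 for v in vehicles}
    abnormal = {v: 0 for v in vehicles}
    groups = clean = 0
    for _ in range(frames):
        order = vehicles[:]
        rng.shuffle(order)                   # pseudo-random grouping
        half = len(order) // 2
        for group in (order[:half], order[half:]):
            # Idealized stand-in for the Jaccard-vs-previous-frame test:
            # a group is abnormal iff it contains at least one attacker.
            bad = any(v in malicious for v in group)
            groups += 1
            clean += not bad
            for v in group:
                (abnormal if bad else normal)[v] += 1
    eta = max(clean / groups, 1e-9)          # empirical all-benign rate η
    k_hat = round(math.log2(1 / eta))        # k ≈ log2(1/η)
    # Benign score: how often each vehicle sat in a consistent group.
    p = {v: normal[v] / (normal[v] + abnormal[v]) for v in vehicles}
    suspects = sorted(vehicles, key=lambda v: p[v])[:k_hat]
    return k_hat, suspects

k_hat, suspects = run_prbi(list(range(8)), malicious={2, 5})
print(k_hat, sorted(suspects))
```

Note that splitting into two complementary halves is not literally independent random sampling; the paper's theoretical analysis (point (a) below) is precisely what justifies treating the binary grouping as if it were, which is why η ≈ 2^{−k} still holds in this sketch.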

The authors provide rigorous theoretical analysis: (a) they prove that binary grouping statistically mirrors random sampling without replacement, establishing the lower bound of two verifications per frame; (b) they demonstrate convergence and stability of the Bayesian update using Markov chain arguments; and (c) they show that the rounding operation used to infer k is unbiased in expectation.

Experimental validation covers a wide range of configurations: number of vehicles n = 5–10, malicious vehicle count k = 1–4, and three representative attacks (PGD, C&W, LiDAR injection). Across all settings, PRBI achieves an average of 2.5 verifications per frame while restoring detection precision to 79.4%–86.9% of the pre‑attack baseline. This represents a 30%–70% reduction in verification overhead compared with prior sampling‑based methods, and the method maintains real‑time performance (~30 fps). Additional tests under varying weather (clear, rain, fog) and traffic density confirm robustness to environmental changes.

Strengths of PRBI include:

  • No ego‑trust assumption – the method works even when the ego vehicle is malicious.
  • Constant verification cost – scalability is achieved because the number of checks does not depend on fleet size.
  • Probabilistic attacker identification – Bayesian inference provides a principled estimate of both the number of attackers and each vehicle’s trustworthiness, enabling fast isolation.

Limitations are acknowledged: the similarity threshold ε may need adaptation to sudden scene changes (e.g., abrupt lighting or motion), and the assumption that the initial frame is completely benign could lead to early false positives in highly dynamic environments. The authors suggest adaptive ε tuning, multi‑frame averaging, or auxiliary sensor fusion as possible mitigations.

In summary, PRBI constitutes the first defense framework that simultaneously offers (i) full‑trust‑free operation, (ii) constant‑time verification, and (iii) theoretically guaranteed convergence for adversarial detection in fully untrusted‑vehicle collaborative perception. Its blend of empirical temporal consistency, pseudo‑random grouping, and Bayesian inference makes it a compelling solution for secure, real‑time autonomous driving fleets.

