When Feasibility of Fairness Audits Relies on Willingness to Share Data: Examining User Acceptance of Multi-Party Computation Protocols for Fairness Monitoring


Fairness monitoring is critical for detecting algorithmic bias, as mandated by the EU AI Act. Since such monitoring requires sensitive user data (e.g., ethnicity), the AI Act permits its processing only with strict privacy measures, such as multi-party computation (MPC), in compliance with the GDPR. However, the effectiveness of such secure monitoring protocols ultimately depends on people’s willingness to share their data. Little is known about how different MPC protocol designs shape user acceptance. To address this, we conducted an online survey with 833 participants in Europe, examining user acceptance of various MPC protocol designs for fairness monitoring. Findings suggest that users prioritized risk-related attributes (e.g., privacy protection mechanism) in direct evaluation but benefit-related attributes (e.g., fairness objective) in simulated choices, with acceptance shaped by their fairness and privacy orientations. We derive implications for deploying and communicating privacy-preserving protocols in ways that foster informed consent and align with user expectations.


💡 Research Summary

The paper investigates how the design of multi‑party computation (MPC) protocols influences user willingness to share sensitive data for fairness monitoring under the EU AI Act and GDPR. Because fairness audits of high‑risk AI systems (e.g., algorithmic hiring) require access to protected attributes such as ethnicity or gender, the AI Act permits processing only when “state‑of‑the‑art security and privacy‑preserving measures” are employed. MPC is attractive because it enables exact computation of fairness metrics without exposing raw data, yet its practical feasibility hinges on end‑user consent.
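To make the MPC idea concrete, here is a minimal sketch of additive secret sharing, one common MPC building block (the paper's actual protocol designs may differ). Each user splits their contribution into random shares distributed across several non-colluding parties; no single party ever sees a raw value, yet the parties can jointly reconstruct aggregate group-level counts and compute a fairness metric such as the demographic-parity gap exactly. All names and the example records below are illustrative.

```python
import random

PRIME = 2**61 - 1   # field modulus for additive secret sharing
NUM_PARTIES = 3     # assumed number of non-colluding computation parties

def share(value, n=NUM_PARTIES):
    """Split an integer into n additive shares modulo PRIME."""
    shares = [random.randrange(PRIME) for _ in range(n - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

def reconstruct(shares):
    """Recombine shares into the original value."""
    return sum(shares) % PRIME

# Hypothetical user records: (belongs_to_protected_group, was_hired).
records = [(1, 1), (1, 0), (1, 0), (1, 1), (0, 1), (0, 1), (0, 0), (0, 1)]

# Each user secret-shares four indicator values (hires and totals per group);
# each party only accumulates its own shares, never a raw record.
party_sums = [[0, 0, 0, 0] for _ in range(NUM_PARTIES)]
for g, h in records:
    indicators = [g * h, g, (1 - g) * h, 1 - g]
    for idx, v in enumerate(indicators):
        for p, s in enumerate(share(v)):
            party_sums[p][idx] = (party_sums[p][idx] + s) % PRIME

# Only the aggregated shares are combined, so reconstruction reveals
# group-level counts, not any individual's attributes.
hires_a, total_a, hires_b, total_b = (
    reconstruct([party_sums[p][i] for p in range(NUM_PARTIES)]) for i in range(4)
)
parity_gap = hires_a / total_a - hires_b / total_b
print(parity_gap)  # -0.25 for the toy records above
```

The key property: an auditor learns the exact demographic-parity gap, while each party's view (a list of uniformly random field elements) is statistically independent of any single user's data.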

To explore this, the authors conducted a large‑scale online survey with 833 current job seekers across EU member states. They framed the study using Privacy Calculus Theory, categorizing protocol design attributes into three groups: benefit‑related (fairness objective, monetary incentive), risk‑related (privacy protection mechanism, data storage model), and hybrid (type of collected information, monitoring actor, data use). The survey comprised two parts. First, participants directly ranked the importance of each attribute, revealing which factors they perceived as most critical. Second, a conjoint‑analysis experiment presented realistic protocol scenarios combining different attribute levels; participants chose the scenarios they would accept, allowing the researchers to infer revealed preferences.
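As a toy illustration of how preferences are read off conjoint choice data, the sketch below computes marginal means: for each attribute level, the share of profiles featuring that level that respondents chose. The attribute names, levels, and data are entirely hypothetical and are not the paper's actual design or results; the paper's own estimation may use a different model.

```python
from collections import defaultdict

# Toy choice data: each entry is (attribute levels of a shown profile,
# 1 if the respondent chose it, else 0). Purely illustrative.
choices = [
    ({"fairness_objective": "stated", "incentive": "yes"}, 1),
    ({"fairness_objective": "stated", "incentive": "no"}, 1),
    ({"fairness_objective": "none", "incentive": "yes"}, 1),
    ({"fairness_objective": "none", "incentive": "no"}, 0),
    ({"fairness_objective": "stated", "incentive": "no"}, 1),
    ({"fairness_objective": "none", "incentive": "no"}, 0),
]

def marginal_means(data):
    """Share of times each attribute level appeared in a chosen profile."""
    tallies = defaultdict(lambda: [0, 0])  # (attribute, level) -> [chosen, shown]
    for attrs, chosen in data:
        for attr, level in attrs.items():
            tallies[(attr, level)][0] += chosen
            tallies[(attr, level)][1] += 1
    return {key: chosen / shown for key, (chosen, shown) in tallies.items()}

mm = marginal_means(choices)
print(mm[("fairness_objective", "stated")])  # acceptance rate when objective is stated
print(mm[("fairness_objective", "none")])    # acceptance rate when it is absent
```

A large gap between levels of an attribute (here, stating a fairness objective versus not) indicates that the attribute drives acceptance in simulated choices, independently of what respondents say they prioritize when asked directly.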

The direct ranking showed that users prioritize risk‑related attributes—especially the type of sensitive data collected and the specific privacy‑preserving mechanism (e.g., distributed storage, encryption). In contrast, the scenario‑based choices highlighted benefit‑related attributes: the stated fairness goal and any monetary compensation were the strongest drivers of acceptance. This divergence suggests that while users are cognitively aware of privacy risks, their actual behavioural decisions are swayed by perceived social benefit and personal reward.

Regression analyses examined how individual differences modulate these preferences. Participants with a strong fairness orientation (high concern for non‑discriminatory outcomes) placed greater weight on the fairness objective and were more responsive to monetary incentives. Conversely, privacy‑oriented participants (high privacy concern, active use of privacy safeguards, demand for transparency) focused on risk‑related attributes and were less influenced by incentives. Notably, broader data requests and more extensive data‑use statements paradoxically increased willingness to share, likely because respondents interpreted larger contributions as more impactful for achieving fairness. Trust also varied by actor: respondents showed slightly higher trust in research institutions than in commercial entities, underscoring the importance of transparent third‑party certification.

The authors derive several practical implications. First, designers of MPC‑based fairness audits should communicate risk‑related details (encryption, distributed storage, limited retention) clearly to satisfy the privacy calculus. Second, they should foreground the societal benefit—explicitly stating the fairness objective—and consider modest monetary incentives to boost participation. Third, trust‑building measures such as independent audits, clear governance structures, and user‑controlled consent interfaces can bridge the gap between perceived risk and perceived benefit.

Overall, the study contributes (1) a nuanced, human‑centred understanding of the trade‑offs users make between privacy risk and fairness benefit in the context of MPC, (2) evidence that individual fairness and privacy orientations shape those trade‑offs, and (3) actionable guidance for policymakers, AI providers, and researchers seeking to implement GDPR‑compliant, privacy‑preserving fairness monitoring that achieves meaningful user consent.

