Selecting for Less Discriminatory Algorithms: A Relational Search Framework for Navigating Fairness-Accuracy Trade-offs in Practice

Notice: This research summary and analysis were generated automatically using AI technology; for authoritative details, refer to the original arXiv source.

As machine learning models are increasingly embedded in high-stakes societal decision-making, selecting the right algorithm for a given task, audience, and sector presents a critical challenge, particularly with respect to fairness. Traditional assessments have often framed fairness as an objective mathematical property, treating model selection as an optimization problem under idealized informational conditions. This overlooks model multiplicity: multiple models can deliver similar performance while exhibiting different fairness characteristics. Legal scholars have engaged this challenge through the concept of Less Discriminatory Algorithms (LDAs), which frames model selection as a civil rights obligation. In real-world deployment, this normative challenge is bounded by constraints on fairness experimentation, e.g., regulatory standards, institutional priorities, and resource capacity. Against this backdrop, the paper revisits the relational fairness approach of Lee and Floridi (2021) using updated 2021 Home Mortgage Disclosure Act (HMDA) data and proposes expanding the scope of the LDA search. We argue that extending the LDA search horizontally, i.e., considering fairness across model families themselves, provides a lightweight complement, or alternative, to within-model hyperparameter optimization when operationalizing fairness in non-experimental, resource-constrained settings. Fairness metrics alone offer useful but insufficient signals for evaluating candidate LDAs. Instead, by combining a horizontal LDA search with the relational trade-off framework, we demonstrate a responsible, minimum viable LDA search on real-world lending outcomes. Organizations can adapt this approach to systematically compare, evaluate, and select LDAs that optimize fairness and accuracy in a sector-specific, contextualized manner.
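
To make the horizontal search concrete, the sketch below compares several off-the-shelf model families on the same task, recording both accuracy and a simple group-fairness signal. It is a minimal illustration assuming synthetic data and a demographic-parity gap as the fairness metric; a real audit would use HMDA records and the paper's full relational framework.

```python
# A minimal sketch of a horizontal LDA search, assuming synthetic data and a
# demographic-parity gap as the fairness signal; a real audit would use HMDA
# records and the paper's full relational framework.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for lending data, with a synthetic protected attribute.
X, y = make_classification(n_samples=4000, n_features=10, random_state=0)
group = np.random.RandomState(0).binomial(1, 0.3, size=len(y))
X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, group, random_state=0
)

def demographic_parity_gap(y_pred, g):
    """Absolute gap in positive-outcome (approval) rates between groups."""
    return abs(y_pred[g == 1].mean() - y_pred[g == 0].mean())

# Horizontal search: one default configuration per model family, rather than
# deep hyperparameter tuning within a single family.
families = {
    "logistic": LogisticRegression(max_iter=1000),
    "tree": DecisionTreeClassifier(max_depth=5, random_state=0),
    "forest": RandomForestClassifier(random_state=0),
    "boosting": GradientBoostingClassifier(random_state=0),
}
for name, model in families.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    acc = (pred == y_te).mean()
    gap = demographic_parity_gap(pred, g_te)
    print(f"{name:10s} accuracy={acc:.3f}  parity_gap={gap:.3f}")
```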


💡 Research Summary

As machine learning models become deeply integrated into high-stakes societal decision-making, the tension between predictive accuracy and algorithmic fairness has emerged as a paramount challenge. This paper addresses a critical gap in current algorithmic auditing: the tendency to treat fairness as a static mathematical property to be optimized through intra-model hyperparameter tuning. The authors argue that this "vertical search" approach overlooks "model multiplicity": the phenomenon whereby different model architectures can achieve comparable accuracy while exhibiting significantly different fairness profiles.
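
A toy illustration of model multiplicity, under assumed (made-up) scores: among candidates whose accuracy falls within a small tolerance of the best performer, fairness gaps can still differ substantially, which is what makes an LDA search meaningful.

```python
# Hypothetical scores illustrating model multiplicity; the numbers are made up.
candidates = {
    "logistic": {"accuracy": 0.861, "parity_gap": 0.04},
    "forest":   {"accuracy": 0.868, "parity_gap": 0.11},
    "boosting": {"accuracy": 0.866, "parity_gap": 0.07},
}
EPSILON = 0.01  # accuracy tolerance defining "comparable performance"

best_acc = max(m["accuracy"] for m in candidates.values())
comparable = {name: m for name, m in candidates.items()
              if best_acc - m["accuracy"] <= EPSILON}
# All three candidates perform comparably, yet their fairness gaps differ by
# nearly a factor of three; the least discriminatory candidate is the LDA.
lda = min(comparable, key=lambda name: comparable[name]["parity_gap"])
print(f"comparable set: {sorted(comparable)}; selected LDA: {lda}")
```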

Drawing inspiration from legal scholarship regarding "Less Discriminatory Algorithms" (LDAs), the paper reframes model selection from a mere optimization task into a civil rights obligation. The core contribution of this research is the introduction of a "Horizontal Search" framework. Unlike traditional methods that fine-tune parameters within a single model family, the proposed framework advocates comparing candidates across different model families. This approach is designed as a "minimum viable" strategy for organizations operating under real-world constraints, such as limited computational resources, regulatory boundaries, and institutional priorities.
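
The "minimum viable" appeal is largely a matter of search cost. As a rough sketch (the grid values below are assumptions, not the paper's experimental setup), a vertical search multiplies fits across hyperparameter combinations within one family, while a horizontal search fits one default configuration per family:

```python
# Rough cost comparison of the two search scopes; grid values are assumptions,
# not the paper's experimental setup.
from sklearn.model_selection import ParameterGrid

# Vertical search: many candidate fits inside a single model family.
vertical_grid = ParameterGrid({
    "n_estimators": [100, 200, 400],
    "max_depth": [4, 8, None],
    "min_samples_leaf": [1, 5, 10],
})
print("vertical candidates:", len(vertical_grid))           # 27 fits, one family

# Horizontal search: one default configuration per model family.
horizontal_families = ["logistic", "decision_tree", "random_forest", "boosting"]
print("horizontal candidates:", len(horizontal_families))   # 4 fits total
```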

To validate this framework, the researchers utilized updated 2021 Home Mortgage Disclosure Act (HMDA) data, applying the relational trade-off framework to real-world lending outcomes. The empirical evidence demonstrates that a horizontal search—comparing diverse model families—serves as an efficient and lightweight alternative to exhaustive hyperparameter tuning. By expanding the search scope horizontally, organizations can systematically identify algorithms that satisfy the legal and ethical requirements of being “less discriminatory” without the prohibitive costs of deep vertical optimization.
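
One way to operationalize the relational trade-off comparison is to discard candidates that are Pareto-dominated on the (accuracy, fairness-gap) pair and then reason contextually about the survivors. The sketch below uses placeholder scores, not the paper's HMDA results.

```python
# Relational trade-off sketch: drop candidates that are Pareto-dominated on
# (accuracy up, parity gap down). Scores are illustrative placeholders, not
# the paper's HMDA results.
results = [
    ("logistic", 0.861, 0.04),
    ("tree",     0.842, 0.03),
    ("forest",   0.868, 0.11),
    ("boosting", 0.858, 0.06),
]

def pareto_front(results):
    """Keep models that no other model beats on both accuracy and fairness."""
    front = []
    for name, acc, gap in results:
        dominated = any(
            acc2 >= acc and gap2 <= gap and (acc2 > acc or gap2 < gap)
            for name2, acc2, gap2 in results if name2 != name
        )
        if not dominated:
            front.append(name)
    return front

print("non-dominated candidates:", pareto_front(results))
# -> ['logistic', 'tree', 'forest']; the final pick among these is then a
#    contextual, sector-specific judgment rather than a purely metric one.
```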

Ultimately, the paper concludes that fairness metrics alone are insufficient for responsible deployment. Instead, a relational approach that considers the context-specific trade-offs between accuracy and fairness across various model types is essential. This framework provides a scalable and systematic methodology for sectors like finance and law to select algorithms that optimize both performance and social equity, ensuring that AI deployment aligns with broader societal values and regulatory standards.

