Fairness Across Fields: Comparing Software Engineering and Human Sciences Perspectives


Authors: Lucas Valença (lucas.rodriguesvalen@ucalgary.ca) and Ronnie de Souza Santos (ronnie.desouzasantos@ucalgary.ca), University of Calgary, Calgary, Alberta, Canada

Abstract

Background. As digital technologies increasingly shape social domains such as healthcare, public safety, entertainment, and education, software engineering has engaged with ethical and political concerns primarily through the notion of algorithmic fairness. Aim. This study challenges the limits of software engineering approaches to fairness by analyzing how fairness is conceptualized in the human sciences. Methodology. We conducted two secondary studies, exploring 45 articles on algorithmic fairness in software engineering and 25 articles on fairness from the humanities, and compared their findings to assess cross-disciplinary insights for ethical technological development. Results. The analysis shows that software engineering predominantly defines fairness through formal and statistical notions focused on outcome distribution, whereas the humanities emphasize historically situated perspectives grounded in structural inequalities and power relations, with differences also evident in associated social benefits, proposed practices, and identified challenges. Conclusion. Perspectives from the human sciences can meaningfully contribute to software engineering by promoting situated understandings of fairness that move beyond technical approaches and better account for the societal impacts of technologies.

Keywords: software fairness, software engineering, human sciences

1 Introduction

Fairness has become a central concern in contemporary software engineering, particularly with the growing use of artificial intelligence systems that make or support decisions with direct social consequences [6, 50]. These systems are increasingly deployed in domains such as healthcare, hiring, education, and finance, where their outputs shape opportunities, access, and life outcomes [6, 20, 33, 50]. As a result, expectations of software quality have expanded beyond technical correctness to include ethical accountability and social responsibility [6, 19].
In this context, fairness is often treated as a quality attribute that seeks to ensure equitable system behavior across social groups, aiming to prevent systematic advantage or disadvantage associated with characteristics such as gender, race, or socioeconomic status [19, 20, 56]. Within software engineering, fairness is commonly operationalized through computational definitions that can be measured, tested, and verified [50, 56]. This orientation has led to a proliferation of formal notions such as group, individual, and causal fairness, alongside metrics, testing strategies, and bias mitigation techniques designed to assess and correct algorithmic behavior [6, 12, 20, 50, 56]. While these approaches provide actionable mechanisms for engineering practice, they tend to frame fairness as a technical property of data, models, or systems. In doing so, they often abstract away the social relations, institutional contexts, and power dynamics within which AI systems are designed, deployed, and interpreted [19]. As a consequence, fairness mechanisms may come to define human judgment and decision-making in software engineering without fully accounting for their broader socio-technical implications.

In contrast, the human and social sciences have long explored fairness as a core ethical and social concept [3, 17, 26, 43, 54]. Disciplines such as sociology, law, philosophy, and communication conceptualize fairness as context-dependent, culturally situated, and closely tied to questions of justice, participation, and legitimacy [3, 26, 54]. From these perspectives, fairness cannot be reduced to a single metric or formal property, but emerges through social processes, institutional arrangements, and collective deliberation. The contrast between these perspectives highlights a disciplinary gap: while software engineering emphasizes operationalization and verification, the human sciences emphasize interpretation, normativity, and lived experience [6, 9, 19, 20, 33, 53]. Bridging this gap is important for understanding how AI-based fairness mechanisms mediate human reasoning and responsibility in software engineering practice.

This study addresses this challenge by comparing and contrasting how fairness is defined and approached in software engineering and in the human and social sciences. Specifically, we investigate the following research question: RQ. How does the concept of fairness in software engineering differ from its interpretation in the human and social sciences? To answer this question, we conducted a comparative analysis that integrates a systematic tertiary review of software engineering literature with a rapid review of work from the human sciences. Our findings indicate that software engineering research predominantly frames fairness as a measurable and testable property associated with bias mitigation and algorithmic evaluation, whereas the human sciences conceptualize fairness as a multidimensional social value encompassing justice, inclusion, and participation.

By synthesizing and contrasting these perspectives, this study contributes to human-centered AI research in software engineering by clarifying how technical fairness mechanisms interact with human judgment and social values, and by identifying conceptual limitations that arise when fairness is treated solely as a computational attribute.
The study further offers directions for aligning software engineering approaches with broader socio-ethical understandings of fairness, supporting more reflective and responsible AI-mediated software practices.

2 Background

Software fairness has emerged as a non-functional requirement in contemporary software engineering, gaining importance with the rise of artificial intelligence and machine learning systems. Fairness is increasingly treated as a property that can be defined, measured, and verified alongside performance, reliability, or security [12, 50, 56]. Foundational work has organized fairness definitions into families such as group, individual, and causal fairness, each describing how systems should behave equitably toward different users or social groups [56]. These formalizations enable fairness to be operationalized through quantitative methods that detect and mitigate disparities in algorithmic outcomes, leading to its conceptualization as a measurable quality attribute integrated into testing and verification activities [12, 50]. However, such approaches often abstract fairness from its ethical and social underpinnings, reducing complex moral questions to computational criteria. In response, scholars have described fairness as a socio-technical construct linking technical design decisions to social accountability, introducing the notion of fairness debt to capture the ethical consequences of biased data and design practices that accumulate over time [19]. Despite this growing awareness, empirical studies indicate that fairness remains a secondary quality goal in practice, often overshadowed by accuracy and performance priorities [20].

Research on the human and organizational dimensions of software engineering further shows that fairness concerns extend beyond algorithms and datasets. Engineers frequently discuss fairness in relation to recruitment, compensation, collaboration, and representation, emphasizing that it is embedded in the social structures of software work [49]. These perspectives align with calls to situate fairness within broader ethical frameworks encompassing transparency, responsibility, and respect for diversity [53]. Interdisciplinary studies highlight that fairness is shaped by the legal, cultural, and institutional contexts of software development, indicating that equitable systems depend as much on social organization as on algorithmic design [19, 20, 50]. Overall, fairness in software engineering emerges as a dynamic and interpretive attribute formed through the interaction of technical mechanisms, human judgment, and societal structures, requiring both quantitative verification and qualitative reflection to be meaningfully realized.

3 Method

This research employed two complementary review methodologies to synthesize fairness research across disciplines. A systematic tertiary review in software engineering was used to map how fairness is defined, operationalized, and evaluated, focusing exclusively on existing systematic reviews following established guidance [31]. This approach is appropriate for software engineering, where the growing volume of systematic reviews on fairness, artificial intelligence, machine learning, and responsible software development has produced a substantial body of secondary evidence, enabling a broad synthesis of shared definitions, recurring metrics, and methodological gaps.
In parallel, a rapid review was conducted in the humanities and social sciences to synthesize evidence on algorithmic fairness using systematic yet simplified procedures suited to fast-moving research areas [10]. This method supports timely capture of influential theoretical and empirical discussions in fields where fairness debates evolve quickly in response to technological and social developments, enabling cross-disciplinary comparison between technical and sociotechnical perspectives.

3.1 Research Questions

We defined the same research questions for both systematic reviews to ensure conceptual alignment and comparability between the engineering and humanities analyses. The study addresses the following questions:

RQ1. How is fairness defined and conceptualized?
RQ2. What metrics and criteria are used to evaluate fairness?
RQ3. What social benefits are associated with fair practices, processes, and technologies?
RQ4. What approaches, methods, and practices are proposed to promote fairness in technologies?
RQ5. What limitations and challenges are identified in the efforts to achieve fairness in technologies?

However, during the humanities rapid review, it became evident that RQ2 (What metrics and criteria are used to evaluate fairness?) was not applicable, as the notion of metrics pertains primarily to technical and engineering frameworks rather than the more conceptual analyses typical of the human and social sciences.

3.2 Search Strategy

To conduct the combined systematic reviews, we developed two complementary search strategies targeting different disciplinary scopes, while sharing a common conceptual focus on algorithmic fairness. The first review focused on fairness within the software engineering field, mapping definitions, evaluation criteria, approaches, and limitations. For this review, we conducted both manual and automatic searches in Google Scholar, ACM Digital Library, IEEE Xplore, Scopus, ScienceDirect, and Sage Journals, covering studies published since 2017. The search string used was:

("software fairness" OR "fairness in AI" OR "fairness in ML" OR "algorithmic discrimination" OR "algorithmic bias" OR "software discrimination" OR "software bias" OR "fairness in software engineering") AND ("systematic review" OR "systematic literature review" OR "literature review" OR "survey" OR "mapping study") AND ("software engineering")

The second review focused on human and social science perspectives on algorithmic fairness, aiming to identify how fairness is conceptualized beyond engineering domains. An automatic search was conducted in Google Scholar, and no specific timeline was defined, as we focused on relevance to the theme, as suggested in the guidelines for rapid reviews.
The following string guided the search:

("fairness" OR "justice" OR "ethics" OR "inequality" OR "discrimination" OR "algorithmic discrimination" OR "injustice" OR "social impact*" OR "societal impact*" OR "algorithmic harm*" OR "algorithmic inequality" OR "unfairness") AND ("algorithm*" OR "artificial intelligence" OR "AI" OR "machine learning" OR "ML" OR "generative AI" OR "AI system*" OR "software") AND ("social science*" OR "sociology" OR "education" OR "law" OR "political science" OR "anthropology" OR "economics" OR "psychology" OR "communication" OR "media studies" OR "public policy" OR "science, technology and society" OR "STS" OR "sociotechnical" OR "critical data studies" OR "social theory")

3.3 Selection Process and Data Extraction

We established shared exclusion criteria and adapted inclusion criteria to the specific objectives and contexts of each study. Both inclusion and exclusion criteria are presented in Table 1. After applying these criteria, a total of 70 studies were selected, with 45 from software engineering and 25 from the humanities. The software engineering studies were published between 2017 and 2025, while the studies from the human and social sciences that met our inclusion and exclusion criteria ranged from 2016 to 2025. For each included study, we developed structured extraction tables to systematically capture relevant information used to answer our research questions. The extracted data included core concepts and definitions of fairness, associated characteristics and dimensions, examples of applications and methods, and contextual details illustrating how fairness was operationalized and evaluated across domains.

Table 1: Inclusion and exclusion criteria for both reviews (software engineering and humanities).

Inclusion Criteria (IC)
Software Engineering Review
IC1: The study is a literature review that addresses definitions of fairness in software engineering, including, but not limited to, factors that ground these definitions, how these definitions relate to algorithmic discrimination, and the criteria used to evaluate fairness.
Humanities Rapid Review
IC1: The study explores social impacts and dynamics of algorithms through the lenses of the human sciences (e.g., communication, sociology, anthropology, law), including, but not limited to, algorithmic discrimination, fairness, big data, and other sociotechnical dynamics enacted by algorithms and technologies.
IC2: The study develops critiques of approaches for fair and ethical technologies and algorithms proposed by engineers.

Exclusion Criteria (EC)
Software Engineering Review
EC1: The study does not conduct a literature review of fairness or fairness testing.
EC2: The study cannot be downloaded through the University of Calgary proxy.
EC3: The study is not conducted in English.
EC4: Duplicated studies: if one or more works have duplicates, only the most recent ones are considered. If a conference paper and its journal extension are selected, only the latter is included.
EC5: The study is not conducted in the software engineering field.
EC6: The study was published before 2017.
Humanities Rapid Review
EC1: The study does not explore social impacts of algorithms and technologies.
EC2: The study cannot be downloaded through the University of Calgary proxy.
EC3: The study is not conducted in English.
EC4: Duplicated studies: if one or more works have duplicates, only the most recent ones are considered. If a conference paper and its journal extension are selected, only the latter is included.
EC5: The study is not conducted in human science fields, including, but not limited to, communication, sociology, anthropology, and law.

3.4 Data Analysis and Synthesis

The findings were synthesized using complementary quantitative and qualitative approaches. Descriptive statistics provided an overview of publication patterns, disciplinary focus, and temporal trends across the selected studies [21]. These summaries helped identify how fairness research has evolved within software engineering and the humanities. Thematic synthesis was then applied to interpret how fairness was defined, measured, and theorized in each field [15]. Codes derived from the studies were grouped into categories and refined into analytical themes, revealing convergences and divergences between technical and social interpretations of fairness. These analyses offered both scope and depth, linking empirical patterns with conceptual insights.

3.5 Threats to Validity

This study is subject to common validity threats associated with systematic reviews, including potential bias in study selection, database coverage, and researcher interpretation. To mitigate these risks, multiple databases were searched across both domains and cross-verified with reference lists to ensure completeness. Thematic synthesis followed established guidelines for qualitative analysis in software engineering [15], with one reviewer conducting the initial coding and another reviewing and refining it through biweekly one-hour discussion sessions until no disagreements remained. The integration of quantitative and qualitative synthesis provided methodological triangulation, strengthening reliability, although variations in reporting quality may still limit the comprehensiveness of the evidence.

4 Findings

In this section, we present the results from the systematic reviews conducted to understand fairness in the fields of software engineering and human sciences.

4.1 Software Engineering Tertiary Literature Review

Our tertiary review analyzed 45 software engineering papers and shows that fairness research is largely centered on AI and ML decision-making systems and on formal and statistical fairness metrics, which account for approximately half of the studies. Bias and algorithmic discrimination appear frequently as core topics, with bias often treated as a condition that does not necessarily lead to discrimination. Fairness is commonly discussed alongside accountability and transparency within responsible AI and science, technology, and society perspectives, even when discrimination is not the primary focus. Only a small number of studies address fairness testing, application-specific contexts, or critical perspectives on prevailing fairness practices. Below, we present the evidence collected from the identified papers to answer the research questions.

4.1.1 How is fairness defined and conceptualized? The most common definition of fairness, present in all 45 analyzed texts, is the absence of discrimination through the distribution of results. Distribution of results, in the context of digital systems, is tied to decision-making systems, which have become very common due to the development of AI and ML.
In other words, fairness is often understood as the absence of discrimination in the final decisions of AI and ML systems. Those decision-making processes are the result of several classifications (e.g., sharing advertising based on a person's data, such as gender and country). More than that, the result is itself a classification (e.g., classifying a person as guilty or not guilty through a trial-assessment tool). Fairness studies add one more layer: they focus on classifying whether a system of classifications is fair. The definition of fairness as a formal, mathematical, and statistical property of the software was found in 19 studies. This definition is justified because algorithms "speak mathematics", so it becomes important to mathematically "define aspects of society's fundamentally vague notions of fairness and justice" [42]. Continuing this formal comprehension of fairness, [41] argue that fairness is an abstract notion that needs to be made concrete. The authors argue that AI and machine learning systems can only be considered trustworthy if their fairness can be verified and validated. They emphasize the need for a systematic approach to define fairness within the scope of the developing system, describe it as a software attribute that can be integrated into later phases of the development process, and ensure it can be tracked and verified throughout both development and operation.

Table 2: Software engineering articles per fairness definition

Definition of fairness | Articles
Fairness concerns the absence of bias/discrimination through distribution of results — outcomes should not favor or harm individuals or groups unjustly. | P01–P45
Fairness is defined by how decisions are made — the process must be impartial, consistent, and respectful. | P01, P02, P05, P12, P14, P15, P16, P17, P22, P23, P26, P27, P30, P34, P38, P39, P40, P43, P45
Fairness is a formal property of the system. | P03, P04, P05, P08, P09, P10, P11, P13, P17, P20, P24, P25, P26, P29, P32, P37, P40, P41, P45

4.1.2 What metrics and criteria are used to evaluate fairness? The definitions presented in Section 4.1.1 underpin the premise that fairness must be measurable and testable, which leads software engineers to define criteria to evaluate whether a system is fair. These criteria are grounded in binary classifier evaluation metrics, such as true positives, positive predictive value, true positive rate, and false positive rate. From these criteria, several fairness metrics emerge. The goal of this research is not to discuss metrics individually, but rather to understand what these sets of metrics represent. In other words, instead of asking "what metrics do engineers create?", we aim to understand "what do the criteria that underlie fairness metrics mean?". In this sense, fairness metrics reflect four primordial meanings: fair treatment of individuals, fair treatment of groups, fairness through awareness, and fairness through unawareness. The first two determine whether criteria focus on individuals or groups, while fairness through unawareness assumes that ignoring sensitive attributes will prevent discrimination and fairness through awareness explicitly engages with such attributes to achieve fair outcomes. These baselines are closely linked to the concepts of fair process and fair distribution of results.
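For concreteness, the standard formal statements behind three of these families are shown below. These are textbook formulations from the broader fairness literature, written in our own notation (classifier f with prediction Ŷ, features X, protected attribute A, similarity metrics d and D); they are not definitions quoted from the reviewed papers.

```latex
% Group fairness (demographic parity): positive predictions are
% distributed equally across protected groups.
P(\hat{Y}=1 \mid A=a) \;=\; P(\hat{Y}=1 \mid A=a') \qquad \forall\, a, a'

% Individual fairness (a Lipschitz-style condition): individuals who are
% similar under the task metric D receive similar predictions under d.
d\!\left(f(x), f(x')\right) \;\le\; L \cdot D(x, x')

% Fairness through unawareness: the protected attribute is simply
% excluded from the model's inputs.
\hat{Y} = f(X \setminus \{A\})
```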
The most common metrics include equalized odds, demographic parity, equal opportunity, predictive equality, predictive parity, and counterfactual fairness, which impose constraints on outcome distributions across groups. Counterfactual fairness further requires that an individual's prediction remain unchanged if a protected attribute were hypothetically altered. Overall, these definitions show that fairness metrics focus on the distribution of results and statistical parity between groups, without necessarily addressing the underlying causes of discrimination.

Table 3: Software engineering articles per fairness metric

Fairness metric | Articles
Group Fairness | P01–P45
Individual Fairness | P01, P02, P03, P04, P05, P06, P07, P08, P09, P10, P11, P13, P14, P15, P16, P17, P20, P21, P22, P24, P25, P26, P27, P28, P29, P32, P33, P34, P35, P36, P37, P38, P39, P40, P41, P42, P43, P45
Fairness through Awareness | P01, P03, P04, P05, P06, P07, P08, P09, P11, P12, P13, P14, P16, P21, P22, P23, P24, P25, P26, P27, P28, P29, P32, P34, P36, P37, P38, P39, P40, P41, P42, P43, P45
Demographic Parity | P01, P03, P04, P05, P06, P07, P08, P09, P11, P13, P14, P16, P21, P22, P23, P24, P25, P26, P27, P31, P32, P34, P35, P36, P37, P38, P40, P41, P42, P43, P45
Equalized Odds | P01, P04, P05, P06, P07, P08, P09, P11, P13, P14, P16, P21, P23, P24, P25, P26, P27, P31, P32, P34, P35, P36, P37, P38, P41, P42, P45
Equal Opportunity | P01, P04, P05, P06, P07, P08, P09, P11, P14, P16, P21, P22, P24, P25, P26, P27, P31, P32, P34, P35, P36, P37, P38, P41, P42, P43
Fairness through Unawareness | P01, P03, P04, P05, P06, P07, P08, P11, P13, P14, P16, P17, P21, P24, P25, P27, P32, P34, P35, P36, P37, P42, P45
Causal or Counterfactual Fairness | P01, P03, P04, P05, P08, P09, P11, P12, P13, P14, P16, P21, P24, P25, P34, P36, P37, P40, P41, P43, P45
Predictive Equality | P03, P07, P09, P11, P12, P13, P14, P16, P24, P27, P38, P40, P41, P42
Predictive Parity | P03, P09, P11, P12, P14, P16, P24, P27, P38, P40, P41, P42
Conditional Demographic Parity | P03, P05, P07, P11, P13, P14, P16, P21, P24, P36, P37
Disparate Impact | P09, P12, P16, P22, P27, P31, P32, P33, P34, P35, P42
Calibration | P03, P05, P14, P16, P21, P29, P41, P42
Treatment Equality | P03, P06, P07, P11, P14, P16, P21, P24
Intersectional Fairness | P01, P08, P11, P14, P16, P29, P41
No Proxy Discrimination | P01, P03, P04, P07, P14, P16
Balance for Negative/Positive Class | P03, P07, P11, P14, P16
No Unresolved Discrimination | P03, P04, P07, P14, P16
Well Calibration | P03, P07, P14, P16, P40
Overall Accuracy Equality | P03, P07, P14, P16
Generalized Entropy Index | P07, P09, P22
Preference-based Fairness | P24, P45
Accuracy Rate Difference | P26
Bounded Group Loss | P26
Differential Fairness | P29
Informational Fairness | P45
Interactional Fairness | P45
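To illustrate how outcome-distribution metrics of this kind are computed in practice, the sketch below evaluates demographic parity and equalized-odds gaps over synthetic decisions. The function names, the toy data, and the two-group 0/1 encoding are our own illustrative assumptions, not reconstructions of any reviewed study's tooling.

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between two groups."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equalized_odds_gap(y_true, y_pred, group):
    """Largest gap in TPR or FPR between two groups (equalized odds)."""
    gaps = []
    for label in (1, 0):  # label 1 compares TPRs, label 0 compares FPRs
        mask = y_true == label
        rate_a = y_pred[mask & (group == 0)].mean()
        rate_b = y_pred[mask & (group == 1)].mean()
        gaps.append(abs(rate_a - rate_b))
    return max(gaps)

# Synthetic example: binary decisions for two demographic groups, where
# group 1 is slightly favored by construction.
rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=1000)            # protected attribute (0/1)
y_true = rng.integers(0, 2, size=1000)           # ground-truth outcomes
y_pred = (rng.random(1000) < 0.5 + 0.1 * group).astype(int)

print(f"Demographic parity gap: {demographic_parity_gap(y_pred, group):.3f}")
print(f"Equalized odds gap:     {equalized_odds_gap(y_true, y_pred, group):.3f}")
```

A deployment would then flag gaps above some tolerance; the tolerance itself is a policy choice, which is precisely the kind of contextual judgment these metrics leave open.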
4.1.3 What social benefits are associated with fair practices, processes, and technologies? In the literature analyzed through the systematic review, three main perceived social benefits of fair systems were identified. The most frequently discussed benefit was general bias and discrimination mitigation, which is consistent with the proposals for fairness definitions and metrics. The mitigation of discrimination is often linked to understandings of discrimination framed through distributive or procedural perspectives. The second most common benefit was trust and reliability, showing that fairness is linked to increased user confidence. This is, of course, associated with discrimination mitigation, but also with factors such as auditability and transparency, which are sometimes linked to the overall fairness of a software system. Finally, legal compliance was noted as an important benefit, indicating that fairness contributes to aligning systems with regulatory requirements and reducing legal risks, which may be explained by the influence of legal principles and frameworks that often guide fairness definitions.

Table 4: Software engineering articles per social benefits associated with fair practices, processes and systems

Perceived benefit of a fair system | Articles
General bias/discrimination mitigation | P01, P02, P03, P06, P07, P09, P10, P11, P12, P14, P17, P18, P19, P21, P24, P25, P26, P27, P28, P29, P31, P37, P38, P41, P42, P43, P45
Trust and reliability | P02, P05, P08, P15, P28, P30, P31, P34, P38, P41, P42, P43, P45
Legal compliance | P02, P07, P08, P10, P11, P14, P34

4.1.4 What approaches, methods, and practices are proposed to promote fairness in technologies? The literature reports a diverse set of approaches, techniques, and procedures to promote fairness in systems. The most common approaches consist of techniques based on algorithm/model manipulation, often categorized into pre-processing, in-processing, and post-processing, which address bias before model training, during training, or by adjusting model outputs through actions such as reweighting data, modifying decision boundaries, or applying fairness constraints (one such technique is sketched after Table 5). Complementing these, techniques based on data collection or manipulation focus on improving the quality, representativeness, and diversity of datasets, including the construction of datasets with different demographic groups and the generation of synthetic data for underrepresented groups. Fairness toolkits operationalize these technical approaches in a more plug-and-play style by providing built-in techniques aligned with different fairness objectives. Taking a less technical route, human-in-the-loop approaches emphasize human participation as a fundamental element in the development of algorithmic systems to support ethical and fair practices, while also recognizing the limits of superficial implementations that may fail to address deeper socio-technical impacts. Connected to this, some studies discuss the importance of diversity in development teams. Other practices focus on integrating fairness into the design process, including fairness requirements elicitation, stakeholder inclusion, and the literacy and education of both users and developers. The literature also reports ethical and institutional frameworks, including approaches aimed at tracing the causes and effects of discrimination within systems, as well as legal regulation, transparency, explainability, and auditing practices that operate at organizational and institutional levels.

Table 5: Software engineering articles per approach, technique and procedure for promoting fairness

Approaches, techniques and procedures | Articles
Techniques based on algorithm/model manipulation | P01, P03, P04, P05, P06, P07, P08, P09, P10, P11, P12, P13, P14, P15, P16, P17, P19, P21, P22, P23, P24, P25, P26, P27, P28, P29, P31, P32, P33, P34, P35, P36, P37, P38, P40, P41, P42, P43, P44
Techniques based on data collection or manipulation | P04, P05, P06, P07, P10, P11, P12, P13, P14, P16, P17, P19, P21, P22, P24, P25, P26, P27, P28, P29, P31, P32, P33, P34, P35, P36, P37, P38, P40, P41, P42, P43, P44
Human-in-the-loop | P04, P10, P11, P12, P13, P23, P26, P33, P36, P39, P43, P44, P45
Fairness toolkits | P04, P12, P13, P22, P27, P28, P29, P36, P40, P43, P44
Institutional and ethical frameworks | P01, P02, P04, P12, P13, P23, P30, P38, P40
Transparency and explainability | P15, P18, P30, P31, P38, P45
Algorithmic legal regulation | P01, P18, P30, P38, P40
Integrate fairness into the design process | P18, P36, P39, P44
Auditing processes | P04, P18, P29
Diversity in development team | P18, P22, P43
Fairness requirements elicitation | P04, P17
User/developer literacy and education | P18
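As a deliberately minimal illustration of the pre-processing family named above, the following sketch implements instance reweighting in the style of Kamiran and Calders' reweighing scheme, which weights each (group, label) combination so that the protected attribute and the label become statistically independent in the training distribution. The variable names and synthetic data are our own assumptions.

```python
import numpy as np

def reweighing_weights(group, y):
    """Instance weights w(a, y) = P(A=a) * P(Y=y) / P(A=a, Y=y).

    Under these weights, group membership and label are statistically
    independent in the reweighted data, so a learner no longer sees
    positive-label rates that differ by group.
    """
    n = len(y)
    weights = np.empty(n, dtype=float)
    for a in np.unique(group):
        for label in np.unique(y):
            mask = (group == a) & (y == label)
            p_joint = mask.sum() / n
            p_expected = (group == a).mean() * (y == label).mean()
            weights[mask] = p_expected / p_joint if p_joint > 0 else 0.0
    return weights

# Example: group 1 receives positive labels far more often than group 0.
rng = np.random.default_rng(1)
group = rng.integers(0, 2, size=2000)
y = (rng.random(2000) < np.where(group == 1, 0.7, 0.3)).astype(int)

w = reweighing_weights(group, y)
for a in (0, 1):
    raw = y[group == a].mean()
    adj = np.average(y[group == a], weights=w[group == a])
    print(f"group {a}: raw positive rate {raw:.3f} -> reweighted {adj:.3f}")
# Both reweighted rates converge to the overall base rate; the weights can
# then be passed to any learner that accepts per-sample weights.
```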
4.1.5 What limitations and challenges are identified in the efforts to achieve fairness in technologies? The last question in this tertiary review focuses on the reported limitations and challenges of definitions and approaches for fairness. The most frequently mentioned challenge is the contextual aspect of fairness, with studies arguing that fairness in machine learning cannot be understood or applied in isolation from the context in which decisions are made. A lack of consensus on fairness, bias, and their metrics is also reported, as there is no universal agreement on what fairness means, how bias should be defined, or which metrics best capture these concepts, making it difficult to justify specific choices in practice. Several studies highlight the need for sensitive and vast data, noting challenges related to privacy, consent, and security, and the resulting difficulty of implementing approaches that rely on the distribution of results according to sensitive attributes. The limitations of technical and statistical approaches are also emphasized, as these approaches are often seen as insufficient to address the root causes of discrimination, even when contextual factors are acknowledged. The trade-off between fairness and performance appears as a recurring challenge, including the impossibility of satisfying every metric, conflicts between ethical goals and efficiency, and the computational complexity and resource usage required by fairness techniques. Additional challenges include the limited interpretability and transparency of black-box models, the complexity of complying with heterogeneous legal regulations, and the insufficient consideration of intersectionalities due to simplifications and false binaries in fairness approaches. Some studies also report developer bias, which may be perceived as "objective" and reinforce discrimination under the guise of a "neutral" technology, as well as the abstract nature of fairness and the difficulty of operationalizing it through coding practices. Finally, a small number of studies mention the rapid pace of AI development, which discourages careful fairness assessment, and the lack of studies in specific domains as additional challenges.

4.2 Humanities Systematic Literature Review

We identified 25 studies in humanities-related domains that recently discussed fairness and fit the purposes of this study.

4.2.1 How is fairness defined and conceptualized? The most common definition of fairness found in this study understands it as a sociopolitical and historically situated construct shaped by power and structural inequalities.
This perspective emphasizes that unfairness and discrimination are not isolated technical problems but are diffused through institutional infrastructures, governance policies, language, culture, and interpersonal dynamics, and are deeply connected to historical processes such as colonial power relations that continue to influence contemporary forms of discrimination, including algorithmic injustices. From this view, anchoring fairness solely in formal or distributional definitions is insufficient, as artificial intelligence systems are built within social structures that produce and maintain hierarchies and power relations. As a result, human science disciplines often shift the focus from asking whether an AI system is fair to asking how technologies reshape power and sociotechnical dynamics. A second definition, present in fewer studies, conceptualizes fairness as the outcome of legal anti-discrimination guidelines, institutional governance, and value-based frameworks, where fairness is closely tied to legal standards, data protection, and governance practices that regulate the development and use of AI systems. The third conceptualization treats fairness as a normative and ethical principle, in which moral reasoning guides judgments about algorithmic decisions by considering intentions, social effects, and ethical rules, as well as epistemic uncertainties and normative concerns arising from algorithmic decision making.

Table 6: Humanities articles per fairness definition

Definition of fairness in human sciences disciplines | Articles
A socio-political and historically situated construct shaped by power and structural inequalities. | P01, P02, P03, P04, P05, P07, P08, P09, P10, P11, P12, P13, P14, P15, P16, P17, P19, P20, P21, P22, P23, P24, P25
The outcome of legal anti-discrimination guidelines, institutional governance, and value-based frameworks. | P05, P06, P08, P14, P17, P18
A normative and ethical principle, with moral reasoning guiding judgments about the effects of algorithmic decisions. | P06, P09, P13

4.2.2 What social benefits are associated with fair practices, processes, and technologies? The most frequently reported benefit consists in the reduction of structural injustices, identified in all 25 analyzed articles. In contrast to the software engineering review, where discrimination reduction is often tied to outcomes, this literature emphasizes that reducing discrimination cannot be limited to the distribution of results produced by algorithms. An algorithm that complies with fairness metrics may still be considered unfair or unethical, as structural injustices are reproduced through technologies in ways that involve historical and colonial influences, institutional and legal actors, and broader relations of power and control. A second widely reported benefit is democratic participation and collective agency over technologies, present in 19 articles, which emphasizes restoring agency and control to people and workers in contexts where automation and platform-based systems have dispossessed them, often under neoliberal forms of governance. Equitable access and recognition, discussed in 13 articles, is closely related to these benefits and focuses on the recognition of diverse forms of knowledge produced by minority groups that technologies may erase, raising questions about which forms of knowledge are prioritized and which are marginalized.
Finally, disruption of entrenched power dynamics refers to fairness as a means of challenging structural and normative foundations of inequality, where fair systems do more than distribute rights equally and instead transform social norms, redress harm, and build collective power among groups historically excluded from society and from technological systems.

Table 7: Humanities articles per social benefits associated with fair practices, processes and systems

Perceived benefit of a fair system | Articles
Reduction of structural injustices | P01–P25
Democratic participation and collective agency over technologies | P01, P02, P03, P04, P10, P12, P13, P14, P16, P17, P18, P19, P20, P21, P22, P23, P24
Equitable access and recognition | P01, P02, P04, P07, P13, P14, P15, P16, P17, P18, P19, P20, P21
Disrupt entrenched power dynamics | P02, P04, P07, P09, P10, P14, P19, P20, P21, P22, P25

4.2.3 What approaches, methods, and practices are proposed to promote fairness in technologies? The human and social sciences literature reports a set of approaches aimed at promoting fairness beyond technical interventions. Embedding multiple perspectives in system design, present in 16 studies, seeks to prevent technologies from reproducing the narrow views of dominant groups by incorporating feminist, intersectional, and decolonial perspectives into design processes, often through interdisciplinary collaboration, to enable plural understandings of justice. Closely aligned with this, investigating the intersecting structures of inequality, oppression, and exploitation, reported in 8 articles, emphasizes analyzing how technologies reproduce colonial, racial, gendered, and capitalist power relations rather than treating bias as an isolated technical issue. Legal frameworks, discussed in 10 studies, are viewed as mechanisms to translate ethical principles into enforceable norms, particularly through the combination of anti-discrimination law and data protection regulations, and are often connected to promoting transparency and accountability over technologies, identified in 11 articles, which extends beyond explainable models to include visibility into institutional decision making, funding structures, and labor conditions. Participatory practices, also present in 11 studies, focus on including those most affected by technologies in their design, evaluation, and governance, framing fairness as a collective and relational process that challenges corporate dominance. These approaches are supported by education and literacy about digital systems, identified in 7 articles, which stress that fairness requires understanding not only how technologies work but also their social and ethical implications. Finally, ethical refusal, found in 7 studies, argues that some technologies, such as surveillance systems, cannot be made fair at all, positioning fairness not only as a matter of design choices but also as a question of whether certain technologies should be developed in the first place.
Table 8: Humanities articles per approach, technique and procedure proposed to promote fairness

Approaches, techniques and procedures | Articles
Embedding multiple perspectives in system design | P01, P02, P07, P09, P10, P11, P12, P14, P15, P16, P19, P20, P21, P23, P24, P25
Participatory practices | P02, P03, P04, P07, P13, P14, P15, P19, P21, P22, P24
Legal frameworks | P01, P02, P04, P05, P06, P08, P09, P14, P17, P18
Investigating the intersecting structures of inequality, oppression and exploitation | P02, P07, P16, P19, P20, P21, P22, P25
Promoting transparency and accountability over technologies | P02, P08, P09, P12, P16, P18, P21, P22
Education and literacy about digital systems | P01, P02, P10, P12, P13, P14, P19
Ethical refusal | P02, P03, P13, P15, P19, P20, P22

4.2.4 What limitations and challenges are identified in the efforts to achieve fairness? The main challenge identified concerns the failure of AI fairness efforts to recognize and address the complexity of social life. Present in 20 articles, this critique shows how AI practitioners often treat sociotechnical problems as technical and formal issues, reducing the complexity of lived material experiences into quantifiable categories and treating fairness interventions as mere fixes without questioning why bias exists or how it is perpetuated and complexified by technologies. Closely related to this, the reproduction of hierarchical and historical structures of discrimination and power, identified in 10 articles, argues that AI fairness practices often replicate the same structural inequalities and power relations they claim to address, mirroring liberal anti-discrimination approaches that individualize injustice, ignore structural causes, and protect existing hierarchies of race, gender, and capital. Non-transparency and accountability gaps, discussed in 13 articles, further reinforce this reproduction when harms cannot be attributed to responsible actors and structural discrimination is framed as a technical failure, requiring responses that go beyond explainable AI toward institutional transparency and participatory governance. These gaps are intensified by the lack of comprehensive regulation, reported in 3 studies, which note that existing legal frameworks are insufficient to address sociotechnical complexities and often prioritize innovation and profit through corporate self-governance and voluntary ethical codes, while failing to account for broader social dynamics, civil liberties, and privacy concerns.
Table 9: Humanities articles per limitations and challenges identified in the efforts to achieve fairness

Limitation or challenge | Articles
Failure of AI-practitioner efforts to recognize and address the complexity of sociotechnical dynamics | P01, P02, P03, P04, P06, P07, P10, P12, P13, P14, P15, P16, P17, P19, P20, P21, P22, P23, P24, P25
Technical fairness practices reproduce hierarchical structures of discrimination and power | P01, P02, P13, P16, P19, P20, P21, P22, P24, P25
Non-transparency and accountability gaps | P06, P07, P08, P09, P10, P11, P12, P14, P19, P20, P21, P22, P24, P25
Lack of comprehensive regulation | P08, P14, P25

4.3 Cross-Domain Integration: Software Engineering and Humanities Perspectives

Across both domains, the findings reveal that software engineering and the humanities approach fairness through fundamentally different yet potentially complementary lenses. Software engineering research defines fairness as a measurable property of systems, emphasizing formal metrics, verifiability, and performance trade-offs to ensure equitable outcomes. In contrast, humanities scholarship frames fairness as a sociopolitical and historically situated construct rooted in power, inequality, and justice. While the engineering perspective operationalizes fairness into quantifiable procedures, the humanities perspective shows how such formalization can obscure the structural and historical causes of discrimination. Overall, these perspectives point to both the necessity and the limitation of reducing fairness to technical optimization, indicating that fairness in software systems depends on connecting measurable criteria with contextual and ethical understanding that situates technology within broader social realities. Table 10 summarizes our analysis.

Table 10: Conceptual characterization of fairness across domains

Domain | Conceptual Orientation | Characterization of Fairness
Software Engineering | Technical, measurable, and algorithmic perspective | Metric-oriented, system-focused, verification-driven, model-centric, performance-aware, data-dependent, and compliance-focused. Fairness is treated as a quantifiable quality attribute integrated into system design, measurable through formal metrics, and bounded by performance and compliance constraints.
Human Sciences | Ethical, contextual, and justice-oriented perspective | Justice-oriented, context-aware, power-focused, value-driven, participatory, intersectional, and critical-reflective. Fairness is framed as a historically, ethically, and politically situated value that demands contextual, participatory, and critical engagement.
Intersection | Socio-technical, responsible, and integrated perspective | Accountability-focused, transparency-driven, trust-enhancing, diversity-aware, education-centered, ethics-integrated, and contextually measurable. Fairness is conceptualized as a co-constructed socio-technical practice linking systems and society, combining measurable rigor with contextual and ethical awareness.

5 Discussion

In this section we answer our research questions, compare our findings with the literature, and discuss their implications.

5.1 Answering the Research Questions

Our first RQ was How is fairness defined and conceptualized? In software engineering, fairness is defined as a measurable property of systems, emphasizing balanced outcomes and verifiable performance. In the humanities, fairness is viewed as a social and political construct rooted in justice and power.
The rst treats fairness as technical optimization; the second as contextual transformation. The second RQ was What metrics and criteria are used to evaluate fairness? In software engineering, fairness is evaluated through statistical measures such as demographic parity and equal oppor- tunity . These metrics help detect bias but conne fairness to what can be quantied, overlooking broader social realities. Our third RQ asked What so cial benets are associated with fair practices, pro- cesses, and technologies? Both domains link fairness to reducing discrimination and building trust. Engineering emphasizes relia- bility , transpar ency , and compliance, while the humanities focus on justice, equity , and empowerment. The fourth RQ was What approaches, methods, and practices are proposed to promote fairness in technologies? Engineering promotes algorithmic adjustments, fairness toolkits, and human-in-the-loop design. The humanities emphasize participator y design, inter disciplinary collaboration, and ethical r eection on whether technologies should exist at all. Fi- nally , the fth RQ was What limitations and challenges are identie d in the eorts to achieve fairness in technologies? Engine ering faces challenges with data quality , metric conicts, and performance trade-os. The humanities highlight structural inequities, weak regulation, and the reduction of social problems to technical ones. Both agree that fairness requires attention to context, ethics, and human impact. 5.2 Comparing and Contrasting Findings After exploring fairness in both software engineering and the hu- man sciences, we discuss the limits of engineering approaches to fairness to expand these denitions and inspire new perspectives for building fair systems. Fairness in softwar e engineering is tied to decision-making systems that classify and organize the world. A classication system assumes completeness and total cov erage of what it describes ([ 5 ]), but such completeness is impossible in sociotechnical contexts. A system may appear “fair” for assigning Conference’17, July 2017, W ashington, DC, USA Lucas V alença and Ronnie de Souza Santos result A to person B, yet that fairness cannot b e assume d across dierent times, contexts, and institutional r ealities. Reducing fair- ness to the distribution of results hides the situated and material conditions that shape those outcomes, which raises the question of what forms of sociotechnical interaction b ecome invisible through such simplication. This limitation becomes clear when examining real-w orld cases. In Rio de Janeiro, several Uber drivers were killed after navigation al- gorithms sent them through ar eas contr olled by militias [ 13 , 16 , 44 ]. This situation cannot be explained through fairness as distribution of results, since no metric captures ho w an algorithm interacts with local violence and geography . Algorithms shape urban experience without accounting for it, overlooking sociopolitical conditions such as public safety . As [ 27 , 58 ] observe, AI practitioners often fail to consider what it truly means to design fair technologies, re- producing existing power structures and discrimination. A fairness logic centered only on balance d results therefore obscures the social realities within which technologies operate. Procedural fairness app ears mor e inclusiv e but remains conned to mathematical logic. 
As [40] notes, assessing fairness depends on context and perception, yet treating context as external assumes that society exists apart from engineering. In practice, process becomes a statistical mechanism that seeks fair features and balances accuracy while ignoring the social consequences of those choices. This view risks absolving algorithms of responsibility by claiming fairness when sensitive attributes are removed [22], which is the baseline for fairness through unawareness. However, as [48] argues, sensitive data such as race or gender relate to other attributes like education or income, and ignoring them conceals rather than removes bias, reinforcing existing inequalities, as the sketch below illustrates.
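The concealment [48] describes is easy to demonstrate. In the sketch below (synthetic data of our own construction, not an example from the reviewed studies), a decision rule that never reads the sensitive attribute still produces disparate positive rates, because a correlated proxy feature carries the same information.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10_000

# Sensitive attribute and a correlated proxy (think of a neighborhood or
# income feature that tracks group membership; an assumed toy setup).
group = rng.integers(0, 2, size=n)
proxy = group + rng.normal(0.0, 0.6, size=n)   # proxy correlates with group
skill = rng.normal(0.0, 1.0, size=n)           # legitimate feature

# "Fairness through unawareness": the decision rule never reads `group`,
# only `skill` and `proxy`.
score = skill + 0.8 * proxy
decision = (score > np.quantile(score, 0.5)).astype(int)

for a in (0, 1):
    print(f"group {a}: positive rate {decision[group == a].mean():.3f}")
# Despite never using the sensitive attribute, the two rates diverge
# sharply: the proxy reconstructs group membership, which is the point
# about unawareness concealing rather than removing bias.
```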
These fairness conceptions focus mainly on discrimination yet fail to address the complex sociotechnical interactions that generate it. Even within their technical scope, they do not produce systems free from discrimination [23, 27, 57]. [59] and [39] highlight the need to engage actors beyond organizational boundaries, while [47] identifies design and historical biases as sources of fairness debts. However, as long as fairness is framed through cause-and-effect models, algorithms simply carry forward existing inequities. Following [32], replacing causes with interacting actors offers a more material understanding of fairness as a collective and contextual process rather than a technical adjustment.

Formal definitions of fairness extend these limits by reducing fairness to a verifiable system property. Although algorithms "speak mathematics" [42], they are more than sequences of logic and computation. As [41] observes, embedding fairness within the development process allows auditing and verification but disconnects fairness from the material realities in which technologies operate. Software engineers define fairness through what they can formalize, stripping away ethical and political meaning in favor of operational clarity. Fairness in this context serves the logic of a system that values only what can be codified and tested, leaving out the social and moral dimensions that make technologies just or unjust.

Ultimately, defining fairness as a formal, procedural, or distributive property reveals how computer science knowledge practices operate as practices of absence [35], excluding the complexities of human experience in favor of measurable criteria. Metrics such as demographic parity and conditional statistical parity [1, 30] reduce lived experiences to variables and disregard the structural conditions that precede algorithmic decisions. Sensitive attributes like race, gender, and class are not simply data points but markers of material life that algorithms erase. As shown in university admissions systems, such simplifications can reproduce inequality rather than remedy it. Scholars from the humanities therefore call for ethical refusal, questioning whether some technologies should be built at all [52, 57], and for a structural view of fairness that confronts the hierarchies and systems of power technologies often reproduce [62].

5.3 Implications for Research and Practice

For research, our results indicate the need to move beyond viewing fairness as a fixed or measurable system property. Existing models often rely on formal and procedural definitions centered on classification and distribution, despite the fact that sociotechnical contexts are dynamic and incomplete. Future research should therefore focus on how fairness is produced within networks of people, data, and institutions, paying attention to the relationships between technical mechanisms and the material, social, and historical conditions in which technologies are developed and used. Approaches that combine quantitative evaluation with qualitative and participatory inquiry can support more interdisciplinary and context-grounded fairness research. For practice, our findings indicate that fairness cannot be achieved solely through audits, metrics, or formal verification. Developers and organizations need to account for how algorithmic decisions interact with users' lived realities, including geography, safety, and inequality. Treating fairness as a purely mathematical concern obscures the social and political dimensions of technology. Fairness should instead be approached as an ongoing process involving negotiation, accountability, and reflection, which includes engaging diverse communities, documenting trade-offs transparently, and recognizing the ethical consequences of technical systems. In some cases, achieving fairness may also require refusing to design or deploy technologies that perpetuate harm.

6 Conclusions and Future Work

This study compared how fairness is understood and practiced in software engineering and in the human sciences. The analysis of seventy studies revealed a divide between the formal, metric-based definitions prevalent in software engineering and the socially situated interpretations emphasized in the humanities. While software engineering often treats fairness as a measurable attribute aimed at bias mitigation and accountability, the human sciences frame it as a political and relational concept tied to power and social structures. Bridging these perspectives requires rethinking fairness not only as a system property but also as a situated practice shaped by context and values. Our future work will extend this analysis by collecting the experiences of software practitioners, enabling a tripartite comparison between software engineering literature, software engineering practice, and the social sciences. This integration aims to advance fairness as an ethical principle guiding the design and governance of sociotechnical systems.

7 Data Availability

The papers analyzed in this scoping study are available at https://figshare.com/s/83d6c294f2f6e4b2630d

References

[1] Agathe Balayn, Christoph Lofi, and Geert-Jan Houben. 2021. Managing bias and unfairness in data for decision support: a survey of machine learning and data engineering approaches to identify and mitigate bias and unfairness within data management and analytics systems. 30, 5 (2021), 739–768. doi:10.1007/s00778-021-00671-8
[2] Chelsea Barabas, Colin Doyle, Jb Rubinovitz, and Karthik Dinakar. 2020. Studying up: reorienting the study of algorithmic fairness around issues of power. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (Barcelona, Spain, 2020-01-27). ACM, 167–176. doi:10.1145/3351095.3372859
[3] Joachim Baumann and Michele Loi. 2023. Fairness and risk: an ethical argument for a group fairness definition insurers can use. Philosophy & Technology 36, 3 (2023), 45.
[4] Tuba Bircan and Mustafa F. Özbilgin. 2025. Unmasking inequalities of the code: Disentangling the nexus of AI and inequality. 211 (2025), 123925. doi:10.1016/j.techfore.2024.123925
References

[1] Agathe Balayn, Christoph Lofi, and Geert-Jan Houben. 2021. Managing bias and unfairness in data for decision support: a survey of machine learning and data engineering approaches to identify and mitigate bias and unfairness within data management and analytics systems. 30, 5 (2021), 739–768. doi:10.1007/s00778-021-00671-8
[2] Chelsea Barabas, Colin Doyle, Jb Rubinovitz, and Karthik Dinakar. 2020. Studying up: reorienting the study of algorithmic fairness around issues of power. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (Barcelona, Spain). ACM, 167–176. doi:10.1145/3351095.3372859
[3] Joachim Baumann and Michele Loi. 2023. Fairness and risk: an ethical argument for a group fairness definition insurers can use. Philosophy & Technology 36, 3 (2023), 45.
[4] Tuba Bircan and Mustafa F. Özbilgin. 2025. Unmasking inequalities of the code: Disentangling the nexus of AI and inequality. 211 (2025), 123925. doi:10.1016/j.techfore.2024.123925
[5] Geoffrey C. Bowker and Susan Leigh Star. 2008. Sorting things out: classification and its consequences (1st paperback ed., 8th printing). MIT Press.
[6] Yuriy Brun and Alexandra Meliou. 2018. Software fairness. In Proceedings of the 2018 26th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering. 754–759.
[7] Michel Callon. 1987. Society in the making: The study of technology as a tool for sociological analysis. 1 (1987), 77–97.
[8] Michel Callon. 1987. Society in the making: The study of technology as a tool for sociological analysis. The social construction of technological systems: New directions in the sociology and history of technology (1987), 83–103.
[9] Alycia N. Carey and Xintao Wu. 2022. The causal fairness field guide: Perspectives from social and formal sciences. Frontiers in Big Data 5 (2022), 892837.
[10] Bruno Cartaxo, Gustavo Pinto, and Sergio Soares. 2020. Rapid Reviews in Software Engineering. doi:10.48550/arXiv.2003.10006
[11] Simon Caton and Christian Haas. 2024. Fairness in Machine Learning: A Survey. 56, 7 (2024), 1–38. doi:10.1145/3616865
[12] Zhenpeng Chen, Jie M. Zhang, Max Hort, Mark Harman, and Federica Sarro. 2024. Fairness testing: A comprehensive survey and analysis of trends. ACM Transactions on Software Engineering and Methodology 33, 5 (2024), 1–59.
[13] CNN Brasil. 2024. Motorista de app morre após tiroteio no RJ; passageiro levou vítima ao hospital. https://www.cnnbrasil.com.br/nacional/motorista-de-app-morre-apos-tiroteio-no-rj-passageiro-levou-vitima-ao-hospital/
[14] Ludovic Coupaye. 2017. Cadeia operatória, transectos e teorias: algumas reflexões e sugestões sobre o percurso de um método clássico. Técnica e transformação: perspectivas antropológicas. Rio de Janeiro: ABA Publicações (2017), 475–494.
[15] Daniela S. Cruzes and Tore Dybå. 2011. Recommended steps for thematic synthesis in software engineering. In 2011 International Symposium on Empirical Software Engineering and Measurement. IEEE, 275–284.
[16] Danilo Vieira and Mariana Cardoso. 2024. Motorista de aplicativo alvo de tiros que mataram turista baiana diz que não houve ordem de parada por traficantes. https://g1.globo.com/rj/rio-de-janeiro/noticia/2024/12/30/motorista-de-aplicativo-alvo-de-tiros-que-mataram-turista-baiana-diz-que-nao-houve-ordem-de-parada-por-traficantes.ghtml
[17] Jim Dator, D. Pratt, and Y. Seo. 2006. What Is Fairness? Fairness, Globalization, and Public Institutions 19 (2006).
[18] Jenny L. Davis, Apryl Williams, and Michael W. Yang. 2021. Algorithmic reparation. 8, 2 (2021), 20539517211044808. doi:10.1177/20539517211044808
[19] Ronnie de Souza Santos, Felipe Fronchetti, Sávio Freire, and Rodrigo Spinola. 2025. Software fairness debt: Building a research agenda for addressing bias in AI systems. ACM Transactions on Software Engineering and Methodology 34, 5 (2025), 1–21.
[20] Carmine Ferrara, Giulia Sellitto, Filomena Ferrucci, Fabio Palomba, and Andrea De Lucia. 2024. Fairness-aware machine learning engineering: how far are we? Empirical Software Engineering 29, 1 (2024), 9.
[21] Darren George and Paul Mallery. 2018. Descriptive statistics. In IBM SPSS Statistics 25 Step by Step. Routledge, 126–134.
[22] Rubén González-Sendino, Emilio Serrano, Javier Bajo, and Paulo Novais. 2024. A Review of Bias and Fairness in Artificial Intelligence. 9, 1 (2024), 5. doi:10.9781/ijimai.2023.11.001
[23] Daniel Greene, Anna Lauren Hoffmann, and Luke Stark. 2019. Better, Nicer, Clearer, Fairer: A Critical Assessment of the Movement for Ethical Artificial Intelligence and Machine Learning. doi:10.24251/HICSS.2019.258
[24] Philipp Hacker. 2018. Teaching fairness to artificial intelligence: Existing and novel strategies against algorithmic discrimination under EU law. 55, 4 (2018), 1143–1185. doi:10.54648/COLA2018095
[25] Alex Hanna, Remi Denton, Andrew Smart, and Jamila Smith-Loud. 2020. Towards a critical race methodology in algorithmic fairness. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (Barcelona, Spain). ACM, 501–512. doi:10.1145/3351095.3372826
[26] Sefa Hayibor. 2017. Is fair treatment enough? Augmenting the fairness-based perspective on stakeholder behaviour. Journal of Business Ethics 140, 1 (2017), 43–64.
[27] Anna Lauren Hoffmann. 2019. Where fairness fails: data, algorithms, and the limits of antidiscrimination discourse. 22, 7 (2019), 900–915. doi:10.1080/1369118X.2019.1573912
[28] Pratyusha Kalluri. 2020. Don't ask if artificial intelligence is good or fair, ask how it shifts power. 583, 7815 (2020), 169. doi:10.1038/d41586-020-02003-2
[29] Davinder Kaur, Suleyman Uslu, Kaley J. Rittichier, and Arjan Durresi. 2023. Trustworthy Artificial Intelligence: A Review. 55, 2 (2023), 1–38. doi:10.1145/3491209
[30] Tahsin Alamgir Kheya, Mohamed Reda Bouadjenek, and Sunil Aryal. 2024. The Pursuit of Fairness in Artificial Intelligence Models: A Survey. doi:10.48550/arXiv.2403.17333
[31] Barbara Kitchenham, Rialette Pretorius, David Budgen, O. Pearl Brereton, Mark Turner, Mahmood Niazi, and Stephen Linkman. 2010. Systematic literature reviews in software engineering – A tertiary study. 52, 8 (2010), 792–805. doi:10.1016/j.infsof.2010.03.006
[32] Bruno Latour. 2005. Reassembling the social: An introduction to actor-network-theory. Oxford University Press.
[33] Matheus de Morais Leça and Ronnie de Souza Santos. 2025. Towards User-Focused Cross-Domain Testing: Disentangling Accessibility, Usability, and Fairness. arXiv preprint arXiv:2501.06424 (2025).
[34] André Lemos. 2021. Dataficação da vida. 21, 2 (2021), 193–202. doi:10.15448/1984-7289.2021.2.39638
[35] James W. Malazita and Korryn Resetar. 2019. Infrastructures of abstraction: how computer science education produces anti-political subjects. 30, 4 (2019), 300–312. doi:10.1080/14626268.2019.1682616
[36] Brent Daniel Mittelstadt, Patrick Allo, Mariarosaria Taddeo, Sandra Wachter, and Luciano Floridi. 2016. The ethics of algorithms: Mapping the debate. 3, 2 (2016), 2053951716679679. doi:10.1177/2053951716679679
[37] Annemarie Mol and John Law. 2020. Complexities: An Introduction. Duke University Press, 1–23. doi:10.1515/9780822383550-001
[38] Paola Panarese, Marta Margherita Grasso, and Claudia Solinas. 2025. Algorithmic bias, fairness, and inclusivity: a multilevel framework for justice-oriented AI. (2025). doi:10.1007/s00146-025-02451-2
[39] Emmanouil Papagiannidis, Patrick Mikalef, and Kieran Conboy. 2025. Responsible artificial intelligence governance: A review and research framework. 34, 2 (2025), 101885. doi:10.1016/j.jsis.2024.101885
[40] Dana Pessach and Erez Shmueli. 2023. A Review on Fairness in Machine Learning. 55, 3 (2023), 1–44. doi:10.1145/3494672
[41] Nga Pham, Hung Pham-Ngoc, and Anh Nguyen-Duc. 2023. Fairness Requirement in AI Engineering – A Review on Current Research and Future Directions. Springer International Publishing, 3–13. doi:10.1007/978-3-031-32436-9_1
[42] Ricardo Trainotti Rabonato and Lilian Berton. 2025. A systematic review of fairness in machine learning. 5, 3 (2025), 1943–1954. doi:10.1007/s43681-024-00577-5
[43] John Rawls. 1958. Justice as fairness. The Philosophical Review 67, 2 (1958), 164–194.
[44] Roberta de Souza. 2024. Motorista de aplicativo morre baleado após entrar por engano em comunidade de Campo Grande; passageiros também foram alvejados. https://oglobo.globo.com/rio/noticia/2024/11/17/motorista-de-aplicativo-morre-baleado-apos-entrar-por-engano-em-comunidade-de-campo-grande-passageiros-tambem-foram-alvejados.ghtml
[45] Tahereh Saheb. 2023. Ethically contentious aspects of artificial intelligence surveillance: a social science perspective. 3, 2 (2023), 369–379. doi:10.1007/s43681-022-00196-y
[46] Christian Sandvig, Kevin Hamilton, Karrie Karahalios, and Cedric Langbort. 2016. When the Algorithm Itself Is a Racist: Diagnosing Ethical Harm in the Basic Components of Software. International Journal of Communication 10 (2016), 4972–4990. https://ijoc.org/index.php/ijoc/article/view/6182
[47] Ronnie de Souza Santos, Felipe Fronchetti, Savio Freire, and Rodrigo Spinola. 2024. Software Fairness Debt. doi:10.48550/arXiv.2405.02490
[48] Pola Schwöbel and Peter Remmers. 2022. The Long Arc of Fairness: Formalisations and Ethical Discourse. In 2022 ACM Conference on Fairness, Accountability, and Transparency (Seoul, Republic of Korea). ACM, 2179–2188. doi:10.1145/3531146.3534635
[49] Emeralda Sesari, Federica Sarro, and Ayushi Rastogi. 2024. Understanding fairness in software engineering: Insights from Stack Exchange sites. In Proceedings of the 18th ACM/IEEE International Symposium on Empirical Software Engineering and Measurement. 269–280.
[50] Ezekiel Soremekun, Mike Papadakis, Maxime Cordy, and Yves Le Traon. 2022. Software fairness: An analysis and survey. Comput. Surveys (2022).
[51] Ezekiel Soremekun, Mike Papadakis, Maxime Cordy, and Yves Le Traon. 2022. Software Fairness: An Analysis and Survey. doi:10.48550/arXiv.2205.08809
[52] Luke Stark. 2023. Breaking Up (with) AI Ethics. 95, 2 (2023), 365–379. doi:10.1215/00029831-10575148
[53] Miroslaw Staron, Silvia Abrahão, Alexander Serebrenik, Birgit Penzenstadler, Jennifer Horkoff, and Chetan Honnenahalli. 2024. Laws, Ethics, and Fairness in Software Engineering. IEEE Software 42, 1 (2024), 110–113.
[54] Daniel Varona and Juan Luis Suárez. 2022. Discrimination, bias, fairness, and trustworthy AI. Applied Sciences 12, 12 (2022), 5826.
[55] Sahil Verma and Julia Rubin. 2018. Fairness definitions explained. In Proceedings of the International Workshop on Software Fairness (Gothenburg, Sweden). ACM, 1–7. doi:10.1145/3194770.3194776
[56] Sahil Verma and Julia Rubin. 2018. Fairness definitions explained. In Proceedings of the International Workshop on Software Fairness. 1–7.
[57] Lindsay Weinberg. 2022. Rethinking Fairness: An Interdisciplinary Survey of Critiques of Hegemonic ML Fairness Approaches. (2022). doi:10.48550/arXiv.2205.04460
[58] Sarah Myers West. 2020. Redistribution and Rekognition: A Feminist Critique of Algorithmic Fairness. 6, 2 (2020). doi:10.28968/cftt.v6i2.33043
[59] Jintang Xue, Yun-Cheng Wang, Chengwei Wei, Xiaofeng Liu, Jonghye Woo, and C.-C. Jay Kuo. 2023. Bias and Fairness in Chatbots: An Overview. doi:10.48550/arXiv.2309.08836
[60] Sofia Yfantidou, Marios Constantinides, Dimitris Spathis, Athena Vakali, Daniele Quercia, and Fahim Kawsar. 2023. The State of Algorithmic Fairness in Mobile Human-Computer Interaction. In Proceedings of the 25th International Conference on Mobile Human-Computer Interaction (Athens, Greece). ACM, 1–7. doi:10.1145/3565066.3608685
[61] Mike Zajko. 2022. Artificial intelligence, algorithms, and social inequality: Sociological contributions to contemporary debates. 16, 3 (2022), e12962. doi:10.1111/soc4.12962
[62] Mike Zajko. 2021. Conservative AI and social inequality: conceptualizing alternatives to bias through social theory. 36, 3 (2021), 1047–1056. doi:10.1007/s00146-021-01153-9
