Studying the Transfer of Biases from Programmers to Programs
It is generally agreed that one origin of machine bias lies in characteristics of the dataset on which the algorithms are trained, i.e., the data does not warrant a generalized inference. We, however, hypothesize that a different mechanism, hitherto not articulated in the literature, may also be responsible for machine bias, namely that biases may originate from (i) the programmers' cultural background, such as education or line of work, or (ii) the contextual programming environment, such as software requirements or developer tools. Combining an experimental and comparative design, we studied the effects of cultural metaphors and contextual metaphors, and tested whether each of these would 'transfer' from the programmer to the program, thus constituting a machine bias. The results show (i) that cultural metaphors influence the programmer's choices and (ii) that 'induced' contextual metaphors can be used to moderate or exacerbate the effects of the cultural metaphors. This supports our hypothesis that biases in automated systems do not always originate from within the machine's training data. Instead, machines may also 'replicate' and 'reproduce' biases from the programmers' cultural background through the transfer of cultural metaphors into the programming process. Implications for academia and professional practice range from the micro level of programming to the macro level of national regulations and education, and span all societal domains where software-based systems operate, such as the popular AI-based automated decision support systems.
💡 Research Summary
The paper “Studying the Transfer of Biases from Programmers to Programs” investigates a largely overlooked source of algorithmic bias: the cultural and contextual influences of the programmers themselves. While the literature has extensively documented how biased training data can lead to unfair AI outcomes, the authors hypothesize that programmers’ cultural backgrounds (education, profession, personal interests) and the immediate programming context (requirements, tools, metaphors) can also be transferred into software artifacts, thereby creating bias independent of data.
To test this hypothesis, the authors designed a mixed experimental‑comparative study with two main research questions: (1) Are cultural and contextual biases transferred from programmers to the programs they write? (2) Can programmers be deliberately primed with a new bias, and does this induced bias subsequently appear in their code?
The experimental material consists of a “bias‑revealing test” and a fictional “Philosopher” story used for priming. The bias‑revealing test forces participants to choose one of three rationales—“harmony/equality/fairness,” “aesthetics/arts,” or “order/continuity”—when answering a programming‑related question. The story embeds the same three metaphors, allowing the researchers to manipulate participants’ exposure to a particular metaphor (contextual priming).
Participants were recruited from three distinct academic/professional backgrounds: social sciences, natural sciences, and arts/culture. This stratification was intended to capture differing cultural metaphors. The study also included both professional programmers and novices, reflecting the growing prevalence of “lay” programmers in IoT and visual‑language environments. All participants completed a series of control questions (word‑association, letter‑placement, life‑aspect ranking) to verify that the priming manipulation was successful and that participants’ background classifications were accurate.
Statistical analysis of the responses showed that (i) cultural metaphors significantly influenced participants’ choices in the bias‑revealing test, confirming that programmers’ background biases can be detected even in a simple, abstract programming task; (ii) contextual priming either amplified or mitigated these effects depending on the metaphor presented, demonstrating that bias can be experimentally induced and transferred to the program artifact; and (iii) even participants with formal programming education were susceptible to priming, challenging the common belief that technical training immunizes developers against bias.
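The paper does not reproduce its statistical procedure here, but an association between categorical background groups and categorical rationale choices is conventionally tested with a chi-squared test of independence. The sketch below is a minimal, hypothetical illustration of that kind of analysis; the participant counts are invented for demonstration and do not come from the study.

```python
# Hypothetical sketch: does rationale choice in the bias-revealing test
# depend on participants' background? A chi-squared test of independence
# on a contingency table is one conventional way to check this.
# All counts below are invented for illustration only.
from scipy.stats import chi2_contingency

# Rows: participant backgrounds (social sciences, natural sciences,
# arts/culture). Columns: chosen rationale ("harmony/equality/fairness",
# "aesthetics/arts", "order/continuity").
observed = [
    [18,  4,  8],  # social sciences
    [ 6,  5, 19],  # natural sciences
    [ 5, 21,  4],  # arts/culture
]

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
if p < 0.05:
    print("Rationale choice is associated with background "
          "(independence rejected at the 5% level).")
```

With counts as skewed as these, the test rejects independence, which is the shape of result the authors report: the distribution of chosen rationales differs systematically across cultural backgrounds.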
Methodologically, the study’s strengths lie in its multi‑layered design, the use of a novel cognitive task to surface hidden biases, and the inclusion of a diverse participant pool. However, the authors acknowledge several limitations. The artificial programming task does not capture the full complexity of real‑world software development (e.g., code quality, maintainability, testing). The set of metaphors is limited to three abstract themes, which may not represent the full spectrum of cultural biases (such as gender or racial stereotypes). Moreover, priming effects were measured immediately after exposure, leaving open the question of how durable these transferred biases are over time.
The paper suggests future work should involve field studies on actual development projects, expand the range of bias types examined, and conduct longitudinal assessments to gauge persistence. Practical implications include incorporating bias‑awareness training into computer science curricula, adding bias‑checking stages (code reviews, automated detection tools) to development pipelines, and informing policy makers about the need for regulations that consider programmer‑originated bias alongside data‑originated bias.
In conclusion, the authors provide empirical evidence that programmer‑originated cultural and contextual metaphors can be transferred into software, constituting a novel mechanism for AI bias. This contribution broadens the discourse on algorithmic fairness and opens new avenues for research, education, and governance aimed at mitigating bias throughout the entire software lifecycle.