Risk, Data, Alignment: Making Credit Scoring Work in Kenya
Credit scoring is an increasingly central and contested domain of data and AI governance, frequently framed as a neutral and objective method of assessing risk across diverse economic and political contexts. Based on a nine-month ethnography of credit scoring practices in Nairobi, Kenya, we examine the sociotechnical and institutional work of data science in digital lending. While established regional telcos and banks are leveraging proprietary data to develop digital loan products, algorithmic credit scoring is being transformed by new actors, techniques, and shifting regulations. Our findings show how practitioners construct alternative data using technical and legal workarounds, formulate risk through multiple interpretations, and negotiate model performance via technical and political means. We argue that algorithmic credit scoring is accomplished through the ongoing work of alignment, which stabilizes risk under conditions of persistent uncertainty and takes epistemic, modeling, and contextual forms. Extending work on alignment in HCI, we show how it operates as a two-way translation, where models are made “safe for worlds” while those worlds are reshaped to be “safe for models.”
💡 Research Summary
The paper “Risk, Data, Alignment: Making Credit Scoring Work in Kenya” presents a nine‑month ethnographic study of digital lending practices in Nairobi, focusing on how algorithmic credit‑scoring systems are built, deployed, and negotiated in a context of high default rates, aggressive collection practices, and evolving regulation. The authors trace three major actors—large telcos, banks, and fintech startups—and show how each leverages “alternative data” (mobile‑money transactions, airtime recharge logs, location traces) to construct credit‑risk models in the absence of traditional credit histories. Because Kenyan data‑protection law is still nascent, firms employ technical and legal workarounds such as anonymisation, pseudonymisation, and the creation of synthetic user profiles to sidestep privacy constraints while still extracting predictive signals.
A central contribution is the concept of “alignment” as a two‑way translation between models and the social world. Drawing on Fujimura’s and Dourish’s alignment theory, the authors argue that risk is not a static, measurable quantity but an entangled construct shaped by local credit cultures, trust networks, and everyday behavioural patterns. Practitioners continuously negotiate the meaning of risk across data scientists, product managers, regulators, and frontline loan officers. This negotiation produces three layers of alignment: epistemic (deciding which data can meaningfully capture risk), modeling (designing algorithms, validation pipelines, and performance metrics), and contextual (shaping legal, institutional, and cultural environments to accommodate the model).
The paper demonstrates that alignment work simultaneously makes models “safe for worlds” (i.e., compliant, performant, and market‑ready) and makes worlds “safe for models” (i.e., by reshaping regulations, data‑sharing agreements, and borrower practices). This dual process mitigates the twin challenges of risk (high interest rates, aggressive debt collection) and uncertainty (data scarcity, shifting policy, unpredictable borrower behaviour). The authors show that performance negotiations involve trade‑offs among accuracy, default‑rate reduction, regulatory compliance, and market share, which are resolved through a mix of technical tuning (feature engineering, ensemble methods) and political bargaining (competing interpretations of data‑protection statutes, partnership formation).
Importantly, the study highlights power asymmetries: dominant telcos can lock in data monopolies, reinforcing their market position and potentially undermining financial inclusion and privacy. The authors call for policy interventions that increase transparency of alignment processes, provide clear guidelines for alternative‑data use, and protect data sovereignty. They suggest that regulators should mandate audit trails for data pipelines, enforce consent mechanisms, and create shared data‑governance frameworks that balance innovation with consumer rights.
Finally, the paper situates Kenyan credit scoring as a laboratory for broader high‑stakes AI systems (criminal justice, welfare, healthcare). By exposing how risk, uncertainty, and alignment co‑produce each other, the work offers a transferable analytical lens for studying algorithmic governance in other domains and regions. The authors conclude with a research agenda that includes developing metrics for alignment efficacy, exploring cross‑sectoral alignment dynamics, and designing participatory mechanisms that give borrowers a voice in how their data are transformed into credit scores.