Funding AI for Good: A Call for Meaningful Engagement
Notice: This research summary and analysis were automatically generated using AI technology. For absolute accuracy, please refer to the original arXiv source.

Artificial Intelligence for Social Good (AI4SG) is a growing area that explores AI’s potential to address social issues, such as public health. Yet prior work has shown limited evidence of its tangible benefits for intended communities, and projects frequently face real-world deployment and sustainability challenges. While existing HCI literature on AI4SG initiatives primarily focuses on the mechanisms of funded projects and their outcomes, much less attention has been given to the upstream funding agendas that influence project approaches. In this work, we conducted a reflexive thematic analysis of 35 funding documents, representing about $410 million USD in total investments. We uncovered a spectrum of conceptual framings of AI4SG and the approaches that funding rhetoric promoted: from biasing towards technology capacities (more techno-centric) to emphasizing contextual understanding of the social problems at hand alongside technology capacities (more balanced). Drawing on our findings on how funding documents construct AI4SG, we offer recommendations for funders to embed more balanced approaches in future funding call designs. We further discuss implications for how the HCI community can positively shape AI4SG funding design processes.


💡 Research Summary

The paper investigates how funding agendas shape Artificial Intelligence for Social Good (AI4SG) initiatives by conducting a reflexive thematic analysis of 35 publicly available funding documents—19 calls for proposals and 16 grant announcements—representing roughly $410 million in investments. Drawing on prior HCI work that highlights the challenges of AI4SG projects (limited tangible benefits, deployment and sustainability issues) and on scholarship from CSCW, ICTD, and HCI4D that distinguishes “techno‑centric” from “balanced” approaches, the authors develop an analytical framework that maps funding rhetoric onto two intersecting axes: (1) the degree to which AI is portrayed as a self‑contained solution versus a tool that must be coupled with deep contextual understanding, and (2) the way success is defined and the extent of community involvement required.

The analysis reveals a spectrum of framings. Documents that lean toward a techno‑centric stance emphasize AI’s technical capacities, specify outcomes in terms of model accuracy, data volume, or prototype delivery, and treat community participation as a late‑stage consultation. In contrast, “balanced” documents explicitly acknowledge both the promise and the risks of AI, call for interdisciplinary collaboration, require early‑stage co‑design with local stakeholders, and adopt broader impact metrics such as community satisfaction, policy change, and long‑term service sustainability. The authors note that most funders—largely private tech firms, foundations, and U.S. government agencies—define “expertise” narrowly, privileging technical institutions and marginalizing community organizations.

Based on these findings, the paper offers concrete recommendations for future funding call design: (1) embed explicit requirements for contextual analysis and community co‑design; (2) allocate dedicated budget lines for post‑deployment maintenance, capacity building, and relationship management; (3) broaden evaluation criteria to weight social impact alongside technical deliverables; (4) provide training or resources that enable project teams to conduct meaningful community engagement; and (5) establish transparent feedback mechanisms between funders, researchers, and community partners.

The authors also articulate how HCI scholars can play a “study‑up” role: by collaborating with funders during agenda‑setting, by critically examining and reshaping AI imaginaries embedded in funding texts, and by advocating for more equitable power distributions in AI4SG ecosystems. The paper contributes three main insights: (i) it foregrounds funding documents as discursive instruments that pre‑shape project trajectories, (ii) it identifies both techno‑centric and balanced patterns within those documents, and (iii) it outlines a roadmap for HCI‑informed interventions that could steer AI4SG funding toward more socially responsible outcomes.

Limitations include the exclusive focus on Western, primarily U.S., funding sources and the reliance on publicly available texts, which may omit informal or internal decision‑making processes. Future work is suggested to expand the corpus to global contexts, link funding rhetoric to actual project outcomes, and conduct longitudinal studies of how revised funding designs affect real‑world impact.

