Irresponsible AI: big tech's influence on AI research and associated impacts
Reading time: 5 minutes
📝 Original Info
Title: Irresponsible AI: big tech’s influence on AI research and associated impacts
ArXiv ID: 2512.03077
Date: 2025-11-27
Authors: Alex Hernandez-Garcia, Alexandra Volokhova, Ezekiel Williams, Dounia Shaaban Kabakibo
📝 Abstract
The accelerated development, deployment and adoption of artificial intelligence systems has been fuelled by the increasing involvement of big tech. This has been accompanied by increasing ethical concerns and intensified societal and environmental impacts. In this article, we review and discuss how these phenomena are deeply entangled. First, we examine the growing and disproportionate influence of big tech in AI research and argue that its drive for scaling and general-purpose systems is fundamentally at odds with the responsible, ethical, and sustainable development of AI. Second, we review key current environmental and societal negative impacts of AI and trace their connections to big tech and its underlying economic incentives. Finally, we argue that while it is important to develop technical and regulatory approaches to these challenges, these alone are insufficient to counter the distortion introduced by big tech's influence. We thus review and propose alternative strategies that build on the responsibility of implicated actors and collective action.
📄 Full Content
Irresponsible AI: big tech’s influence on AI research
and associated impacts
Alex Hernandez-Garcia ∗
Mila, Université de Montréal
Montreal, Québec, Canada
alex.hernandez-garcia@mila.quebec
Alexandra Volokhova
Mila, Université de Montréal
Montreal, Québec, Canada
alexandra.volokhova@mila.quebec
Ezekiel Williams
Mila, Université de Montréal
Montreal, Québec, Canada
ezekiel.williams@mila.quebec
Dounia Shaaban Kabakibo
Mila, Université de Montréal
Montreal, Québec, Canada
dounia.shaaban.kabakibo@umontreal.ca
1 Introduction
In recent years, the technology known as artificial intelligence (AI) has shifted from being predominantly a subject of academic study to making recurrent headlines in mainstream media and becoming a topic of everyday conversation. While AI remains of little interest to a large fraction of the world’s population, among others it sparks enthusiasm for the opportunities it may open. Meanwhile,
the accelerated deployment and adoption of AI is responsible for tangible societal and environmental
impacts (Bender & Hanna, 2025).
The transition of AI from academia into the public sphere has gone hand-in-hand with the corporate
world, in particular “big tech” (large tech companies like Google, Meta, Amazon, and Microsoft),
taking over and setting the agenda for many aspects of AI research, development, application, and
even regulatory decision making for the field (Whittaker, 2021; Jurowetzki et al., 2021; Ahmed et al.,
2023; Giziński et al., 2024). Importantly, big tech has also attempted to influence the narrative around
responsible AI development, with many large corporate players writing their own responsible AI
guidelines, and engaging in AI ethics-related research (Jobin et al., 2019; Young et al., 2022; Bughin,
2025).
∗All authors contributed significantly to this article. Author order does not indicate the amount of contribution.
39th Conference on Neural Information Processing Systems (NeurIPS 2025)
Workshop on Algorithmic Collective Action (ACA@NeurIPS 2025).
arXiv:2512.03077v1 [cs.CY] 27 Nov 2025
Nonetheless, far from advancing the responsible, ethical, and sustainable development of AI, big corporations have instead contributed significantly to AI’s negative impacts in the world: environmental harm due to increased demands for energy and resources (Crawford, 2021; Desroches
et al., 2025), increased surveillance (Zuboff, 2023), loss of privacy (Véliz, 2021), infringement upon
intellectual property (Jiang et al., 2023), spread of mis/dis-information (Bontridder & Poullet, 2021;
Raman et al., 2024), increased inequality (Adams, 2024; Kim, 2021), degradation of labour rights
(Altenried, 2020; Crawford, 2021; Bender & Hanna, 2025), etc. While the role of big tech in these
negative impacts is often discussed in certain academic and non-academic circles (Abdalla & Abdalla,
2021; Young et al., 2022; Verdegem, 2022; Bender & Hanna, 2025), big tech’s ever-growing influence
is often overlooked within the larger AI research community, likely due to its pervasive presence
(Whittaker, 2021).
In this paper, we review literature from diverse fields with the aim of 1) summarising the influence
that big tech has on AI research, 2) examining the link between the negative impacts of AI and big
tech’s influence, 3) discussing why big tech, and corporate tech more broadly, is incentivized and
structured to favour irresponsible AI (iAI), and 4) suggesting potential ways in which researchers
could counter this influence and thus support responsible AI efforts. Consequently, we echo the many
voices asserting that technical solutions alone, without addressing factors such as corporate influence
and industry incentives, will fail to result in truly responsible AI (Greene et al., 2019; Verdegem,
2022; Kalluri, 2020; Adams, 2024). We tailor this paper particularly for AI researchers, motivated
both by our insider experience that these topics c