This paper replicates, extends, and refutes conclusions made in a study published in PLoS ONE ("Even Good Bots Fight"), which claimed to identify substantial levels of conflict between automated software agents (or bots) in Wikipedia using purely quantitative methods. By applying an integrative mixed-methods approach drawing on trace ethnography, we place these alleged cases of bot-bot conflict into context and arrive at a better understanding of these interactions. We found that overwhelmingly, the interactions previously characterized as problematic instances of conflict are better characterized as routine, productive, even collaborative work. These results challenge past work and show the importance of qualitative/quantitative collaboration. In our paper, we present quantitative metrics and qualitative heuristics for operationalizing bot-bot conflict. We give thick descriptions of kinds of events that present as bot-bot reverts, helping distinguish conflict from non-conflict. We computationally classify these kinds of events through patterns in edit summaries. By interpreting found/trace data in the socio-technical contexts in which people give that data meaning, we gain more from quantitative measurements, drawing deeper understandings about the governance of algorithmic systems in Wikipedia. We have also released our data collection, processing, and analysis pipeline, to facilitate computational reproducibility of our findings and to help other researchers interested in conducting similar mixed-method scholarship in other platforms and contexts.
Between March 5th and 25th, 2013, one of the darkest periods in the robot history of Wikipedia occurred. An automated software agent called Addbot, operated by Adam Shoreland, an employee of the Berlin-based Wikimedia Deutschland foundation, committed the most aggressive bot-on-bot revert conflict event ever recorded. In a flurry of inefficiency and inefficacy, Addbot reverted 146,614 contributions other bots had made to English Wikipedia. It removed links between different language versions of Wikipedia
articles, which had been automatically curated and updated by dozens of different bots for years. During a 20-day rampage, the bot annihilated their work and the work of their maintainers. The fact that such a massively destructive act could take place without being stopped is evidence that Wikipedia had failed to govern bot behaviors and that bots in Wikipedia are out of control.
Or this is what we might conclude if we were to analyze activity from Wikipedia’s database of edits without understanding the broader socio-technical context of what those traces mean in the Wikipedian community. If we operationalize “conflict” as one user account removing content previously added by a different user account, it is easy to find many cases between bots matching this criterion. This approach was chosen by the authors of a paper published in PLoS ONE titled “Even Good Bots Fight: The Case of Wikipedia” [75] (or “EGBF”), which received major media coverage for reporting a massive number of cases where bots were reverting each other’s edits, then concluding that Wikipedia had serious problems with how it regulates automated agents. However, if we look closely at Addbot’s actions as they played out in the Wikipedian community, we instead see this case (and many more) as a productive, efficient automated operation performed by an upstanding member of the Wikipedia community.
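To make this operationalization concrete, the following is a minimal Python sketch of one common way such reverts are detected computationally: an edit counts as an “identity revert” if it restores a page to a byte-identical earlier state (typically checked via revision content checksums), and the undone intervening edits were made by a different account. This is our illustration of the general approach, not necessarily the exact procedure used in the EGBF paper; the revision data structure is assumed for the example.

def find_reverts(revisions):
    # revisions: a page history as (user, sha1) pairs in chronological order,
    # where sha1 is the checksum of the full page content after the edit.
    # Yields (reverting_user, reverted_user) pairs.
    first_seen = {}  # sha1 -> index of the earliest revision with that content
    for i, (user, sha1) in enumerate(revisions):
        if sha1 in first_seen:
            # This edit restored an earlier content state, so the editors of
            # the intervening revisions were "reverted" (self-reverts excluded).
            for reverted_user, _ in revisions[first_seen[sha1] + 1 : i]:
                if reverted_user != user:
                    yield (user, reverted_user)
        else:
            first_seen[sha1] = i

# Example: BotB's edit is undone when BotA restores the earlier content state.
history = [("BotA", "aaa111"), ("BotB", "bbb222"), ("BotA", "aaa111")]
print(list(find_reverts(history)))  # -> [('BotA', 'BotB')]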
Addbot’s removal of “interwiki language links” came after a massive effort coordinated through Wikidata, a meta-level database created to centralize language-independent data. For example, the links appearing on the sidebar of the “Japan” article on the English Wikipedia to “Japon” on the French Wikipedia and “日本” on the Japanese Wikipedia were once manually curated on the “Japan” page on the English Wikipedia. First human Wikipedians, then bots, curated a long list of hidden links stored in the page text itself, which was parsed by the MediaWiki platform to display these links to readers. But with Wikidata, these links between all language versions were centrally curated, making the hidden links in each page’s text redundant. Addbot’s task was to remove these hidden links for pages that had their interwiki links migrated to Wikidata.
On Feb 14th, 2013, Adam Shoreland filed a proposal with the English Wikipedia’s Bot Approvals Group to have Addbot take on this task, in line with English Wikipedia’s Bots policy. It was the 32nd task he had applied for under this bot’s account. According to Adam’s request for approval, “The bot will use the wikidata api to determine [if] a EN article is already included on wikidata. If it is the bot will then find the list of links that are included along side it on wikidata and remove those from the article.” Within two days, various members of the Bot Approvals Group discussed the bot and then approved it for a full-scale run on English Wikipedia. Adam also made sure that the communities of other language versions of Wikipedia, as well as Wikidata, approved of this task. To coordinate and bring visibility to the bot’s work, he created a centralized wiki page for the link removal project, which was used to track approvals across various language versions of Wikipedia, as well as give information about when and where the bot would operate. Adam’s application for the bot’s operation was approved by many different wiki communities’ Bot Approvals Groups, as well as given a global approval for wikis where the communities were too small to maintain a formal bot approval process on their own.
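As an illustration of the kind of task Addbot was approved to perform, the following sketch (ours, not Addbot’s actual source code) shows how a bot might query the public Wikidata API to check whether an English Wikipedia article already has its sitelinks curated on Wikidata, and then strip the redundant hidden interwiki links (e.g. [[fr:Japon]], [[ja:日本]]) from the article’s wikitext. The use of the requests library, the simplified language-code pattern, and the omission of the actual page fetch and save steps are simplifying assumptions for this example.

import re
import requests

WIKIDATA_API = "https://www.wikidata.org/w/api.php"

# Matches old-style hidden interwiki links such as [[fr:Japon]] or [[ja:日本]].
# A production bot would check against the full list of language codes;
# this pattern is a simplification for illustration.
INTERWIKI_RE = re.compile(r"\[\[[a-z]{2,3}(?:-[a-z]+)*:[^\]\n]+\]\]\n?")

def has_wikidata_sitelinks(title):
    # Ask Wikidata whether the English Wikipedia article is attached to an
    # item, and whether that item carries sitelinks to other language editions.
    resp = requests.get(WIKIDATA_API, params={
        "action": "wbgetentities",
        "sites": "enwiki",
        "titles": title,
        "props": "sitelinks",
        "format": "json",
    }).json()
    return any(entity.get("sitelinks")
               for entity in resp.get("entities", {}).values())

def strip_interwiki_links(wikitext):
    # Remove the now-redundant hidden links from the page text.
    return INTERWIKI_RE.sub("", wikitext)

# Only edit a page whose links are already centrally curated on Wikidata.
# (Fetching the live wikitext and saving the edit are omitted here.)
if has_wikidata_sitelinks("Japan"):
    cleaned = strip_interwiki_links("...wikitext of the Japan article...")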
If we look at Addbot’s actions out of context, it can appear to be the kind of out-of-control automated agent that indicates deep issues with Wikipedia’s infamously open and decentralized governance model. Yet with the proper context, Shoreland’s use of Addbot to remove redundant interwiki links is one of the best examples of a large-scale, well-coordinated, community-approved bot operation in an open, peer production platform. The bot’s source code and operational status are clearly documented; it is operated in line with the various bot policies that different language versions can set for themselves; it is run by a known maintainer; and it is performing a massive task that would be a waste of human attention. The bot ran for a short period and completed the work as planned with no major objections or issues.
1.2 Even good bots fight