Understanding the impact of socialbot attacks in online social networks
Online social networks (OSNs) like Twitter or Facebook are popular and powerful because they make it possible to reach millions of users online. They are also a popular target for socialbot attacks. Without a deep understanding of the impact of such attacks, the potential of online social networks as an instrument for facilitating discourse or democratic processes is in jeopardy. In this extended abstract we present insights from a live lab experiment in which social bots attempted to manipulate the social graph of an online social network, in our case Twitter. We explored the link creation behavior between targeted human users, and our results suggest that socialbots may indeed have the ability to shape and influence the social graph in online social networks. However, our results also show that external factors may play an important role in the creation of social links in OSNs.
💡 Research Summary
The paper investigates how social bots can influence the formation of social links in an online social network, using Twitter as a testbed. The authors set up a “live lab” experiment in which a set of automated accounts (social bots) were deployed to interact with a predefined group of human users. The bots performed a range of automated actions—following, mentioning, retweeting, and sending direct messages—according to a schedule designed to appear natural and to attract the attention of the target users.
The experimental design consisted of four main components. First, 200 human Twitter accounts were randomly selected and split into an experimental group (exposed to bots) and a control group (no bot exposure). The experimental group was further divided into subgroups based on activity level and topical interests to test whether bots are more effective on certain user profiles. Second, the bots' behavior model was carefully scripted: each bot would send a follow request after a certain number of mentions, reply to recent tweets with context-relevant comments, and occasionally retweet the target's content. The goal was to create a perception of genuine engagement, thereby increasing the likelihood that the human user would reciprocate with a follow or a mention of another user.
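The scripted bot behavior described above can be sketched as a simple rule-based policy. Note that the thresholds, probabilities, and action names below are illustrative assumptions; the paper does not report the bots' actual parameters:

```python
import random

# Illustrative parameters -- assumed for this sketch, not reported by the authors.
MENTIONS_BEFORE_FOLLOW = 3   # escalate to a follow request after this many mentions
RETWEET_PROBABILITY = 0.2    # occasionally retweet the target's content

def next_action(mentions_sent, already_followed, rng=random):
    """Pick the bot's next action toward one target (hypothetical policy)."""
    if not already_followed and mentions_sent >= MENTIONS_BEFORE_FOLLOW:
        return "follow"      # send the follow request
    if rng.random() < RETWEET_PROBABILITY:
        return "retweet"     # amplify the target's content
    return "mention"         # default: context-relevant reply or mention
```

A real deployment would also need scheduling jitter and rate limiting to "appear natural," which this sketch omits.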
Third, data collection was performed via the Twitter API over a four‑week period. Every follow/unfollow event, mention, retweet, and newly created follower‑followee edge was logged in real time. In parallel, the authors harvested trending hashtags, breaking news, and major events from Twitter’s trends endpoint and external news feeds to capture “external factors” that could independently drive link creation. Fourth, the analysis employed logistic regression to model the probability that a human user would establish a new link with another human after bot exposure. Independent variables included the number of bot interactions, the user’s baseline activity, and a weighted score for external events. Network‑level metrics such as degree centrality, clustering coefficient, and average path length were also computed to assess structural changes in the overall graph.
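The regression step described above can be illustrated with a minimal, self-contained sketch. The feature layout (bot interactions, baseline activity, external-event score) follows the summary, but the synthetic data and the plain gradient-descent fit are assumptions for illustration only, not the authors' dataset, code, or model:

```python
import math
import random

def fit_logistic(X, y, lr=0.1, epochs=2000):
    """Minimal logistic regression via batch gradient descent (stdlib only)."""
    w = [0.0] * len(X[0])
    b = 0.0
    n = len(X)
    for _ in range(epochs):
        gw = [0.0] * len(w)
        gb = 0.0
        for xi, yi in zip(X, y):
            logit = sum(wj * xj for wj, xj in zip(w, xi)) + b
            err = 1.0 / (1.0 + math.exp(-logit)) - yi   # predicted prob minus label
            for j, xj in enumerate(xi):
                gw[j] += err * xj
            gb += err
        w = [wj - lr * gj / n for wj, gj in zip(w, gw)]
        b -= lr * gb / n
    return w, b

# Synthetic data: [bot_interactions, baseline_activity, external_event_score] -> new link?
rng = random.Random(42)
X, y = [], []
for _ in range(200):
    bots = rng.randint(0, 6)
    activity = rng.random()
    events = rng.random()
    # Assumed ground truth: bot exposure and external events both raise the odds.
    p = 1.0 / (1.0 + math.exp(-(0.8 * bots + 1.5 * events - 3.0)))
    X.append([bots, activity, events])
    y.append(1 if rng.random() < p else 0)

w, b = fit_logistic(X, y)
```

In practice a library such as statsmodels would be used to obtain standard errors and p-values; the hand-rolled fit here only recovers the coefficient signs.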
Results showed that when a human user received at least three direct interactions from a bot, the probability of forming a new follower relationship with another human increased by roughly 12 percentage points (p < 0.01). However, this effect was observed in only about 25% of the bot-exposed users; the remaining 75% showed link-creation patterns that correlated more strongly with external events. Peaks in new follows coincided with spikes in specific hashtags or major news stories, indicating that macro-level information flows can outweigh the micro-level influence of bots. Moreover, the links induced by bots tended to be short-lived and did not substantially alter global network properties: measures of overall connectivity, average path length, and clustering remained statistically unchanged between the pre-experiment and post-experiment snapshots.
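An effect reported "in percentage points" comes from converting log-odds to probabilities under the logistic link. The sketch below shows that conversion; the intercept and coefficient values are illustrative assumptions, not the paper's fitted estimates:

```python
import math

def predicted_probability(logit):
    """Logistic link: probability from a log-odds score."""
    return 1.0 / (1.0 + math.exp(-logit))

# Illustrative values only -- not the authors' fitted coefficients.
intercept = -1.5    # baseline log-odds of forming a new link
beta_bot = 0.25     # assumed per-interaction effect of bot exposure

p_control = predicted_probability(intercept)                  # no bot contact
p_exposed = predicted_probability(intercept + 3 * beta_bot)   # three interactions

print(f"control: {p_control:.3f}, exposed: {p_exposed:.3f}, "
      f"difference: {p_exposed - p_control:+.3f}")
```

The difference between the two predicted probabilities is the percentage-point effect; because the link is nonlinear, the same coefficient yields a different percentage-point change at different baselines.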
From these findings the authors draw several implications. Social bots can indeed steer user behavior enough to generate new connections, but their impact is limited and highly contingent on the surrounding information environment. The connections forged by bots are primarily transient conduits for information diffusion rather than durable structural rewiring of the social graph. Consequently, platform operators and policymakers should not focus solely on bot detection; they must also consider how trending topics, news cycles, and other exogenous factors shape user linking behavior.
The study acknowledges several limitations. The experimental window was relatively short, and the sample size (200 users) may not capture dynamics present in larger, more heterogeneous populations. The work is confined to Twitter; results could differ on platforms with different interaction affordances (e.g., Facebook’s friend model or Instagram’s follow system). Future research directions include cross‑platform experiments, longitudinal tracking of network evolution over months or years, and testing more sophisticated bots that engage in sentiment manipulation or spread misinformation. By extending the scope in these ways, scholars can better understand the full spectrum of risks and opportunities that automated agents pose to the health of online social ecosystems.