Identifying and Understanding User Reactions to Deceptive and Trusted Social News Sources


In the age of social news, it is important to understand the types of reactions that are evoked from news sources with various levels of credibility. In the present work we seek to better understand how users react to trusted and deceptive news sources across two popular, and very different, social media platforms. To that end, (1) we develop a model to classify user reactions into one of nine types, such as answer, elaboration, and question, and (2) we measure the speed and the type of reaction for trusted and deceptive news sources for 10.8M Twitter posts and 6.2M Reddit comments. We show that there are significant differences in the speed and the type of reactions between trusted and deceptive news sources on Twitter, but far smaller differences on Reddit.


💡 Research Summary

This paper presents a large-scale comparative analysis of user reactions to trusted and deceptive news sources across two major social media platforms: Twitter and Reddit. The core objectives are to classify the types of user reactions and to measure the speed and distribution of these reactions in response to content from sources with varying levels of credibility.

The researchers first developed a linguistically-infused neural network model to automatically classify user reactions (e.g., tweets, comments) into one of nine discourse act types: Answer, Elaboration, Question, Agreement, Appreciation, Disagreement, Humor, Negative Reaction, and Other. The model architecture combines a text sequence sub-network (using GloVe embeddings and convolutional layers) with a sub-network processing LIWC (Linguistic Inquiry and Word Count) features from both the reaction and its parent post. Trained on a manually annotated Reddit dataset, this content-based model achieved performance comparable to more complex models that use additional metadata.
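The two-branch design described above can be sketched as a single forward pass. This is a minimal, illustrative numpy sketch, not the paper's implementation: all layer sizes, the LIWC feature count, and the random weights are assumptions, and the randomly initialized `emb` table stands in for pre-trained GloVe vectors.

```python
import numpy as np

# Hedged sketch of the two-branch classifier: a text-sequence branch
# (embeddings + 1-D convolution + global max-pooling) concatenated with a
# branch carrying LIWC features of the reaction and its parent post,
# feeding a 9-way softmax. Dimensions and weights are illustrative only.

rng = np.random.default_rng(0)

VOCAB, EMB_DIM, SEQ_LEN = 5000, 50, 40   # assumed sizes; the paper uses GloVe
N_FILTERS, KERNEL = 64, 3                # assumed convolution settings
N_LIWC = 2 * 93                          # assumed: LIWC vectors for reaction + parent
N_CLASSES = 9                            # Answer, Elaboration, Question, ...

# Illustrative random parameters (GloVe vectors would replace `emb` in practice).
emb = rng.normal(size=(VOCAB, EMB_DIM))
conv_w = rng.normal(size=(N_FILTERS, KERNEL * EMB_DIM)) * 0.1
dense_w = rng.normal(size=(N_FILTERS + N_LIWC, N_CLASSES)) * 0.1

def classify_reaction(token_ids, liwc_vec):
    """Forward pass: token ids of the reaction text plus a LIWC feature vector."""
    x = emb[token_ids]                                   # (SEQ_LEN, EMB_DIM)
    # 1-D convolution over the token sequence, ReLU, then global max-pooling.
    windows = np.stack([x[i:i + KERNEL].ravel()
                        for i in range(SEQ_LEN - KERNEL + 1)])
    conv = np.maximum(windows @ conv_w.T, 0.0)           # (n_windows, N_FILTERS)
    pooled = conv.max(axis=0)                            # (N_FILTERS,)
    # Concatenate text features with LIWC features, project to 9 classes.
    feats = np.concatenate([pooled, liwc_vec])
    logits = feats @ dense_w
    probs = np.exp(logits - logits.max())
    return probs / probs.sum()

probs = classify_reaction(rng.integers(0, VOCAB, SEQ_LEN),
                          rng.normal(size=N_LIWC))
print(probs.shape)
```

In practice the untrained weights here would be fit on the annotated Reddit data; the point of the sketch is the shape of the architecture, with the LIWC branch letting the model see stylistic features of both the reaction and the post it replies to.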

For the main analysis, data was collected over a 13-month period (Jan 2016 - Jan 2017). The dataset included 251 trusted sources and 216 deceptive sources, further categorized into clickbait, conspiracy, propaganda, and disinformation. This yielded approximately 10.8 million Twitter posts (tweets that @mentioned or retweeted the sources) and 6.2 million Reddit comments (direct replies to posts linking to the sources).

The key findings reveal stark platform-specific differences:

  • On Twitter: Clear and significant differences were observed between reactions to trusted and deceptive sources. Deceptive sources (especially disinformation) elicited a much higher proportion of “Appreciation” reactions and a lower proportion of “Elaboration” reactions compared to trusted sources. Furthermore, reactions to deceptive news, particularly “Appreciation,” tended to be more concentrated within the first few hours after posting compared to reactions to trusted news.
  • On Reddit: The distribution of reaction types was remarkably similar for trusted and deceptive sources. While statistically significant differences existed (e.g., slightly more questions and appreciation for deceptive news), the magnitude of these differences was far less pronounced than on Twitter. The reaction delays on Reddit were also more spread out over time compared to the more immediate concentration seen on Twitter.
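The comparison behind these findings can be sketched as a contingency-table analysis: tally reaction types per source class and test whether the two distributions differ. The counts below are made up for illustration (the paper works with millions of reactions); `TYPES` matches the nine discourse acts listed earlier.

```python
import numpy as np

TYPES = ["Answer", "Elaboration", "Question", "Agreement", "Appreciation",
         "Disagreement", "Humor", "Negative Reaction", "Other"]

# Hypothetical reaction counts for trusted vs. deceptive sources, shaped to
# echo the Twitter finding (more Appreciation, less Elaboration for deceptive).
counts = np.array([
    [900, 2500, 700, 400,  600, 300, 250, 350, 1000],   # trusted
    [850, 1700, 750, 420, 1300, 310, 260, 380, 1030],   # deceptive
], dtype=float)

# Per-class reaction-type proportions (what the paper's distributions compare).
props = counts / counts.sum(axis=1, keepdims=True)

# Pearson chi-square statistic for independence of (source class, reaction type).
expected = np.outer(counts.sum(axis=1), counts.sum(axis=0)) / counts.sum()
chi2 = float(((counts - expected) ** 2 / expected).sum())

for name, p_t, p_d in zip(TYPES, props[0], props[1]):
    print(f"{name:18s} trusted={p_t:.3f} deceptive={p_d:.3f}")
print(f"chi-square = {chi2:.1f}")
```

With sample sizes in the millions, even tiny gaps in proportions become statistically significant, which is why the summary stresses effect magnitude (large on Twitter, small on Reddit) rather than significance alone.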

The study interprets these disparities as stemming from fundamental differences in platform design and user consumption patterns. Twitter is organized around following individual accounts (including news sources), making source identity prominent. In contrast, Reddit is organized around topics (subreddits), where the popularity of a post, rather than the identity of the source who shared it, dictates visibility. This suggests that the platform’s technological infrastructure and user interface significantly mediate how users collectively react to news of varying credibility. The paper concludes by emphasizing the necessity of considering platform context in studies of misinformation and suggests future work analyzing bot vs. human behavior, extending to other platforms/languages, and investigating finer-grained credibility categories.

