Supply vs. Demand in Community-Based Fact-Checking on Social Media

Fact-checking ecosystems on social media depend on the interplay between what users want checked and what contributors are willing to supply. Prior research has largely examined these forces in isolation, so it remains unclear to what extent supply meets demand. We address this gap with an empirical analysis of a unique dataset of 1.1 million fact-checks and fact-checking requests from X’s Community Notes platform between June 2024 and May 2025. We find that requests disproportionately target highly visible posts (those with more views and engagement, authored by influential accounts), whereas fact-checks are distributed more broadly across languages, sentiments, and topics. Using a quasi-experimental survival analysis, we further estimate the effect of displaying requests on subsequent note creation. Results show that requests significantly accelerate contributions from Top Writers. Altogether, our findings highlight a gap between the content that attracts requests for fact-checking and the content that ultimately receives fact-checks, while showing that user requests can steer contributors toward greater alignment. These insights carry important implications for platform governance and future research on online misinformation.


💡 Research Summary

This paper investigates the alignment between user‑driven demand and contributor‑driven supply in a large‑scale community fact‑checking system, using X’s (formerly Twitter) Community Notes platform as a case study. While prior work has examined either the supply side (what kinds of content fact‑checkers choose to verify) or the demand side (public attitudes toward fact‑checking) in isolation, the interaction between these two forces has remained largely unobserved in real‑world settings.

The authors collected a unique dataset covering the period from June 24, 2024 (the launch of the request feature) to May 19, 2025. The raw dump contains 672,732 notes and 5,898,267 requests. After filtering for posts that could be retrieved via the X API, they obtained 558,190 notes and an equal number of request events, linked to 711,914 distinct posts. For each post, they automatically annotated the language (using a 27‑billion‑parameter Google model), the sentiment (positive, negative, neutral), and the topic (politics, health, entertainment, etc.). They also extracted engagement metrics (views, likes, retweets) and author influence (follower count).
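The core data-construction step is linking notes and requests to the distinct posts they target. A minimal sketch of that linkage, using toy records with illustrative field names (the real dumps and X API use their own schemas, which are not specified here):

```python
from collections import defaultdict

# Hypothetical toy records; field names are illustrative, not the
# actual schema of the Community Notes dumps or the X API.
notes = [
    {"note_id": "n1", "post_id": "p1"},
    {"note_id": "n2", "post_id": "p2"},
]
requests = [
    {"request_id": "r1", "post_id": "p1"},
    {"request_id": "r2", "post_id": "p1"},
    {"request_id": "r3", "post_id": "p3"},
]

def link_by_post(notes, requests):
    """Group note IDs and request IDs under the distinct posts they target."""
    posts = defaultdict(lambda: {"notes": [], "requests": []})
    for n in notes:
        posts[n["post_id"]]["notes"].append(n["note_id"])
    for r in requests:
        posts[r["post_id"]]["requests"].append(r["request_id"])
    return dict(posts)

posts = link_by_post(notes, requests)
print(len(posts))  # 3 distinct posts; p1 has both supply and demand
```

Note that a post can appear with requests but no notes (unmet demand, like `p3` above) or with notes but no requests, which is exactly the mismatch the paper quantifies.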

RQ 1 – Supply vs. Demand Alignment
Descriptive analyses reveal a pronounced mismatch. Requests concentrate on highly visible content: posts with the greatest number of views, highest engagement rates, and authored by influential accounts. In contrast, the distribution of notes is far more heterogeneous. Fact‑checks appear across many languages (including non‑English languages such as Spanish, Korean, Arabic), span the full sentiment spectrum, and cover a broad set of topics beyond politics. This suggests that contributors allocate effort more evenly across the information ecosystem, whereas user demand is skewed toward the most prominent posts.
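One simple way to quantify "requests are skewed, notes are spread out" is to compare the concentration of each count distribution over posts, e.g. with a Gini coefficient. The sketch below uses made-up per-post counts purely to illustrate the comparison; the paper's actual metrics and numbers are not reproduced here:

```python
def gini(counts):
    """Gini coefficient of a non-negative count distribution:
    0 = perfectly even, approaching 1 = concentrated on few items."""
    xs = sorted(counts)
    n = len(xs)
    total = sum(xs)
    if total == 0:
        return 0.0
    # Standard rank-weighted formulation of the Gini coefficient.
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * weighted) / (n * total) - (n + 1) / n

# Toy per-post counts (illustrative only): requests pile onto one
# viral post, while notes are spread more evenly across posts.
requests_per_post = [90, 5, 2, 2, 1]
notes_per_post = [3, 2, 2, 2, 1]

print(gini(requests_per_post) > gini(notes_per_post))  # True
```

A higher Gini for requests than for notes would express the paper's descriptive finding in one number: demand concentrates on prominent posts, supply does not.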

RQ 2 – Effect of Requests on Contributor Activity
To assess causality, the authors employ a quasi‑experimental survival analysis. They compare the time until a note is first created for posts that displayed a request call‑out versus those that did not, controlling for post characteristics. A Cox proportional‑hazards model shows that the hazard of note creation increases by 42% (HR = 1.42, 95% CI 1.31–1.54, p < 0.01) when a request is shown. The effect is driven primarily by "Top Writers", contributors with a history of producing highly rated notes, indicating that requests act as a salient signal that accelerates high‑quality contributions.
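The paper fits a Cox model with covariates; as a much simpler stand-in for intuition, assume constant (exponential) hazards in each group. The hazard ratio then has a closed form: events per unit of exposure time in the request group divided by the same rate in the control group. All numbers below are illustrative, not the paper's data:

```python
def hazard_ratio(events_treated, time_treated, events_control, time_control):
    """Closed-form HR under constant (exponential) hazards:
    ratio of event rates (events per unit exposure time)."""
    rate_treated = events_treated / time_treated
    rate_control = events_control / time_control
    return rate_treated / rate_control

# Illustrative toy numbers: posts showing a request call-out accrue
# first notes faster than matched posts without one.
hr = hazard_ratio(events_treated=71, time_treated=500.0,
                  events_control=50, time_control=500.0)
print(round(hr, 2))  # 1.42
```

The Cox model generalizes this by leaving the baseline hazard unspecified and adjusting for covariates, but the interpretation of HR = 1.42 is the same: at any moment, a post with a visible request is about 42% more likely to receive its first note than a comparable post without one.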

Implications
The findings have several practical and theoretical implications. First, the request mechanism can be leveraged to narrow the supply‑demand gap, ensuring that the most contested high‑visibility posts receive faster verification. Second, because requests may be concentrated among particular political or demographic groups, platform designers should consider diversifying the criteria for request display to avoid systematic bias. Third, supporting Top Writers through incentives or visibility boosts can amplify the positive impact of user‑generated demand. Finally, the study underscores the need for future work that links request content with note outcomes, examines long‑term effects on misinformation spread, and explores cross‑platform generalizability.

In sum, this research provides the first large‑scale empirical evidence that (1) user‑initiated fact‑checking requests target a narrow, high‑visibility slice of the content ecosystem, (2) community‑generated fact‑checks are more broadly distributed, and (3) displaying requests meaningfully speeds up contributions from the most influential volunteers. These insights advance our understanding of how crowdsourced verification systems operate in practice and offer concrete guidance for platform governance and the design of more responsive misinformation‑countering infrastructures.

