Constructing Algorithmic Authority: How Multi-Channel Networks (MCNs) Govern Live-Streaming Labor in China
This study examines the discursive construction of algorithms and its role in labor management in the Chinese live-streaming industry, focusing on how intermediary organizations (Multi-Channel Networks, MCNs) actively construct, stabilize, and deploy particular interpretations of platform algorithms as instruments of labor management. Drawing on nine months of ethnographic fieldwork and 44 interviews with live-streamers, former live-streamers, and MCN staff, we examine how MCNs produce and circulate structured interpretations of platform algorithms across organizational settings. We show that MCNs articulate two asymmetric yet interconnected forms of algorithmic interpretation. Internally, MCN managers approach algorithms as volatile and uncertain systems and adopt probabilistic strategies to manage performance and risk. Externally, in interactions with streamers, MCNs circulate simplified and prescriptive algorithmic narratives that frame platform systems as transparent, fair, and responsive to individual effort. These organizationally produced algorithmic interpretations are embedded in training materials, live-streaming performance metrics, and everyday management practices. Through these mechanisms, streamers internalize responsibility for outcomes, intensify self-discipline, and increase investment in equipment, performance skills, and routines that maintain streamer-audience relationships, while accountability for unpredictable outcomes is increasingly shifted away from managers and platforms. This study contributes to CSCW and platform labor research by demonstrating how discursively constructed algorithmic knowledge can function as an intermediary infrastructure of soft control, shaping how platform labor is regulated, moralized, and governed in practice.
💡 Research Summary
This paper investigates how Multi‑Channel Networks (MCNs), as intermediary organizations in China’s booming live‑streaming industry, construct and deploy algorithmic authority to govern streamers’ labor. Drawing on nine months of ethnographic fieldwork and 44 semi‑structured interviews with active and former streamers as well as MCN staff, the authors identify a dual‑frame strategy that separates internal and external algorithmic narratives. Internally, MCN managers treat platform algorithms as volatile, probabilistic systems. They develop risk‑management tools such as data dashboards, scenario simulations, and probabilistic performance models, which they use to allocate resources, set internal KPIs, and absorb algorithmic uncertainty at the organizational level. Externally, MCNs present a simplified, prescriptive narrative to streamers that portrays the algorithm as transparent, fair, and responsive to individual effort. This narrative is embedded in formal training manuals, workshops, daily feedback sessions, and a set of “algorithmic guidelines” that promise higher rankings in exchange for better content quality, equipment upgrades, and audience engagement tactics.
The paper shows how these contrasting narratives become institutionalized through concrete mechanisms: (1) training programs that frame algorithmic logic as merit‑based; (2) performance metrics that map viewer counts, comment volume, and conversion rates onto an “algorithmic score” visible to streamers; (3) aesthetic and behavioral standards that prescribe camera angles, speaking styles, and content formats deemed “platform‑friendly”; and (4) financial arrangements where streamers remit a share of revenue to the MCN and pay for “optimization consulting.” These mechanisms shift responsibility for success onto the individual streamer, intensifying self‑discipline, equipment investment, and routine refinement, while the MCN and the platform externalize the risk of unpredictable algorithmic changes.
Streamer responses follow a trajectory: initial compliance driven by uncertainty avoidance; growing skepticism when promised outcomes fail to materialize; and eventual disengagement, whether by leaving the MCN, migrating to other platforms, or deliberately resisting algorithmic prescriptions to cultivate a distinct niche. This pattern illustrates how algorithmic authority is both accepted and contested over time.
The authors contribute to CSCW and HCI literature in three ways. First, they foreground intermediary organizations as sites where algorithmic authority is constructed, expanding the focus beyond platform‑centric analyses. Second, they demonstrate that discursively constructed algorithmic narratives function as instruments of soft control, shaping labor management without overt surveillance. Third, they offer design and policy implications, urging attention to the epistemic power of intermediaries, the need for greater algorithmic transparency, and protective measures for gig workers facing mediated algorithmic governance.
Overall, the study reveals that algorithms are not merely technical black boxes; they are socially produced objects whose meanings are negotiated, stabilized, and weaponized by organizational discourse. MCNs, by controlling the flow of algorithmic knowledge, enable platforms to distance themselves from direct labor management while still extracting value from streamers. This insight enriches our understanding of platform labor, highlighting the importance of examining the “middle” actors that translate platform policies into everyday work practices.