The Fundamentals of Policy Crowdsourcing

Reading time: 5 minutes
...

📝 Original Info

  • Title: The Fundamentals of Policy Crowdsourcing
  • ArXiv ID: 1802.04143
  • Date: 2018-02-13
  • Authors: Not listed in the excerpt

📝 Abstract

What is the state of the research on crowdsourcing for policy making? This article begins to answer this question by collecting, categorizing, and situating an extensive body of the extant research investigating policy crowdsourcing within a new framework built on fundamental typologies from each field. We first define seven universal characteristics of the three general crowdsourcing techniques (virtual labor markets, tournament crowdsourcing, open collaboration), to examine the relative trade-offs of each modality. We then compare these three types of crowdsourcing to the different stages of the policy cycle, in order to situate the literature spanning both domains. We finally discuss research trends in crowdsourcing for public policy, and highlight the research gaps and overlaps in the literature.

Keywords: crowdsourcing, policy cycle, crowdsourcing trade-offs, policy processes, policy stages, virtual labor markets, tournament crowdsourcing, open collaboration

💡 Deep Analysis

Figure 1

📄 Full Content

Crowdsourcing (Howe 2006, 2008; Brabham 2008) involves organizations using IT to engage crowds comprised of groups and individuals for the purpose of completing tasks, solving problems, or generating ideas. In the last decade, many organizations have turned to crowdsourcing to engage with consumers, accelerate their innovation cycles, search for new ideas, and create knowledge (Afuah & Tucci 2012; Majchrzak et al. 2012; Bayus 2013; Brabham 2013a). As crowdsourcing has become an increasingly popular method for business organizations to gather IT-mediated input from individuals, the phenomenon has also spread to non-commercial contexts. Recently, crowdsourcing has begun to be applied to different aspects of policy making, for example in the transportation (Nash 2009) and urban planning (Seltzer & Mahmoudi 2013) domains. Yet, despite the advancing use of crowdsourcing in general, and its recent application in policy contexts, to our knowledge, research has yet to emerge that systematically investigates both domains simultaneously. This article is an attempt to address this salient gap in our knowledge. To do so, we introduce two fundamental typologies, one each from the crowdsourcing and policy literatures, which we then merge to form a new systematic framework suitable to address all applications of policy crowdsourcing. We then employ this framework to situate the literature spanning both domains.

...

Virtual Labor Markets (VLMs)

In virtual labor markets (VLMs), individuals can agree to execute work in exchange for monetary compensation (Horton 2010; Horton & Chilton 2010; Wolfson & Lease 2011; Irani & Silberman 2013). These endeavors are generally thought to exemplify the 'production model' aspect of crowdsourcing (Brabham 2008), where workers undertake microtasks for pay. Microtasks, such as translating documents, tagging photos, and transcribing audio (Narula et al. 2011), are generally considered to represent forms of human computation (Ipeirotis & Paritosh 2011; Michelucci 2013), where human intelligence is asked to undertake tasks that are not currently achievable through artificial intelligence.

The size of the overall crowd available at these VLMs is massive; Crowdflower, for example, has over five million potential laborers available. Microtasking through VLMs can therefore be completed rapidly (if need be) through the massively parallel scale available on such platforms. The participants in these VLM crowds generally undertake tasks independently of one another, and thus do not form official groups or work as teams through the intermediary platforms. Further, the laborers in VLMs are largely anonymous (Lease et al. 2012) with respect to their offline identities.

Tournament Crowdsourcing (TC)

A separate form of crowdsourcing is known as tournament crowdsourcing (TC) or ideas competitions (Piller & Walcher 2006; Blohm et al. 2011; Schweitzer et al. 2012). In TC, organizations post their problems to IT-mediated crowds on platforms such as InnoCentive, eYeka, and Kaggle (Afuah & Tucci 2012), or through in-house platforms such as Challenge.gov (Brabham 2013b). These platforms generally attract and maintain more or less specialized crowds premised on the platform's specific focus; for example, eYeka's crowd creates advertising collateral for brands, while the crowd at Kaggle focuses on data science solutions (Ben Taieb & Hyndman 2013; Roth & Kimani 2013). When applied to innovation, these platforms have been termed open innovation platforms (Sawhney et al. 2003), and they represent both the idea generation and problem solving aspects of crowdsourcing (Brabham 2008; Morgan & Wang 2010).

The number of participants at these sites is smaller than at VLMs (for example, Kaggle has approximately 140,000 available participants, compared with the millions on Crowdflower), and the individual participants can choose not to be anonymous at these sites. Fixed amounts of prize money, and fixed numbers of prizes, are generally offered to the crowd for the best solutions submitted, and prizes can range from a few hundred dollars to a million dollars or more.† Some TC intermediaries require that their crowds submit independent solutions to competitions (e.g., eYeka), while others, such as TopCoder, allow or even encourage team formation and, thus, within-crowd collaboration in competitions.

† http://www.innocentive.com/files/node/casestudy/case--study--prize4life.pdf

Open Collaboration (OC)

In the open collaboration model of crowdsourcing, organizations post their problems or opportunities to the public at large through IT (Crump 2011; Small 2012; Adi, Erickson & Lilleker 2014). Contributions from the crowds in these endeavors are voluntary and thus do not generally entail monetary exchange. Starting an enterprise wiki (Jackson & Klobas 2013) or using social media (Kietzmann et al. 2011) such as Facebook and Twitter (Gruzd & Roy 2014; Sutton et al. 2014) to garner contributions are prime examples of this type of crowdsourcing.

The scale of the crowds available to these types of endeavors can vary significantly, depending on the reach and engagement of the IT used and the efficacy ...

Reference

This content is AI-processed based on open access ArXiv data.
