Ethical Guidelines for the Construction of Digital Nudges


Authors: Christian Meske and Ireti Amojo (Freie Universität Berlin)

Christian Meske, Freie Universität Berlin, christian.meske@fu-berlin.de
Ireti Amojo, Freie Universität Berlin, ireti.amojo@fu-berlin.de

Abstract

Under certain circumstances, humans tend to behave in irrational ways, leading to situations in which they make undesirable choices. The concept of digital nudging addresses these limitations of bounded rationality by establishing a libertarian paternalist alternative to nudge users in virtual environments towards their own preferential choices. Thereby, choice architectures are designed to address biases and heuristics involved in cognitive thinking. As research on digital nudging has become increasingly popular in the Information Systems community, an increasing necessity for ethical guidelines has emerged around this concept to safeguard its legitimization in distinction to e.g. persuasion or manipulation. However, reflecting on ethical debates regarding digital nudging in academia, we find that current conceptualizations are scarce. This is why, on the basis of existing literature, we provide a conceptualization of ethical guidelines for the design of digital nudges, and thereby aim to ensure the applicability of nudging mechanisms in virtual environments.

1. Introduction

In recent years, "digital nudging" has become an important research focus in the information systems (IS) community (see e.g. [1-4]; [54-61]). Thaler and Sunstein first conceptualized the idea of nudging as a form of overt and predictable behavior change in their work [5]. They state that nudging methods, e.g. encouraging prosocial behavior, can be surprisingly effective while being libertarian, meaning that they leave people emancipated in their freedom of choice by not excluding any possible choice, nor introducing economic incentives to extrinsically alter behavior [5].
Critics argue that nudging is manipulative in that it undermines people's autonomy [6], by influencing the decision-making process and taking advantage of predictable irrational reactions that result from heuristics, biases and psychological mechanisms ([7]; [8-9]). Individuals are susceptible to being influenced in their decision-making processes as they fall back on heuristics, biases, and psychological mechanisms [2]. Digital nudging is implemented in virtual environments to e.g. simplify information and option overflow. However, whenever IS design aims to influence human behavior, IS scholars should also address ethical concerns ([10-12]). Against this background, the ethical legitimization of nudging is controversially discussed in the literature. To better apprehend the ethical debate on nudging, it is important to discuss and define nudging in distinction to other regulatory mechanisms such as manipulation (an infringement of autonomy). Due to the difficulty of distinguishing both concepts, and the premise that general ethical abstractions often lead to insufficient and perplexing results [13], we hereby aim to concretize the ethical debate on nudging. More specifically, with the increasing importance of IS at present, and the increasing use of e.g. artificial intelligence, it is important to extend the debate on the ethical legitimization of nudges to the digital context. Yet, according to Renaud and Zimmermann [14], despite the growing importance of ethical discussions in IS, research on ethics is only slowly emerging. With this work, we aim to contribute by providing ready-to-use ethical guidelines, which transfer knowledge derived from definitory and ethical discussions in (offline) nudging to the online context, in order to support designers of digital nudges. Thereby, we align with the differentiated view on ethics and morality as suggested by Stahl [15] and further discussed by [16].
In the literature, ethics and morality are often used interchangeably [17-18] and are jointly described as "customs, good practices or expected behavior" ([19], p. 145). However, the differentiation between morality, understood as rules or norms within a society, and ethics, conceptualized as the justification of these morals [15], provides better generalizability: nudge designers with different understandings of morality may still follow the same universal ethical frameworks and therefore collectively find applicability of our guidelines [19]. Moreover, as part of our contribution, the guidelines may also provide a standard which designers can be held accountable for.

The paper is structured as follows: in section 2, we will provide the background and basic understandings of the nudging concept by emphasizing ethical aspects of nudging in regard to libertarian paternalism and manipulation. Thereafter, in section 3, we will present our research design, before we work towards ethical guidelines in digital contexts, derived from existing literature, in section 4. In section 5, we will present our discussion, followed by a summary and outlook on future research in section 6.

[Proceedings of the 53rd Hawaii International Conference on System Sciences | 2020. URI: https://hdl.handle.net/10125/64222. 978-0-9981331-3-3 (CC BY-NC-ND 4.0)]

2. Theoretical background

2.1 Bounded rationality and digital nudging

A short introduction into behavioral science and the cognitive processes guiding human decision-making provides a basic understanding of ethical discourses in nudging. Bounded rationality and the underlying idea of decision-making heuristics are based on the assumption that humans dispose over two interconnecting cognitive systems: the unreflective (system 1) and the reflective (system 2) [5; 20]. Humans most regularly use unreflective (automatic) thinking.
It is fast, effortless, relies on cognitive heuristics and biases [20-21], and enables individuals to process multiple thoughts simultaneously. The reflective system, in contrast, is slow and effortful, as it comprises conscious and deliberate thinking [20-21]. Here, individuals process information sequentially, allowing them to critically reflect on and analyze the information. Such 'bounded rationality' [20] is the underlying basis for the concept of nudging. In this context, a nudge can be defined as "(...) any aspect of the choice architecture that alters people's behavior in a predictable way without forbidding any options or significantly changing their economic incentives" ([5], p. 6). It is always intended to either alter a person's unreflective thinking and consequent automatic behavior, or trigger the decision-making processes in the reflective system to guide conscious decisions, therefore altering choice [5].

Transferred to the virtual environment, digital nudging emphasizes the medium and digital instruments through which nudges can influence choice architectures. It can be defined as the "use of user-interface design elements to guide people's behavior in digital choice environments" ([12], p. 1) or "a subtle form of using design, information and interaction elements to guide user behavior in digital environments, without restricting the individual's freedom of choice" ([1], p. 3). The tools and techniques used in digital nudging affect the same psychological mechanisms (i.e. biases, heuristics) as in the offline context. Since digital interfaces are always created by a choice architect and do not appear randomly, it is crucial to understand the design of digital (user) interfaces as the environment which influences decisions [2]. Zhang and Xu [26] found that transparency and attitudes towards privacy settings can be improved, showing the effectiveness of digital privacy nudges in mobile apps.
Similarly, Weinmann et al. [12] acknowledged that opt-in or opt-out default settings have a strong influence on online user behavior. Default-setting research further reveals that nudges can (1) positively influence environmentally sound decision-making [27] or (2) decrease (online) procrastination behavior in students [28]. More recent studies on online privacy assessed that the combination of different claims and supporting arguments is most promising to nudge an increase in privacy concerns [2]. Meanwhile, providing information about the incompatibility of goods during the purchasing process improves online shopping behavior [29], while digital nudging may also influence users to reflect on their own sharing behavior and potentially related privacy concerns [30]. To this day, digital nudging receives less attention in enterprise settings, though it has great potential in working environments [32]. Kretzer and Maedche [3] pre-defined so-called "social nudges" targeting a variety of different effects such as cohesion, business function, geographical distances, and hierarchy. In an experimental setting, they were then able to show a positive effect of their social nudges on the decision-making of users (e.g. enterprise report recommendations) [3].

2.2 Regulatory mechanisms and manipulation

The academic discourse on regulatory mechanisms involves (1) architecture (or code), as an alteration of the physical or digital environment to make alternative decisions less appealing, (2) libertarian paternalism, as a mechanism to positively influence rational decision-making, and (3) mandated disclosure (or notice), entailing the distribution of facts to influence informed decision-making [33].
While code was the first concept to emerge [34], it generally entails a manipulative character, as choice architectures are influenced through sanctions [34] or other forms of governmental control to influence citizen behavior, such as speed bumps to regulate driving speed [33]. Thereafter, nudging emerged and addressed decision-making based on the bounded rationality of humans as a way of aiding the choice selection of preferential options within existing choice architectures, rather than constructing choice architectures in a mandated and strictly paternalistic way [5]. However, as Thaler and Sunstein acknowledge the (soft) paternalistic character of nudges, they emphasize the libertarian aspect of nudging (e.g. the option to choose differently than nudged by authorities) as the distinguishing factor between nudging and manipulation [33]. A commonly used definition describes manipulation as "intentionally causing or encouraging people to make the decisions one wants them to make, by actively promoting their (decision-)making (…) in ways that rational persons would not want to make their decisions" ([35], p. 33). From our point of view, this definition likely applies to the idea of code. Moreover, nudging critics argue that nudging has a manipulative character [36]. Despite the efforts of Thaler and Sunstein [5] to legitimize nudging by distinguishing it from code, there is still continuous criticism regarding a clear understanding of the economic incentives, or the lack of defined circumstances for the removal or addition of options [36]. In line with this criticism, the nudging concept does not provide a clarification, nor do legitimizing conditions exist to differentiate nudging from manipulation.
In the effort of closing this gap, we will review existing literature on more detailed approaches to define nudging and distinguish nudges from manipulation, before building our ethical guidelines on the basis thereof. Hansen [37] draws the conclusion that there are two possibilities for the definition of a nudge. One is that nudges pursue paternalistic motives, which simplifies their ethical legitimization, as the motive of the nudger has to be based on the interests of the nudgee [37]. The other conceptualizes nudging from a technical perspective as the effort to influence human judgement, choice or behavior in predictable ways where rational decision-making in line with one's own preferential choice set falls short due to bounded rationality (e.g. cognitive boundaries, biases, routines) [37]. In consequence, and contrary to Thaler and Sunstein [5], nudges here may (1) exclude relevant choice options, (2) grant extrinsic incentives, and (3) provide rational information and arguments [37]. In this definition, nudges are not necessarily paternalistic, and therefore do not always align with the preferences of the nudgee. Instead, this definition is more suitable for practical use because it includes both nudges that have paternalistic motives and those serving third-party interests. This is where nudging reaches its conceptual limitations, because it is originally not laid out to include non-libertarian phenomena [38]; yet, in reality many nudges have long been used for such purposes. According to Grüne-Yanoff [39], it is impossible to consider all individual nudgee preferences for the construction of a choice architecture. Nudging, and especially digital nudging, always contains some prejudice towards certain choices. 'Subconscious' nudges often achieve the biggest effect [7], which particularly decreases the incentive for designers of digital nudges to aim at transparent implementations.
Moreover, the implementation of non-transparent nudges bears one of the most central ethical concerns and reservations against nudging, as transparency is one of the most important legitimizing conditions for the distinction between nudging and manipulation [5].

In regard to the ethical debate on nudging, research has provided a few different perspectives. On the basis of her conceptual paper on possible aspects counteracting the side-effects (e.g. distrust) or autonomy concerns of nudging, Clavien [65] argues that shared preference justifications (all affected nudgees agree with the goals pursued by the nudger) are the most adequate to ensure ethical legitimization of nudges. Meanwhile, Blumenthal-Barby and Burroughs [66] focused their work on the identification of ethically relevant dimensions in the form of a list of considerations that must be acknowledged when choice architectures are being influenced. In his analyses of case studies, Engelen [64] establishes a list of criteria and their suggested contextual importance, which health practitioners should consider when nudging their patients. However, in a response to Engelen [64], Fowler and Roberts [67] criticize his work and suggest broadening its perspective to include a wider variety of cases for analysis, along with the perspectives of those affected by the nudges as well as an assessment of the effectiveness of the implemented nudges. In light of the existing work on digital nudging and ethics it becomes apparent that, though there have been some efforts to incorporate ethical standards into nudging, to this day two aspects are still missing: (1) the evaluation of ethical standards in digital nudging and (2) the introduction of ready-to-use guidelines for the actual design of nudges. This is also supported by Lembcke et al. [68], who point out that an "integration of practical ethical guidelines into (an) existing framework" is yet to be done (p. 13).
We agree that the existing body of knowledge still lacks a set of guidelines that can be utilized by designers in order to allow theory to actually inform practice. That is why our work sets out to provide both aspects, thereby closing this gap.

3. Method

On the basis of the provided definitions and first aspects to distinguish nudging from manipulation, we now introduce our literature analysis. We analyzed literature on regulatory mechanisms, nudging, and persuasion. We built our methodological approach on the framework by vom Brocke et al. [40] and aligned our literature review with the categorization by Cooper [41], to assure a justified scope in line with the goal and target group (Table 1). The goal of our literature analysis is a thorough representation of the ethical debate on nudging. Moreover, our goal was to review other potential concepts to provide a holistic foundation for the derivation of ethical guidelines. The selection process was not entirely neutral, as we pre-selected relevant publications which fit our goal and aligned with the IS context. With our work we aim to address scholars in nudging-related topics and designers or practitioners implementing nudges in virtual environments.

Table 1. Visualization and categorization approach (Cooper [41]; vom Brocke et al. [40]).

Forward and backward searches on all identified research articles allowed us to include relevant literature, e.g. on regulatory mechanisms. Hereby we took a particular interest in publications discussing ethics and conceptual research methodologies. We derived our ethical guidelines iteratively by comparing relevant concepts and frameworks (e.g. [68]). The process of deriving the ethical guidelines for the design of nudges is twofold.
We first used a categorization scheme based on the framework introduced by Hansen and Jespersen [42], distinguishing between transparent and non-transparent nudges in combination with system 1 (unreflective) and system 2 (reflective) thinking. In a second step, we closely regarded the identified literature to derive our guidelines in a stepwise approach. Altogether, we used a descriptive research methodology, casting light on a current issue and gradually building on already existing work. Accordingly, the ethical guidelines we introduce in the following were developed on the basis of past work and the idea that an ethical approach to the design of nudges should consider transparency and the cognitive thinking processes of the target audience as the two starting points of consideration.

4. Results: ethical guidelines for the design of digital nudges

Our guidelines assist and provide a ready-to-use checklist to guide researchers and practitioners in the design process of nudges. In line with this goal we provide a universal framework that applies to all digital nudges and, most importantly, to all designers independent of their occupational or epistemological background.

4.1 Transparency as a legitimizing condition for nudges

While it seems logical in theory that individuals always have the freedom to choose differently than what they are being nudged towards [5], a broad variety of nudges are not transparent and therefore non-paternalistic. Consequently, nudging gives individuals the theoretical option to choose differently, but practically their behavior is changed in a way that does not provide options to vary choices. Given that nudges always have to be transparent [5], it is crucial to regard transparency in more detail. Namely, (digital) nudges can be distinguished as transparent and non-transparent [43].
It is the non-transparent types of nudges that fall under the manipulation objection, as they work subliminally and are usually not apparent to the nudgee [46]. Considering the unreflective (system 1) and reflective (system 2) cognitive systems, we recognize that there are different forms of influence between these two types of nudges. Since system 1 nudges influence (e.g. simplify) choices, they can infringe an individual's autonomy if undetected by the nudgee [23]. System 2 nudges alter a nudgee's reflective thinking (e.g. habits) [23], possibly influencing a shift in behaviour (e.g. excluding a choice). However, non-transparent system 1 nudges can only be legitimate if preceded by some type of information or clarification about the nudge. This precondition cannot apply to non-transparent system 2 nudges, as the nudger intentionally excludes possible choices, leading to an intentional and unethical manipulation of the contextual environment and consequent behaviour. It follows that non-transparent nudges have to be used in a disclosed manner, and the nudgees have to be able to intervene and resist the triggered behavioral change at all times.
Wilkinson [47] supports this differentiated view of non-transparent system 1 and 2 nudges by stating that nudges which neither qualify as a form of intentional influence on choice (unreflective system 1) nor inflict on or pervert an individual's decision-making process and consequent behavior (reflective system 2) can be considered manipulation. Therefore, under the given considerations, which we will also further discuss as "legitimizing conditions", ethical nudges can be either transparent or non-transparent (see e.g. Table 3).

Table 1 characterizes our literature review along the following characteristics and categories:
1. Focus: research outcomes, research methods, theories, applications
2. Goal: integration, criticism, central issues
3. Organization: historical, conceptual, methodological
4. Perspective: neutral representation, espousal of position
5. Audience: specialized scholars, general scholars, practitioners/politicians, general public
6. Coverage: exhaustive, exhaustive and selective, representative, central/pivotal

4.2 Easy resistibility as a legitimizing condition for (non-)transparent nudges

In line with Thaler and Sunstein [5], the original choice sets of a nudged person are limitless and should not be inhibited by external factors. Accordingly, Saghai [48] suggests that if choice architectures are used to shape choices, users must also be given the option to resist. Thaler and Sunstein [5] agree by stating that nudges should be "easy and cheap to avoid" (p. 9). According to Saghai [48], easy resistibility is provided when a nudgee has the ability and chance to become aware of how the nudge is steering the individual towards a certain behavior; when a nudgee has the ability to oppose the triggered behavior or choice; or when the influence of the nudge is not undermining the nudgee's attention-bringing or inhibitory capacities. We conclude that non-transparent system 1 nudges can only be legitimate if easy resistibility is ensured.
Meanwhile, non-transparent system 2 nudges are excluded from this legitimization, as a nudgee has no ability to oppose an influence he or she is unaware of.

4.3 Non-controllability as a legitimizing condition for (non-)transparent nudges

Moreover, Saghai [48] also states that substantially non-controlling nudges, shaping the environment through intentional choice architectures, may be libertarian. This means that the nudger should not take any measures that violate the autonomy of the nudgee, such as incentives or coercion. The substantial non-controlling condition therefore protects the nudgee from influences that change the individual's perspective in a subliminal or even manipulative way. However, as it can be argued that resistibility is a subjective criterion, following the argument by Faden and Beauchamp [49], Saghai [48] suggests that objective views of resistibility align with the perception of the average person. We summarize that to distinguish digital nudging from (non-libertarian) manipulation, we first acknowledge prejudices as a constituent of all decision-making environments, especially in the digital world. Further, we acknowledge that if nudgers construct choice architectures, they have to do so transparently, include a possibility of resistance, and avoid extrinsic incentives. This is also expressed by the term "easy resistibility", which is often used in the nudging context. One may hence speak of manipulation when a non-transparent nudge is implemented without any preceding information, and is further perceived as an unwanted, disfavored alteration of decision-making processes in digital environments. According to our provided arguments and conditions to distinguish paternalism from manipulation, Table 2 provides an overview and categorization scheme of the essential conditions for legitimate nudges, as well as legitimizing conditions for non-transparent nudges, that we derived from the literature.
Further, Table 2 also represents the basic frame on the basis of which we derived our ethical guidelines in Table 3. Nudge designers need to be mindful of the public welfare or normative incentive applying to their target group (Step 1) [69], and use this knowledge as the basis on which nudging goals (Step 2) and design principles (Step 3) can be derived, before their evaluation (Step 4). Hence, we distinguish nudges that affect reflective thinking from manipulative nudges that affect unreflective thinking, if they are non-transparent or, according to [46], remain undetected by the nudgee. In regard to non-transparent nudges we are now able to further derive legitimizing conditions of ethical nudges. Based on existing literature regarding the basic characteristics of (digital) nudging as well as the described legitimizing conditions, Table 3 provides ethical guidelines in the form of a checklist that has to be satisfied in order to assure the design of ethical digital nudges.

Table 2. Summary of ethically legitimizing conditions of nudges (essential condition: libertarian paternalism)
- Transparency [43]; [5]; [37]
  - System 1: easy resistibility, non-controlling [42]; [7]; [46]; [50]; [48]
  - System 2: easy resistibility, non-controlling [48]
- Non-transparency
  - System 1: disclosure, consent, easy resistibility, non-controlling [50]; [42]; [47]
  - System 2: nudges are manipulative and beyond legitimization [51]; [47]; [42]

5. Discussion

As supported by the research of [1] and [2], the nudger has to understand the intentions of potential users and their heuristics as well as biases in the first step. This includes acknowledging that the targeted nudgees may experience both unreflective and reflective thinking during their interaction with the nudge and the consequent decision-making process.
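Read purely as a decision rule, the legitimizing conditions summarized in Table 2 can be sketched in a few lines of code. This is our own illustrative encoding, not part of the original framework; the dictionary keys and the function name are invented for the sketch:

```python
# Sketch: Table 2's legitimizing conditions as a lookup table.
# Keys are (transparent?, cognitive system); values are the conditions a
# nudge of that type must satisfy, or None where the table declares the
# type manipulative and beyond legitimization.
REQUIRED = {
    (True, 1): {"easy_resistibility", "non_controlling"},
    (True, 2): {"easy_resistibility", "non_controlling"},
    (False, 1): {"disclosure", "consent", "easy_resistibility", "non_controlling"},
    (False, 2): None,  # non-transparent system 2: beyond legitimization
}

def is_legitimate(transparent, system, satisfied):
    """True if the nudge type admits legitimization at all and every
    required condition is among the satisfied ones."""
    required = REQUIRED[(transparent, system)]
    return required is not None and required <= set(satisfied)
```

Note how a non-transparent system 2 nudge is rejected regardless of which conditions are satisfied, mirroring the bottom row of the table.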
Table 3. Ethical guidelines for digital nudging: a stepwise approach

Step 1: Understand the intentions of potential users and their cognitive heuristics and biases
- the target group has been thoroughly identified
- the preferential choice set of your target group has been identified

Step 2: Derive the goals of digital nudging
- the goals are in alignment with the users' preferential choice set and/or stem from good intentions; they benefit and do not harm the user
- the potential impact is predictable

Step 3: Design and implement the nudge
System 1 (unreflective thinking): fast, unconscious decision-making and parallel/convergent thinking
- Transparent:
  - the choice architecture is presented in the most simplified way
  - easy resistibility: the nudge is easy and cheap to avoid; there are no costs to avoiding the nudge
  - non-controlling: no incentive/coercion was introduced to influence choice
- Non-transparent:
  - justification for the need for non-transparency is given
  - disclosure: simplified information about the nudge is provided
  - consent: consent forms are provided requiring users to thoroughly read and opt in to the terms and conditions to ensure informed consent; or, in cases where informed consent was already established, informational nudges signaling preselected default settings are provided
  - easy resistibility: the nudge is easy and cheap to avoid; there are no costs to avoiding the nudge
  - non-controlling: no incentive/coercion was introduced to influence choice
System 2 (reflective thinking): slow, conscious and sequential/critical thinking
- Transparent:
  - the choice architecture is presented in a simple and comprehensive way
  - easy resistibility: the nudge is easy and cheap to avoid; there are no costs to avoiding the nudge
  - non-controlling: no incentive/coercion was introduced to influence choice
- Non-transparent:
  - designing non-transparent digital system 2 nudges (reflective) is considered manipulative; the nudge should be re-considered

Step 4: Evaluation of the digital nudge and iteration
- the nudge is consistent with the original goal and useful to influence the target behavior
- there are no unintended negative consequences (e.g. malicious intent, monetary disadvantages) for the target group

In the consecutive second step, the nudger can now derive the goal a digital nudge aims to achieve. On the basis thereof, the nudger is able to design and implement the nudge. Hereby, the nudger needs to be aware of the different cognitive systems his target group fluently switches in and out of when interacting with the digital interface. When implementing transparent digital system 1 nudges (unreflective), automated behavior is being influenced, making these types of nudges almost impossible to avoid. Although they appear to be detectable to the nudgee, they work manipulatively in a technical (not psychological) way. As they are overt and identifiable, these nudges are ethically justifiable. Here, the choice architect is held responsible for the nudging effects, because an individual's automatic thinking guides the user behavior. If an individual is nudged against personal preferences, attention-bringing capacities can be activated and deliberate thinking allows the individual to resist the nudge [48]. If the goal of a choice architect is to e.g. increase the security settings during online purchasing, the respective security settings could be nudged in a simplified and accessible way (e.g. reminders to update settings or increased visibility of security options), such that the nudgee may automatically feel encouraged to update the privacy settings. When designing non-transparent digital system 1 nudges (unreflective), manipulation risks are high, as they are almost undetectable and influence behavior subliminally. Thus, it is difficult for the nudgee to reconstruct the ends and means of the nudge or perceive all possible options.
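The four-step procedure of Table 3 can also be read as a reviewable checklist. The following sketch is our own hedged illustration: the step names follow the paper, but the item wording is abbreviated and the function is invented for this example.

```python
# Sketch: Table 3's stepwise guidelines as a checklist structure.
# Each step maps to the (abbreviated) items that must be satisfied.
STEPS = {
    "1 understand users": ["target group identified",
                           "preferential choice set identified"],
    "2 derive goals": ["goals align with user preferences / benefit the user",
                       "potential impact is predictable"],
    "3 design and implement": ["transparent, or disclosure/consent justified",
                               "easy resistibility (cheap to avoid)",
                               "non-controlling (no incentives or coercion)"],
    "4 evaluate and iterate": ["consistent with the original goal",
                               "no unintended negative consequences"],
}

def open_items(checked):
    """Return every checklist item not yet marked as satisfied,
    in step order."""
    return [item for items in STEPS.values()
            for item in items if not checked.get(item, False)]
```

A design review would start with `open_items({})` (everything open) and iterate, in the spirit of the evaluation loop in Step 4, until the list is empty.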
Hence, choice architects should always aim to be in line with user interests. However, as individual user interests may be difficult to reconstruct for designers, another option is to reveal the intentions behind the nudge or request the user's consent. This would require the nudger to implement at least one of the following legitimizing conditions: disclosure of information, or asking for user consent as a prerequisite to continue using the website. The mentioned legitimizing conditions, also known as informed consent [14], are however not always prompted, especially given the vastness with which users move across digital landscapes, oftentimes holding multiple registrations with different platform providers. In such cases we acknowledge that a ready-to-use ethical guideline needs to be aware of the discrepancy between ethical nudging and providing the least amount of disruption during a user experience. Therefore, instead of suggesting to request informed consent every time a user logs onto a platform, we suggest that priming or signaling nudges could be implemented to raise the user's awareness. This could take the form of an information box signaling to users that they were nudged according to the preselected default settings that were agreed upon during the initial registration process on the current platform, for instance.

Designing and implementing transparent digital system 2 nudges (reflective) entails influencing choice in a transparent way, by emphasizing choices consistent with the nudgee's preferences [42]. This type of nudge activates reflective thinking, making it easy to understand the means and ends of the implemented nudge and giving the nudgee the actual possibility to make choices without perverting his or her decision-making process. These nudges are truly libertarian, as they influence individuals without manipulation or infringement of autonomy.
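The informed-consent suggestion above, a full consent flow only once, followed by a lightweight signaling nudge that points back to the defaults agreed at registration, can be sketched as a small decision helper. The function and return values are our own hypothetical illustration, not part of the paper:

```python
def consent_mechanism(consent_on_record):
    """Pick the legitimizing mechanism for a non-transparent system 1
    nudge: prompt full informed consent once, then merely signal the
    preselected defaults on later visits (per the suggestion above)."""
    if not consent_on_record:
        # first contact: disclosure plus an explicit opt-in form
        return "consent_form"
    # consent already established at registration: an info box signaling
    # that preselected default settings are in effect suffices
    return "signaling_info_box"
```

The design intent is to satisfy the disclosure/consent conditions without interrupting every session, which the paper flags as a trade-off between ethical nudging and a low-disruption user experience.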
This is in line with Tocchetto [63], who argues that only reflective (and transparent) nudges are morally harmless. Here, the nudger would, e.g., use timely information nudges as triggers reminding the user of insufficient security settings, or provide additional information during the user journey on the website to encourage the nudgee to reflect on password or security choices. Designing non-transparent digital system 2 nudges (reflective), on the other hand, is considered non-paternalistic and manipulative. Although these nudges use tools that activate reflective thinking, they do not allow the nudgee to reconstruct the means and ends of the implemented nudge. Choices are manipulated, leading to behavioral change without the opportunity to choose differently. Here, nudgees would be forced to update or revisit security settings before, e.g., logging out. Even though this could potentially increase the overall security settings, it would manipulate user behavior and infringe freedom of choice. Finally, the fourth and last step in the design and implementation of ethical nudges is to evaluate the nudge and, if necessary, iteratively adjust aspects according to the principles we have provided. This step will be subject to further research. We used a heterogeneous body of literature as part of our methodology to derive the ethical guidelines. Therefore, as a final step, we added the evaluation and possible iteration of the digital nudge. By doing so, we adapted known evaluation processes from Design Science Research methodologies affiliated with IS research (see e.g. [52]), which provides an important implication for nudging research and its constant effort of putting nudging theory into practice.
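The fourth step can be summarized as a simple checklist. The following sketch restates the evaluation criteria from the guidelines in code form; the criteria names are illustrative, and a real evaluation would of course rest on empirical measurement rather than boolean flags.

```python
# Sketch of the Step 4 evaluation: reassess the deployed nudge against
# the guideline criteria and iterate if any check fails.
from dataclasses import dataclass


@dataclass
class NudgeEvaluation:
    consistent_with_goal: bool        # matches the original goal
    influences_target_behavior: bool  # useful for the target behavior
    negative_consequences: bool       # e.g. monetary disadvantages


def needs_iteration(ev: NudgeEvaluation) -> bool:
    """True when the nudge should be redesigned (steps 1-3 revisited)."""
    return (not ev.consistent_with_goal
            or not ev.influences_target_behavior
            or ev.negative_consequences)
```

Treating the criteria as conjunctive, so that any single failure triggers iteration, mirrors the formative reassessment character of the evaluation step.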
We view the evaluation phase of the ethical guidelines as a central final step to assure functional purpose and target analysis [52] and to provide a quality standard as part of our key practical contribution to establishing ethical nudging design guidelines. Namely, the evaluation is a reassessment of whether the results match the expectations (steps 1-3) [53]. After having derived and iteratively improved the ethical guidelines for nudging, we aim to test and evaluate our guidelines in the future (e.g. with designers of digital nudges) to provide legitimization for the use of digital nudges in IS-specific contexts.

6. Conclusion

In this paper, we critically reflected on the current status quo of the ethical discussion on digital nudging. We expanded on existing research by drawing distinctions between libertarian nudges, non-libertarian nudges and manipulation, the latter only being present in cases where non-transparent nudges are implemented in the attempt to alter people's decision-making processes and consequent behaviour in a way they would not have chosen by themselves, thus violating their autonomy and freedom of choice. We conclude that designers of a choice architecture have to be able to defend the measures used, thereby self-checking whether their nudge withstands public scrutiny. Our literature-based conceptualization of ethical guidelines reflects the status quo of the existing academic discourse. However, we acknowledge that both providing different case scenarios for the application of our guidelines and possibly mapping them to the structure of a nudging design method, thereby allowing us to derive an actual research question, would highlight the applicability of our guidelines and further improve the contribution to practice. This is why we will consider both aspects in future research. As a next step, the guidelines need empirical validation and discussion with nudgers and nudgees.
We hence suggest implementing and testing our guidelines in a consecutive research project to expand on the contribution for practitioners, validate our guidelines, and provide a concretized call to action for their enforcement.

7. Acknowledgements

We thank Nico Kastunowicz for his support.

8. References

[1] Meske, C. and Potthoff, T., "The DINU Model – A Process Model for the Design of Nudges," in Proceedings of the 25th European Conference on Information Systems, pp. 2587-2597, 2017.
[2] Schneider, C., Weinmann, M. and vom Brocke, J., "Digital Nudging: Influencing Choices by Using Interface Design," Communications of the ACM, vol. 61, no. 7, pp. 1-11, 2018.
[3] Kretzer, M. and Maedche, A., "Designing Social Nudges for Enterprise Recommendation Agents: An Investigation in the Business Intelligence Systems Context," Journal of the Association for Information Systems, vol. 19, no. 12, pp. 1-48, 2018.
[4] Hummel, D. and Maedche, A., "How Effective Is Nudging? A Quantitative Review on the Effect Sizes and Limits of Empirical Nudging Studies," Journal of Behavioral and Experimental Economics, vol. 80, pp. 47-58, 2019.
[5] Thaler, R. H. and Sunstein, C. R., Nudge, International Edition, Penguin Books, London, GB, 2009.
[6] Blumenthal-Barby, J. S., "Choice Architecture: A Mechanism for Improving Decisions While Preserving Liberty?" in C. Coons and M. E. Weber (eds.), Paternalism: Theory and Practice, pp. 178-196, Cambridge: Cambridge University Press, 2013.
[7] Bovens, L., "The Ethics of Nudge," in Preference Change: Approaches from Philosophy, Economics and Psychology, T. Grüne-Yanoff and S. O. Hansson (eds.), pp. 207-219, 2009.
[8] Grill, K., "Normative and Non-normative Concepts: Paternalism and Libertarian Paternalism," in Ethics in Public Health and Health Policy: Concepts, Methods, Case Studies, D. Strech, I. Hirschberg and G. Marckmann (eds.), pp. 27-46, 2013.
[9] Reiss, J.,
Philosophy of Economics: A Contemporary Introduction, 1st Edition, New York: Routledge, 2013.
[10] Oz, E., "Ethical Standards for Information Systems Professionals: A Case for a Unified Code," MIS Quarterly, vol. 16, no. 4, pp. 423-433, 1992.
[11] Weinmann, M., Schneider, C. and vom Brocke, J., "Digital Nudging," Business & Information Systems Engineering, vol. 58, pp. 433-436, 2016.
[12] Maedche, A., "Interview with Prof. Jeroen van den Hoven on 'Why Do Ethics and Values Matter in Business and Information Systems Engineering?'," Business & Information Systems Engineering, vol. 59, no. 4, pp. 297-300, 2017.
[13] Sunstein, C. R., "The Ethics of Nudging," Yale Journal on Regulation, vol. 32, no. 2, pp. 413-450, 2015.
[14] Renaud, K. and Zimmermann, V., "Ethical Guidelines for Nudging in Information Security & Privacy," International Journal of Human-Computer Studies, pp. 1-44, 2018.
[15] Stahl, B. C., "Researching Ethics and Morality in Information Systems: Some Guiding Questions," International Conference on Information Systems, vol. 175, no. 99, pp. 1-17, 2008b.
[16] Lembcke, T. B., Engelbrecht, N., Brendel, A. B. and Kolbe, L., "To Nudge or Not To Nudge: Ethical Considerations of Digital Nudging Based On Its Behavioral Economics Roots," ECIS Proceedings, pp. 1-17, 2019.
[17] Johnson, D. G., Computer Ethics, 3rd Edition, Prentice Hall, 2001.
[18] Ricoeur, P., "De la morale à l'éthique et aux éthiques," in Le Juste 2, Ricoeur, P. (ed.), pp. 55-68, 2001.
[19] Stahl, B. C., "The Ethical Nature of Critical Research in Information Systems," Information Systems Journal, vol. 18, pp. 137-163, 2008a.
[20] Kahneman, D., "Maps of Bounded Rationality: Psychology for Behavioral Economics," American Economic Review, vol. 93, no. 5, pp. 1449-1475, 2003.
[21] Stanovich, K. E. and West, R. F., "Individual Differences in Reasoning: Implications for the Rationality Debate?" Behavioral and Brain Sciences, vol.
23, no. 5, pp. 645-726, 2000.
[22] Kahneman, D., Thinking, Fast and Slow, Penguin Books, London, GB, 2011.
[23] Felsen, G., Castelo, N. and Reiner, P. B., "Decisional Enhancement and Autonomy: Public Attitudes towards Overt and Covert Nudges," Judgment and Decision Making, vol. 8, no. 3, pp. 202-213, 2013.
[24] Rinta-Kahila, T. and Soliman, W., "Understanding Crowdturfing: The Different Ethical Logics Behind the Clandestine Industry of Deception," ECIS Proceedings, pp. 1934-1949, 2017.
[25] Mirsch, T., Lehrer, C. and Jung, R., "Digital Nudging: Altering User Behavior in Digital Environments," Proceedings der 13. Internationalen Tagung Wirtschaftsinformatik (WI 2017), pp. 634-648, 2017.
[26] Zhang, B. and Xu, H., "Privacy Nudges for Mobile Applications: Effects on the Creepiness Emotion and Privacy Attitudes," Proceedings of the 19th ACM Conference on CSCW, ACM Press, pp. 1674-1688, 2016.
[27] Székely, N., Weinmann, M. and vom Brocke, J., "Nudging People to Pay CO2 Offsets – The Effect of Anchors in Flight Booking Processes," ECIS Proceedings, paper 62, pp. 2-10, 2016.
[28] Rodriguez, J., Piccoli, G. and Bartosiak, M., "Nudging the Classroom: Designing a Socio-Technical Artifact to Reduce Academic Procrastination," Proceedings of the 52nd Hawaii International Conference on System Sciences, pp. 4405-4414, 2019.
[29] Esposito, G., Hernández, P., van Bavel, R. and Vila, J., "Nudging to Prevent the Purchase of Incompatible Digital Products Online: An Experimental Study," PLoS ONE, vol. 12, no. 3, e0173333, 2017.
[30] Cao, Z., Hui, K. L. and Hong, X., "An Economic Analysis of Peer Disclosure in Online Social Communities," Information Systems Research, pp. 546-566, 2018.
[31] Kroll, T. and Stieglitz, S., "Digital Nudging and Privacy: Improving Decisions about Self-Disclosure in Social Networks," Behaviour & Information Technology, pp. 1362-3001, 2019.
[32] Stieglitz, S., Potthoff, T.
and Kißmer, T., "Digital Nudging am Arbeitsplatz: Ein Ansatz zur Steigerung der Technologieakzeptanz," HMD Praxis der Wirtschaftsinformatik, vol. 54, no. 6, pp. 965-976, 2017.
[33] Calo, R., "Code, Nudge, or Notice?" University of Washington School of Law Research Paper, vol. 99, no. 2, pp. 773-802, 2014.
[34] Lessig, L., "The Law of the Horse: What Cyberlaw Might Teach," Harvard Law Review, vol. 113, pp. 501-546, 1999.
[35] Hill, T. E., Autonomy and Self-Respect, 2nd Edition, Cambridge University Press, Cambridge, 1991.
[36] Hansen, P. G., "The Definition of Nudge and Libertarian Paternalism: Does the Hand Fit the Glove?" European Journal of Risk Regulation (EJRR), vol. 7, no. 1, pp. 155-174, 2016.
[37] Hansen, P. G., "The Definition of Nudge and Libertarian Paternalism: Does the Hand Fit the Glove?" European Journal of Risk Regulation (EJRR), vol. 7, no. 1, pp. 155-174, 2016.
[38] Lehner, M., Mont, O. and Heiskanen, E., "Nudging – A Promising Tool for Sustainable Consumption Behaviour?" Journal of Cleaner Production, vol. 134, no. A, pp. 166-177, 2016.
[39] Grüne-Yanoff, T., "Old Wine in New Casks: Libertarian Paternalism Still Violates Liberal Principles," Social Choice and Welfare, vol. 38, no. 4, pp. 635-645, 2012.
[40] vom Brocke, J., Simons, A., Niehaves, B., Reimer, K., Plattfaut, R. and Cleven, A., "Reconstructing the Giant: On the Importance of Rigour in Documenting the Literature," ECIS 2009 Proceedings, 2009.
[41] Cooper, H. M., "Organizing Knowledge Syntheses: A Taxonomy of Literature Reviews," Knowledge, Technology & Policy, vol. 1, no. 1, pp. 104-126, 1988.
[42] Hansen, P. G. and Jespersen, A. M., "Nudge and the Manipulation of Choice," European Journal of Risk Regulation (EJRR), vol. 4, no. 1, pp. 3-28, 2013.
[43] Tversky, A. and Kahneman, D., "Rational Choice and the Framing of Decisions," Journal of Business, vol. 59, no. 4, pp. 251-278, 1986.
[44] Jung, J. Y. and Mellers, B. A., "American Attitudes toward Nudges," Judgment and Decision Making, vol. 11, no. 1, pp. 62-74, 2016.
[45] Sunstein, C. R., "Nudging: A Very Short Guide," Journal of Consumer Policy, vol. 37, no. 4, pp. 583-588, 2014.
[46] Heilmann, C., "Success Conditions for Nudges: A Methodological Critique of Libertarian Paternalism," European Journal for Philosophy of Science, vol. 4, no. 1, pp. 75-94, 2014.
[47] Wilkinson, T. M., "Nudging and Manipulation," Political Studies, vol. 61, no. 2, pp. 341-355, 2013.
[48] Saghai, Y., "Salvaging the Concept of Nudge," Journal of Medical Ethics, vol. 39, no. 8, pp. 487-493, 2013.
[49] Faden, R. R. and Beauchamp, T. L., A History and Theory of Informed Consent, Oxford University Press, New York, NY, 1986.
[50] Greenspan, P., "The Problem with Manipulation," American Philosophical Quarterly, vol. 40, no. 2, pp. 155-164, 2003.
[51] Nys, T. R. and Engelen, B., "Judging Nudging: Answering the Manipulation Objection," Political Studies, vol. 65, no. 1, pp. 199-214, 2017.
[52] Venable, J., Pries-Heje, J. and Baskerville, R., "FEDS: A Framework for Evaluation in Design Science Research," EJIS, vol. 25, no. 1, pp. 77-89, 2016.
[53] Wiliam, D. and Black, P. J., "Meanings and Consequences: A Basis for Distinguishing Formative and Summative Functions of Assessment?" British Educational Research Journal, vol. 22, no. 5, pp. 537-548, 1996.
[54] Djurica, D. and Figl, K., "The Effect of Digital Nudging Techniques on Customers' Product Choice and Attitudes towards E-Commerce Sites," Twenty-third Americas Conference on Information Systems, Emergent Research Forum Paper, pp. 1-5, 2017.
[55] Eigenbrod, L. and Janson, A., "How Digital Nudges Influence Consumers – Experimental Investigation in the Context of Retargeting," 26th European Conference on Information Systems, pp. 1-13, 2018.
[56] Huang, N., Chen, P., Hong, Y. and Wu, S., "Digital Nudging for Online Social Sharing: Evidence from a Randomized Field Experiment," 51st Hawaii International Conference on System Sciences, pp. 1483-1491, 2018.
[57] Mirsch, T., Jung, R. and Lehrer, C., "Making Digital Nudging Applicable: The Digital Nudge Design Method," International Conference on Information Systems, pp. 1-16, 2018.
[58] Wang, C., Zhang, X. and Hann, I., "Socially Nudged: A Quasi-Experimental Study of Friends' Social Influence in Online Product Ratings," Information Systems Research, vol. 29, no. 3, pp. 1-15, 2018.
[59] Schneider, D., Lins, S., Grupp, T., Benlian, A. and Sunyaev, A., "Nudging Users Into Online Verification: The Case of Carsharing Platforms," ICIS Proceedings, paper 11, 2017.
[60] Hummel, D., Toreini, P. and Mädche, A., "Improving Digital Nudging Using Attentive User Interfaces: Theory Development and Experiment Design," DESRIST 2018 Proceedings, pp. 1-37, 2018.
[61] Hummel, D., Schacht, S. and Mädche, A., "Designing Adaptive Nudges for Multi-Channel Choices of Digital Services: A Laboratory Experiment Design," in Proceedings of the 25th European Conference on Information Systems, pp. 2677-2688, 2017.
[62] Hausman, D. and Welch, B., "Debate: To Nudge or Not to Nudge," The Journal of Political Philosophy, vol. 18, no. 1, pp. 123-136, 2010.
[63] Tocchetto, D. G., "Searching for the Moral Boundaries of Nudge," Diversitates, vol. 2, no. 2, pp. 14-43, 2010.
[64] Engelen, B., "Ethical Criteria for Health-Promoting Nudges: A Case-by-Case Analysis," The American Journal of Bioethics, vol. 19, no. 5, pp. 48-59, 2019.
[65] Clavien, C., "Ethics of Nudges: A General Framework with a Focus on Shared Preference Justifications," The American Journal of Bioethics, vol. 16, no. 5, pp. 5-15, 2016.
[66] Blumenthal-Barby, J. S. and Burroughs, H., "Seeking Better Health Care Outcomes: The Ethics of Using the 'Nudge'," The American Journal of Bioethics, vol. 12, no. 2, pp. 1-10, 2012.
[67] Fowler, L. R. and Roberts, J. L., "A Nudge Towards Meaningful Choice," The American Journal of Bioethics, vol. 15, no. 5, pp. 76-78, 2019.
[68] Lembcke, T. B., Engelbrecht, N., Brendel, A. B. and Kolbe, L., "To Nudge or Not To Nudge: Ethical Considerations of Digital Nudging Based On Its Behavioral Economics Roots," ECIS Proceedings, pp. 1-17, 2019.
[69] Chouhan, P. and Draper, H., "Modified Mandated Choice for Organ Procurement," Journal of Medical Ethics, vol. 29, pp. 157-162, 2003.
