Learning to Adopt Generative AI
Recent advancements in generative AI, such as ChatGPT, have dramatically transformed how people access information. Despite its powerful capabilities, the benefits it provides may not be equally distributed among individuals, a phenomenon referred to as the digital divide. Building upon prior literature, we propose two forms of digital divide in the generative AI adoption process: (i) the learning divide, capturing individuals’ heterogeneous abilities to update their perceived utility of ChatGPT; and (ii) the utility divide, representing differences in the actual utility individuals derive from each use of ChatGPT. To evaluate these two divides, we develop a Bayesian learning model that incorporates heterogeneity in both the utility and signal functions. Leveraging a large-scale clickstream dataset, we estimate the model and find significant learning and utility divides across various social characteristics. Interestingly, individuals without any college education, non-white individuals, and those with lower English literacy derive larger utility gains from ChatGPT, yet update their beliefs about its utility at a slower rate. Furthermore, males, younger individuals, and those in occupations with greater exposure to generative AI not only obtain higher utility per use of ChatGPT but also learn about its utility more rapidly. In addition, we document a phenomenon termed the belief trap, wherein users underestimate ChatGPT’s utility, opt not to use the tool, and thereby lack new experiences to update their perceptions, leading to continued underutilization. Our simulations further demonstrate that the learning divide can significantly affect the probability of falling into the belief trap, which gives rise to another form of the digital divide in adoption outcomes (i.e., the outcome divide); however, offering training programs can alleviate the belief trap and mitigate this divide.
💡 Research Summary
The paper investigates two previously under‑explored dimensions of the digital divide that arise in the adoption of generative artificial intelligence (AI) tools such as ChatGPT: the learning divide and the utility divide. The learning divide captures heterogeneity in individuals’ ability to update their perceived utility (belief) about the technology, while the utility divide measures heterogeneity in the actual per‑use utility that users derive once they adopt the tool.
To jointly model these phenomena, the authors develop a structural Bayesian learning framework. The model includes (i) a utility function that maps a user’s interaction with the AI to a realized payoff, (ii) a signal function that generates noisy feedback from each interaction, and (iii) a belief‑updating rule that follows Bayes’ theorem. Both the utility and signal functions are allowed to vary across individuals based on observable social characteristics (education, race/ethnicity, gender, age, occupation, English literacy, etc.).
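As a rough illustration of how such a belief-updating rule typically works, the sketch below assumes the standard normal-normal (conjugate) specification common in structural Bayesian learning models; the paper's exact functional forms and covariates may differ, so the notation here should be read as an illustrative assumption rather than the authors' specification.

```latex
% Sketch (assumed, not the paper's exact specification): user i's belief about
% the true per-use utility Q_i of ChatGPT at time t is normal,
% Q_i ~ N(mu_it, sigma_it^2). Each use yields a noisy signal, and the posterior
% follows the standard conjugate update.
\begin{align}
  s_{it} &= Q_i + \varepsilon_{it},
      \qquad \varepsilon_{it} \sim \mathcal{N}\!\left(0,\ \sigma_{\varepsilon,i}^{2}\right) \\[4pt]
  \mu_{i,t+1} &= \frac{\sigma_{\varepsilon,i}^{2}\,\mu_{it} + \sigma_{it}^{2}\,s_{it}}
                     {\sigma_{it}^{2} + \sigma_{\varepsilon,i}^{2}},
      \qquad
  \sigma_{i,t+1}^{2} = \frac{\sigma_{it}^{2}\,\sigma_{\varepsilon,i}^{2}}
                            {\sigma_{it}^{2} + \sigma_{\varepsilon,i}^{2}}
\end{align}
```

Read this way, the utility divide corresponds to heterogeneity in the per-use utility itself, while the learning divide corresponds to heterogeneity in the signal precision: noisier signals mean each use moves the belief less. If a user does not use the tool in a given period, no signal arrives and the belief stays unchanged, which is the mechanism behind the belief trap discussed below.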
Empirically, the authors exploit a large‑scale clickstream panel obtained from a data‑collection firm. The panel follows thousands of users for six months after the public launch of ChatGPT, recording session counts, dwell times, query types, and other usage metrics, together with demographic information. By fitting the Bayesian model to these data, they estimate individual‑specific learning speeds and utility levels.
Key findings:
- Learning Divide – Users without a college degree, non‑white respondents, and those with lower English literacy update their beliefs about ChatGPT’s utility more slowly than their counterparts. Their slower learning leads to prolonged under‑use despite potential benefits.
- Utility Divide – Paradoxically, the same disadvantaged groups experience larger per‑use utility gains (e.g., higher productivity improvements) when they do use the tool. In contrast, males, younger users, and workers in occupations with high exposure to large language models both obtain higher utility and learn faster.
- Belief Trap – The model reveals a self‑reinforcing mechanism: users who initially hold low expectations about AI’s usefulness may avoid using it, thereby missing the informative signals needed to correct their beliefs. This “belief trap” dramatically raises the probability of falling into long‑term non‑adoption, constituting an outcome divide. Simulations show that individuals with slower learning speeds are 2–3 times more likely to enter the trap (see the simulation sketch after this list).
- Policy Simulations – Counterfactual experiments indicate that targeted training programs (designed to increase early exposure to generative AI and to improve users’ ability to interpret feedback) substantially reduce the likelihood of the belief trap and narrow both the learning and utility divides. Simplifying the user interface and providing language‑support tools also accelerate belief updating for low‑literacy users.
The paper’s theoretical contributions are fourfold: (i) a unified dynamic framework that links information uncertainty, learning, and adoption outcomes; (ii) an empirical strategy that leverages real‑world voluntary usage data rather than controlled laboratory experiments, allowing observation of endogenous belief evolution; (iii) the introduction of the “learning divide” and the “belief trap” as novel constructs applicable to a broad range of emerging technologies; and (iv) concrete evidence on the effectiveness of policy interventions aimed at mitigating AI‑related digital disparities.
In sum, the paper argues that the benefits of generative AI are not solely a function of the technology’s capabilities but are critically mediated by users’ learning processes. Addressing the digital divide therefore requires moving beyond provisioning access to actively fostering users’ ability to learn from and correctly assess AI tools, especially for socially disadvantaged groups.
Comments & Academic Discussion
Loading comments...
Leave a Comment