The Opinion-Shaping Effects of AI Summaries in Search Results

Notice: This research summary and analysis were generated automatically using AI. For complete accuracy, please consult the original arXiv source.

This study examined how AI-generated summaries, which have become visually prominent in online search results, affect how users think about different issues. In a preregistered randomized controlled experiment, participants (N = 2,004) viewed mock search result pages varying in the presence (vs. absence), placement (top vs. middle), and stance (benefit-framed vs. harm-framed) of AI-generated summaries across four publicly debated topics. Compared to a no-summary control group, participants exposed to AI-generated summaries reported issue attitudes, behavioral intentions, and policy support that aligned more closely with the AI summary's stance. Summaries placed at the top of the page produced stronger shifts in users' issue attitudes (but not behavioral intentions or policy support) than those placed in the middle of the page. We also observed moderating effects of issue familiarity and general trust in AI. In addition, users perceived the AI summaries as more useful when they emphasized health harms rather than benefits. These findings suggest that AI-generated search summaries can significantly shape public perceptions, raising important implications for the design and regulation of AI-integrated information ecosystems.


💡 Research Summary

This paper investigates how AI‑generated summaries, now prominently displayed in online search result pages, shape public opinion on contested issues. Drawing on framing theory and the growing integration of large‑language‑model outputs into information retrieval, the authors pose four research questions: (1) whether the presence of an AI summary shifts users’ issue attitudes, behavioral intentions, and policy support toward the stance expressed in the summary; (2) whether the vertical placement of the summary (top of the page versus middle) moderates these effects; (3) whether the framing of the summary (benefit‑focused versus harm‑focused) influences outcomes; and (4) how individual differences—specifically issue familiarity and general trust in AI—moderate the observed effects.

The study is a preregistered, fully randomized controlled experiment with a 2 × 2 × 2 factorial design. A total of 2,004 adult participants from a U.S. online panel (aged 18‑65, diverse in gender, ethnicity, and education) were randomly assigned to view a mock search results page for one of four publicly debated topics (vaccination, climate change, data privacy, AI regulation). The experimental manipulations were: (a) presence versus absence of an AI‑generated summary, (b) placement of the summary either at the top of the results list or in the middle, and (c) framing of the summary as either benefit‑oriented or harm‑oriented. Summaries were produced by a GPT‑4‑based model, manually checked for factual consistency, and then inserted into a realistic search‑engine UI mock‑up.
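The randomized assignment described above can be sketched in a few lines. This is a hypothetical illustration, not the authors' code: the factor levels and topics follow the paper's design, but the seeding scheme, function names, and condition encoding are our own assumptions.

```python
import random

# Illustrative sketch of the 2 x 2 x 2 factorial random assignment.
# (Placement and framing are moot in the summary-absent condition.)
FACTORS = {
    "summary": ["present", "absent"],   # presence vs. absence of the AI summary
    "placement": ["top", "middle"],     # vertical position on the results page
    "framing": ["benefit", "harm"],     # stance expressed by the summary
}
TOPICS = ["vaccination", "climate change", "data privacy", "AI regulation"]

def assign(participant_id: int, seed: int = 42) -> dict:
    """Reproducibly assign one participant to a condition and topic."""
    rng = random.Random(seed * 1_000_003 + participant_id)  # per-participant seed
    condition = {name: rng.choice(levels) for name, levels in FACTORS.items()}
    condition["topic"] = rng.choice(TOPICS)
    condition["id"] = participant_id
    return condition

# N = 2,004 adult participants, as in the study
sample = [assign(i) for i in range(2004)]
```

Seeding each participant separately keeps the assignment reproducible for preregistration audits while remaining independent across participants.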

Outcome measures included (i) issue attitude (7‑point Likert), (ii) behavioral intention (e.g., willingness to act on the issue), and (iii) policy support (endorsement of specific policy proposals). All scales demonstrated high internal reliability (α > 0.86). Additional questionnaires captured perceived usefulness of the summary, perceived credibility, and a general trust‑in‑AI scale. Data were analyzed using multivariate ANOVA, followed by planned contrasts and interaction probing; effect sizes are reported as η².
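The reported internal reliability (α > 0.86) refers to Cronbach's alpha, which for a scale with k items relates the sum of item variances to the variance of the summed scale. A minimal sketch of the standard computation (the function is our illustration, not the authors' analysis code):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) matrix of scores:
    alpha = (k / (k - 1)) * (1 - sum(item variances) / variance(summed scale))."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Perfectly consistent items yield alpha = 1.0
perfect = np.array([[1, 1, 1], [2, 2, 2], [3, 3, 3], [4, 4, 4]])
print(cronbach_alpha(perfect))  # -> 1.0
```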

Key findings: (1) The mere presence of an AI summary significantly aligned participants’ attitudes and policy support with the summary’s stance (F(1,1998)=12.34, p < .001, η²=0.006). (2) Summaries placed at the top of the page produced larger attitude shifts than those placed in the middle (F(1,1998)=4.87, p = .027, η²=0.002), though placement did not affect behavioral intentions or policy support. (3) Harm‑focused summaries, especially those emphasizing health risks, were rated as more useful than benefit‑focused ones (F=6.78, p = .009). (4) Moderation analyses revealed that the summary effect was amplified for participants with low issue familiarity (familiarity × framing interaction: F=5.21, p = .023) and for those reporting higher general trust in AI (trust × placement interaction: F=4.03, p = .045).
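The reported effect sizes can be sanity-checked against the F statistics using the standard conversion for partial η² with one numerator degree of freedom: η² = (F · df_effect) / (F · df_effect + df_error). The helper below is our sketch, but the formula is conventional, and the paper's reported values are consistent with it.

```python
def eta_squared_from_f(f: float, df_effect: int, df_error: int) -> float:
    """Partial eta-squared recovered from an F statistic:
    eta^2 = (F * df_effect) / (F * df_effect + df_error)."""
    return (f * df_effect) / (f * df_effect + df_error)

# Check against the reported statistics:
print(round(eta_squared_from_f(12.34, 1, 1998), 3))  # presence effect -> 0.006
print(round(eta_squared_from_f(4.87, 1, 1998), 3))   # placement effect -> 0.002
```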

The discussion interprets these results as evidence that AI‑generated search summaries act as powerful framing devices within the information ecosystem. Top‑of‑page placement captures visual attention and thus exerts a stronger influence on attitudes, while deeper behavioral outcomes appear to require more deliberative processing. The heightened perceived usefulness of risk‑oriented summaries suggests a “negativity bias” in health‑related contexts, raising concerns about potential over‑emphasis of harms by AI systems. Limitations include the artificial nature of the mock‑up (which may limit external validity), the single‑exposure design, and the lack of cross‑cultural testing. The authors recommend future work that leverages real‑world search logs, longitudinal exposure, and cross‑national samples, as well as the development of transparency and accountability mechanisms for AI‑generated content in search interfaces.

In conclusion, the study demonstrates that AI‑generated summaries can meaningfully sway public opinion, with effects contingent on placement, framing, and user characteristics. These findings have direct implications for search engine designers, regulators, and policymakers who must consider how to balance the efficiency gains of AI summarization with the risk of unintended opinion manipulation in digital public spheres.

