Privacy and Safety Experiences and Concerns of U.S. Women Using Generative AI for Seeking Sexual and Reproductive Health Information

Authors: Ina Kaleva, Xiao Zhan, Ruba Abu-Salma, Jose Such

Ina Kaleva, King's College London, London, United Kingdom (ina.1.kaleva@kcl.ac.uk)
Xiao Zhan, VRAIN, Universitat Politècnica de València & University of Cambridge, Valencia, Spain / Cambridge, United Kingdom (xzhan1@upv.es)
Ruba Abu-Salma, King's College London, London, United Kingdom (ruba.abu-salma@kcl.ac.uk)
Jose Such, INGENIO (CSIC-Universitat Politècnica de València), Valencia, Spain (jose.such@csic.es)

Abstract

The rapid adoption of generative AI (GenAI) chatbots has reshaped access to sexual and reproductive health (SRH) information, particularly following the overturning of Roe v. Wade, as individuals assigned female at birth increasingly turn to online sources. However, existing research remains largely model-centered, paying limited attention to user privacy and safety. We conducted semi-structured interviews with 18 U.S.-based participants from both restrictive and non-restrictive states who had used GenAI chatbots to seek SRH information. Adoption was influenced by perceived utility, usability, credibility, accessibility, and anthropomorphism, and many participants disclosed sensitive personal SRH details. Participants identified multiple privacy risks, including excessive data collection, government surveillance, profiling, model training, and data commodification. While most participants accepted these risks in exchange for perceived utility, abortion-related queries elicited heightened safety concerns. Few participants employed protective strategies beyond minimizing disclosures or deleting data. Based on these findings, we offer design and policy recommendations, such as health-specific features and stronger moderation practices, to enhance privacy and safety in GenAI-supported SRH information seeking.

CCS Concepts: • Security and privacy → Social aspects of security and privacy; Usability in security and privacy; Privacy protections; • Human-centered computing → Empirical studies in HCI.

Keywords: Women's health, sexual and reproductive health (SRH), digital health, generative AI (GenAI), privacy, safety.

This work is licensed under a Creative Commons Attribution 4.0 International License. CHI '26, Barcelona, Spain. © 2026 Copyright held by the owner/author(s). ACM ISBN 979-8-4007-2278-3/2026/04. https://doi.org/10.1145/3772318.3791531

ACM Reference Format: Ina Kaleva, Xiao Zhan, Ruba Abu-Salma, and Jose Such. 2026. Privacy and Safety Experiences and Concerns of U.S. Women Using Generative AI for Seeking Sexual and Reproductive Health Information. In Proceedings of the 2026 CHI Conference on Human Factors in Computing Systems (CHI '26), April 13–17, 2026, Barcelona, Spain. ACM, New York, NY, USA, 21 pages. https://doi.org/10.1145/3772318.3791531

1 Introduction

In recent years, the rapid growth of generative AI (GenAI) chatbots has transformed how people access and interact with information across diverse domains, extending beyond traditional search engines [142]. Built on large language models (LLMs) and trained on extensive datasets, GenAI chatbots can generate text, images, and other forms of content in response to user prompts [19]. These tools are developed by a wide range of organizations; prominent examples include OpenAI's ChatGPT, Microsoft's Copilot, and Google's Gemini.
Notably, ChatGPT reached approximately 100 million unique users within two months of its launch, marking the fastest adoption of any online platform to date [42]. Although initially designed for general-purpose use, GenAI tools are increasingly being applied in healthcare, offering new avenues for accessing information related to sexual and reproductive health (SRH) [8].

People assigned female at birth increasingly turn to online resources to seek information on sensitive and often stigmatized SRH issues, as these platforms provide rapid and, in principle, discreet access to information [39, 44, 123]. GenAI chatbots are particularly appealing alternatives to traditional search engines and social media due to their conversational interfaces and perceived anthropomorphism [91, 95, 133, 141], ability to deliver personalized responses [57, 116], perceived anonymity [95], and convenience [141].

In 2022, the Supreme Court's decision in Dobbs v. Jackson Women's Health Organization overturned Roe v. Wade, eliminating the constitutional right to abortion in the United States (U.S.). Following this ruling, legal restrictions on reproductive healthcare have heightened safety risks for individuals seeking abortion and other SRH services, including surveillance, criminalization, social stigma, gender-based violence, and targeted misinformation [89, 113]. Limited access to abortion care in certain states has further amplified privacy and legal concerns, as digital data could potentially be used to prosecute individuals seeking abortion-related or other SRH-related information [58].

Prior research has examined the privacy and safety behaviors of users of female-oriented technology (FemTech), particularly in the context of period-tracking apps [26, 97]. The dynamic, conversational interactions between users and GenAI chatbots generate substantial amounts of personal data, often encompassing sensitive and potentially legally contentious topics, raising concerns about data breaches and misuse [1, 48]. However, research on GenAI chatbots for SRH information remains limited. Existing studies have primarily been model-centered, focusing on clinical efficacy and reliability rather than users' experiences or the broader societal implications of these tools [10, 12, 27, 132].

To address this gap, we conducted semi-structured interviews with 18 participants who sought SRH information via GenAI chatbots in the post-Roe era, focusing on their privacy and safety experiences. The study aims to understand the facilitators and barriers to adopting and using GenAI chatbots in this context, participants' perceptions of data practices, their privacy and safety concerns, and the strategies they employed to mitigate risks. Our research questions (RQs) are as follows:

RQ1. What factors facilitate or hinder the adoption and use of GenAI chatbots for SRH information seeking?

RQ2. What are users' beliefs about data flows and practices when using GenAI chatbots for SRH information?

RQ3. What privacy and safety risks are users concerned about when using GenAI chatbots to seek SRH information? How do these risk perceptions differ across restrictive and non-restrictive states, and across different SRH topics?
RQ4. What measures or strategies do GenAI chatbot users employ (or would consider employing) to protect against these risks, and how can GenAI chatbots be improved to better safeguard users seeking SRH information?

We found that participants used GenAI chatbots to seek answers to a range of SRH queries due to the tools' utility, usability, perceived credibility, accessibility, and anthropomorphism. Barriers to adoption included limited usefulness for serious health conditions, usability challenges, lack of perceived credibility, risk of bias, and absence of human empathy and experience (RQ1). Participants often expressed uncertainty or held inaccurate beliefs about GenAI chatbots' data practices, including data collection, processing, sharing, and deletion (RQ2). They also perceived GenAI chatbots as posing higher privacy risks than other SRH information sources (e.g., search engines, period-tracking apps, social media, and healthcare providers), primarily due to the large volume of personal data collected and processed. Additional perceived risks included model training, government surveillance, user profiling, advertising, and insufficient regulatory protections. Participants identified several potential harms from privacy breaches of their SRH-related conversations with GenAI chatbots, such as criminalization, emotional distress, harassment, and stigmatization. Most were willing to share SRH information in exchange for utility, except for abortion-related queries and, in some cases, other stigmatized topics, such as sexually transmitted infections (STIs) or sexual orientation and gender identity (SOGI) (RQ3). Few participants employed protective measures beyond data minimization and deletion (RQ4). Based on these findings, we propose socio-technical recommendations to enhance privacy and safety in GenAI chatbots for SRH information seeking, including introducing health-specific interactive privacy features co-designed with end-users, strengthening legal and regulatory protections, implementing SRH-adapted moderation rules, and promoting greater transparency.

To our knowledge, this is the first study to examine participants' privacy and safety experiences with GenAI chatbots in the context of seeking SRH information. While prior research has focused on technical aspects and the clinical efficacy of GenAI chatbots, our study provides a novel perspective by offering a comprehensive, in-depth understanding of user experiences, privacy perceptions, and risk mitigation strategies in the SRH context. The insights from this research can inform the development of more user-centric, privacy- and safety-preserving, and ethically designed GenAI technologies.

2 Related Work

2.1 GenAI Chatbots as a Source of SRH Information

The Internet provides a wide range of sources for SRH information, including websites, social media platforms, blogs, and forums [50]. The emergence of GenAI chatbots is transforming healthcare and online information seeking by enhancing efficiency, accessibility, and personalization [8, 37]. For example, ChatGPT has demonstrated the potential to meet or surpass the passing threshold for the U.S. medical licensing exam [46]. In the context of SRH, GenAI chatbots have performed well in answering questions about fertility [13, 32], pregnancy [61], abortion [56], birth control and contraception [24], endometriosis [108], and sexual health, including STIs [63, 66].
They have the potential to improve SRH education and literacy [13, 24], provide innovations in menstrual health management [3], address inequalities in SRH care [3], and optimize doctor-patient communication [128, 138].

Despite their potential, GenAI chatbots have several limitations. These include hallucinations that generate unreliable content, lack of contextual understanding, inaccurate source referencing, biased training data, unclear information origins, and no interaction with professionals, all of which can pose risks to physical safety [14, 16]. For example, ChatGPT has shown inaccuracies, such as misrepresenting the safety of self-managed medication abortions [84]. Performance inconsistencies have also been observed across different GenAI chatbots, including ChatGPT, Google Bard, and Microsoft Bing [24, 61, 85, 114, 115]. Comparative studies indicate that Google Bard and ChatGPT-3.5 outperform Microsoft Bing and ChatGPT-4.0 in providing references and ensuring readability when addressing contraception-related queries [114, 115].

In terms of user experience, a recent study found that participants considered ChatGPT easy to use for accessing general health information [6]. Additionally, a cross-sectional study showed that people seeking health information through ChatGPT perceived it as equally or more useful than other online resources and even their doctors [11]. Consequently, participants sometimes requested referrals or changed medications based on information obtained from ChatGPT. Another study examined participants' perceptions of the accuracy of "at-home" abortion remedy information provided by ChatGPT and featured in TikTok videos [125]. In this study, participants were sometimes less likely to label the video content as misinformation when they knew it was generated by ChatGPT, suggesting that GenAI chatbots may be perceived as reliable sources for SRH information.

2.2 Privacy and Safety of GenAI Chatbots for SRH Information Seeking

With the overturning of Roe v. Wade, there has been a notable increase in research examining privacy issues associated with the digitalization of the reproductive body, particularly focusing on period-tracking apps and other FemTech-related tools and devices, which pose unique safety risks to users [9, 26, 39, 80, 81, 88]. Intimate SRH data can reveal deeply personal information about a user's lifestyle, health status, and reproductive decisions, including sexual activity, pregnancy status, abortion history, menstrual health, and fertility. Mismanagement or misuse of such data can have serious consequences, including criminalization [52], workplace monitoring [20, 87], discrimination [35, 122], harassment [89], intimate partner violence [113, 119], targeted misinformation [89], and criminal blackmail [9].

The use of GenAI chatbots to access SRH information introduces complex privacy and safety risks for several reasons. First, the large volume of sensitive data collected during open-ended, interactive, and dynamic conversations with GenAI chatbots is processed by AI algorithms, which can encourage greater engagement and often lead to increased disclosure of personal information [124, 141].
Second, AI’s ability to infer sensitive information from user interac- tions or other available datasets extends the scope of data collected beyond what users may intentionally disclose [ 8 , 72 , 118 , 124 , 126 ]. Consequently , seemingly neutral data can be combined to infer sensitive health conditions and personal choices; for example, dis- cussions about menstrual cycles and symptom patterns can reveal pregnancy status or other reproductive health issues. Thir d, to de- velop and rene their models, GenAI chatbots may use training data derived from users’ conversations, potentially including personally identiable information (PII) and other sensitive details that users did not explicitly consent to share. As a result, the model could memorize sensitive SRH information and inadvertently expose it in responses to other users’ prompts due to insucient data protection measures [4, 28, 29, 124, 140]. Existing research on GenAI chatb ots is primarily mo del-center ed, focusing on technical privacy risks [ 28 , 69 ], and provides limited insight into participants’ privacy experiences [ 68 , 78 , 139 , 141 ]. A recent study investigating how users manage disclosure risks and benets when interacting with LLM-based conversational agents found that participants frequently faced trade-os b etw een privacy , utility , and ease of use [ 141 ]. Inaccurate mental models often lim- ited participants’ awar eness and understanding of the privacy risks posed by these agents [ 141 ]. Moreover , human-like interactions encouraged participants to disclose more sensitive information, making it more dicult to navigate these trade-os eectively [ 141 ]. More recent work has shown that these human-like interac- tions can be further exploited maliciously to prompt users to re veal even more p ersonal data [ 139 ]. Another study explored privacy and security concerns among users interacting with general-purpose GenAI chatbots for mental health support [ 65 ]. Through 21 semi- structured interviews, the study found that participants frequently misunderstood the protections surrounding these chatbots, con- ating their human-like empathy with human accountability and assuming that regulatory safeguards such as the Health Insurance Portability and Accountability Act (HIP AA) apply . The study also in- troduced the concept of “intangible vulnerability , ” highlighting that mental health disclosures ar e often under valued by users compared to tangible data, such as nancial information. T o support user-centric design and development of GenAI chat- bots for SRH information se eking, it is essential to understand participants’ experiences. In this paper , we examine the factors that facilitate and hinder the adoption of GenAI chatbots for SRH information seeking, participants’ p er ceptions of data ows within the GenAI ecosystem, their privacy and safety concerns, and the protection strategies they employed or would like to se e imple- mented. 3 Methods W e conducted semi-structured inter vie ws with 18 U .S.-base d partic- ipants assigned female at birth who used a general-purp ose GenAI chatbot to seek SRH information. 3.1 Participant Re cruitment Participants were eligible if they had experience using a general- purpose GenAI chatb ot (e.g., ChatGPT , Gemini, Copilot) to se ek SRH–related information or advice; resided in the U.S.; were be- tween 18 and 45 years old; were assigned female at birth; and spoke English. W e focuse d on U.S. 
We focused on U.S. residents aged 18 to 45 because this group is more likely to be within their reproductive span [99], affected by the Roe v. Wade overturn [59], and active users of GenAI chatbots [94].

We developed a screening questionnaire to collect information on participants' GenAI chatbot use, demographics, privacy concerns, attitudes toward abortion, technical background, and the SRH topics they had searched for. The screener was hosted on Qualtrics, included a detailed study description, and obtained informed consent for data collection. Participants were recruited through Prolific. Prolific prescreening criteria included age (18–45), sex (female), and current state of residence (Alabama, Arkansas, California, Colorado, Connecticut, Delaware, Hawaii, Idaho, Illinois, Indiana, Kentucky, Louisiana, Maryland, Massachusetts, Michigan, Minnesota, Mississippi, Missouri, Montana, Nevada, New Jersey, New Mexico, New York, North Dakota, Ohio, Oklahoma, Oregon, Tennessee, Texas, Rhode Island, Vermont, Virginia, Washington, and West Virginia).[1] Screener respondents were compensated at an average rate of $23 per hour through Prolific.

[1] At the time of the study, abortion was legal (until viability or with no gestational limit) in California, Connecticut, Colorado, Delaware, Hawaii, Illinois, Maryland, Massachusetts, Michigan, Minnesota, Missouri, Montana, Nevada, New Jersey, New Mexico, New York, Ohio, Oregon, Rhode Island, Vermont, Virginia, and Washington, and fully banned in Alabama, Arkansas, Idaho, Indiana, Kentucky, Louisiana, Mississippi, North Dakota, Oklahoma, Tennessee, Texas, and West Virginia.

A total of 420 interested candidates completed the screener, of whom 201 met the eligibility criteria. Given the exploratory, in-depth nature of our qualitative study, we did not aim for statistical representativeness [38]. Instead, we used purposive sampling to achieve demographic and experiential diversity [112]. Specifically, we aimed to (1) include users of different GenAI chatbots to avoid overrepresenting ChatGPT users; (2) capture a range of SRH topics to enable comparisons across privacy concerns; (3) achieve proportional demographic diversity (e.g., age groups, ethnic backgrounds, gender identities); (4) balance participants with low (0–4) and high (5–10) privacy concern scores; and (5) recruit equal numbers of participants from restrictive states (where abortion is legally banned) and non-restrictive states (where abortion is legal until viability or permitted past 18 weeks) to capture diverse legal contexts. Eligible participants were invited on a rolling basis from this pool, with invitations sent in batches due to nonresponse or scheduling conflicts. In total, we extended 110 invitations and continued recruitment until we obtained a balanced sample of 18 participants and achieved data saturation within our purposively selected sample [112] (see Table 1). Participants were compensated $40 for completing the interview via Prolific. The screener instrument is provided in §A.1.
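To make criteria (4) and (5) above concrete, the sketch below illustrates one way such purposive quotas could be drawn from screener data. It is an illustration only: the field names and pool are hypothetical, and the study invited participants on a rolling basis rather than via a script.

```python
import random

def pick_balanced(candidates, per_stratum=4, seed=0):
    """Draw candidates evenly across restrictive/non-restrictive states
    (criterion 5) and low (0-4) vs. high (5-10) privacy scores (criterion 4)."""
    rng = random.Random(seed)
    selected = []
    for restrictive in (True, False):
        for low_concern in (True, False):
            stratum = [c for c in candidates
                       if c["state_restrictive"] == restrictive
                       and (c["privacy_score"] <= 4) == low_concern]
            rng.shuffle(stratum)
            selected.extend(stratum[:per_stratum])
    return selected

# Hypothetical eligible screener respondents (field names are illustrative).
pool = [
    {"id": 101, "chatbot": "ChatGPT", "state_restrictive": True,  "privacy_score": 3},
    {"id": 102, "chatbot": "Gemini",  "state_restrictive": False, "privacy_score": 8},
]
invitees = pick_balanced(pool)
```

In practice, the remaining criteria (chatbot diversity, SRH topics, demographics) would guide which candidates fill the final slots.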
3.2 Interview Procedure

We conducted 18 semi-structured interviews. During the interviews, we took detailed notes and assessed after every one to two sessions whether data saturation had been reached, that is, whether the data continued to yield additional insights within the scope of our RQs [121]. This process guided the point at which no further participants were recruited, which we refer to as data saturation. The research team also assessed data saturation during the coding and theme development process, confirming that the data were sufficient to allow meaningful interpretation to answer our RQs (see §3.4) [121]. Each interview lasted between 60 and 90 minutes and was conducted via video chat using our institution's Microsoft Teams account.

The interview guide encompassed a broad set of questions to encourage data saturation at the individual level [121]. Interviews began with warm-up questions about participants' experiences with general-purpose GenAI chatbots for seeking SRH information and advice, including comparisons to other sources. We then explored participants' perceptions and concerns regarding interaction quality and AI-generated output. Next, we asked about their views on data practices, including data collection, processing, use, storage, retention, sharing, and deletion. For participants without direct experience with data deletion, we asked hypothetical questions. We also investigated participants' privacy and safety concerns, as well as any protective strategies they employed. The subsequent questions examined participants' views on existing privacy regulations for GenAI chatbots. Finally, closing questions focused on participants' preferences and design recommendations for enhancing privacy and safety when using GenAI chatbots for SRH information. No time limits were imposed on responses, and each interview concluded with an open prompt for additional thoughts. Our interview instrument can be found in §A.2.

Piloting. We conducted three pilot interviews to refine question clarity, prioritize topics, and improve question ordering. Following the pilot interviews, we removed repetitive questions, combined overlapping items (e.g., questions exploring privacy and security separately, or data storage, retention, and safeguards across different data types), and refined the wording and definitions of several items (e.g., Q13–Q17, Q43). We also added prompts about specific privacy settings available in existing GenAI chatbots (Q46). Pilot interviews were excluded from the analysis. Further minor adjustments were made as needed while conducting the 18 interviews included in the analysis, consistent with the flexibility inherent in semi-structured interviews. All interviews were transcribed verbatim, and transcripts were checked by the authors for accuracy.

3.3 Research Ethics

The Research Ethics Committee at King's College London reviewed and approved our study. To conduct the interviews, we used anonymous Prolific IDs and institutional Microsoft Teams links. Participants did not need a Teams account to join. All participants received a consent form and information sheet outlining study details, including example questions and potential risks. Before the interviews, participants were assured that their responses, including sensitive or potentially criminalizing information regarding their SRH, such as abortion disclosures, would remain anonymous and confidential, and that they could skip questions or withdraw without consequences. Consent for recording and transcription was explicitly obtained through both the consent form and verbally prior to the interview.
Because video recording might pose risks to data confidentiality, participants were given the option to decline recording and transcription and instead allow the researcher to take notes only. No participants chose to refuse recording or transcription. After each interview, participants had the opportunity to discuss how they felt and ask questions. They also received information about the privacy and security settings available in their most used GenAI chatbot. We also followed trauma-informed research practices [55] by offering participants mental health resources, including support from Mental Health America via the Prolific message system. No cases of distress occurred. The transcripts were fully anonymized by assigning individual IDs and removing PII (e.g., names mentioned during the interviews) and contextual identifiers (e.g., references to profession). To fully guarantee data confidentiality, audio and video recordings were permanently deleted immediately after transcription, and all data files and transcripts were securely stored on institutional OneDrive.

3.4 Data Analysis

We employed a Thematic Analysis (TA) approach inspired by Braun and Clarke's guidelines [18]. We inductively coded the transcripts using MAXQDA. Although interviews were automatically transcribed using Microsoft Teams, we took notes of insights throughout the interview process and maintained reflexivity. Two researchers independently coded two randomly selected transcripts to develop an initial coding frame. They then discussed their coding frames and merged them into one frame after resolving any disagreements. A third researcher also provided input to refine the frame. The two researchers repeated this step, coding a different transcript each time to iteratively expand the frame until no new codes were introduced, thereby reaching code saturation. After finalizing the coding frame, the two researchers re-coded all the interviews using the finalized frame. Inter-rater reliability (IRR) (κ = 0.70) was calculated using the Intercoder Agreement function in MAXQDA, with a minimum code overlap rate of 1% at the segment level, and only for codes included in the reported themes in §4. Although the use of IRR in qualitative research is debated, as it can be seen as misaligned with the epistemological principles of TA [109], in this study, IRR was not used as a strict measure of reliability. Instead, it served as a tool to support reflexivity and engagement with the data [109]. Specifically, it helped identify different interpretive perspectives, improved communicability, and clarified the application of codes (e.g., recognizing that higher-level codes were not always necessary when lower-level codes were more appropriate) to ensure coding consistency. We report our IRR value in this paper for transparency purposes. Our final coding frame can be found at this [link].
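As a worked illustration of the agreement statistic reported above: Cohen's kappa compares observed coder agreement against the agreement expected by chance. The sketch below uses hypothetical segment labels; the study itself computed its value with MAXQDA's Intercoder Agreement function, not this script.

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders labeling the same segments."""
    n = len(coder_a)
    # Observed agreement: fraction of segments where both coders agree.
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Chance agreement: product of each coder's marginal label frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n)
              for c in set(coder_a) | set(coder_b))
    return (p_o - p_e) / (1 - p_e)

# Hypothetical codes assigned by two coders to five transcript segments.
coder_a = ["utility", "privacy_risk", "utility", "credibility", "privacy_risk"]
coder_b = ["utility", "privacy_risk", "credibility", "credibility", "privacy_risk"]
print(round(cohens_kappa(coder_a, coder_b), 2))  # 0.71
```

With these toy labels, agreement on four of five segments against a chance level of 0.32 yields κ ≈ 0.71, close to the value reported for the full dataset.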
The rst author led theme dev el- opment, and the research team engaged in iterative discussions to deep en the analysis and provide additional interpretative per- spectives, which were treated r eexively ( embracing researchers’ subjectivity) rather than perceived as disagreements [ 101 ]. Conse- quently , the themes evolved signicantly throughout the analysis: several wer e rened and new themes wer e developed. For example, following initial theme development, w e deepened the analysis fur- ther by e xamining dierences b etw een participants from r estrictive and non-restrictive states, as well as across various SRH topics, highlighting themes that were unique or shared among sp ecic characteristics (e .g., technical background) [ 31 ]. Thematic rene- ment continued until themes became conceptually complete, and additional data generated only further examples rather than new insights, achieving thematic saturation. W e also reviewed the e x- cerpts for each topic to further rene the themes. As part of the reexive natur e of T A, the themes reected what the authors saw as important for answering the RQs. 3.5 A uthor Positionality As resear chers, we acknowledge that our backgrounds, identities, and p erspectives shaped how we appr oached this study , interpreted the data, and engaged with participants. The authors were three women and one man with extensive experience conducting research at the intersection of human-computer interaction (HCI) and com- puter security/privacy , with a focus on at-risk populations. W e also recognize that residing outside the U .S., in contexts where abortion access is less restricted, might have inuenced our interactions with participants living in more restrictive states. For example, our own commitments to r eproductive rights could aect how we in- terpreted participants’ experiences and narratives; however , we mitigated this inuence by maintaining reexivity throughout the research and critically examining our assumptions at each stage . 3.6 Limitations Given the private and sensitive nature of SRH information partici- pants might have sought using GenAI chatbots, some might have been hesitant to disclose conversations involving highly personal or potentially stigmatizing content [ 141 ]. T o reduce social desirability bias, we avoided leading questions and emphasized that there were no right or wrong answers [ 51 ]. This study focused exclusively on participants who use d GenAI chatbots for SRH information, po- tentially excluding insights from individuals who chose not to use such tools; future work should examine and compare perspectives from non-users with our ndings. Another limitation arose from adjustments made to the inter view script after inter viewing the rst three participants included in the analysis (distinct from pilot par- ticipants). These three participants were not explicitly asked ab out their views on using GenAI chatbots for abortion-related queries, though one raised the topic unprompted, and therefor e the other two might not have had the opportunity to share their p erspectives. Howev er , insights regarding abortion-related topics were obtained from the remaining 16 participants who had the opportunity to share their persp ectives. For our qualitative study , the screening survey employed a single item to assess self-reported privacy con- cerns, with the aim of minimizing completion time. 
This approach provided a general indication of participants' privacy concerns to inform participant diversification, rather than capturing multiple dimensions through a fully validated scale. Future confirmatory research could incorporate quantitative measures to test hypotheses generated qualitatively in this study. Finally, we focused on adults aged 18–45, as this group is more likely to be in their reproductive span [99] and to face the risk of prosecution following the overturning of Roe v. Wade [59]. While our focus might limit insights into privacy perceptions during later life stages (e.g., menopause), four participants discussed menopause-related topics. Future research could examine privacy perspectives among older adults seeking SRH information or those not directly at risk of abortion-related prosecution.

4 Results

This section reports findings from 18 semi-structured interviews conducted to address RQ1–RQ4. Participants were evenly divided between restrictive (P1–P9) and non-restrictive (P10–P18) states. Table 1 summarizes participants' demographics, the GenAI chatbots they most frequently used to seek SRH information, the SRH topics they inquired about, and their self-reported levels of privacy concern regarding the use of GenAI chatbots (rated on a 0–10 scale), as collected in the screening survey. Three participants (P8, P13, and P16) self-reported having technical backgrounds. An overview and summary of all themes and sub-themes are presented in Table 2.

Table 1: Participants' use of GenAI tools, SRH topics they sought information on, privacy concerns, attitudes toward abortion, and demographics.

ID | Most used chatbot | SRH topic(s) | Privacy concern (0–10) | Attitude toward abortion as health care | State | Abortion access | Gender | Age | Ethnicity
P1 | Meta AI | Menstrual bleeding; cramps; nutrition | 5 | Disagree | Alabama | Banned | Woman | 29 | Black or African American
P2 | ChatGPT | Emergency contraception; pregnancy testing | 4 | Agree | Texas | Banned | Woman | 30 | Asian or Asian American
P3 | ChatGPT | SRH symptoms (unspecified) | 0 | Strongly Agree | West Virginia | Banned | Non-binary | 29 | White or Caucasian
P4 | Gemini | Tubal ligation failure | 7 | Neither Agree nor Disagree | Texas | Banned | Woman | 31 | Black or African American
P5 | Microsoft Copilot | Menstrual cycles and fertility window | 2 | Strongly Agree | Tennessee | Banned | Woman | 29 | Black or African American
P6 | ChatGPT | Menstrual cycles and nutrition | 6 | Strongly Agree | Texas | Banned | Woman | 28 | Asian or Asian American
P7 | Microsoft Copilot | Access to abortion clinics; fertility tracking for birth control | 1 | Strongly Agree | Indiana | Banned | Woman | 27 | American Indian or Alaska Native
P8 | ChatGPT | Abortion laws | 7 | Strongly Agree | Tennessee | Banned | Woman | 30 | White or Caucasian
P9 | ChatGPT | Perimenopause symptoms; STIs | 4 | Neither Agree nor Disagree | Texas | Banned | Woman | 36 | White or Caucasian
P10 | ChatGPT | HPV treatment | 4 | Strongly Agree | Washington | Legal | Woman | 41 | White or Caucasian
P11 | Microsoft Copilot | Perimenopause symptoms | 7 | Strongly Agree | Washington | Legal | Woman | 45 | Asian or Asian American
P12 | ChatGPT | Side effects, costs, and access to different contraception methods | 4 | Strongly Agree | Washington | Legal | Woman | 22 | Asian or Asian American
P13 | ChatGPT | Abortion laws; LGBTQ-related resources; menstrual cramps | 9 | Strongly Agree | Virginia | Legal | Non-binary | 23 | White or Caucasian
P14 | ChatGPT | Gender-affirming hormone therapy; interpretation of hormone profile results | 7 | Strongly Agree | New York | Legal | Non-binary | 29 | White or Caucasian
P15 | ChatGPT | At-home abortion care and guidance | 6 | Strongly Agree | Ohio | Legal | Woman | 37 | White or Caucasian
P16 | Gemini | Perimenopause symptoms | 3 | Strongly Agree | Massachusetts | Legal | Woman | 40 | Black or African American
P17 | ChatGPT | Contraception; PCOS; Hepatitis B; mammography | 4 | Agree | California | Legal | Woman | 37 | White or Caucasian
P18 | Gemini | Perimenopause symptoms | 9 | Strongly Agree | California | Legal | Non-binary | 40 | White or Caucasian

4.1 Facilitators of and Barriers to the Adoption of GenAI Chatbots (RQ1)

This section examines facilitators of and barriers to adopting GenAI chatbots for seeking SRH information, including utility, usability, credibility, equity and accessibility, and anthropomorphism.

4.1.1 Utility. Most participants (n=15) found GenAI chatbots useful and effective for seeking SRH information, often viewing them as more informative than other online sources or healthcare professionals: "that was way more information than I received from my physician" (P5). Half of participants (n=9) appreciated the personalization of the content: "it makes me feel like it's more catered to what's going on with my body" (P5).

The majority of participants (n=10) made decisions about their (and sometimes others') health based on generated SRH information, including scheduling medical appointments (e.g., surgery consultations), adjusting diet and nutrition, managing symptoms (e.g., during at-home abortion care), managing conditions such as polycystic ovary syndrome (PCOS), and implementing birth control changes (e.g., contraceptive pills or fertility awareness methods). The responses also enhanced participants' understanding of SRH issues.

Almost all participants (n=17) reported certain limitations. They preferred consulting healthcare professionals for serious or emergency concerns. Some found the AI-generated information too generic, reflecting the chatbots' limited understanding of personal contexts. Participants also emphasized GenAI's lack of "lived experiences": "It [ChatGPT] doesn't have experience managing menstrual health. In those cases, I would probably go to social media to hear somebody's actual experience" (P6). Some participants also found AI-generated information unhelpful due to distrust or a lack of novel information.

4.1.2 Usability. All participants (n=18) reported using a GenAI chatbot due to its ease of use and convenience.
They emphasized several advantages of GenAI chatbots, including 24/7 availability, quick access to information, the ability to revisit past conversations, and an interactive interface. These features were seen as significant benefits compared to traditional methods, such as consulting healthcare professionals, which are often limited by time constraints: "I don't have to schedule an appointment and wait for that information. I can get it right away" (P3). Participants also frequently highlighted the benefit of information consolidation and synthesis: "I don't have to search through different websites just to find one sentence of what I want to know" (P5).

4.1.3 Credibility. Participants' trust in the generated information varied, ranging from high to low: "I do trust it, to be honest. I don't ever sit there and question it" (P9). Despite these mixed perceptions, most participants (n=14) acknowledged the risks of SRH misinformation, hallucinations, or intentional disinformation, which could result in delayed care, harmful decisions, or even death. To mitigate these risks, participants took a critical approach, verifying new information by consulting external sources, discussing it with peers, asking their GenAI chatbot for sources, or consulting a different GenAI chatbot. Some participants also avoided using GenAI chatbots for personal or serious health issues. A small number of participants (n=4) did not attempt to verify the SRH information obtained from their GenAI chatbot, with one participant relying on the chatbot itself as a verification tool.

We identified several factors influencing participants' perceptions of GenAI chatbots' credibility, categorized as source-, content-, and channel-level factors, following the framework described in Ou and Ho's [106] meta-analysis. A source-level factor was company reputation. For example, one participant trusted Google's Gemini to provide accurate information due to the company's reputation. However, two participants held misconceptions about ChatGPT's ownership, which could inadvertently affect their trust: "I trust the Google ChatGPT a bit more when it comes to the answers I want" (P7). Content-level factors included perceived information quality and content fluency. Almost all participants (n=17) considered the SRH information they received accurate, up-to-date, and readable, though some limitations were noted: "those locations weren't able to give you an abortion because in the state of Texas, it's illegal... it's really not accurate because you weren't able to get an abortion in those places" (P7). While participants trusted GenAI chatbots because the content "reads like it comes from experts" (P5) and cites "reputable sources" (P3), some (n=5) expressed concern that the information could be influenced by less credible sources, such as blogs, chatbot programmers, or the broader Internet, including the "dark web" (P8). A channel-level factor, interactivity, was also identified. Some participants (n=6) reported that their trust and overall positive attitudes were influenced by the GenAI chatbot's ability to maintain dialogue, respond to user feedback, and self-correct: "I have noticed that the AI will even correct itself. If they tell me something and I'm like, 'Oh, I don't think that was true,' and they were like 'Oh, I'm sorry!' They'll even regroup what they told me" (P1).

Table 2: Overview and descriptions of themes, sub-themes, and recommendations.

Barriers and facilitators
- Utility: Participants frequently found GenAI chatbots effective for seeking SRH information due to their personalization and level of detail. Limitations included reduced effectiveness for serious health needs, absence of "lived experiences," and limited understanding of personal contexts, which sometimes made responses too generic.
- Usability: Participants highlighted the ease of use, 24/7 availability, interactive interfaces, and rapid consolidation of information as key reasons for using GenAI chatbots.
- Credibility: Perceptions of credibility were mixed. Influencing factors included source-level aspects (e.g., company reputation, perceived expertise, origin of information), content-level aspects (e.g., accuracy, timeliness, clarity), and channel-level aspects (e.g., interactivity, ability to self-correct). Participants also noted risks of inaccuracies and hallucinations.
- Equity and accessibility: Participants appreciated the affordability of GenAI chatbots. Opinions on information equity varied: some viewed chatbots as neutral, while others expressed concerns about bias and state-specific censorship of SRH information.
- Anthropomorphism: Some participants valued the compassionate and nonjudgmental interaction style of chatbots, whereas others perceived them as robotic.

Beliefs about data practices
- Data collection: Most participants believed that both conversational and non-conversational data were collected, while some thought only conversational data was collected.
- Data processing: Participants perceived that GenAI chatbots retrieved information from databases and/or the Internet to generate responses.
- Data deletion: Participants described various deletion mechanisms: in-app or website settings, contacting customer support, deleting their full account, closing the browser, or requesting the chatbot to delete data.
- Data recipients and purposes of use: Participants commonly believed data was shared with the GenAI company for service improvement, with third parties for profit, and with government or law enforcement entities.

Perceived privacy risks
- Excessive data collection: Participants were concerned about large volumes of contextual SRH data being collected. They felt prompted to disclose more personal information due to chatbots' interactivity, perceived intimacy, and personalization.
- Government access and surveillance: Participants worried that GenAI companies could be subpoenaed or surveilled by government or law enforcement, or that chatbots could flag sensitive data, potentially leading to criminalization.
- User profiling: Participants highlighted the risk of chatbots inferring sensitive or incorrect information (e.g., demographics, health conditions, SRH choices) from interactions.
- Model training: Participants expressed concerns about personal SRH information being used for model training and response generation.
- Data selling and advertising: Concerns were raised regarding unwanted, offensive, or intrusive advertisements targeted based on SRH conditions.
- Lack of regulatory protections: Participants noted a lack of confidence in existing privacy laws applicable to SRH data in GenAI chatbots.

Dynamics
- SRH topics: Most participants felt comfortable sharing SRH information, except on criminalizing or stigmatizing topics such as abortion, SOGI-related issues, and STIs. Only two participants (both with technical backgrounds) considered any SRH topic too sensitive to search in GenAI chatbots.
- Cross-state dynamics: Concerns about seeking abortion-related information were more prevalent among participants in restrictive states, primarily due to fear of criminalization. Conversely, participants in non-restrictive states reported feeling safer, citing lower-risk demographic profiles and higher levels of trust in GenAI companies.

User-driven mitigation strategies
- Data minimization: Participants minimized the data provided, particularly PII, demographics, and sensitive health information.
- Using separate accounts: Participants used separate accounts for personal and professional purposes due to privacy concerns related to conversation history.
- Data management and controls: Participants engaged in data deletion and suggested control settings to reduce data collection and tracking. Exporting data copies was seen as both beneficial and potentially risky.
- External privacy tools: Participants reported using, or intending to use, private browsing or privacy-preserving tools.

Recommendations
- Public awareness and involvement: Educational programs to improve GenAI literacy for users and clinicians; co-design workshops as part of Participatory Action Research (PAR).
- Privacy by Design: RLHF to train models to handle sensitive prompts responsibly; conservative filtering to remove or obfuscate sensitive content before user data enters the training pipeline; default opt-outs from model training; and the advancement of machine-unlearning techniques and auditing tools to ensure effective data deletion.
- Interactive privacy: Explicit consent during sensitive conversations; gentle discouragement of oversharing; safety warnings; reminders of privacy settings; and redirection to trusted resources.
- Improved transparency: Explainable AI (XAI) approaches, including open-source models, data visualizations, alerts, and privacy nudges.
- GenAI regulatory protections: Dedicated "Health" models compliant with medical privacy laws; secure deployment of GenAI chatbots by healthcare providers under strict local data controls; and co-regulation between regulatory agencies and the industry.
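To make the "conservative filtering" idea under Privacy by Design in Table 2 more concrete, the sketch below shows one minimal way sensitive spans could be redacted before conversations enter a training pipeline. It is an illustration only: the patterns, placeholder tokens, and keyword list are hypothetical, not a vetted redaction taxonomy, and such filtering should complement, not replace, default opt-outs from training.

```python
import re

# Hypothetical patterns; a deployed filter would need a vetted PII/health taxonomy.
SENSITIVE_PATTERNS = [
    (re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"), "[PHONE]"),   # U.S.-style phone numbers
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"), "[EMAIL]"),        # email addresses
    (re.compile(r"\b(abortion|miscarriage|pregnan\w*|contracepti\w*)\b",
                re.IGNORECASE), "[SRH-TERM]"),                        # illustrative SRH keywords
]

def redact(text: str) -> str:
    """Replace each sensitive match with a placeholder before the text
    is logged or admitted to a training corpus."""
    for pattern, placeholder in SENSITIVE_PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

print(redact("I'm pregnant; call me at 555-123-4567."))
# -> I'm [SRH-TERM]; call me at [PHONE].
```

A conservative deployment would err toward over-redaction (dropping whole conversations that trip any pattern) rather than risk memorization of residual SRH details.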
4.1.4 Equity and Accessibility. Equitable access to information was a key factor supporting adoption. First, the affordability of GenAI chatbots, noted by a few participants (n=3), provided a more accessible alternative to traditional healthcare services in the U.S. Second, participants highlighted access to neutral and comprehensive information relevant to individuals from diverse sociodemographic backgrounds. Some participants (n=5) found the SRH information from GenAI chatbots to be more objective and less influenced by political, moral, or religious perspectives compared to social media, blogs, healthcare providers, and peers: "Let's say I'm on a YouTube video about reproductive health; sometimes you deal with people's political, [...] moral, and religious views... With the AI, I do feel like they seem to be less politically charged or trying to manipulate" (P1). However, others (n=6) expressed concerns that biased responses could disproportionately affect vulnerable groups, including "people of color, women, sexual and gender minorities" (P8), and noted the possibility of location-specific censorship of SRH information, particularly regarding abortion and birth control. For example, P10 questioned whether "ChatGPT is not allowed to say certain things in different states like sharing information about how to induce abortion," highlighting potential access barriers to abortion information due to legal restrictions.

4.1.5 Anthropomorphism. The majority of participants (n=10) valued the compassionate, nonjudgmental, and private nature of GenAI chatbots when discussing sensitive SRH issues, with some even using gendered pronouns to refer to the chatbot: "she'll have compassion for what you say" (P1). However, some participants (n=4) also noted limitations, describing GenAI interactions as "impersonal and robotic" (P12): "There wasn't that human element of connection and being able to talk about potential fears or worries about birth control, which you could discuss with a healthcare professional" (P12). In addition, participants expressed concerns that "people can get triggered by those kinds of topics easily and might even have traumatic experiences that it brings up" (P10) when interacting with GenAI chatbots, suggesting that these tools, unlike healthcare professionals, may not be adequately trained to manage the emotional risks inherent in sensitive discussions.

4.2 Beliefs About Data Practices (RQ2)

All participants (n=18) reported some level of uncertainty regarding GenAI companies' data practices, including how data was collected, processed, shared, and deleted. Participants' beliefs about these practices are described in greater detail in the following subsections.

4.2.1 Data Collection. Most participants (n=16) assumed that GenAI companies collected both conversational data, such as user inputs and AI-generated outputs, and non-conversational data, including account information (e.g., login credentials, phone numbers) and technical information (e.g., IP addresses, browser types). In contrast, a small number of participants (n=2) believed that data collection was limited solely to conversational data or user inputs, with no account or technical information being collected.

4.2.2 Data Processing. Regarding how output is generated, several participants (n=8) compared GenAI chatbots to search engines and believed that they collected and rephrased information from the Internet or filtered data by cross-referencing keywords with online sources: "They're highlighting, 'OK, birth control is the main thing they're looking for' and 'USA is the location,' and then they cross-reference that with a whole bunch of websites online" (P12). Others (n=4) believed that the systems searched their own databases for relevant information. One participant also mentioned clicking "on the little analyzing part, and it showed like the Python code that it was using" (P14). Another participant, with a technical background, provided a more technical explanation: "So, it definitely takes in all the data and a lot of it is encoding in a lot of different layers to sort of break down the individual responses into many different bytes. And from there, it's, sort of, able to process everything.
And then, but not compare terms or verses, in a bit, to kind of give you a response back, where you take all the information, break it down into the core parts. And then use that to process a new answer through language and coding models, and then go back to you" (P13).

4.2.3 Data Deletion. Some participants (n=6) reported having deleted conversational data from GenAI chatbots in the past, either due to privacy concerns or simply to "organize it a little bit more" (P6). Participants described a range of possible data deletion mechanisms, including using in-app or website settings, contacting customer support, deleting their entire account, closing browsers, or directly asking the chatbot to delete their conversations: "I would imagine that I would have to put it in a prompt to say 'Delete,' but then I wouldn't have really much confidence that it's done it" (P11). Most participants (n=15) believed that even if they deleted their data, the company would likely retain copies within internal systems.

4.2.4 Data Recipients and Purposes of Use. Almost all participants (n=17) believed that their interactions with GenAI chatbots were primarily shared with the chatbot's parent company for various purposes, including improving services and functionalities (e.g., training AI models, maintaining personalization), performing moderation, ensuring security, and conducting research and analytics: "now it'll feed me information that's more liberal or more sexual health-oriented" (P8). Participants acknowledged that their data might be shared with third-party companies (n=5) for selling or advertising purposes, and with government or law enforcement entities (n=8), which are discussed in greater detail in §4.3. Other potential recipients, mentioned by two participants, included employers in cases where information was "searched through an employment-specific account" (P8).

4.3 Perceived Privacy and Safety Risks of GenAI Chatbots (RQ3)

4.3.1 Prompts and Disclosure Practices. Participants used GenAI chatbots to seek information related to menstrual health and fertility, perimenopause, sexual health, pregnancy, contraception methods, abortion, hormonal health, and breast health (details can be found in Table 1). While some participants (n=7) asked general, non-personalized questions (e.g., "Can HPV ever be cured?" (P10)), many (n=11) disclosed personal information, such as demographics (e.g., age, gender, ethnicity, location), physical characteristics (e.g., weight), or specific SRH details. For instance, participants shared details about their menstrual health, including cycle start and end dates to identify fertile days, menstrual irregularities, conditions such as PCOS, and perimenopause symptoms: "I was like 'Here are my symptoms. Could these be perimenopause?'" (P11).

Additionally, two participants sought tailored abortion-related information. One participant, residing in a restrictive state, searched for nearby abortion clinics to support a friend experiencing a pregnancy scare, disclosing their current location.
The other participant, located in a non-restrictive state, asked Gemini for guidance during an at-home abortion, disclosing specific symptoms they were experiencing: "I would message on Gemini 'When should I be concerned about the amount of blood that I have and seeing with a medical at-home abortion?' [...] But I'm very specific with the symptoms that I'm having, I'll just ask straight out like 'These are the symptoms, what could be the issue?'" (P15). Another participant disclosed their intent to undergo gender-affirming hormone therapy: "I was like, if I'm doing this hormone therapy, am I gonna have to get surgery?" (P14). They also shared their hormone blood test results to seek interpretation: "I was like, 'Hey, these are my results of my tests. What could this mean?'" (P14).

4.3.2 Privacy Risks of Using GenAI Chatbots for Seeking SRH Information. Many participants (n=11) felt "vulnerable" (P14) and considered GenAI chatbots riskier than other SRH information sources they had used, including search engines, social media, period-tracking apps, and healthcare professionals: "I don't have a lot of personal conversations with my general search engine" (P1). Participants identified several privacy risks, including excessive data collection, government access and surveillance, data selling and advertising, model training and memorization, and, more generally, personal identification, lack of transparency, and hacking. The following subsections focus on the privacy risks that participants perceived as most prominent or unique to GenAI chatbots.

Excessive data collection. Many participants (n=10) expressed concerns about the collection of large volumes of contextual and intimate SRH data in a conversational format: "ChatGPT is like a whole another level of that, where you can literally dump your subconscious. People I know are like, 'Hi, put my entire blood, my entire body scan, medical records on ChatGPT' (...) People really use it in a very intimate way" (P14). We identified three factors contributing to excessive data disclosure and collection. First, participants felt influenced to disclose more information due to the interactive nature of GenAI chatbots, which, unlike other tools, proactively encouraged self-disclosure through follow-up questions: "ChatGPT is the only place where it would proactively ask me for information" (P6). Second, participants described GenAI chatbots as providing an "intimate" (P1) space that increased comfort with sharing details: "I do feel like the fact that it feels more intimate, you're gonna be more comfortable giving more details" (P1). Third, participants were inclined to trade additional personal information for greater personalization: "For some people, if they want more specific stuff, they might spit out information like their city or age" (P12).

Government access and surveillance. Several participants (n=6) recognized that government bodies could potentially access chatbot conversation data via subpoenas for legal enforcement. Although some acknowledged that government access could be useful in other criminal contexts, its implications for reproductive rights made it a "double-edged sword" (P13).
Participants often compared this risk to privacy controversies surrounding period-tracking apps or scenarios "when law enforcement seeks search engine history of people they're investigating criminally" (P18), suggesting that these perceptions might be shaped by publicized legal cases involving other digital tools. Nevertheless, participants expressed uncertainty about how GenAI companies would handle legal compliance due to the absence of such events to date: "We haven't really seen a case like that yet, where the government's challenged them and saw how they responded [...] I don't know if ChatGPT stands on giving up the information willingly" (P13).

Furthermore, many participants (n=9) expressed concerns about government access to and surveillance of sensitive SRH-related conversations, including abortion, to "ensure they [people] are obeying the laws" (P7). Specifically, they worried that conversations could be flagged as criminal based on users' geographic locations or sensitive keywords: "I feel like certain prompts are like maybe red flags... and using the IP address or something, they can say 'Hey, this person is in Texas and they're looking up this information'" (P4). One participant noted, "AI knows that it's [abortion] not legal here" (P9), indicating that participants perceived GenAI chatbots as capable of recognizing illegal activities. In contrast, some believed that abortion-related conversations would either not be treated differently or would be managed with increased caution under "special guidelines for more sensitive topics" (P13). While most participants (n=12) focused on legal concerns related to abortion, a few (n=2) also raised concerns about legal issues involving SOGI-related conversations in states with limited protections for LGBTQ+ rights.

User profiling. When prompted, all participants (n=18) stated that GenAI chatbots inferred details about their health status, lifestyles, sexual activity (e.g., number of sexual partners), interests, demographics, sexual orientation, locations, legal jurisdictions, political views, or emotional states based on their inputs: "It makes up a little bit of a picture about who I am and what's concerning me" (P11). For instance, P7 noted that the GenAI chatbot could infer "that you're probably pregnant or you're not pregnant." Another participant highlighted that GenAI chatbots could infer reproductive choices, observing that they were "someone that was not interested in having any more children and someone that was looking for a solution to that" (P4). Most participants (n=13) reported feeling comfortable with these inferences.

However, some participants (n=6) recognized and felt uncomfortable with the risks posed by inaccurate inferences about their reproductive health or choices, which could lead to potential harm: "I might be curious about what an abortion is like or how to access it [...] And then if I happen to get pregnant later on, and then I already had that in my history, I could see how it would be misinterpreted. Like, if I just had a miscarriage and they're like 'Oh, no, you definitely had an abortion!'" (P10). One participant noted that the risk of inaccurate inferences was higher in GenAI chatbots than in period-tracking apps due to the broader range of SRH data collected. Another participant described using a shared account with their partner, which resulted in the chatbot revealing the partner's health information.
Although the participant found it "funny" (P14), using a joint account raised concerns about collective data potentially generating inaccurate inferences about the account holder or, as in this case, accidentally revealing another user's data.

Model training and memorization. Some participants (n=4) expressed concerns about the use of personal SRH data for training purposes and generating new responses. These concerns, while not always explicitly stated, might reflect perceived risks of model memorization: "I think that the data from Google Gemini is saved and used to generate other results. Whereas for healthcare providers, they're not using your information to provide additional resources to others" (P16). Participants also noted that GenAI companies derived financial benefits from model training.

Data selling and advertising. Many participants (n=12) expressed concerns about the sale of data to third-party companies, such as social media platforms (e.g., TikTok, Meta), marketing firms, data brokers, or even the dark web. They feared this could lead to intrusive and inappropriate advertisements related to their health conditions: "If you search [...] that you were having pain in your private area and then they show you an ad about getting STD [sexually transmitted disease] tested, that could be offensive" (P1). Participants also described receiving ads tied to their conversations as "weird, creepy, and freak me out a little bit" (P6). Similar to concerns about government access, worries regarding data selling and advertising were shaped by "scandals in the past" (P13) involving other digital platforms.

Lack of regulatory protections. Most participants (n=13) were unaware of, or lacked confidence in, the privacy regulations governing GenAI chatbots: "ChatGPT is not bound to any HIPAA laws or any health privacy laws. So, you have to assume that those data are just now out for consumption. [...] There is no confidentiality or privacy" (P8). Nearly all participants expressed a desire for stronger regulatory protections, such as extending medical data privacy laws to GenAI chatbots (similar to "AI nurses" in healthcare systems, P8), enhanced safeguards for minors, and stricter legal consequences for data breaches.

4.3.3 Privacy and Safety Concerns Across Different SRH Topics and Data Types. Most participants (n=15) felt comfortable or neutral about using GenAI tools for seeking SRH information. This was largely because they did not perceive SRH topics as particularly sensitive or risky: "If the data is at risk for being shared, it's just a normal human being asking about sexual and reproductive health" (P2). Other reasons included trust in the GenAI company, not disclosing personal information, not perceiving risks as personally relevant (e.g., being beyond reproductive age, living in a non-restrictive state), employing protective measures, and a broader sense of resignation toward data privacy risks. Additionally, one participant expressed altruistic attitudes, likening data usage for AI model training to a research assistant taking notes in a doctor's office.
These views often, but not always, shifted for abortion topics (n=12), and in some cases, SOGI-related disclosures (n=2), due to fear of criminalization: "I believe it's fairly benign again because I'm primarily researching perimenopause. I understand it would be much more sensitive if I was seeking information on abortion care and especially abortion care if I live in a state where that was criminalized" (P18). Some participants (n=3) also expressed concerns about targeted harassment from individuals opposed to SRH choices: "Pro-life people or anyone on the Internet could start harassing these people and sending death threats" (P12).

Beyond criminalization, participants (n=8) raised concerns about stigma and emotional distress associated with certain SRH topics. For instance, some (n=3) briefly mentioned STIs as a sensitive SRH topic due to the associated stigma: "damaging to their personal image, to their confidence" (P13). Another participant noted that exposure of sensitive SRH conditions, such as endometriosis, could lead to being "seen as someone who is disabled in some way or not meeting society's standards for what a healthy person looks like" (P8). Additionally, a few participants (n=2) acknowledged the risk of negative professional consequences as a result of exposing general SRH information: "It wouldn't look good to an employer to have your sexual and reproductive information out in the open" (P17). Only two participants, both with technical backgrounds, considered any type of SRH information too sensitive to seek via GenAI chatbots. They were also the only ones who described more complex risk mitigation strategies (e.g., using a VPN and GenAI privacy settings) or extreme strategies (e.g., avoiding GenAI entirely), even when residing in non-restrictive states.

4.3.4 Perspectives Across Restrictive vs. Non-restrictive States. Participants frequently highlighted cross-state dynamics, even without prompting. Among participants living in non-restrictive states, many (n=5) indicated they used or would use GenAI chatbots for seeking abortion-related information. These participants felt safe because of their geographic location or demographic background, which placed them at lower risk of prosecution related to abortion, and/or because of their trust in the GenAI company. The remaining participants (n=4) stated they would not use GenAI chatbots for abortion-related queries, preferring more reliable sources or fearing legal consequences, even when residing in more liberal states: "As someone who is American and how we've seen lots of potential subpoenas in the past for personal records in regard to period-tracking apps, in regard to people seeking medical assistance, even though I'm not at risk of anything currently, it's still something where I'd rather not have my data be available and know it could be subpoenaed for any reason like that" (P13). Some of these participants believed that GenAI chatbots would indirectly "refuse" to answer abortion-related questions, or that their ability to respond to abortion-related questions is limited, even within jurisdictions where abortion is legal: "Because of the legality around it, it'll usually say like 'seek professional healthcare or see a healthcare provider'" (P15).

In contrast, only a small number of participants (n=2) residing in restrictive states indicated that they would use or had used a GenAI chatbot to seek abortion-related information.
The majority (n=6) cited legal risks in abortion-restrictive states as their primary reason for avoiding GenAI chatbots: "If you're in a state in the United States where abortions are effectively outlawed at this point, you should assume that those data are going straight to the authorities and that you can be arrested, fined, charged, or whatever you want to call it, for searching that" (P8). Aside from differences in GenAI use and abortion-related queries, we did not identify clear differences in other protection strategies, reported in detail in §4.4, between participants in restrictive versus non-restrictive states based on the qualitative data.

4.4 Risk Mitigation Strategies (RQ4)

Participants described only a few privacy risk-mitigation strategies that they had used previously or would consider using, which are detailed in the following subsections. Reasons for not employing such strategies included lack of concern, limited awareness of available options, or trust in the GenAI company.

4.4.1 Behavioral Strategies.

User-driven data minimization. Most commonly, participants (n=15) intentionally withheld information during their interactions with GenAI chatbots, including PII (e.g., names), details about their SRH, and demographic information. As one participant explained, "I wanted to ask Copilot about miscarriage, but I didn't because I didn't want to share that" (P5). Many participants reframed their inquiries as more general questions or avoided using GenAI chatbots altogether: "I don't want to be very specific like, 'Oh, I'm a 20-year-old woman who lives in this place and my cycle is X amount of days long' [...] I don't want it to have that information at all because I'm afraid of it" (P6). Another participant described their reluctance to exchange personal information for more personalized responses: "I'm not giving it super specific information, I'm not receiving specific information back. So, I'm getting general information in return for the general information that I'm putting in" (P6). These accounts illustrate participants' willingness to protect their privacy, even when doing so reduced the utility of the chatbots.

Using separate accounts. One participant reported creating a separate account specifically to seek information about period cramps: "I am very wary, and the only time I've ever done it, I made a separate account that didn't have anything attached to it, and I use an Incognito browser" (P13). Another participant maintained separate accounts for personal and professional use due to privacy concerns.

4.4.2 Technical Strategies.

Data management controls. Only a few participants (n=3) actively engaged with the privacy and security settings of their GenAI chatbots. These actions included turning off memorization or personalization features, opting out of data use for training purposes, enabling temporary chat or auto-deletion settings, and setting up multi-factor authentication. Two participants also reported managing "Cookie" permissions. Many participants (n=14) deleted or would delete their conversational data in the future due to feelings of embarrassment or privacy concerns, and some explicitly expressed a desire for increased transparency and greater user control over data retention.
Additionally, one participant suggested that users should have the option to export sensitive data, such as abortion-related conversations, and then have that data immediately deleted. Participants proposed several improvements to data management, including granular opt-out settings that would allow users to control data collection and sharing, the ability to manage inferences made by the GenAI chatbot, and visible data collection disclaimers prior to submitting sensitive queries.

External privacy tools. Some participants (n=4) reported using or considering the use of incognito browsing modes: "I did literally everything. I was on VPN [Virtual Private Networks], on Incognito browser, all that. So, they were like OK, this random user is having period cramps" (P13). Additionally, some participants (n=3) recommended integrating private browsing modes directly into GenAI chatbots to ensure that user data was not tracked or saved: "So that I know it's not tracking and it's not collecting and saving this data" (P6). One participant mentioned the potential use of privacy-preserving browsers such as Tor and DuckDuckGo, though they noted, "in ChatGPT I think that you can't use it unless you have your home IP. Like, I can tell if you have an IP blocker on" (P8). Similarly, the use of VPNs was considered a potential approach by a few participants (n=5), although some did not elaborate on specific experiences using VPNs in the context of GenAI.

5 Discussion

In this section, we present our key findings regarding participants' experiences using GenAI chatbots for seeking SRH information. After summarizing the main takeaways, we highlight points for consideration grounded in these findings and propose actionable design and policy recommendations.

5.1 From 24/7 Personalized Support to Censorship and Bias: Key Factors Shaping GenAI Chatbot Adoption (RQ1)

Our findings show that participants actively used, trusted, and sometimes preferred GenAI chatbots over healthcare professionals, valuing their perceived effectiveness, constant availability, accessibility, personalization, and compassion. Participants frequently made health-related decisions based solely on information provided by these chatbots. Although GenAI chatbots could enhance patient education and support clinical decisions [79], they should be considered a supplementary tool and not a replacement for professional medical consultation.

One notable concern raised by participants was the potential censorship of information on abortion and contraception by GenAI systems across different states in the U.S. This is particularly important because major GenAI providers (e.g., OpenAI) typically moderate certain types of content, including hate speech, violence, self-harm, sexually explicit material, and illegal advice [104, 105]. Although most providers distinguish sensitive from harmful content (e.g., illegal activities), uncertainty remains about how criminalized SRH topics like abortion are classified under these policies. Furthermore, moderation policies differ internationally based on government rules and Internet regulations. Examples include Ernie Bot (by Baidu in China), which reportedly refuses to answer politically sensitive questions [83], and DeepSeek, which has been accused of censorship in accordance with Chinese government "public opinion guidance" regulations [17].
Finally, as discussed in Friend et al. [45], recent changes to social media content moderation policies, such as Meta's shift from company-led monitoring to community-driven monitoring, may increase bias and inaccuracies in GenAI chatbots [54]. Given that Meta trains its GenAI chatbot using data from public Facebook and Instagram posts, it is important to consider how polarized opinions on abortion expressed on social media may influence GenAI outputs [41, 92].

5.2 The Black Box Problem: Unclear Data Flows and Practices of GenAI Chatbots (RQ2)

Although all participants expressed uncertainty about the data practices of GenAI companies, they revealed important aspects of their perceptions. Some participants incorrectly assumed that only their conversations (or only their prompts) were collected, without any personal or technical data. There was also widespread uncertainty about how responses were generated, with participants often assuming that the system rephrased online sources or searched databases for relevant information. While these assumptions may seem reasonable, since GenAI outputs, particularly from LLMs, often resemble summarized information, the underlying mechanism is different [73]. It is true that the model's knowledge often originates from data on the Internet, and if the system is explicitly integrated with a search tool (e.g., ChatGPT browsing), retrieval does occur [100]. However, rather than rephrasing documents or cross-referencing sources, the model generates text by predicting likely words based on patterns learned during training [73, 124].
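To make this mechanism concrete, consider the toy sketch below (our illustration, not drawn from any cited system), which mimics autoregressive next-token sampling in miniature. The hard-coded score table stands in for a trained network with billions of parameters, and it conditions only on the previous word, whereas real LLMs condition on the entire conversation; the point is simply that output is sampled from predicted distributions rather than copied or cross-referenced from retrieved documents.

```python
import math
import random

# Toy "learned" scores: given the previous token, a raw score (logit)
# for each candidate next token. In a real LLM these scores are computed
# by a neural network from the full context, not looked up in a table.
NEXT_TOKEN_LOGITS = {
    "<start>": {"My": 2.0, "Your": 1.2},
    "My": {"cycle": 2.5, "period": 1.0},
    "Your": {"cycle": 1.4},
    "cycle": {"is": 2.0},
    "period": {"is": 2.0},
    "is": {"irregular": 1.8, "late": 1.5},
    "irregular": {"<end>": 1.0},
    "late": {"<end>": 1.0},
}

def softmax(logits: dict) -> dict:
    """Turn raw scores into a probability distribution over next tokens."""
    m = max(logits.values())
    exps = {tok: math.exp(s - m) for tok, s in logits.items()}
    total = sum(exps.values())
    return {tok: v / total for tok, v in exps.items()}

def generate(max_tokens: int = 10, seed: int = 0) -> str:
    """Sample one token at a time until an end marker is produced."""
    random.seed(seed)
    tokens = ["<start>"]
    while len(tokens) <= max_tokens:
        probs = softmax(NEXT_TOKEN_LOGITS[tokens[-1]])
        nxt = random.choices(list(probs), weights=list(probs.values()))[0]
        if nxt == "<end>":
            break
        tokens.append(nxt)
    return " ".join(tokens[1:])

print(generate())  # e.g., "My cycle is irregular"
```

Because each step is a draw from a predicted distribution, the same prompt can yield different answers, which is one reason participants found outputs difficult to predict or explain.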
Our findings exemplify the "black box problem," a term used to describe users' (and creators') inability to understand and explain how GenAI systems work [120, 131, 137]. This ambiguity introduces numerous vulnerabilities, including security risks, privacy violations, and bias, which become especially significant when GenAI is used for health or other sensitive purposes [131]. Furthermore, participants' internal misperceptions, combined with a lack of awareness of available privacy protections, might distort how they noticed and interpreted data-related stimuli when interacting with GenAI chatbots, which in turn could affect privacy decision-making [33].

5.3 Perceived Privacy Risks: From Oversharing to Overexposure (RQ3)

5.3.1 Data Collection in GenAI Chatbots vs. Other Tools for Seeking SRH Information. Participants often perceived GenAI chatbots as carrying greater privacy risks than search engines, period-tracking apps, social media, or healthcare providers. This perception was primarily due to the distinct data collection methods of GenAI chatbots, which gather large amounts of detailed information in a conversational format. Our findings support this view by showing that participants shared sensitive SRH details in their GenAI prompts, providing detailed insight into participants' self-disclosure behaviors when using these tools.

In line with participants' beliefs, research indicates that GenAI tools collect large volumes of data through user interactions [124]. For example, while period-tracking apps also handle sensitive data, their data collection is generally categorical and more limited [129]. Similarly, search engines and social media pose privacy risks primarily through search history, location tracking, and browsing activity [15, 36, 71]. Although healthcare providers and FDA-approved medical devices collect substantial volumes of sensitive personal data, these are rigorously governed by privacy regulations such as HIPAA and the General Data Protection Regulation (GDPR) [40].

Why does seeking SRH information through GenAI chatbots lead users to share more personal data than when using other tools? Our findings suggest three key factors. First, anthropomorphism [70], the attribution of human-like characteristics, such as being "nonjudgmental" or "compassionate," to GenAI chatbots, has been associated with fostering user trust and engagement [30, 70]. The perceived anthropomorphic nature of GenAI chatbots can directly predispose users to share more personal details by creating a false sense of intimacy and, consequently, safety. Second, interactivity plays a critical role. As described by participants, GenAI chatbots "proactively" stimulate users to share personal information by asking follow-up questions, creating the sense of a dialogue with an attentive partner. This is particularly concerning because sensitive disclosures are not only insufficiently safeguarded, but also actively encouraged. Finally, the desire for personalized responses, a common motivation for using GenAI chatbots, led participants to provide extensive personal details. This aligns with Privacy Calculus Theory, which posits that users continuously weigh perceived benefits against perceived risks, as observed with other digital tools [90]. However, unlike entering menstrual cycle dates into a period-tracking app or using a search engine, the conversational, open-ended design of GenAI prompts encourages richer disclosures. While social media platforms offer similar conversational interactions, users are typically aware that their posts are public or shared with a defined audience [43]. In contrast, participants using GenAI chatbots were often unaware of, or held misconceptions about, data practices, raising significant concerns about transparency.

5.3.2 Concerns About Intimate Surveillance and Criminalization. Although participants generally felt comfortable seeking SRH information, most, particularly those living in restrictive states, expressed significant concerns regarding abortion and, in a few cases, SOGI-related topics. Following the recent overturn of Roe v. Wade in the U.S., some participants feared that their abortion-related conversations could be subpoenaed by the government or flagged by the chatbot, potentially leading to criminalization, stigma, or harassment. Consequently, they lacked confidence in existing privacy protection laws, and often avoided such discussions.

This finding can be interpreted through the lens of Protection Motivation Theory (PMT) [34]. According to PMT, individuals are more likely to adopt protective behaviors if they perceive the potential harm as severe (e.g., criminalization) and plausible (e.g., publicized cases involving digital SRH data), consider themselves vulnerable (e.g., residing in a restrictive state), and believe that the protective action (e.g., complete avoidance) is effective and feasible. In contrast, when perceived harms are abstract (e.g., selling anonymized SRH data to unknown recipients) or less immediate, participants tended to dismiss privacy threats. Moreover, SRH information types (e.g., STIs, abortion, endometriosis) associated with stronger negative emotional responses (e.g., fear, shame, stigma) elicited heightened privacy concerns, consistent with the Risk-as-Feelings Hypothesis [76].
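PMT is a qualitative framework, and our study does not quantify it; still, a deliberately simplified sketch can show how its components interact. All scores below are hypothetical values chosen only to contrast the two participant stances described above.

```python
from dataclasses import dataclass

@dataclass
class Appraisal:
    severity: float           # how severe the harm seems (0-1), e.g., criminalization
    vulnerability: float      # how personally at risk the user feels (0-1)
    response_efficacy: float  # belief that the protective action works (0-1)
    self_efficacy: float      # belief that the action is feasible for the user (0-1)

def protection_motivation(a: Appraisal) -> float:
    """Toy PMT score: both the threat appraisal and the coping appraisal
    must be high. A severe threat with no feasible response, or an easy
    response to a harm perceived as trivial, yields little motivation."""
    threat = a.severity * a.vulnerability
    coping = a.response_efficacy * a.self_efficacy
    return threat * coping

# Abortion query from a restrictive state: harm seen as severe and
# plausible, avoidance seen as effective -> strong motivation to protect.
restrictive_abortion = Appraisal(0.9, 0.8, 0.9, 0.9)

# Abstract worry about anonymized data being sold: harm feels diffuse
# and remote -> threat dismissed despite easily available protections.
abstract_selling = Appraisal(0.3, 0.2, 0.7, 0.9)

for label, a in [("abortion, restrictive state", restrictive_abortion),
                 ("anonymized data selling", abstract_selling)]:
    print(f"{label}: motivation = {protection_motivation(a):.2f}")
```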
Only two participants, both with technical backgrounds, expressed concern about sharing personal SRH details with GenAI chatbots regardless of the subject. While this caution may reflect greater digital literacy [22], this explanation remains speculative, as prior work has shown that even technically skilled individuals can make privacy errors, for example, IT employees unintentionally leaking company data through ChatGPT in 2022 [117].

To what extent are concerns about surveillance and criminalization justified? Although there is no evidence that popular GenAI chatbots such as ChatGPT engage in proactive surveillance, law enforcement can subpoena companies to hand over user data, and companies are legally obligated to comply [103]. Additionally, participants' concerns regarding privacy laws are well-founded, as sensitive health data within general-purpose GenAI chatbots is currently not protected by medical privacy regulations such as HIPAA and remains in a regulatory grey area [96].

5.3.3 Concerns About Targeted Advertising. Participants expressed concern that GenAI providers could use their data for commercial purposes, such as targeted advertising related to SRH products and services. This concern is particularly salient because some GenAI companies have already indicated intentions to leverage user data for advertising. Recently, Meta AI announced that it will collect interactions with its AI tools, including both text and voice conversations, to serve targeted ads on its social media platforms, excluding users in the UK, EU, and South Korea [93, 110]. The company stated that conversations on sensitive topics such as "sexual orientation," "political views," and "health" will not be used for advertising; however, questions remain regarding how these topics will be moderated [93]. Similarly, Google has announced plans to introduce AI-driven ads in its AI Mode feature within the search engine [49], Microsoft experimented with ads on Copilot a few years ago [86], and OpenAI has briefly considered introducing ads to ChatGPT [77]. Given the rapid surge in research highlighting privacy risks of GenAI chatbots [124], the large volume of sensitive self-disclosure observed in our study and others [65, 136], and repeated calls from cybersecurity experts, researchers, policymakers, and users for stronger privacy protections, these announcements by GenAI providers appear paradoxical and directly conflict with user expectations.

5.4 Insufficient Protective Measures for Safe GenAI Use (RQ4)

5.4.1 Data Minimization: Primary Protective Measure. Participants generally lacked sufficient knowledge to effectively mitigate privacy risks and rarely engaged with GenAI-specific privacy settings. Most often, they relied on withholding personal information, such as health details, PII, or location data.

Many GenAI providers (e.g., OpenAI) state in their privacy policies² that they collect information about users' IP addresses, country, and time zone, which can reveal location even if users do not explicitly share it [102].

² All cited privacy policies refer to the 2025 versions available at the time of conducting the study.
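The sketch below illustrates why prompt-level minimization can fall short. The IP-prefix table, field names, and region mappings are hypothetical placeholders (real services rely on commercial geolocation databases), but the underlying point matches the policies cited above: connection metadata travels with every request, whether or not the user types their location.

```python
# Hypothetical server-side view of a single chatbot request. The IP
# addresses come from reserved documentation ranges (TEST-NET), and
# their mapping to regions is invented purely for illustration.
HYPOTHETICAL_GEO_DB = {
    "203.0.113.": ("US", "Texas"),
    "198.51.100.": ("US", "California"),
}

def infer_region(ip: str, timezone: str) -> tuple:
    """Guess a coarse (country, region) from connection metadata alone."""
    for prefix, region in HYPOTHETICAL_GEO_DB.items():
        if ip.startswith(prefix):
            return region
    # Fall back on the client-reported timezone as a weaker signal.
    return ("unknown", timezone)

# The prompt itself contains no location, yet the request still does.
request = {
    "prompt": "What are my options after a positive pregnancy test?",
    "ip": "203.0.113.42",           # assigned by the user's ISP
    "timezone": "America/Chicago",  # routinely reported by clients
}
print(infer_region(request["ip"], request["timezone"]))  # -> ('US', 'Texas')
```

A VPN shifts the apparent IP address but not, for instance, account or payment details, which is consistent with the re-identification concerns discussed next.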
Similarly, while participants minimized PII disclosures, GenAI platforms also collect account details, including names, contact information, and device identifiers, which can be used to trace data back to individual users [102]. According to ChatGPT's privacy policy, these data types may be shared with third parties, including vendors, service providers, affiliates, government authorities, or industry peers, to comply with legal obligations. Although GenAI platforms typically aggregate or de-identify personal information, these data can still be re-identified when required by law [102]. Moreover, even partial or inaccurate data can enable GenAI chatbots to infer potentially identifying information about users [126]. Therefore, users' efforts to protect their privacy through data minimization may not fully align with their expectations.

5.4.2 Challenges of Data Deletion in GenAI Systems. While all participants viewed privacy concerns as a strong motivator for deleting their data, there were widespread misconceptions about how data deletion in GenAI chatbots actually works; for example, some assumed that sending a prompt like "Delete" would remove their data. Participants also expressed a lack of confidence in companies' commitment to fully honoring deletion requests, suspecting that backups might still be retained. These concerns are valid, as deleting conversations typically removes them from a user's account, but companies may retain internal copies. For instance, OpenAI's privacy policy mentions temporary retention of logs without specifying the timeframe, and permanent deletion generally requires full account deletion. The challenge is further compounded because once user data is incorporated into model training, it is nearly impossible to remove without retraining the model, which is expensive and impractical [47]; opting into model training thus effectively precludes permanent deletion. In the SRH context, this is particularly concerning, as sensitive data embedded in models cannot be removed even if legally requested, and only a few participants engaged with model training settings, highlighting the need for greater attention to this issue.

5.5 Recommendations

Here, we present practical and policy recommendations informed by our findings. Table 2 provides a summary.

5.5.1 Public awareness and involvement. We emphasize the need to improve public GenAI literacy. We advocate for robust educational campaigns on responsible use, targeting both patients and clinicians, to foster effective clinician-patient-GenAI collaboration [62]. Clinicians and patients should engage in open, transparent discussions about the benefits and risks of using GenAI chatbots for SRH management, ensuring proper verification practices, appropriate levels of trust, and safe disclosure.

Given the limited awareness and use of data protection strategies among our participants, we recommend greater user involvement in designing and developing human-centered privacy protections in GenAI chatbots. This is particularly important for vulnerable users seeking sensitive health information (e.g., people assigned female at birth or from diverse SOGI subgroups), who may face risks such as criminalization, harassment, or stigmatization [82, 89]. To align protections with users' needs, we suggest organizing co-design workshops within a Participatory Action Research (PAR) framework [75], where end-users collaborate with designers, developers, and researchers to explore privacy mechanisms such as real-time reminders, opt-out options, and other safeguards for sensitive data collection.
5.5.2 Privacy by Design. As individuals increasingly use GenAI chatbots for deeply personal and intimate queries, developers should design systems with self-disclosure considerations in mind. GenAI providers must adopt a Privacy by Design approach (e.g., [5]), integrating data protection into the core design and architecture of GenAI systems. For example, companies should invest in and transparently communicate technical and design approaches to data deletion, including: (a) advancing machine-unlearning techniques and auditing tools to ensure stronger guarantees for removing specific data from models over time [74, 127], (b) implementing conservative filtering and data-minimization strategies to remove or obfuscate sensitive content before user data enters the training pipeline [21, 111], and (c) setting "opt-out from model training" as the default, thereby safeguarding users' rights to data deletion or clearly communicating alternative options [23, 60]. Additionally, companies could employ Reinforcement Learning from Human Feedback (RLHF) [107] to align chatbot responses with ethical standards, train models to handle sensitive prompts responsibly, and redirect users to trusted health resources.

5.5.3 Interactive privacy. While GenAI interactivity has been identified as both a facilitator of self-disclosure and a potential privacy limitation, this feature can become an asset if applied thoughtfully. Since GenAI can recognize sensitive discussions, it can leverage dynamic, interactive strategies to support users in making informed decisions. For instance, it can request explicit consent for data collection in real time, provide gentle discouragement from oversharing personal information, issue safety warnings (e.g., legal consequences), offer privacy nudges and reminders about available privacy settings [2, 98], and redirect users to trusted resources. Moreover, providers should clearly identify GenAI chatbots as AI-based tools, not as human or confidential healthcare professionals (e.g., via generated outputs), to prevent unrealistic user expectations.
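As a thought experiment, the sketch below combines two of these recommendations: conservative pre-training filtering (§5.5.2) and a real-time privacy nudge (§5.5.3). The keyword patterns, nudge text, and retention flags are hypothetical; a deployed system would need clinically informed categories and a trained classifier rather than regular expressions.

```python
import re

# Hypothetical sensitive-topic patterns; placeholders, not a vetted taxonomy.
SENSITIVE_PATTERNS = {
    "abortion": re.compile(r"\b(abortion|misoprostol|mifepristone)\b", re.I),
    "sti": re.compile(r"\b(STI|STD|HPV|herpes)\b", re.I),
    "menstrual": re.compile(r"\b(missed|last)\s+period\b", re.I),
}

# Example nudge surfaced to the user before the conversation continues.
NUDGE = ("This looks like a sensitive health topic. This conversation will "
         "not be used for model training. Consider omitting names, dates, "
         "and locations, and consult a healthcare provider for medical advice.")

def classify(prompt: str) -> list:
    """Return the sensitive categories a prompt appears to touch."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(prompt)]

def preprocess(prompt: str) -> dict:
    """Decide what may be retained and whether to nudge the user."""
    categories = classify(prompt)
    if not categories:
        return {"retain_for_training": True, "nudge": None, "stored": prompt}
    # Conservative defaults: sensitive conversations are excluded from
    # training, and any stored copy is redacted rather than verbatim.
    redacted = prompt
    for name in categories:
        redacted = SENSITIVE_PATTERNS[name].sub("[REDACTED]", redacted)
    return {"retain_for_training": False, "nudge": NUDGE, "stored": redacted}

print(preprocess("My last period was six weeks ago. Could I need an abortion?"))
```

Surfacing the outcome of such a check to the user, rather than applying it silently, is what distinguishes the interactive approach sketched here from conventional server-side moderation.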
5.5.4 Improved transparency. Although the "black box problem," discussed in §5.2, is well-known and has been explored in prior research [53, 137], it is especially concerning in the context of sensitive conversations. Providers should adopt more transparent and explainable AI (XAI) approaches [53, 64, 137], such as open-source models, data visualizations, user-friendly transparency dashboards, and alerts for high-risk use cases where opaque processing could have serious consequences. These measures help users better understand how AI processes data and generates outputs, fostering greater awareness of the system's limitations.

5.5.5 GenAI regulations. GenAI is an emerging and complex technology, still in the early stages of developing clear ethics, governance, and regulation. Existing frameworks (e.g., the EU AI Act) have been criticized for their incompleteness [130], challenges that are particularly acute in health-related contexts. Given the discrepancies between user expectations observed in this study and the data practices of GenAI providers, there is an urgent need for global data protection regulations that ensure equitable protections across international users. To begin addressing these issues, we recommend that GenAI providers introduce a dedicated "Health" model (similar to custom "GPTs" in ChatGPT) designed to comply with medical privacy laws, incorporating encryption and secure data handling to support safe health discussions while maintaining accessibility. Additionally, healthcare providers could consider deploying GenAI models under strict local data controls to enable secure use among patients.

We recognize that such regulations may slow innovation in GenAI; however, rapid advancement should not come at the cost of safe and ethical implementation. To mitigate potential delays, we advocate for collaborative policymaking between government and industry (co-regulation [25, 67]) rather than relying solely on government or self-regulation. Co-regulation can support initiatives such as regulatory sandboxes [135], which provide controlled environments for testing new rules.

5.6 Comparison with Prior Work

Our study builds on prior research in two key areas: (1) users' privacy and safety perceptions and concerns regarding SRH data, expanding previous findings beyond FemTech to include general-purpose GenAI chatbots; and (2) users' perceptions of and experiences with GenAI chatbots in the sensitive context of SRH information seeking. In this subsection, we outline how these differences led to contrasts with other studies.

Consistent with prior findings, GenAI tool adoption is shaped by utility, usability, credibility, affordability, and anthropomorphism; however, our participants emphasized additional factors specific to SRH [6, 7, 141]. They highlighted the importance of equitable access to neutral, comprehensive information and raised concerns about location-based barriers and censorship, particularly in conservative states.

A recent study [134] explored participants' mental models of GenAI chatbots in a different context, focusing on task-based interactions (i.e., booking a hotel). Yet, we observed some conceptual overlap. For instance, Wang et al. [134] found that users believed the chatbot directly shared data with its provider, mirroring our participants' belief that data were sent to the provider for service improvement. Moreover, Wang et al. [134] also found that users' perceptions of the chatbot's parent company shaped trust. We observed a similar pattern, although some of our participants held incorrect beliefs about the parent company (e.g., believing ChatGPT was owned by Google), which could lead to misplaced trust.

In another interview study involving participants seeking mental health information in GenAI chatbots [65], participants were primarily concerned about sharing data with employers and insurers, resulting in employability harms. In contrast to mental health topics, SRH-related conversations raised concerns primarily about government surveillance and criminalization. Our participants also identified GenAI-specific risks beyond general data breaches, comparing them to privacy risks in other tools. Kwesi et al. [65] also noted that while emotional and psychological data is deeply personal, participants did not perceive obvious real-world exploit pathways that would prompt more cautious behavior.
Findings related to our participants' perceptions of SRH information seeking align with this observation, except for abortion or other potentially criminalized SRH topics. In these cases, fears were likely amplified by real-world instances in which some participants might face prosecution based on their digital SRH data. Another key contrast with [65] is that, while participants in their study believed that their conversations were HIPAA-protected, our participants lacked confidence in privacy protections, especially in the context of conflicting abortion regulations.

Compared to FemTech research, our findings related to government surveillance and criminalization align with users' risk perceptions of other FemTech tools [26, 39, 81, 88]. However, participants in our study expressed additional concerns about GenAI chatbots collecting more data than other FemTech tools (e.g., period-tracking apps) and potentially recognizing illegal or sensitive topics, a perception likely influenced by the chatbots' attributed intelligence and anthropomorphism. Furthermore, among participants who used risk mitigation strategies, most protected themselves by minimizing or attempting to delete data, echoing findings from prior studies with users of both GenAI [141] and FemTech [26, 81, 88]. Only a few participants configured GenAI-specific privacy settings, such as opting out of model training.

Future research should employ data-donation studies to examine how people interact with GenAI chatbots during sensitive SRH conversations; for example, how they formulate prompts, what they disclose, how the chatbot responds, and how follow-up questions unfold. Measurement studies could also test whether chatbot responses vary by location (e.g., via VPN), reflecting participants' concerns about potential information censorship. Additionally, although our qualitative data showed no clear differences in protections between participants in restrictive and non-restrictive states, aside from avoiding abortion-related queries due to criminalization concerns, large-scale quantitative studies may reveal distinct patterns between these groups.

6 Conclusion

This study examines users' experiences with GenAI chatbots for seeking SRH information. The findings indicate that such chatbots are increasingly used due to their perceived utility, usability, credibility, accessibility, and capacity to provide empathetic, nonjudgmental support. At the same time, participants viewed GenAI chatbots as posing heightened privacy risks compared to other information sources, citing concerns about the large volume of personal data collected, the use of data for model training, potential government access or surveillance, user profiling, data commercialization, and lack of regulatory protections. Although participants were generally willing to accept these risks in exchange for perceived benefits, they expressed reluctance to seek abortion-related information or advice, particularly in abortion-restrictive states. Moreover, most participants did not effectively adopt GenAI-specific strategies to mitigate these risks. To our knowledge, this is the first study to investigate users' privacy and safety experiences with GenAI chatbots in the context of SRH information seeking.
These ndings oer important implications for the design of more user-centered, privacy-preserving, and ethically grounded GenAI technologies. Acknowledgments This resear ch was supp orted by INCIBE’s strategic SPRINT (Seguri- dad y Privacidad en Sistemas con Inteligencia Articial) project C063/23, funded by the EU NextGenerationEU program through the Spanish Government’s Plan de Recuperación, Transformación y Resiliencia; by the Spanish Government under grant PID2023- 151536OB-I00; by the Generalitat V alenciana under grant CIPROM/2023/23; and by the UK Engineering and Physical Sciences Research Council (EPSRC) under award RE16677. W e are grateful to all participants for generously sharing their time and experiences. W e also acknowl- edge the UKRI Centre for Doctoral Training in Safe and Trusted Articial Intelligence for providing academic training ( e.g., semi- nars and masterclasses) that contributed to this research. References [1] Ruba Abu-Salma, Pauline Anthonysamy , Zinaida Benenson, Benjamin Berens, Kovila P. L. Coopamootoo, Andreas Gutmann, Adam Jenkins, Sameer Patil, Sören Preibusch, F lorian Schaub, William Seymour , Jose Such, Mohammad T ahaei, A ybars Tuncdogan, Max V an Kleek, and Daricia Wilkinson. 2025. Grand challenges in human-centered privacy. IEEE Security & Privacy 23, 4 (2025), 103–110. doi:10.1109/MSEC.2025.3566451 [2] Alessandro Acquisti, Idris A djerid, Rebecca Balebako, Laura Brandimarte, Lor- rie Faith Cranor , Saranga Komanduri, Pedro Giovanni Leon, Norman Sadeh, Florian Schaub, Manya Sleeper, et al . 2017. Nudges for privacy and security: Understanding and assisting users’ choices online. ACM Computing Surveys (CSUR) 50, 3 (2017), 1–41. doi:10.1145/3054926 [3] Prottay Kumar Adhikary , Isha Motiyani, Gayatri Oke, Maithili Joshi, Kanupriya Pathak, Salam Michael Singh, and T anmoy Chakraborty . 2025. Menstrual health education using a specialized large language mo del in India: Dev elopment and evaluation study of MenstLLaMA. Journal of Medical Internet Research 27 (2025), e71977. doi:10.2196/71977 [4] Harshvardhan Aditya, Siddansh Chawla, Gunika Dhingra, Parijat Rai, Saumil Sood, Tanmay Singh, Zeba Mohsin W ase, Arshde ep Bahga, and Vijay K Madisetti. 2024. Evaluating privacy leakage and memorization attacks on large language models (LLMs) in generative AI applications. Journal of Software Engineering and Applications 17, 5 (2024), 421–447. doi:10.4236/jsea.2024.175023 [5] Hamda Al Breiki and Qusay H Mahmoud. 2025. A framework for integrating Privacy by Design into generative AI applications. In Proceedings of the AAAI Symposium Series, V ol. 6. 2–9. doi:10.1609/aaaiss.v6i1.36018 [6] Mohammad Khaled Issa Al Shboul, Asma Alwreikat, and Faiz Abdullah Alotaibi. 2023. Investigating the use of ChatGpt as a novel method for se eking health information: A qualitative approach. Science & Technology Libraries (2023), 1–10. doi:10.1080/0194262X.2023.2250835 [7] Fahad Alanezi. 2024. Factors inuencing patients’ engagement with ChatGPT for accessing health-related information. Critical Public Health 34, 1 (2024), 1–20. doi:10.1080/09581596.2024.2348164 [8] Anas Alhur. 2024. Re dening healthcare with articial intelligence (AI): the contributions of ChatGPT, Gemini, and Co-pilot. Cureus 16, 4 (2024). doi:10. 7759/cureus.57795 [9] T eresa Almeida, Laura Shipp, Maryam Mehrnezhad, and Ehsan T oreini. 2022. Bodies like yours: Enquiring data privacy in FemT ech. In Adjunct Proceedings of the Nordic Human-Computer Interaction Conference . 1–5. doi:10.1145/3547522. 
[10] Kanhai S Amin, Linda C Mayes, Pavan Khosla, and Rushabh H Doshi. 2024. Assessing the efficacy of large language models in health literacy: A comprehensive cross-sectional study. The Yale Journal of Biology and Medicine 97, 1 (2024), 17. doi:10.59249/ZTOZ1966
[11] Oluwatobiloba Ayo-Ajibola, Ryan J Davis, Matthew E Lin, Jeffrey Riddell, and Richard L Kravitz. 2024. Characterizing the adoption and experiences of users of artificial intelligence–generated health information in the United States: Cross-sectional questionnaire study. Journal of Medical Internet Research 26 (2024), e55138. doi:10.2196/55138
[12] Magdalena Bachmann, Ioana Duta, Emily Mazey, William Cooke, Manu Vatish, and Gabriel Davis Jones. 2024. Exploring the capabilities of ChatGPT in women's health: Obstetrics and gynaecology. NPJ Women's Health 2, 1 (2024), 26. doi:10.1038/s44294-024-00028-w
[13] Kiri Beilby and Karin Hammarberg. 2023. Using ChatGPT to answer patient questions about fertility: The quality of information generated by a deep learning language model. Human Reproduction 38, S1 (2023), i54. doi:10.1093/humrep/dead093.103
[14] Kiri Beilby and Karin Hammarberg. 2024. ChatGPT: A reliable fertility decision-making tool? Human Reproduction 39, 3 (2024), 443–447. doi:10.1093/humrep/dead272
[15] Joanna Biega, Ida Mele, and Gerhard Weikum. 2014. Probabilistic prediction of privacy risks in user search histories. In Proceedings of the First International Workshop on Privacy and Security of Big Data. 29–36. doi:10.1145/2663715.2669609
[16] Som S Biswas. 2023. Role of Chat GPT in public health. Annals of Biomedical Engineering 51, 5 (2023), 868–869. doi:10.1007/s10439-023-03172-7
[17] Robert Booth and Dan Milmo. 2025. Chinese AI chatbot DeepSeek censors itself in real time, users report. The Guardian (28 Jan 2025). https://www.theguardian.com/technology/2025/jan/28/chinese-ai-chatbot-deepseek-censors-itself-in-realtime-users-report Accessed: 2025-11-18.
[18] Virginia Braun and Victoria Clarke. 2021. Thematic analysis: A practical guide. SAGE Publications Ltd. https://digital.casalini.it/9781526417305
[19] Giovanni Briganti. 2024. How ChatGPT works: A mini review. European Archives of Oto-Rhino-Laryngology 281, 3 (2024), 1565–1569. doi:10.1007/s00405-023-08337-7
[20] Elizabeth A Brown. 2020. The Femtech paradox: How workplace monitoring threatens women's equity. Jurimetrics 61 (2020), 289. https://www.proquest.com/docview/2568314630?sourcetype=Scholarly%20Journals
[21] Hannah Brown, Katherine Lee, Fatemehsadat Mireshghallah, Reza Shokri, and Florian Tramèr. 2022. What does it mean for a language model to preserve privacy? In Proceedings of the ACM Conference on Fairness, Accountability, and Transparency. 2280–2292. doi:10.1145/3531146.3534642
[22] Moritz Büchi, Natascha Just, and Michael Latzer. 2017. Caring is not enough: The importance of Internet skills for online privacy protection. Information, Communication & Society 20, 8 (2017), 1261–1278. doi:10.1080/1369118X.2016.1229001
[23] Duc Bui, Brian Tang, and Kang G Shin. 2022. Do opt-outs really opt me out? In Proceedings of the ACM SIGSAC Conference on Computer and Communications Security. 425–439. doi:10.1145/3548606.3560574
[24] Christina Burns, Angela Bakaj, Amonda Berishaj, Vagelis Hristidis, Pamela Deak, Ozlem Equils, et al. 2024. Use of generative AI for improving health literacy in reproductive health: Case study. JMIR Formative Research 8, 1 (2024), e59434. doi:10.2196/59434
[25] Marta Cantero Gamito and Christopher T Marsden. 2024. Artificial intelligence co-regulation? The role of standards in the EU AI Act. International Journal of Law and Information Technology 32 (2024), eaae011. doi:10.1093/ijlit/eaae011
[26] Jiaxun Cao, Hiba Laabadli, Chase H Mathis, Rebecca D Stern, and Pardis Emami-Naeini. 2024. "I deleted it after the overturn of Roe v. Wade": Understanding women's privacy concerns toward period-tracking apps in the post Roe v. Wade era. In Proceedings of the CHI Conference on Human Factors in Computing Systems. 1–22. doi:10.1145/3613904.3642042
[27] Ping Cao, Ganesh Acharya, Andres Salumets, and Masoud Zamani Esteki. 2025. Large language models to facilitate pregnancy prediction after in vitro fertilization. Acta Obstetricia et Gynecologica Scandinavica 104, 1 (2025), 6–12. doi:10.1111/aogs.14989
[28] Nicholas Carlini, Daphne Ippolito, Matthew Jagielski, Katherine Lee, Florian Tramer, and Chiyuan Zhang. 2022. Quantifying memorization across neural language models. arXiv preprint (2022). doi:10.48550/arXiv.2202.07646
[29] Nicholas Carlini, Florian Tramer, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, Katherine Lee, Adam Roberts, Tom Brown, Dawn Song, Ulfar Erlingsson, et al. 2021. Extracting training data from large language models. In USENIX Security Symposium (USENIX Security). 2633–2650. https://www.usenix.org/conference/usenixsecurity21/presentation/carlini-extracting
[30] Qian Qian Chen and Hyun Jung Park. 2021. How anthropomorphism affects trust in intelligent personal assistants. Industrial Management & Data Systems 121, 12 (2021), 2722–2737. doi:10.1108/IMDS-12-2020-0761
[31] Valerie Chepp. 2024. Comparative subgroup analysis in qualitative interview studies: Does sample size matter? Chest 166, 3 (2024), 423–424. doi:10.1016/j.chest.2024.05.016
[32] Joseph Chervenak, Harry Lieman, Miranda Blanco-Breindel, and Sangita Jindal. 2023. The promise and peril of using a large language model to obtain clinical information: ChatGPT performs strongly as a fertility counseling tool with limitations. Fertility and Sterility 120, 3 (2023), 575–583. doi:10.1016/j.fertnstert.2023.05.151
[33] Kovila PL Coopamootoo and Thomas Groß. 2014. Mental models: An approach to identify privacy concern and behavior. In Symposium on Usable Privacy and Security (SOUPS). 9–11.
[34] Robert E Crossler. 2010. Protection motivation theory: Understanding determinants to backing up personal data. In Hawaii International Conference on System Sciences. IEEE, 1–10. doi:10.1109/HICSS.2010.311
[35] Mary Crossley. 2005. Discrimination against the unhealthy in health insurance. U. Kan. L. Rev. 54 (2005), 73. https://scholarship.law.pitt.edu/fac_articles/292
[36] Martin Degeling, Christine Utz, Christopher Lentzsch, Henry Hosseini, Florian Schaub, and Thorsten Holz. 2018. We value your privacy... now take some cookies: Measuring the GDPR's impact on web privacy. arXiv preprint arXiv:1808.05096 (2018). doi:10.14722/ndss.2019.23378
[37] Caterina Delcea and Catalin Adrian Buzea. 2024. The medicine of the past, present, and future generations: From Sir William Osler to ChatGPT. Medicina Clínica Práctica 7, 3 (2024), 100433. doi:10.1016/j.mcpsp.2024.100433
[38] Elaine Denny and Annalise Weckesser. 2018. Qualitative research: What it is and what it is not. BJOG: An International Journal of Obstetrics and Gynaecology (2018). doi:10.1111/1471-0528.15198
[39] Umama Dewan, Cora Sula, and Nora Mcdonald. 2024. Teen reproductive health information seeking and sharing post-Roe. In Proceedings of the CHI Conference on Human Factors in Computing Systems. 1–12. doi:10.1145/3613904.3641934
[40] Brian C Drolet, Jayson S Marwaha, Brad Hyatt, Phillip E Blazar, and Scott D Lifchez. 2017. Electronic communication of protected health information: Privacy, security, and HIPAA compliance. The Journal of Hand Surgery 42, 6 (2017), 411–416. doi:10.1016/j.jhsa.2017.03.023
[41] Clare Duffy. 2024. Social media platforms are using what you create for artificial intelligence. Here's how to opt out. CNN (23 Sep 2024). https://edition.cnn.com/2024/09/23/tech/social-media-ai-data-opt-out/index.html Accessed: 2025-11-18.
[42] Christof Ebert and Panos Louridas. 2023. Generative AI for software practitioners. IEEE Software 40, 4 (2023), 30–38. doi:10.1109/MS.2023.3265877
[43] Casey Fiesler, Michaelanne Dye, Jessica L Feuston, Chaya Hiruncharoenvate, Clayton J Hutto, Shannon Morrison, Parisa Khanipour Roshan, Umashanthi Pavalanathan, Amy S Bruckman, Munmun De Choudhury, et al. 2017. What (or who) is public? Privacy settings and social media content sharing. In Proceedings of the ACM Conference on Computer Supported Cooperative Work and Social Computing. 567–580. doi:10.1145/2998181.2998223
[44] Lidia Flores, Zeyad Kelani, Carrie Chandwani, and Sean D Young. 2023. Internet searches for self-managed abortion after Roe v Wade overturned. JAMA Surgery 158, 9 (2023), 976–977. doi:10.1001/jamasurg.2023.2410
[45] Juliana Friend, Claire D Brindis, and Ushma D Upadhyay. 2025. Abortion AI: Toward an equity-centered research agenda for AI and abortion. Contraception (2025), 111241. doi:10.1016/j.contraception.2025.111241
[46] Aidan Gilson, Conrad W Safranek, Thomas Huang, Vimig Socrates, Ling Chi, Richard Andrew Taylor, David Chartash, et al. 2024. Correction: How does ChatGPT perform on the United States Medical Licensing Examination (USMLE)? The implications of large language models for medical education and knowledge assessment. JMIR Medical Education 10, 1 (2024), e57594. doi:10.2196/45312
[47] Antonio Ginart, Melody Guan, Gregory Valiant, and James Y Zou. 2019. Making AI forget you: Data deletion in machine learning. Advances in Neural Information Processing Systems 32 (2019). doi:10.48550/arXiv.1907.05012
[48] Abenezer Golda, Kidus Mekonen, Amit Pandey, Anushka Singh, Vikas Hassija, Vinay Chamola, and Biplab Sikdar. 2024. Privacy and security concerns in generative AI: A comprehensive survey. IEEE Access (2024). doi:10.1109/ACCESS.2024.3381611
[49] Google. 2025. More opportunities for your business on Google Search: AI in Search and brand discovery. Google Ads Blog. https://blog.google/products/ads-commerce/google-search-ai-brand-discovery/ Accessed: 2025-11-18.
[50] B Grace, Jill Shawe, and J Stephenson. 2023. A mixed methods study investigating sources of fertility and reproductive health information in the UK. Sexual & Reproductive Healthcare 36 (2023), 100826. doi:10.1016/j.srhc.2023.100826
[51] Pamela Grimm. 2010. Social desirability bias. Wiley International Encyclopedia of Marketing (2010). doi:10.1002/9781444316568.wiem02057
[52] Yawen Guo, Rachael Zehrung, Katie Genuario, Xuan Lu, Qiaozhu Mei, Yunan Chen, and Kai Zheng. 2023. Perspectives on privacy in the post-Roe era: A mixed-methods of machine learning and qualitative analyses of tweets. In AMIA Annual Symposium Proceedings, Vol. 2023. American Medical Informatics Association, 951. https://pmc.ncbi.nlm.nih.gov/articles/PMC10785943/
[53] AKM Bahalul Haque, AKM Najmul Islam, and Patrick Mikalef. 2023. Explainable Artificial Intelligence (XAI) from a user perspective: A synthesis of prior literature and problematizing avenues for future research. Technological Forecasting and Social Change 186 (2023), 122120. doi:10.1016/j.techfore.2022.122120
[54] Justin Hendrix. 2025. Transcript: Mark Zuckerberg announces major changes to Meta's content moderation policies and operations. Tech Policy Press 7 (2025), 2025. https://www.techpolicy.press/transcript-mark-zuckerberg-announces-major-changes-to-metas-content-moderation-policies-and-operations/
[55] Tad Hirsch. 2020. Practicing without a license: Design research as psychotherapy. In Proceedings of the CHI Conference on Human Factors in Computing Systems. 1–11. doi:10.1145/3313831.3376750
[56] T Hunter, M Seminatore, K Lindsay, and J Sanchez. 2023. AI and a self-managed abortion: Can ChatGPT provide assistance when no physician is present? Contraception 127 (2023), 110147. doi:10.1016/j.contraception.2023.110147
[57] Mohd Javaid, Abid Haleem, and Ravi Pratap Singh. 2023. ChatGPT for healthcare services: An emerging stage for an innovative perspective. BenchCouncil Transactions on Benchmarks, Standards and Evaluations 3, 1 (2023), 100105. doi:10.1016/j.tbench.2023.100105
[58] Kierra B. Jones. 2025. Stopping the abuse of tech in surveilling and criminalizing abortion. Center for American Progress (2025). https://www.americanprogress.org/article/stopping-the-abuse-of-tech-in-surveilling-and-criminalizing-abortion/ Accessed: 2025-02-13.
[59] Louise Kaplan. 2022. The overturn of Roe v. Wade: Reproductive health in the post-Roe era. The Nurse Practitioner 47, 10 (2022), 5–8. doi:10.1097/01.npr.0000873552.13618.91
[60] Paul Keller and Zuzanna Warso. 2023. Defining best practices for opting out of ML training. Open Future 28 (2023). https://openfuture.eu/publication/defining-best-practices-for-opting-out-of-ml-training/
[61] Keren Khromchenko, Sameeha Shaikh, Meghana Singh, Gregory Vurture, Rima A Rana, Jonathan D Baum, and Rima Rana. 2024. ChatGPT-3.5 versus Google Bard: Which large language model responds best to commonly asked pregnancy questions? Cureus 16, 7 (2024). doi:10.7759/cureus.65543
[62] Yubin Kim, Chanwoo Park, Hyewon Jeong, Yik Siu Chan, Xuhai Xu, Daniel McDuff, Cynthia Breazeal, and Hae Won Park. 2024. Adaptive collaboration strategy for LLMs in medical decision making. CoRR (2024). doi:10.48550/arXiv.2404.15155
[63] Matthew Chung Yi Koh, Jinghao Nicholas Ngiam, Paul Anantharajah Tambyah, and Sophia Archuleta. 2024. ChatGPT as a tool to improve access to knowledge on sexually transmitted infections. Sexually Transmitted Infections (2024). doi:10.1136/sextrans-2024-056217
[64] Matthew Kosinski. 2024. What is black box AI and how does it work? IBM Think (29 Oct 2024). https://www.ibm.com/think/topics/black-box-ai Accessed: 2025-11-20.
[65] Jabari Kwesi, Jiaxun Cao, Riya Manchanda, and Pardis Emami-Naeini. 2025. Exploring user security and privacy attitudes and concerns toward the use of general-purpose LLM chatbots for mental health. arXiv preprint arXiv:2507.10695 (2025). doi:10.48550/arXiv.2507.10695
[66] Phyu M Latt, Ei T Aung, Kay Htaik, Nyi N Soe, David Lee, Alicia J King, Ria Fortune, Jason J Ong, Eric PF Chow, Catriona S Bradshaw, et al. 2025. Evaluation of artificial intelligence (AI) chatbots for providing sexual health information: A consensus study using real-world clinical queries. BMC Public Health 25, 1 (2025), 1788. doi:10.1186/s12889-025-22933-8
[67] Michael Latzer, Natascha Just, and Florian Saurwein. 2013. Self- and co-regulation: Evidence, legitimacy and governance choice. In Routledge Handbook of Media Law. Routledge, 373–392. doi:10.4324/9780203074572
[68] Anna Leschanowsky, Silas Rech, Birgit Popp, and Tom Bäckström. 2024. Evaluating privacy, security, and trust perceptions in conversational AI: A systematic review. Computers in Human Behavior (2024), 108344. doi:10.1016/j.chb.2024.108344
[69] Haoran Li, Dadi Guo, Wei Fan, Mingshi Xu, Jie Huang, Fanpu Meng, and Yangqiu Song. 2023. Multi-step jailbreaking privacy attacks on ChatGPT. arXiv preprint arXiv:2304.05197 (2023). doi:10.48550/arXiv.2304.05197
[70] Mengjun Li and Ayoung Suh. 2022. Anthropomorphism in AI-enabled technology: A literature review. Electronic Markets 32, 4 (2022), 2245–2275. doi:10.1007/s12525-022-00591-7
[71] Muyuan Li, Haojin Zhu, Zhaoyu Gao, Si Chen, Le Yu, Shangqian Hu, and Kui Ren. 2014. All your location are belong to us: Breaking mobile social networks for automated user location tracking. In Proceedings of the ACM International Symposium on Mobile Ad Hoc Networking and Computing. 43–52. doi:10.48550/arXiv.1310.2547
[72] Yuanchun Li, Hao Wen, Weijun Wang, Xiangyu Li, Yizhen Yuan, Guohong Liu, Jiacheng Liu, Wenxing Xu, Xiang Wang, Yi Sun, et al. 2024. Personal LLM agents: Insights and survey about the capability, efficiency and security. arXiv preprint arXiv:2401.05459 (2024). doi:10.48550/arXiv.2401.05459
[73] Yiheng Liu, Hao He, Tianle Han, Xu Zhang, Mengyuan Liu, Jiaming Tian, Yutong Zhang, Jiaqi Wang, Xiaohui Gao, Tianyang Zhong, et al. 2025. Understanding LLMs: A comprehensive overview from training to inference. Neurocomputing 620 (2025), 129190. doi:10.48550/arXiv.2401.02038
[74] Yujian Liu, Yang Zhang, Tommi Jaakkola, and Shiyu Chang. 2024. Revisiting who's Harry Potter: Towards targeted unlearning from a causal intervention perspective. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. doi:10.48550/arXiv.2407.16997
[75] Wulf Livingston and Andrew Perkins. 2018. Participatory action research (PAR) research: Critical methodological considerations. Drugs and Alcohol Today 18, 1 (2018), 61–71. doi:10.1108/DAT-08-2017-0035
[76] George F Loewenstein, Elke U Weber, Christopher K Hsee, and Ned Welch. 2001. Risk as feelings. Psychological Bulletin 127, 2 (2001), 267. doi:10.1037/0033-2909.127.2.267
[77] Natasha Lomas. 2024. Ads might be coming to ChatGPT — despite Sam Altman not being a fan. TechCrunch (2 Dec 2024). https://techcrunch.com/2024/12/02/ads-might-be-coming-to-chatgpt-despite-sam-altman-not-being-a-fan/ Accessed: 2025-11-20.
[78] Rongjun Ma, Caterina Maidhof, Juan Carlos Carrillo, Janne Lindqvist, and Jose Such. 2025. Privacy perceptions of custom GPTs by users and creators. In Proceedings of the CHI Conference on Human Factors in Computing Systems. 1–18. doi:10.1145/3706598.3713540
[79] Subhankar Maity and Manob Jyoti Saikia. 2025. Large language models in healthcare and medical applications: A review. Bioengineering 12, 6 (2025), 631. doi:10.3390/bioengineering12060631
[80] Lisa Mekioussa Malki, Ina Kaleva, Dilisha Patel, Mark Warner, and Ruba Abu-Salma. 2024. Exploring privacy practices of female mHealth apps in a post-Roe world. In Proceedings of the CHI Conference on Human Factors in Computing Systems. 1–24. doi:10.1145/3613904.3642521
[81] Nora McDonald and Nazanin Andalibi. 2023. "I did watch 'The Handmaid's Tale'": Threat modeling privacy post-Roe in the United States. ACM Transactions on Computer-Human Interaction 30, 4 (2023), 1–34. doi:10.1145/3589960
[82] Nora McDonald and Andrea Forte. 2022. Privacy and vulnerable populations. In Modern Socio-Technical Perspectives on Privacy. Springer International Publishing, Cham, 337–363. doi:10.1007/978-3-030-82786-1_15
[83] Stephen McDonell. 2023. Elusive Ernie: China's new chatbot has a censorship problem. BBC News (9 Sep 2023). https://www.bbc.co.uk/news/world-asia-66727459
[84] Hayley V McMahon and Bryan D McMahon. 2024. Automating untruths: ChatGPT, self-managed medication abortion, and the threat of misinformation in a post-Roe world. Frontiers in Digital Health 6 (2024), 1287186. doi:10.3389/fdgth.2024.1287186
[85] Anjali Mediboina, Rajani Kumari Badam, and Sailaja Chodavarapu. 2024. Assessing the accuracy of information on medication abortion: A comparative analysis of ChatGPT and Google Bard AI. Cureus 16, 1 (2024). doi:10.7759/cureus.51544
[86] Yusuf Mehdi. 2023. Driving more traffic and value to publishers from the new Bing. Bing Search Blog. https://blogs.bing.com/search/march_2023/Driving-more-traffic-and-value-to-publishers-from-the-new-Bing Accessed: 2025-11-20.
[87] Maryam Mehrnezhad and Teresa Almeida. 2021. Caring for intimate data in fertility technologies. In Proceedings of the CHI Conference on Human Factors in Computing Systems. 1–11. doi:10.1145/3411764.3445132
[88] Maryam Mehrnezhad and Teresa Almeida. 2023. "My sex-related data is more sensitive than my financial data and I want the same level of security and privacy": User risk perceptions and protective actions in female-oriented technologies. In Proceedings of the European Symposium on Usable Security. 1–14. doi:10.48550/arXiv.2306.05956
[89] Michela Meister and Karen Levy. 2022. Digital security and reproductive rights: Lessons for feminist cyberlaw. In Feminist Cyberlaw, Meg Leta Jones and Amanda Levendowski (Eds.). University of California Press, Forthcoming. doi:10.2139/ssrn.4262774
[90] Xiaoxiao Meng and Jiaxin Liu. 2025. "Talk to me, I'm secure": Investigating information disclosure to AI chatbots in the context of privacy calculus. Online Information Review (2025). doi:10.1108/OIR-06-2024-0375
[91] Devadas Menon and Kalyan Shilpa. 2023. "Chatting with ChatGPT": Analyzing the factors influencing users' intention to use the Open AI's ChatGPT using the UTAUT model. Heliyon 9, 11 (2023). doi:10.1016/j.heliyon.2023.e20962
[92] Meta. 2025. How Meta uses information for generative AI models and features. https://www.facebook.com/privacy/genai/ Accessed: 2025-11-18.
[93] Meta. 2025. Improving your recommendations on our apps with AI. Meta Newsroom. https://about.fb.com/news/2025/10/improving-your-recommendations-apps-ai-meta/ Accessed: 2025-11-18.
[94] Microsoft. 2024. Generative AI online safety day global survey.
https://blogs.microsoft.com/on-the-issues/2024/02/05/generative-ai-online-safety-day-global-survey/ Accessed: 2025-02-03.
[95] Rhiana Mills, Emily Rose Mangone, Neal Lesh, Diwakar Mohan, and Paula Baraitser. 2023. Chatbots to improve sexual and reproductive health: Realist synthesis. Journal of Medical Internet Research 25 (2023), e46761. doi:10.2196/46761
[96] Timo Minssen, Effy Vayena, and I Glenn Cohen. 2023. The challenges for regulating medical use of ChatGPT and other large language models. JAMA 330, 4 (2023), 315–316. doi:10.1001/jama.2023.9651
[97] Sarika Mohan and Judy Jenkins. 2025. Flowing data: Women's views and experiences on privacy and data security when using menstrual cycle tracking apps. Oxford Open Digital Health 3 (2025), oqaf011. doi:10.1093/oodh/oqaf011
[98] Patrick Murmann and Farzaneh Karegar. 2021. From design requirements to effective privacy notifications: Empowering users of online services to make informed decisions. International Journal of Human–Computer Interaction 37, 19 (2021), 1823–1848. doi:10.1080/10447318.2021.1913859
[99] AF Nabhan, G Mburu, F Elshafeey, R Magdi, M Kamel, M Elshebiny, YG Abuelnaga, M Ghonim, MH Abdelhamid, Mo Ghonim, et al. 2022. Women's reproductive span: A systematic scoping review. Human Reproduction Open 2022, 2 (2022), hoac005. doi:10.1093/hropen/hoac005
[100] Jacob F Oeding, Amy Z Lu, Michael Mazzucco, Michael C Fu, Samuel A Taylor, David M Dines, Russell F Warren, Lawrence V Gulotta, Joshua S Dines, and Kyle N Kunze. 2025. ChatGPT-4 performs clinical information retrieval tasks using consistently more trustworthy resources than does Google search for queries concerning the Latarjet procedure. Arthroscopy: The Journal of Arthroscopic & Related Surgery 41, 3 (2025), 588–597. doi:10.1016/j.arthro.2024.05.025
[101] Francisco M Olmos-Vega, Renée E Stalmeijer, Lara Varpio, and Renate Kahlke. 2023. A practical guide to reflexivity in qualitative research: AMEE Guide No. 149. Medical Teacher 45, 3 (2023), 241–251. doi:10.1080/0142159X.2022.2057287
[102] OpenAI. 2024. Rest of world privacy policy. https://openai.com/policies/row-privacy-policy/ Accessed: 2024-02-13.
[103] OpenAI. 2025. OpenAI ROW privacy policy. https://openai.com/policies/row-privacy-policy/ Accessed: 2025-08-22.
[104] OpenAI. 2025. Transparency & content moderation. https://openai.com/transparency-and-content-moderation/ Accessed: 2025-11-18.
[105] OpenAI. 2025. Usage policies. https://openai.com/policies/usage-policies/ Accessed: 2025-11-18.
[106] Mengxue Ou and Shirley S Ho. 2024. Factors associated with information credibility perceptions: A meta-analysis. Journalism & Mass Communication Quarterly 101, 2 (2024), 346–372. doi:10.1177/10776990231222556
[107] Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. 2022. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35 (2022), 27730–27744. doi:10.48550/arXiv.2203.02155
[108] Bahar Yuksel Ozgor and Melek Azade Simavi. 2024. Accuracy and reproducibility of ChatGPT's free version answers about endometriosis. International Journal of Gynecology & Obstetrics 165, 2 (2024), 691–695. doi:10.1002/ijgo.15309
[109] Cliodhna O'Connor and Helene Joffe. 2020. Intercoder reliability in qualitative research: Debates and practical guidelines. International Journal of Qualitative Methods 19 (2020), 1609406919899220. doi:10.1177/1609406919899220
[110] Kate O'Flaherty. 2025. Meta AI confirms your data will be used for ads — here's how and when. Forbes (6 Oct 2025). https://www.forbes.com/sites/kateoflahertyuk/2025/10/06/meta-ai-confirms-your-data-will-be-used-for-ads-heres-how-and-when/ Accessed: 2025-11-18.
[111] Anwesan Pal, Radhika Bhargava, Kyle Hinsz, Jacques Esterhuizen, and Sudipta Bhattacharya. 2024. The empirical impact of data sanitization on language models. arXiv preprint (2024). doi:10.48550/arXiv.2411.05978
[112] Lawrence A Palinkas, Sarah M Horwitz, Carla A Green, Jennifer P Wisdom, Naihua Duan, and Kimberly Hoagwood. 2015. Purposeful sampling for qualitative data collection and analysis in mixed method implementation research. Administration and Policy in Mental Health and Mental Health Services Research 42 (2015), 533–544. doi:10.1007/s10488-013-0528-y
[113] Christina C Pallitto, Claudia García-Moreno, Henrica AFM Jansen, Lori Heise, Mary Ellsberg, Charlotte Watts, et al. 2013. Intimate partner violence, abortion, and unintended pregnancy: Results from the WHO multi-country study on women's health and domestic violence. International Journal of Gynecology & Obstetrics 120, 1 (2013), 3–9. doi:10.1016/j.ijgo.2012.07.003
[114] Anisha V Patel, Sona Jasani, Abdelrahman AlAshqar, Aisvarya Panakam, Kanhai Amin, and Sangini S Sheth. 2024. Evaluating the accuracy and utility of large language models in answering common contraception questions [ID 2683633]. Obstetrics & Gynecology 143, 5S (2024), 12S. doi:10.1097/01.AOG.0001013000.12240.72
[115] Anisha V Patel, Aisvarya Panakam, Kanhai Amin, Rushabh H Doshi, Ankita Patil, and Sangini S Sheth. 2024. Comparative readability assessment of four large language models in answers to common contraception questions [ID 2683638]. Obstetrics & Gynecology 143, 5S (2024), 12S. doi:10.1097/01.AOG.0001013004.01563.47
[116] Andrew Pearson. 2019. Personalisation the artificial intelligence way. Journal of Digital & Social Media Marketing 7, 3 (2019), 245–269. doi:10.69554/JJGR7331
[117] Vilius Petkauskas. 2023. Lessons learned from ChatGPT's Samsung leak. Cybernews (2023). https://cybernews.com/security/chatgpt-samsung-leak-explained-lessons/ Accessed: 2025-08-22.
[118] Kummaragunta Joel Prabhod. 2024. Leveraging generative AI and foundation models for personalized healthcare: Predictive analytics and custom treatment plans using deep learning algorithms. Journal of AI in Healthcare and Medicine 4, 1 (2024), 1–23. https://healthsciencepub.com/index.php/jaihm/article/view/23
[119] Celia Rosas. 2019. The future is FemTech: Privacy and data security issues surrounding FemTech applications. Hastings Bus. LJ 15 (2019), 319. https://repository.uchastings.edu/hastings_business_law_journal/vol15/iss2/5
[120] Emrullah Şahin, Naciye Nur Arslan, and Durmuş Özdemir. 2025. Unlocking the black box: An in-depth review on interpretability, explainability, and reliability in deep learning. Neural Computing and Applications 37, 2 (2025), 859–965. doi:10.1007/s00521-024-10437-2
[121] Benjamin Saunders, Julius Sim, Tom Kingstone, Shula Baker, Jackie Waterfield, Bernadette Bartlam, Heather Burroughs, and Clare Jinks. 2018. Saturation in qualitative research: Exploring its conceptualization and operationalization. Quality & Quantity 52, 4 (2018), 1893–1907. doi:10.1007/s11135-017-0574-8
[122] Allysan Scatterday. 2021. This is no ovary-action: FemTech apps need stronger regulations to protect data and advance public health goals. NCJL & Tech. 23 (2021), 636. https://scholarship.law.unc.edu/ncjolt/vol23/iss3/6
[123] Nicholas Sellke, Kimberly Tay, Helen H Sun, Alexander Tatem, Aram Loeb, and Nannan Thirumavalavan. 2022. The unprecedented increase in Google searches for "vasectomy" after the reversal of Roe vs. Wade. Fertility and Sterility 118, 6 (2022), 1186–1188. doi:10.1016/j.fertnstert.2022.08.859
[124] Yashothara Shanmugarasa, Ming Ding, MA Chamikara, and Thierry Rakotoarivelo. 2025. SoK: The privacy paradox of large language models: Advancements, privacy risks, and mitigation. arXiv preprint (2025). doi:10.48550/arXiv.2506.12699
[125] Filipo Sharevski, Jennifer Vander Loop, Peter Jachim, Amy Devine, and Emma Pieroni. 2023. Talking abortion (mis)information with ChatGPT on TikTok. In IEEE European Symposium on Security and Privacy Workshops (EuroS&PW). IEEE, 594–608. doi:10.48550/arXiv.2303.13524
[126] Robin Staab, Mark Vero, Mislav Balunović, and Martin Vechev. 2023. Beyond memorization: Violating privacy via inference with large language models. arXiv preprint arXiv:2310.07298 (2023). doi:10.48550/arXiv.2310.07298
[127] Guangzhi Sun, Potsawee Manakul, Xiao Zhan, and Mark Gales. 2025. Unlearning vs. obfuscation: Are we truly removing knowledge? In Proceedings of the Conference on Empirical Methods in Natural Language Processing. doi:10.48550/arXiv.2505.02884
[128] Daryl O Traylor, Keith V Kern, Eboni E Anderson, and Robert Henderson. 2025. Beyond the screen: The impact of generative artificial intelligence (AI) on patient learning and the patient-physician relationship. Cureus 17, 1 (2025). doi:10.7759/cureus.76825
[129] Beatrice Tylstedt, Maria Normark, and Lina Eklund. 2023. Reimagining the cycle: Interaction in self-tracking period apps and menstrual empowerment. Frontiers in Computer Science 5 (2023), 1166210. doi:10.3389/fcomp.2023.1166210
[130] Sandra Wachter. 2023. Limitations and loopholes in the EU AI Act and AI Liability Directives: What this means for the European Union, the United States, and beyond. Yale JL & Tech. 26 (2023), 671. doi:10.2139/ssrn.4924553
[131] Jordan Joseph Wadden. 2022. Defining the undefinable: The black box problem in healthcare artificial intelligence. Journal of Medical Ethics 48, 10 (2022), 764–768. doi:10.1136/medethics-2021-107529
[132] Christopher Wan, Angelo Cadiente, Keren Khromchenko, Natalie Friedricks, Rima A Rana, and Jonathan D Baum. 2023. ChatGPT: An evaluation of AI-generated responses to commonly asked pregnancy questions. Open Journal of Obstetrics and Gynecology 13, 9 (2023), 1528–1546. doi:10.4236/ojog.2023.139129
[133] Lu Wang, Max Song, Rezvaneh Rezapour, Bum Chul Kwon, and Jina Huh-Yoo. 2023. People's perceptions toward bias and related concepts in large language models: A systematic review. arXiv preprint (2023). doi:10.48550/arXiv.2309.14504
[134] Xingyi Wang, Xiaozheng Wang, Sunyup Park, and Yaxing Yao. 2025. Mental models of generative AI chatbot ecosystems. In Proceedings of the International Conference on Intelligent User Interfaces. 1016–1031. doi:10.1145/3708359.3712125
[135] Katerina Yordanova and Natalie Bertels. 2024. Regulating AI: Challenges and the way forward through regulatory sandboxes. Multidisciplinary Perspectives on Artificial Intelligence and the Law 441 (2024). doi:10.1007/978-3-031-41264-6_23
[136] Yaman Yu, Tanusree Sharma, Melinda Hu, Justin Wang, and Yang Wang. 2025. Exploring parent-child perceptions on safety in generative AI: Concerns, mitigation strategies, and design implications. In IEEE Symposium on Security and Privacy (SP). IEEE, 2735–2752. doi:10.48550/arXiv.2406.10461
[137] Carlos Zednik. 2021. Solving the black box problem: A normative framework for explainable artificial intelligence. Philosophy & Technology 34, 2 (2021), 265–288. doi:10.1007/s13347-019-00382-7
[138] Denisa Oana Zelinschi, Alexandra Ursache, Eduard Cristian Mihoci, Emil Anton, Dan Bogdan Navolan, and Dragos Nemescu. 2025. Generative artificial intelligence applications in reproductive health: Opportunities and challenges. (2025). doi:10.20944/preprints202507.2446.v1
[139] Xiao Zhan, Juan-Carlos Carrillo, William Seymour, and Jose Such. 2025. Malicious LLM-based conversational AI makes users reveal personal information. In USENIX Security Symposium. doi:10.48550/arXiv.2506.11680
[140] Chiyuan Zhang, Daphne Ippolito, Katherine Lee, Matthew Jagielski, Florian Tramèr, and Nicholas Carlini. 2023. Counterfactual memorization in neural language models. Advances in Neural Information Processing Systems 36 (2023), 39321–39362. doi:10.48550/arXiv.2112.12938
[141] Zhiping Zhang, Michelle Jia, Hao-Ping Lee, Bingsheng Yao, Sauvik Das, Ada Lerner, Dakuo Wang, and Tianshi Li. 2024. "It's a fair game", or is it? Examining how users navigate disclosure risks and benefits when using LLM-based conversational agents. In Proceedings of the CHI Conference on Human Factors in Computing Systems. 1–26. doi:10.48550/arXiv.2309.11653
[142] Tao Zhou and Songtao Li. 2024. Understanding user switch of information seeking: From search engines to generative AI. Journal of Librarianship and Information Science (2024), 09610006241244800. doi:10.1177/09610006241244800

A Study Materials

A.1 Screening Survey

[Use of GenAI]. In this section, we will ask you questions about your use of generative AI tools. These are programs that create new content, such as text, images, video, or other data. Examples include OpenAI's ChatGPT, Microsoft Copilot, Google's Gemini, and Meta's Llama 2. Please note that in this section, we are NOT referring to voice assistants such as Amazon Alexa, Apple Siri, Samsung Bixby, etc.

1. GenAI usage – Have you ever used a generative AI tool?
- Yes, I have used a generative AI tool (1)
- No, I have not used a generative AI tool (2)

2. How often do you use a generative AI tool?
- Daily (1)
- Weekly (2)
- Monthly (3)
- Other (please specify) (4)
- Prefer not to say (5)

3. Have you ever used a generative AI tool to seek information or advice about sexual and reproductive health?
- Yes, I have used a generative AI tool for this purpose (1)
- No, I have not used a generative AI tool for this purpose (2)

4. Please specify the generative AI tool(s) you have used for seeking sexual and reproductive health information. (Check all that apply)
- OpenAI's ChatGPT (1)
- Google's Gemini (2)
- Microsoft Copilot (3)
- Meta's Llama 2 (4)
- Other (please specify) (5)

5. What specific sexual or reproductive health topic(s) did you seek information about using a generative AI tool?
(Check all that apply)
- Contraception methods (1)
- Pregnancy-related information (2)
- Sexual health and wellness (3)
- Menstrual health and fertility (4)
- Other (please specify) (5)
- Prefer not to say (6)

6. Please use the slider below to indicate your level of concern about how generative AI tools handle your data, with 0 meaning "not at all concerned" and 10 meaning "extremely concerned."

[Use of Voice Assistants]. In this section, we will ask you questions about your use of voice assistants. Voice assistants are devices or applications that can understand human language and respond to questions, provide information, or assist with tasks using speech. They can be found on mobile phones, smart speakers, smartwatches, and other devices. Some well-known examples include Amazon Alexa, Google Assistant, Apple's Siri, Microsoft Cortana, Samsung Bixby, and Blackberry Assistant. Please note that in this section, we are NOT referring to generative artificial intelligence (AI) tools such as ChatGPT, Microsoft Bing, or similar systems.

7. Have you ever used a voice assistant?
- Yes, I have used a voice assistant (1)
- No, I have not used a voice assistant (2)

8. How frequently do you use a voice assistant?
- Daily (1)
- Weekly (2)
- Monthly (3)
- Other (please specify) (4)
- Prefer not to say (5)

9. Have you ever used a voice assistant to seek information or advice about sexual and reproductive health?
- Yes, I have used a voice assistant for this purpose (1)
- No, I have not used a voice assistant for this purpose (2)

10. Please specify which voice assistant(s) you have used to seek information about sexual and reproductive health. (Check all that apply.)
- Amazon Alexa (1)
- Google Assistant (2)
- Apple's Siri (3)
- Microsoft Cortana (4)
- Samsung Bixby (5)
- Blackberry Assistant (6)
- Other (please specify) (7)

11. What specific sexual and reproductive health topic(s) have you sought information about using a voice assistant? (Check all that apply.)
- Contraception methods (1)
- Pregnancy-related information (2)
- Sexual health and wellness (3)
- Menstrual health and fertility (4)
- Other (please specify) (5)
- Prefer not to say (6)

12. Please use the slider below to indicate your level of concern about how voice assistants handle your data, with 0 meaning "not at all concerned" and 10 meaning "extremely concerned."

[Abortion Attitudes].

13. Please indicate the extent to which you agree or disagree with the following statement: "Abortion is health care."
- Strongly Disagree (1)
- Disagree (2)
- Neither Agree nor Disagree (3)
- Agree (4)
- Strongly Agree (5)

[Technical Background].

14. Do you have education or work experience in any information technology (IT) fields, such as Computer Science, Software Engineering, or App Development?
- Yes (1)
- No (2)

15. Do you regularly use any of the following methods to protect your privacy?
- Virtual Private Networks (VPNs) (1)
- End-to-end encryption (e.g., encrypted emails) (2)
- Private browsing (e.g., Incognito mode) (3)
- Tor Browser (4)
- DuckDuckGo search engine (5)
- Password managers (6)
- Other (please specify) (7)
- None of the above (8)

[Demographics].

16. Next, we will ask several questions about diversity and inclusion. Your participation in this study will not be affected by your answers. Please select the group that best describes you from the options below.
- White or Caucasian (1)
- Black or African American (2)
- Asian or Asian American (3)
- Native Hawaiian or Other Pacific Islander (4)
- American Indian or Alaska Native (5)
- Arab or North African (6)
- Mixed or multiple groups (please specify) (7)
- Prefer to self-describe (8)
- Prefer not to say (9)

17. What is your gender?
- Woman (1)
- Man (2)
- Non-binary or prefer to self-describe (3)
- Prefer not to say (4)

18. What is the highest level of education you have completed?
- Less than high school (1)
- High school/GED or equivalent (2)
- Some college or university but no degree (3)
- Associate degree (4)
- Bachelor's degree or equivalent (5)
- Master's degree or equivalent (6)
- Doctoral or professional degree (PhD, JD, MD) (7)
- Other (please specify) (8)
- Prefer not to say (9)

19. How would you describe your current employment status?
- Unemployed (1)
- Employed full-time (2)
- Employed part-time or casually (3)
- Self-employed (4)
- Student (5)
- Retired (6)
- Other (please specify) (7)
- Prefer not to say (8)

20. Please select the option that best describes your total annual household income after taxes.
- Less than $10,000 (1)
- $10,000–$15,999 (2)
- $16,000–$19,999 (3)
- $20,000–$29,999 (4)
- $30,000–$39,999 (5)
- $40,000–$49,999 (6)
- $50,000–$59,999 (7)
- $60,000–$69,999 (8)
- $70,000–$79,999 (9)
- $80,000–$89,999 (10)
- $90,000–$99,999 (11)
- $100,000–$149,999 (12)
- More than $150,000 (13)
- Prefer not to say (14)

[Final Question]. Please briefly explain why you are interested in participating in this study.

A.2 Interview Script

A.2.1 General Views and Experiences. In this interview, I will focus on your use of generative AI tools specifically for seeking information or advice on any aspect of sexual and reproductive health (SRH), including menstrual health, fertility, sexual health, pregnancy, contraception, and abortion. To clarify, generative AI tools are technologies that generate new data; examples include ChatGPT, Microsoft Copilot, Google Gemini, and others. In the screener, you reported that you have used [tools]; is that correct? [If multiple tools have been used]: Which of these tools have you used the most for seeking information about SRH, and why? I will focus my questions on [tool] specifically; however, please feel free to share your experiences with [other tools] whenever you think it is relevant or important.

1. Can you describe a situation in which you used [tool] to seek information or advice related to SRH?

2. What motivated you to use [tool] specifically for seeking SRH information or advice?
- Why did you choose [tool] instead of other generative AI tools?
- [If the tool is ChatGPT]: Did you use a custom version of ChatGPT or the general version?

3. Have you noticed any benefits of using [tool] for seeking SRH information?

4. Have you noticed any drawbacks or challenges when using [tool] for seeking SRH information?

5. What other sources of SRH information do you currently use or have used in the past?

6. Did you use [tool] as your primary source of information on SRH?

7. How do you think [tool] differs from other sources of information for seeking SRH information?

8. Are there situations or circumstances in which you prefer using [tool] over other sources of SRH information?

9. In contrast, are there situations or circumstances in which you prefer using other sources of information over [tool] for SRH queries?

A.2.2 Interaction with GenAI Tools.
10. Can you describe the process you followed when interacting with [tool] to seek SRH information?
- What prompts did you use when interacting with [tool] to seek SRH information?
- What information, if any, did you share with [tool] when seeking SRH information (for example, personal information or health- and non-health-related data)?

11. How effective was [tool] in helping you accomplish your goal, and why?

12. How did interacting with [tool] compare to interacting with a human, such as a friend or healthcare professional, when seeking SRH information or advice?

I will now ask several questions about the information generated by [tool] and how you perceive its quality. I will clarify any terms before each question.

14. How complete do you think the information on SRH provided by [tool] was, and why? [Provide clarification: information is complete when all relevant elements and details are present and no essential information is missing]

15. How consistent do you think the information on SRH provided by [tool] was compared to other sources, and why? [Provide clarification: consistency means that the information given by [tool] does not contradict information from other sources]

16. How authentic do you think the information on SRH provided by [tool] was, and why? [Provide clarification: information is authentic when it has an origin supported by unquestionable evidence and is verified]
- Where do you think the SRH information provided by [tool] came from?

17. How authoritative do you think [tool] is as a source of information on SRH, and why? [Provide clarification: authoritative sources are widely recognized or generated by experts in a specific field]

18. Did you verify the SRH information provided by [tool], and why or why not?

19. How did you use or act on the SRH information you received from [tool]?
- Did you make any health-related decisions based on the information you received from [tool]?

20. In your opinion, are there any potential safety risks or harms posed by using [tool] to seek SRH information? If so, please describe them.

21. How do you feel about these safety risks?

22. How do you think the potential safety risks compare to, or differ between, [tool] and other sources of SRH information, such as web-based sources or healthcare professionals?

23. What potential consequences or harms to users do you think may result from these safety risks?

24. Do you take any active steps to protect yourself against the safety risks of seeking SRH information using [tool]? If so, please describe these steps.

A.2.3 Beliefs About Data Practices. I will now ask you questions about your understanding of the data collected by [tool] and how it flows within [tool]'s system. By "data flow," I mean how the information collected by [tool] moves from one place to another and who might receive it. There are no right or wrong answers. Please try to use your imagination and think aloud. Don't worry about specific terminology.

25. What types of data do you think [tool] collected when you used it to seek SRH information?
- Are both the prompts you enter and the responses generated by [tool] collected?
- What other data, if any, do you think [tool] has collected about you in general, for example, during registration or while using [tool]? What data, if any, do you think [tool] collects automatically on its own?
- Where do you think this data is stored, and for how long?
- Are there any measures you believe were taken to protect this data?
26. How do you feel about the collection of your conversation data?
- What do you think the purpose or purposes of [tool] collecting conversation data are?
- Who do you think has access to your conversation data, if anyone, and for what purposes?

27. How do you feel about the collection of your account data?
- What do you think the purpose or purposes of [tool] collecting this data are?
- Who do you think has access to the data collected by [tool], if anyone, and for what purposes?

28. What do you believe happens to the data you provide to [tool] when seeking SRH information, before it generates a response? How do you think this data is processed by [tool] to generate a response?

29. What do you think [tool] learned or inferred about you or others based on your SRH-related interactions?
- How do you feel about [tool] learning or inferring this?
- How confident do you feel that the information learned or inferred is accurate?
- What do you think the purpose or purposes of [tool] learning and inferring this data about you or others are?

A.2.4 Data Deletion.

30. Have you ever deleted any of the data collected or used by [tool]?
- [If yes] What data did you delete?

31. Why did you delete your data?

32. How did you delete your data, and can you describe the process?

33. What do you think happened to your data after you deleted it?

34. Did you export a copy of your data before deleting it, and why or why not?

[Hypothetical: If participants did not delete data collected or used by [tool], ask the following questions:]

35. What would motivate you to delete data collected or used by [tool]?
- What type of data would you choose to delete?

36. How would you delete each of these data types, and can you describe the process?

37. What do you think would happen to your data after you deleted it?

38. Would you export a copy of your data before deleting it, and why or why not?

A.2.5 Privacy Risks and Mitigation Strategies. Privacy refers to an individual's right to control their personal information, including how it is collected, used, shared, and who can access it. Data security refers to the protection of data from potential threats.

39. In your opinion, are there any potential privacy or security risks when using [tool] or other generative AI tools to seek sexual and reproductive health information?

40. Who do you think would benefit from any privacy or security risks, if at all?

41. How do you feel about the privacy and security risks posed by seeking SRH information in [tool]?

42. Have you ever wanted to ask or share something with [tool] but chose not to because of privacy or security concerns?

43. How do you think the potential privacy and security risks of seeking SRH information compare to, or differ between, [tool] and other sources, such as web-based platforms or healthcare providers?
- Are there any privacy or security risks that are specific to [tool] and generative AI chatbots in general?

44. What potential consequences or harms to users do you think may result from the privacy and security risks of using [tool] to seek SRH information?

45. Do you take any active steps to protect yourself against the privacy and security risks of seeking SRH information in [tool]? If so, please describe these steps.

46. Have you ever interacted with the privacy or security settings of [tool]?
- Are you aware of, or have you used, [each available privacy or security setting in the GenAI chatbot used by the participant]?

47. Are there any other potential risks, besides privacy, security, and safety, that you think might arise from using [tool] to seek SRH information that I haven't asked you about?

48. Overall, do you think [tool] is worth using for this purpose, considering the potential risks you described? Why or why not?

A.2.6 Privacy Legislation and Abortion.

49. Are you aware of any privacy laws in your country or [state] that protect people's digital data?
- Who do you think this law or these laws are designed to protect?
- What protections does this law, or do these laws, offer?
- Does this law, or do these laws, protect people's personal information or digital data specifically related to SRH when using [tool] or other generative AI tools?

50. According to your Prolific/screener data, you live in [state]. Can you describe the current legal status of abortion there?
- [If the person is pro-choice-oriented]: Would you use [tool] to seek information or advice about abortion care access, and why or why not?
- [If the person is not pro-choice-oriented]: What are your thoughts on using generative AI tools, such as [tool], to seek information or advice about abortion care access?

51. Can you describe any specific ways you believe abortion-related conversations might be processed, stored, used, or shared by [tool]?

52. To what extent do you feel confident that [privacy law] adequately protects people's abortion-related conversations in [tool] or other generative AI tools, and why or why not?

A.2.7 Suggestions and Recommendations.

53. Based on our discussion, are there any specific improvements you would like the developers of [tool] or other generative AI tools to make to enhance your privacy, security, or safety when seeking SRH information?

54. What other measures would you like governments and policymakers to take to address users' concerns regarding [tool] and other generative AI tools?

55. Would you like to share or discuss any additional thoughts?
