A Scoping Review of AI-Driven Digital Interventions in Mental Health Care: Mapping Applications Across Screening, Support, Monitoring, Prevention, and Clinical Education


Authors: Yang Ni, Fanli Jia

Yang Ni
School of International and Public Affairs, Columbia University, New York
yang.ni2@columbia.edu

Fanli Jia
Department of Psychology, Seton Hall University, South Orange
fanli.jia@shu.edu

doi.org/10.3390/healthcare13101205

Abstract

Background/Objectives: Artificial intelligence (AI)-enabled digital interventions are increasingly used to expand access to mental health care. This PRISMA-ScR scoping review maps how AI technologies support mental health care across five phases: pre-treatment (screening), treatment (therapeutic support), post-treatment (monitoring), clinical education, and population-level prevention.

Methods: We synthesized findings from 36 empirical studies published through January 2024 that implemented AI-driven digital tools, including large language models (LLMs), machine learning (ML) models, and conversational agents. Use cases include referral triage, remote patient monitoring, empathic communication enhancement, and AI-assisted psychotherapy delivered via chatbots and voice agents.

Results: Across the 36 included studies, the most common AI modalities included chatbots, natural language processing tools, machine learning and deep learning models, and large language model-based agents. These technologies were predominantly used for support, monitoring, and self-management purposes rather than as standalone treatments. Reported benefits included reduced wait times, increased engagement, and improved symptom tracking. However, recurring challenges such as algorithmic bias, data privacy risks, and workflow integration barriers highlight the need for ethical design and human oversight.
Conclusion: By introducing a four-pillar framework, this review offers a comprehensive overview of current applications and future directions in AI-augmented mental health care. It aims to guide researchers, clinicians, and policymakers in developing safe, effective, and equitable digital mental health interventions.

Keywords: digital mental health · artificial intelligence · conversational agents · large language models · machine learning · chatbots · AI-assisted psychotherapy · mental health screening · remote patient monitoring · scoping review

1 Introduction

In recent decades, mental health disorders have surged despite economic and technological progress, with barriers such as stigma, cost, and professional shortages continuing to hinder treatment access [1,2,3]. Digital health technologies offer promise in addressing these challenges [4], with teletherapy, mental health apps, and computerized cognitive behavioral therapy showing effectiveness [5]. The COVID-19 pandemic further accelerated the use of digital mental health tools, revealing unmet needs at the intersection of technology and psychotherapy [6,7]. This growing demand has sparked increased interest in the role of conversational artificial intelligence in mental health applications [8,9], with research exploring its potential to enhance psychotherapy and improve care delivery [10,11,12,13]. Researchers have increasingly focused on leveraging conversational artificial intelligence to support psychotherapy effectiveness, as the mental health sector grapples with rising demand and the need for innovative solutions [12,13,14,15,16].
Since the public release of ChatGPT (Version GPT-4, United States of America) in early 2023, it has become the first conversational artificial intelligence tool to achieve global mainstream use, reshaping approaches to learning, communication, and problem solving [8,17]. Research on ChatGPT has expanded rapidly across disciplines, especially in education, medicine, and psychology, highlighting its growing potential to advance mental health services [10,11,12,13]. Artificial intelligence-driven digital interventions in mental health refer to software systems or mobile applications that embed artificial intelligence techniques to deliver, support, or evaluate mental health services [6,12]. These include conversational artificial intelligence agents that interact with users through natural language, ranging from simple FAQ-style or rule-based chatbots to more advanced multi-turn dialogue systems capable of handling complex communication tasks [18,19]. Natural language processing techniques enable these agents to parse user input, detect sentiment, and extract key emotional cues [20]. Machine learning models, such as classification and regression algorithms, and deep learning networks, such as convolutional and recurrent neural networks, power predictive and monitoring tools to classify diagnoses, forecast risks, and tailor treatment recommendations based on user data [20]. Large language models such as GPT and BERT, which belong to a subclass of deep learning models built on transformer architectures with self-attention mechanisms, expand capabilities by generating and comprehending coherent and context-rich text, opening new possibilities for nuanced therapeutic dialogue and personalized content creation [18,19].
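As a concrete illustration of the sentiment-detection step described above, the following minimal Python sketch scores user text against small emotional-cue lexicons. The word lists, function name, and scoring rule are invented for illustration and do not come from any system in the reviewed studies; production tools use trained NLP models rather than fixed lexicons.

```python
# Minimal lexicon-based emotional-cue detector (illustrative only).
# The cue lists below are invented examples, not clinical vocabularies.
NEGATIVE_CUES = {"hopeless", "anxious", "worthless", "tired", "alone"}
POSITIVE_CUES = {"better", "hopeful", "calm", "grateful", "rested"}

def detect_emotional_cues(text):
    """Tokenize user input and tally positive/negative emotional cues."""
    tokens = [t.strip(".,!?").lower() for t in text.split()]
    neg = [t for t in tokens if t in NEGATIVE_CUES]
    pos = [t for t in tokens if t in POSITIVE_CUES]
    score = len(pos) - len(neg)
    label = "positive" if score > 0 else "negative" if score < 0 else "neutral"
    return {"positive": pos, "negative": neg, "label": label}

cues = detect_emotional_cues("I feel hopeless and alone lately")  # label: "negative"
```

A real conversational agent would feed such cues into its dialogue policy (for example, escalating to crisis resources when negative cues cluster), which is the kind of pipeline the reviewed systems implement with learned models.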
Artificial intelligence has emerged as a promising tool to augment human therapists [12], although its adoption challenges traditional care models and raises concerns regarding efficacy, ethics, privacy, and the interpretation of human mental health experiences [21]. This review aims to provide a comprehensive analysis of conversational artificial intelligence in mental health care by mapping empirical evidence across different clinical phases. It offers insights into current applications, challenges, and future opportunities. Although previous reviews have focused narrowly on large language model capabilities, ethical concerns, or specific generative artificial intelligence applications, this scoping review provides a broader synthesis by mapping artificial intelligence-driven digital interventions across five clinical phases of mental health care: (1) pre-treatment, (2) treatment, (3) post-treatment, (4) clinical education, and (5) general improvement and prevention. We organize this landscape into a unified life cycle framework grounded in empirical evidence. Four research questions guide our analysis:

Research Question 1 (RQ1): Within each clinical phase, which artificial intelligence modalities (rule-based chatbots, natural language processing, machine learning or deep learning models, and large language models) power digital interventions, and what evidence exists regarding their efficacy and limitations?

Research Question 2 (RQ2): For each phase, which artificial intelligence-driven tools demonstrate the greatest impact, and what performance metrics and barriers have been reported?

Research Question 3 (RQ3): What are the general strengths, weaknesses, opportunities, and threats of utilizing artificial intelligence-driven interventions in mental health care, both overall and within each clinical phase?
Research Question 4 (RQ4): Which artificial intelligence technologies and applications appear most mature today, which emerging trends warrant priority research, and which technical, clinical, and policy challenges must be addressed to advance artificial intelligence-driven digital interventions?

By linking conceptual insights with empirical outcomes, this review complements and extends prior work to provide an integrated reference for research, practice, and policy. Finally, it aims to equip future researchers and practitioners to identify promising development pathways, and it positions this review as a key reference for understanding the full spectrum of artificial intelligence applications in mental health care.

2 Methods

2.1 Research Aims

Various types of AI technologies are utilized within the broad context of mental health care. Many of these technologies are specifically linked to the conversational AI interface, which engages users to provide a wide range of support. Various technologies, including AI chatbots, different language models, prediction modeling, sentiment analysis, and recommender systems, have been implemented in health care settings [22,23,24,25]. These technologies are making significant advancements in mental health care by improving diagnostic accuracy, enhancing personalized treatment, providing insights and recommendations to clinicians, tailoring services to individual needs, and offering accessible and cost-effective mental health support to everyone [26,27,28,29]. As the field of computer science continues to progress, there is a transformative opportunity for the mental health care field to understand and apply these technologies to its services effectively. However, it is crucial to approach this integration thoughtfully, maintaining standards of care and prioritizing patient-centric approaches.
There is a need for a scoping review examining how different AI technologies are being used in mental health, their impacts, ethical considerations, and the practical aspects of combining AI with human care across various settings. This review aims to present a framework for the existing state of integrating AI into mental health services in a manner that maximizes benefits and minimizes risks. The summarization can serve as an overview for those interested in researching and developing solutions for these issues. To realize the full promise of these tools, integration must be guided by evidence on efficacy, ethical safeguards, and alignment with clinical workflows. Accordingly, this scoping review maps the current landscape of AI-driven digital interventions in mental health, proposes a four-pillar mapping to organize empirical findings, and identifies practical barriers and enablers to maximize benefits and minimize risks. Our primary goals are to chart which AI modalities are deployed in each care phase, summarize their proven outcomes and reported limitations, and outline strategic directions for future research and policy.

2.2 Design and Scope of the Study

This scoping review adhered to the PRISMA-ScR guidelines [30] to ensure a rigorous, transparent, and reproducible methodology for mapping the use of AI-driven digital interventions in mental health care. We define our scope as all conversational AI agents (from rule-based/FAQ chatbots to ML-powered multi-turn systems and transformer-based LLMs) and related predictive/monitoring models (NLP and ML/DL algorithms) deployed across five phases: (1) pre-treatment (screening and triage), (2) treatment (therapeutic support), (3) post-treatment (follow-up and monitoring), (4) clinical education, and (5) general improvement and prevention.
Conducting this review, we examined how each technology is applied, its demonstrated outcomes (e.g., accuracy, engagement, and health gains), and its limitations, thereby offering a unified life cycle framework for AI in mental health.

2.3 Identification and Selection of Studies

Our search strategy encompassed empirical research reports and publications up to January 2024, focusing on the application and efficacy of artificial intelligence technologies in mental health care. The search included multiple databases using keywords and phrases related to conversational artificial intelligence, machine learning, and mental health, such as "conversational AI and mental health", "ChatGPT psychotherapy", "AI counseling", "AI psychotherapy", "AI counselor", and "machine learning mental health". Initially, the comprehensive search yielded 1674 records. After removing 724 duplicates and excluding 804 articles that did not meet predefined criteria, 146 records remained for further evaluation. We retrieved 143 full-text reports for in-depth assessment. Inclusion criteria targeted empirical research reports and publications published in English that examined the application of artificial intelligence technologies in mental health contexts. We excluded non-empirical works, such as literature reviews, editorials, and opinion pieces, as well as studies focusing on non-AI technologies or those outside the scope of mental health. Following screening, we excluded 82 non-empirical studies, 4 preprints that did not meet inclusion criteria, and 20 articles that fell outside our scope. The first author independently conducted the search and data extraction using a custom-designed template to systematically capture key information from each study, including objectives, AI technologies used, main findings, and conclusions. Discrepancies were resolved through discussion to ensure consensus with the second author.
Data were organized by clinical phase, AI modality, reported outcomes, and identified limitations. In total, 36 studies met the inclusion criteria and formed the core sample for this scoping review (Figure 1). These studies represent the most relevant and methodologically sound research currently available on conversational AI in mental health care. Owing to the heterogeneity of study designs and outcomes, we did not conduct a risk-of-bias appraisal or quantitative synthesis.

Figure 1: PRISMA-ScR flow chart.

2.4 Search Strategy and Data Extraction

A customized data-charting form was developed to capture key information from each included article: study phase (application scenarios), AI technology, setting, outcomes, and limitations. It captured key details such as the objectives, AI technologies utilized, main findings, and conclusions. This structured approach allowed for a thorough and organized analysis of the empirical evidence on conversational AI in mental health care. Two reviewers independently extracted all study-level data; any discrepancies were discussed and resolved by consensus adjudication prior to synthesis.

3 Results

3.1 Summary of Reviewed Results

The review process included 36 articles that showcased survey results on users', clinical students', and mental health professionals' perceptions toward AI applications. Additionally, all 36 articles evaluated the efficacy of specific aspects of AI applications. To organize the reviewed studies, we classified them based on the primary AI technologies employed. Given the considerable overlap between categories, we focused on highlighting the main types of technologies without reporting precise counts.
The technologies identified across the studies include AI chatbots, conversational agents, natural language processing (NLP) tools, large language models (LLMs), machine learning (ML) models, deep learning (DL) models, and AI-based prediction systems. We acknowledge that many studies employed hybrid approaches or combinations of these technologies. To improve clarity, we report general trends and key examples without quantifying the exact number of studies per category. Table 1 lists the technologies mentioned in the articles.

Table 1: AI modality categories.

AI Chatbots: Rule-based or scripted dialogue systems that deliver predefined psychoeducation or CBT prompts; no statistical/ML adaptation; always user-interfacing.

Conversational AI Agents: Multi-turn dialogue systems that incorporate traditional NLP components, such as intent classification or sentiment analysis, to tailor replies, but do not employ large language model generation.

Machine Learning (ML) Models: Supervised algorithms (e.g., logistic regression, SVM, and gradient boosting) trained on structured or text features to classify diagnoses, predict outcomes, or support service flow; generally backend analytics.

Natural Language Processing (NLP) Tools: Standalone language-processing pipelines (tokenization, topic modelling, and emotion detection) used either to support a conversational agent or to analyze text corpora; excludes LLMs.

Large Language Models (LLMs): Transformer-based generative models with >1 billion parameters (e.g., GPT-3.5/4) capable of free-text generation, contextual memory, and zero-shot reasoning; typically fine-tuned for multi-turn counselling.

Deep Learning (DL) Models: Neural networks such as CNNs or RNNs applied to non-text signals (images, sensor streams) or structured clinical data for pattern recognition and outcome prediction.
AI Prediction Modelling (RPM): Any ML or DL model embedded in a remote patient-monitoring pipeline that ingests physiological or behavioral signals to stratify risk or trigger alerts in real time.

This classification directly supports RQ1, helping us map which AI modalities power interventions across different mental health phases. In the following Results sections, we unpack these mappings phase by phase, for example, noting the prominence of rule-based chatbots in screening, NLP agents in empathic support, ML/DL models in post-treatment risk assessment, and emerging LLM agents in multi-turn counseling.

Table 2: Clinical phases of AI deployment in mental health care, and functional roles of AI in mental health interventions.

Clinical Phase/Scenario: Description

Pre-treatment/Screening: Interventions used before formal care begins, including online self-referral, triage, or risk screening.

Treatment: AI components integrated during active psychotherapy, pharmacotherapy, or combined treatment phases.

Post-treatment/Monitoring: AI tools used for follow-up care, symptom monitoring, risk assessment, or treatment adjustment after formal treatment.

General support and prevention: Standalone tools aimed at maintaining well-being, reducing stress, or preventing mental health problems in non-clinical or community populations.

Clinical education: AI tools used to train, assess, or upskill mental health professionals, clinical students, or educators.

Functional Category: Description

Assessment: Structured intake or self-report instruments automated by AI to collect clinical or mental health information.

Diagnosis: Tools designed to output diagnostic labels or severity assessments of mental health conditions.

Patient monitoring: Tools providing continuous or periodic tracking of symptoms, behaviors, or physiological markers.

Treatment outcome prediction: Models that forecast treatment responses, dropout risks, or recovery trajectories.
Mental health counseling/therapy: AI-delivered psychotherapeutic interventions, such as cognitive behavioral therapy (CBT), behavioral activation, or problem-solving therapy.

Mental health treatment (clinical decision support): AI systems recommending treatment plans, medication, or therapy adjustments, supporting clinician decision making.

Mental health support: Low-intensity support services, such as psychoeducation, emotional assistance, or peer facilitation, without formal therapy claims.

The reviewed studies reported applications across multiple clinical phases (RQ2): pre-treatment (screening and triage), treatment (therapeutic support), post-treatment (monitoring and follow-up), general mental health support and prevention, and clinical education. Functions included assessment, diagnosis, patient monitoring, treatment outcome prediction, mental health counseling or therapy, clinical decision support, and general mental health assistance. Many studies covered multiple functions, which we discuss in detail in the Results subsection "Key Findings in the Applications in Mental Health Care" by linking these functions to observed clinical outcomes and implementation challenges (see Table 2).

3.2 Analysis and Synthesis of the Results

The results of the included studies were synthesized using a narrative synthesis approach. Given the expected heterogeneity in study designs and objectives, this method allows for the identification of overarching themes, discussion of patterns and discrepancies, and a nuanced understanding of the current landscape and potential future directions of conversational AI applications in mental health care (see Table 3).

Table 3: Descriptive table of included studies (Reference; Scenario/Application; AI Technology; Purpose; Main Result).

[21] Mental health (MH) assessment; AI chatbot. Examined "Limbic Access" AI in enhancing mental illness recovery in NHS services. Use of the AI tool was associated with an increase in recovery rates from 47.1% in the pre-implementation period to 48.9% post-implementation.

[23] Patient monitoring; AI prediction modelling. AI-based RPM model incorporating RFID for monitoring mental health, targeting vital signs and activity classification. The implementation can help monitor patients with mental illnesses sufficiently and effectively, support treatment teams in providing timely interventions, improve patient safety, and prevent incidents such as self-harm.

[31] Treatment outcome prediction; machine learning (ML), RNN. AI applications in iCBT to predict mental health outcomes. The developed AI models demonstrated good accuracy in predicting patient outcomes, reaching approximately 87% accuracy after three clinical reviews.

[32] Detection and diagnosis; ML. AI tool improves MHM assessment, using fewer questions while maintaining diagnostic precision. The system achieved 89% accuracy in diagnosing mental disorders with only 28 questions asked during diagnostic sessions, reducing question volume and encouraging better participation.

[33] MH counseling/therapy; AI chatbot. AI chatbots' emotional interactions enhance user satisfaction and retention. Emotional disclosure to a chatbot significantly increases satisfaction and reuse intention; users' emotional disclosure intention and perceived intimacy with the chatbot also mediate the effect.

[34] Peer support; conversational AI agent. HAILEY AI system fosters empathy in text-based peer mental health support. The study showed substantial growth in users' empathic responses due to human-AI collaboration.

[23] MH support; large language model (LLM). ChatCounselor, an AI tuned with real counseling data, evaluated for mental health aid. ChatCounselor demonstrated improved performance on a counseling-specific benchmark, showing promising potential in providing mental health support.
[28] MH support; LLM, natural language processing (NLP). Studies the effect of chatbot personalities, such as the Big Five traits, on user engagement in mental health. High chatbot conscientiousness drives user engagement, and variations in preferences indicate that a match between chatbot and user personalities could influence engagement.

[27] MH support; various, mainly NLP. Survey reveals conversational AI's acceptance for mental health among students. The survey results highlight increasing awareness and positive perceptions among students toward the use of conversational AI technologies for mental health support.

[29] Diagnosis, MH support; conversational AI agent. Developed a conversational AI agent that acts like a human therapist, providing accessible emotional support and mental health diagnosis. The logistic regression model produced improved accuracy of emotion prediction, since the chatbot was able to interact effectively with users and respond quickly.

[35] Diagnosis; deep learning model. Developed an early diagnostic system for detecting depression by analyzing textual data using deep learning techniques. The model achieved 99% accuracy in detecting depression from textual data, outperforming other frequency-based deep learning models.

[36] MH counseling/therapy, patient monitoring; NLP. AI chatbot with behavioral activation enhances mood in users. The pilot study demonstrated effectiveness in mood improvement, with significant improvements observed in mood scores from pre-usage to post-usage.

[37] MH support; conversational AI agent. Study on personality-adaptive conversational agents (PACAs) in mental health care outlines benefits and risks. The study highlighted both the potential of PACAs to provide accessible mental health support and serious concerns regarding trust and privacy, indicating design requirements.

[38] Clinical education; ML. Clinical psychology students show strong interest in AI/ML education. Identifies high interest in and perceived importance of AI/ML among students for their education, with strong needs for greater formal instruction in these domains.

[39] Patient referral and clinical assessment; AI chatbot. Conversational AI "Limbic Access" boosts psychotherapy efficiency and patient outcomes. Improved clinical efficiency and reduced the time clinicians spent on assessments; improved patient outcomes, including shorter wait times, lower dropout rates, more accurate treatment allocation, and higher recovery rates.

[40] MH support; AI chatbot, NLP, ML. RCT evaluates "Tess" AI in reducing depression and anxiety in college students. Showed a significant decline in symptoms of depression and anxiety relative to the control group, along with increased engagement and satisfaction.

[25] MH counseling/therapy; AI chatbot. VR empathy chatbot for college stress shows stress and sensitivity score reductions. Indicated a reduction in mean stress level and psychological sensitivity scores of the subjects following the intervention with the VRECC system.

[41] MH counseling/therapy, MH support; AI chatbot, NLP. Focuses on the challenges and needs of moderating digital interventions and counseling offered at Kooth. Outlines some of the main challenges, including managing time, interpreting user communication, and addressing hidden risks.

[42] MH monitoring; deep learning model. AI-edge computing system monitors negative information's impact on pandemic-related mental health. The system is capable of measuring negative information and its influence on well-being, offering an effective strategy for mental health monitoring during a pandemic.

[22] MH treatment; ML. Assessed the feasibility of recommender systems in personalizing treatment within digital mental health therapy. Suggested that recommender systems could successfully individualize therapeutic recommendations and improve outcomes.
[43] Diagnosis, treatment outcome prediction; deep learning model. M2C model in art therapy accurately predicts stress levels through art analysis. The M2C model had 88.5% accuracy in predicting stress levels based on data from administered art psychotherapy tests.

[44] MH counseling/therapy; AI chatbot. AI-driven counselor's artificial empathy compared with human counselors. AI-based empathetic counseling was rated as less helpful.

[45] Clinical education; ChatGPT. ChatGPT's effectiveness in simulating client interactions for counseling practice assessed. Suggested that while ChatGPT can simulate various aspects of client interactions, it has limitations, such as a lack of non-verbal cues.

[46] Diagnosis, MH support; ChatGPT. ChatGPT's emotional understanding in conversations evaluated using the LEAS. ChatGPT's emotional awareness was significantly higher than human norms, with improvement over time; it achieved high accuracy in fitting emotions to scenarios.

[26] MH treatment; ChatGPT. ChatGPT's response adaptability for BPD and SPD conditions analyzed. For BPD, ChatGPT showed the ability to respond with higher emotional resonance than for SPD, suggesting its potential use in personalized mental health interventions.

[47] MH treatment; ChatGPT. Comparison of ChatGPT's and physicians' depression treatment recommendations. ChatGPT recommended psychotherapy more frequently than physicians for mild depression and aligned with guidelines for severe depression treatment, showing no biases.

[48] MH support; conversational AI agent. A mental health app designed through a participatory protocol involving therapists and users for stress management. Indicated notable symptom improvement and therapist endorsement for AI app integration, highlighting patient engagement benefits.

[49] Various, mainly MH treatment; various. Explores mental health professionals' views on the needs of AI in optimizing care. Showed themes concerning practice change, readiness, educational needs for faster AI adoption, and organizational optimization of care with AI.

[50] MH counseling/therapy; AI chatbot. Examined the AI therapist MYLO in a trial among youth with mental health issues. Feedback was positive, noting decreased distress and improved goal conflict resolution.

[51] MH support; ML. Used machine learning to enhance mental health hotline efficiency by optimizing caller-to-counselor routing. Significantly increased the call volume handled and improved chat quality, outperforming traditional methods, notably during peak times.

[52] MH counseling/therapy; LLM. Evaluates ChatGPT 3.5's response to severe depression and suicidal tendencies in simulations. Agents escalated cases or shut down at critical risk; some provided crisis resources.

[53] Diagnosis; AI chatbot. Research on the DEPRA chatbot's depression detection effectiveness in Australians. DEPRA effectively identified depression levels; high satisfaction was reported.

[54] MH support; conversational AI agent. Assesses Lumen, a voice-based virtual coach for problem solving, through user experience, workload, and interviews. Users found Lumen highly usable and engaging, despite some feeling rushed in sessions.

[55] MH counseling/therapy; conversational AI agent. Study assessed TEO's effect on aging workers' stress and anxiety. No major differences in symptom reduction across groups; however, TEO combined with therapy improved well-being.

[56] MH treatment; NLP. Analyzed NLP model biases in psychiatry across demographics, examining health inequality risks. Found significant biases related to religion, race, gender, nationality, sexuality, and age, highlighting the need to further enhance the technology.

[57] MH treatment; ML, NLP. Explores AI to enhance patient flow in NHS mental health units through literature and expert insights, targeting administrative and clinical efficiency. Underscored AI's role in enhancing patient management via administrative tasks, real-time support for clinicians, and personalized care through digital phenotyping.

3.3 Key Findings in AI Technologies in Mental Health Care

This section directly addresses RQ1 by cataloguing the primary AI modalities (rule-based chatbots, traditional NLP agents, ML/DL predictive models, and LLM-based agents) and summarizing the key empirical outcomes and limitations reported for each algorithm class.

3.3.1 Applying Natural Language Processing

Through applications developed with natural language processing (NLP), machine learning (ML), and deep learning (DL), AI tools empower clinical services and treatments and make mental health services much more accessible [25,35]. First, NLP, mainly applied in chatbots and AI agents, is extensively employed to enhance the interactivity and efficacy of conversational agents that help deliver mental health care [25,37,40]. Many research-oriented AI-driven chatbots or agents, such as "Hailey", "MYLO", and "Limbic Access", use sophisticated NLP algorithms to deeply understand human language and to initiate, respond to, and engage in meaningful conversations related to users' well-being. Conversations empowered by NLP techniques, such as emotion detection and sentiment analysis, can effectively help provide mental health support and offer computerized therapies [29,36]. The capabilities of NLP in engaging users through text and voice interactions foster mental health interventions with no limits on time and space. Many AI-based chatbots leveraged NLP to provide mental health support through conversations [23,40].
An AI-enhanced cognitive behavioral therapy (CBT) chatbot named Hailey used NLP to analyze user text inputs and, specifically, to detect emotions and train users to give empathic responses that facilitate peer-to-peer communication, and an agent called TEO applied similar techniques to enhance stress reduction [34, 48]. Moreover, preliminary research on ChatGPT highlighted LLM-based conversational AI's potential to expand access to mental health services to all populations, given its strong capabilities in understanding human language. It shows notable capabilities in identifying emotions, tailoring interventions for conditions such as borderline personality disorder (BPD) and sensory processing disorder (SPD), and offering advice that aligns with primary health care guidelines for depression. In addition to the opportunities NLP brings, some NLP models in psychiatry still demonstrated significant biases related to religion, race, gender, nationality, sexuality, and age, which highlights the need to further enhance the technology [56].

doi.org/10.3390/healthcare13101205

3.3.2 Applying Machine Learning

Machine learning (ML) has proven to be a powerful tool for predicting mental health outcomes and enhancing diagnostic accuracy, ultimately aiming to improve treatment efficiency and recovery outcomes. By leveraging predictive analytics and classification models, ML provides clinicians with valuable decision-support systems and enables the creation of personalized treatment plans for clients [24, 57]. For example, machine learning models, including logistic regression, ridge regression, and LASSO regression, have been used to develop AI-based assessment tools that can accurately predict mental disorders based on responses to a mental health assessment instrument such as the SCL-90-R [32]. This tool can diagnose mental disorders with an accuracy of 89% using just 28 questions.
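The item-reduction idea behind such tools (shrinking a 90-item instrument to a short screening form) is typically achieved with sparsity-inducing penalties. The following is a schematic sketch on synthetic data, not the published model: an L1-penalized (LASSO-style) logistic regression zeroes out uninformative questionnaire items.

```python
# Schematic sketch of questionnaire-item reduction via an L1 penalty.
# All data are synthetic; this is illustrative, not the published screener.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_respondents, n_items = 500, 90          # 90 items, like the SCL-90-R
X = rng.integers(0, 5, size=(n_respondents, n_items)).astype(float)

# Outcome driven by only a handful of items (unknown to the model).
informative = [3, 17, 42, 60, 88]
logits = X[:, informative].sum(axis=1) - 10
y = (logits + rng.normal(0, 2, n_respondents) > 0).astype(int)

# Strong L1 regularization drives most item weights exactly to zero.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.05)
clf.fit(X, y)

kept_items = np.flatnonzero(clf.coef_[0])  # items surviving the penalty
print(f"{len(kept_items)} of {n_items} items retained")
```

The surviving items form the candidate short form; in practice the retained subset would then be validated against the full instrument before clinical use.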
Regarding remote patient monitoring, ML algorithms analyze data from devices monitoring vital signs and physical activity to identify subtle changes indicative of worsening mental health conditions, such as depression or anxiety [24, 36]. This capability is crucial for timely intervention, where even minor changes can be serious indicators of patient distress. ML has also been applied to optimize operational efficiency in mental health services. For instance, it has been used to improve caller-counselor matching in a mental health contact center, leading to more efficient use of resources and higher quality of service [51]. Furthermore, in larger health care organizations, ML algorithms have been used to streamline patient flow, which indirectly contributes to the improvement of mental health care [57]. In therapeutic settings, ML offers valuable insights into the efficacy of interventions on an individual basis. By analyzing transcripts of therapy sessions, patient feedback, and progress over time, ML models can assist therapists in personalizing their approaches to meet the needs of each patient [22, 31]. This ability to tailor treatment plans based on data-driven insights has the potential to significantly enhance the effectiveness of mental health interventions.

3.3.3 Applying Deep Learning

Deep learning (DL) algorithms, a subset of machine learning, learn complex patterns directly from data, enabling accurate predictions and analyses for innovative applications such as real-time emotional-state monitoring and predictive analytics for treatment outcomes. DL contributes to improving online cognitive behavioral therapy (CBT) and art psychotherapy by customizing mental health treatments to individual needs [31, 43]. In the realm of iCBT, DL algorithms and recurrent neural networks (RNNs) are employed to analyze anonymous patient data [31].
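The cited iCBT models are not publicly available; as a schematic illustration of the core mechanic (folding a variable-length sequence of per-session features into a single outcome estimate), here is an untrained vanilla RNN in NumPy, with every weight and feature synthetic.

```python
# Minimal NumPy sketch of how a recurrent network can fold a variable-length
# sequence of per-session features (e.g., symptom scores, engagement metrics)
# into a single treatment-outcome probability. Weights are random here; the
# published iCBT models are not available, so this is illustrative only.
import numpy as np

rng = np.random.default_rng(42)
n_features, n_hidden = 4, 8            # per-session features, hidden units

W_xh = rng.normal(0, 0.3, (n_hidden, n_features))
W_hh = rng.normal(0, 0.3, (n_hidden, n_hidden))
w_out = rng.normal(0, 0.3, n_hidden)

def predict_outcome(sessions: np.ndarray) -> float:
    """Run a vanilla RNN over (n_sessions, n_features) and return P(improve)."""
    h = np.zeros(n_hidden)
    for x in sessions:                 # one recurrence step per therapy session
        h = np.tanh(W_xh @ x + W_hh @ h)
    logit = w_out @ h
    return float(1.0 / (1.0 + np.exp(-logit)))  # sigmoid readout

# Six sessions of synthetic per-session features for one patient.
patient = rng.normal(0, 1, (6, n_features))
print(round(predict_outcome(patient), 3))
```

A trained system would learn `W_xh`, `W_hh`, and `w_out` from labeled treatment trajectories; the point here is only the sequential accumulation of session information in the hidden state `h`.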
These models detect patterns that accurately forecast treatment outcomes, assisting in identifying mental health issues and customizing therapy for more effective and personalized interventions [31, 35]. Similarly, in art psychotherapy, DL models with co-attention mechanisms are revolutionizing the evaluation of art therapy [43]. These models assess stress and mood levels from multiple data points, and DL's capacity to interpret complex emotional expressions provides insights that align closely with therapeutic goals [42, 43]. Through these advancements, DL serves not only as a technological tool but also as a bridge to compassionate, precise, and personalized mental health care. Having mapped the core AI modalities above, we now turn to how these tools (chatbots, predictive models, and LLMs) are deployed at each phase of mental health care.

3.4 Key Findings in the Applications in Mental Health Care

Here, we answer RQ2 by showing how those same AI modalities are deployed across the five clinical phases (pre-treatment, treatment, post-treatment monitoring, clinical education, and prevention), highlighting which technologies are most prevalent in each phase and where critical gaps remain. The review identified various applications of AI in mental health care that cater to diverse users, ranging from patients and clinicians to the general public and psychology students. These applications serve different purposes, including providing computerized therapies to patients, offering early-stage mental health support, assisting clinicians with diagnosis and treatment, and enhancing learning for psychology students. Overall, the studies showcased the positive impact of AI technology in improving mental health care and emphasized the significant potential of these applications to revolutionize the industry.
Interestingly, most of the studies focused on how these AI-powered applications can complement and enhance the existing services provided by clinicians rather than replace them. Figure 2 illustrates the four pillars of AI applications in mental health care, demonstrating their extensive utilization across various stages of support and treatment. The four pillars encompass four key stages: pre-treatment, treatment, post-treatment, and general improvement and prevention. Artificial intelligence is leveraged throughout these stages to enhance various aspects of mental health services, creating a continuous pathway of optimized care and support. In the pre-treatment phase, AI expedites assessment, facilitates initial diagnosis, and aids in referral. During treatment, AI refines diagnoses, personalizes treatment plans, predicts outcomes, and delivers AI-based therapeutic interventions. Post-treatment involves leveraging AI for remote monitoring and risk evaluation. The general improvement and prevention stage focuses on providing proactive mental health support to the broader population through AI-enabled resources.

Figure 2: Four-pillars framework in which AI-driven interventions are deployed.

Throughout the four-pillars framework, AI technologies benefit multiple stakeholders (see Figure 3). Patients gain access to fast-track services and more effective, personalized interventions. Clinicians can make enhanced, data-driven decisions. Health organizations improve efficiency and treatment efficacy. The general public benefits from increased access to low-cost, AI-powered mental health resources. This integrated model showcases the vast potential of AI in revolutionizing mental health care. In the following sections, we turn to the existing literature to explore examples of AI implementation within each stage of the model.
Figure 3: Benefits of AI technologies for multiple stakeholders in health care.

3.5 Applications in the Pre-Treatment Stage

First, the AI chatbot "Limbic Access" has shown great promise in the pre-treatment stage of mental health care. This tool, developed by Shaik et al. [24] and further refined by Rollwage et al. [39], helps clinicians assess patients and refer them to appropriate services. One key feature of Limbic Access is its AI self-referral tool. This user-friendly tool collects important information from patients applying for the UK's National Health Service (NHS) Talking Therapy program. It asks about the patient's symptoms using standardized questionnaires such as the Patient Health Questionnaire-9 (PHQ-9) and the Generalized Anxiety Disorder Assessment-7 (GAD-7). The tool also gathers demographic details and other clinical information. By automating this intake process, Limbic Access makes it faster and easier for clinicians to assess patients and get them started on the right track. Rollwage et al. evaluated the impact of Limbic Access by comparing outcomes from patients who used the AI tool with those who followed traditional assessment methods [39]. The results were promising: patients who used Limbic Access waited less time for both clinical assessment and treatment, were less likely to drop out of the program, and were more likely to recover overall. As a result, Limbic Access has proven to be a valuable asset in the pre-treatment stage, streamlining the assessment process and improving patient outcomes.

3.6 Applications in the Treatment and the Post-Treatment Stage

The effectiveness of AI in assisting the treatment process is highlighted by its ability to aid in the detection and diagnosis of conditions, predict treatment outcomes, and monitor patient progress [32, 36, 41, 43, 53].
AI systems have achieved high accuracy in diagnosing mental health disorders and predicting stress levels from art therapy tests [32, 43, 53]. AI also enhances treatment outcome predictions, leading to better personalization and planning [24, 31]. By analyzing non-invasive digital data, identifying patterns in mental health symptoms, and considering past preferences and ratings, AI provides strong support for clinicians to deliver timely and personalized interventions [22, 24, 31]. The accuracy of AI in predicting outcomes and the continuous improvement of algorithms demonstrate its effectiveness in personalizing treatment and improving outcomes [22, 26, 35, 47]. In terms of clinical knowledge, ChatGPT exhibits high capability in providing treatment recommendations that align well with clinical guidelines [47]. AI-enabled Remote Patient Monitoring (RPM) systems support clinicians in delivering timely interventions, enhancing patient safety, and preventing incidents such as self-harm [24]. AI agents like Tess and MYLO have shown they can deliver therapy and counseling services that rival human clinicians, with moderate to high efficacy in reducing symptoms, improving health, and increasing user satisfaction [25, 31, 33, 36, 40, 48, 50, 55]. These agents are trained to use emotion and empathy to provide high-quality services, and users report increased satisfaction when agents disclose emotions [33, 44, 50, 55]. However, users still prefer human responses over AI empathy [33, 44]. Research has explored how AI agents' emotions, empathy, and personalities affect intervention outcomes and satisfaction. While some aspects have positive effects, such as driving user engagement, there is room for improvement in advancing models and algorithms to better meet the high demand for counseling [33, 44].
AI agents demonstrate flexibility in format, offering services through text, chat, video, and even virtual reality [25, 36, 51]. They can also support specific mental health therapies such as CBT, behavioral activation (BA), problem-solving therapy, and problem-solving treatment [31, 36, 48, 54]. Implementing BA in a chatbot has shown improvements in mood levels [36]. Since therapies can be standardized, AI offers opportunities to automate services and make them widely accessible through digital applications. However, challenges exist, such as risk assessment and ensuring timely human intervention for individuals in crisis [22]. Additionally, some AI systems, like ChatGPT, discontinue interventions for high-risk conditions and cannot provide effective referrals [52]. This highlights the need for human monitoring of AI-based interventions to identify high risks and to establish systems for timely referrals and crisis interventions [22].

3.7 Applications in General Support, Improvement, and Prevention

Beyond the treatment period, AI provides ongoing support to clients and offers general mental health resources for improvement and prevention [23, 37]. Numerous AI chatbots and platforms are designed to provide accessible, cost-effective, and scalable mental health support outside the clinical setting [23, 34, 36, 37, 40, 41, 44, 50]. Chatbots can identify human emotions and sentiments and assess psychological conditions to generate individualized support, with studies showing high accuracy in measuring users' psychological states [29, 35]. AI agents have demonstrated significantly higher emotional awareness than human norms and can provide useful interventions for low- and middle-risk conditions, ensuring safety [46, 52]. For example, Tess, a mental health support chatbot, produced significant reductions in symptoms of depression and anxiety among university students, with higher engagement and satisfaction compared to a control group [40].
Another chatbot, HAILEY, focused on improving users' empathic abilities and found substantial results in increasing users' empathic responses [25]. Research is ongoing to enhance AI's models, algorithms, and product design to better deliver mental health services [23, 29]. For instance, ChatCounselor developed a fine-tuned model by analyzing real counseling data and established quality benchmarks, outperforming all open-source models [23]. These advancements present exciting opportunities to support the mental health of many populations, particularly those without access to traditional clinical care. For example, an Israeli mental health hotline applied AI to provide daily support to various populations [51]. AI-driven tools also play a crucial role in stress management and emotional support for the general public. A voice-based virtual coach called Lumen offers problem-solving support through interactive sessions, with users reporting high usability and engagement [54]. Similarly, personality-adaptive conversational agents (PACAs) are being developed to provide tailored mental health support, though concerns about trust and privacy remain important considerations [37]. These tools demonstrate the potential of AI to make mental health support more accessible and personalized for diverse users.

3.8 Clinical Education

AI has proven its potential to support mental health care by aiding in the education of professionals. A study surveyed clinical psychology students from a Swiss university and found that students recognized the growing importance of AI in mental health care and expressed a significant interest in and need for instruction on AI in their clinical training [38]. A descriptive study of mental health professionals emphasized the need to adopt AI in practice and highlighted the importance of educational initiatives for broader adoption [40]. Another study explored the feasibility of using ChatGPT to simulate client scenarios and help clinical students practice their counseling skills [45]. While ChatGPT has limitations, such as a lack of non-verbal cues and overly idealized situations, it demonstrated authenticity, consistency, appropriate emotional expression, cultural sensitivity, and empathy in simulations, supporting its effectiveness for clinical training [45]. ChatGPT's emotional awareness in aligning with psychotic symptoms indicated its feasibility as a research tool for understanding emotional experiences in psychopathology [26]. Furthermore, ChatGPT's proven ability to provide unbiased treatment recommendations aligned with clinical guidelines can effectively support clinical students in their educational journey [46].
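The client-simulation study does not publish its prompts; the sketch below is a hypothetical illustration of how such a role-play setup might be assembled. The persona fields, wording, and the `build_client_simulation` helper are all invented, and no model is actually called; the messages list follows the common chat-completion format of system and user roles.

```python
# Hypothetical sketch of a client-simulation prompt for counseling practice.
# Everything here (helper name, persona schema, wording) is invented for
# illustration; no LLM API is invoked.
def build_client_simulation(persona: dict) -> list[dict]:
    """Assemble a chat-completion message list that puts the model in character."""
    system = (
        "You are role-playing a counseling client for a trainee. "
        f"Name: {persona['name']}. Presenting concern: {persona['concern']}. "
        f"Emotional style: {persona['style']}. "
        "Stay in character, respond realistically, and never give advice."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user",
         "content": "Hi, thanks for coming in today. What brings you here?"},
    ]

messages = build_client_simulation(
    {"name": "Alex", "concern": "exam-related anxiety", "style": "hesitant, guarded"}
)
print(messages[0]["content"][:60])
```

In a training tool, the trainee's replies would be appended to `messages` turn by turn, and a supervisor (human or automated) could score the trainee's responses against the simulated client's presentation.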
4 Discussion

Unlike recent systematic reviews focused exclusively on generative AI or on ethical risks, our mapping spans rule-based chatbots through LLMs and charts empirical outcomes across five clinical phases. This review shows AI applications with differing efficacy and potential, revealing different levels of effectiveness and applicability.

4.1 AI Modalities Applied Across Phases

First, AI is widely used as an emotional support agent and clinical assistant, most often delivered through chatbots [14, 25, 36]. Many cases of baseline emotional support and clinical assistance reported high efficacy and involved the lowest level of risk; thus, these applications can be applied most quickly and universally [58]. It is foreseeable that emotional support chatbots will be a core AI application for researchers in the short term, and AI can begin by assisting clinical work before the treatment stage. Second, the applications with relatively high efficacy and potential in mental health care mainly focus on supporting clinical work in real clinical settings. AI's strengths in supporting treatment showed high potential to improve treatment outcomes by developing personalized treatment, providing treatment recommendations and diagnoses, and predicting treatment outcomes [21, 22, 29]. Third, AI can complete partial therapies or counseling sessions, automating treatment. Using AI to offer therapies or counseling sessions directly can potentially solve the accessibility problem in mental health care [43, 59, 60]. Meanwhile, significant risks and ethical concerns make this layer hard to realize in the short term [34, 40, 41, 42, 52]. Models and algorithms are expected to advance to improve accuracy and reduce bias, and pilot tests and clinical trials should accompany the development and iteration processes.
4.2 AI-Clinician Collaboration

All applications are currently perceived as supporters of human clinicians rather than replacements, given their proven track record of effectively supporting clinicians' work [32, 42, 43, 47, 57]. A pivotal question in deploying AI in mental health care revolves around the division of work between, and integration of, AI and human clinicians, which will largely influence the field's future direction. This raises three main questions for debate. The first concerns the respective responsibilities of AI and human clinicians. The second addresses how human clinicians integrate AI into their existing workflow and with what actual impact. The third concerns the ethical and regulatory obligations of humans to oversee AI's activities. Regarding the first question, AI is currently viewed as a complement to human clinicians' existing jobs, but it has growing potential to replace parts of those jobs, including diagnosis and the provision of therapies [24, 29]. At the current stage, few view AI as a standalone method because of its accuracy-related risks; instead, AI serves as a tool that boosts clinicians' efficiency [21, 24, 27, 31, 32, 34, 36]. The second question focuses on the integration of AI into clinical services. This will depend on organizational adaptability and will face concerns from various stakeholders, such as clinicians, management, board members, shareholders, and patients [61, 62]. For example, if human clinicians are highly resistant to adopting specific AI applications, integration cannot proceed smoothly, and management and boards might reject an integration they regard as highly risky and likely to cause public concern.
To build organizational readiness for AI adoption, it is crucial to provide AI literacy and application training, conduct pilot tests to demonstrate benefits, and hold organizational meetings to discuss concerns. At this stage, there is no universally recognized way to integrate AI into clinical settings, and integration will largely depend on how organizations practically lead change. Detailed standards will develop as industrial organizations, academic associations, and policy authorities provide compliance guidelines. The third question addresses the responsibility for overseeing AI's risks and problems. Currently, AI cannot be held solely accountable, so human clinicians and organizations will be responsible for overseeing any risks and problems caused by AI. This will increase clinicians' workload and require a higher level of technological literacy. When organizations deploy AI applications in the clinical setting, they should provide comprehensive training to clinicians and develop a scheme for monitoring and accountability management [34, 40, 41, 42, 52]. As AI becomes more accurate and reliable, the oversight burden will decrease; in the short term, however, clear definitions of accountability will be needed.

4.3 Strengths, Weaknesses, Opportunities, and Threats

This section directly addresses RQ3 by synthesizing the key strengths, weaknesses, opportunities, and threats of AI-driven digital interventions as reported across our 36 included studies. AI technologies have several key strengths that hold promise for mental health care. First, they can increase access to services by overcoming economic, geographical, and logistical barriers. Chatbots like Limbic Access can provide first-line support to anyone with an internet connection, democratizing access to mental health resources [21]. AI-powered chatbots also offer personalized experiences, elevating user satisfaction in counseling [33].
Some conversational agents, like HAILEY, are designed to foster empathy in peer support, potentially creating meaningful connections for users [34]. From an analytical standpoint, machine learning (ML) can uncover insights from large mental health datasets, and ML models are being used to predict treatment outcomes in internet-based therapies [31]. Finally, large language models like ChatCounselor, fine-tuned on actual counseling data, present a scalable path for supporting a large number of users simultaneously [23]. Despite these strengths, AI technologies also have several weaknesses. One of the most pressing concerns is privacy and security: AI chatbots hold sensitive patient information, and their implementation poses data security challenges [39]. There is also a risk of misinterpretation by AI models, highlighting the need for human oversight to prevent errors [63]. Questions remain about whether AI tools truly enhance assessment efficiency without compromising diagnostic precision [64]. For instance, can AI agents like HAILEY genuinely foster empathy, and can chatbots with certain personalities effectively engage users? These questions are still under debate [28, 34]. Technically, AI is still far from effectively recognizing mental disorders, posing a significant limitation to its implementation [63]. The field of AI in mental health is rapidly evolving, presenting several opportunities for growth and improvement. AI prediction modelling could refine patient monitoring, enabling earlier interventions and better health outcomes [63]. There is optimism about large language models creating novel, large-scale mental health solutions [65]. For instance, researchers are developing interpretable mental health analysis tasks using generative models such as LLMs [63].
Collaboration among stakeholders, including patients and AI researchers, is crucial for human-centered mental health AI research, presenting an opportunity to develop tools that meet real user needs [65]. Despite these opportunities, several threats must be addressed to realize the potential of AI in mental health. Algorithmic bias could exacerbate existing health disparities, undermining the equity of AI-powered mental health tools [66]. The need to regulate and monitor AI-based mental health apps is pressing, and navigating these challenges while harnessing AI's strengths is crucial for its successful integration into care. From a technical standpoint, the need for ongoing model enhancement to prevent accuracy decline is clear [66]. The field of ML is rapidly evolving and requires ongoing assessment to ensure tools remain effective and safe [63, 66]. Failure to address these threats could hinder the adoption of AI technologies in mental health care.

4.4 Advancing Mental Health Prevention and Improvement

In responding to RQ4, we outline which AI modalities and applications show the greatest maturity for population-level prevention and ongoing support, and we set the stage for the Policy Implications section, which elaborates the strategic priorities and barriers identified. Many AI applications are expected to have high potential to advance mental health prevention and improvement. Prevention is one of the biggest opportunities AI brings, because AI can deliver interventional services to very large populations through emotional support chatbots, behavioral intervention apps, and peer support platforms [23, 36, 65]. All of these services can effectively improve mood levels and reduce stress levels [23, 34, 36, 37, 40, 41, 44, 50].
AI's capacity to analyze large datasets can uncover patterns and risk factors that were previously hidden, allowing for more individualized and efficient early intervention strategies [23, 24]. In addition, AI's predictive analytics can improve screening processes, making them more effective and less invasive and thus increasing participation in preventive measures [39, 57]. The combination of AI and mobile health apps with wearable devices provides opportunities for continuous monitoring and real-time personalized feedback, improving preventive care [67]. Using these low-cost, widely deployed AI applications, mental health prevention can be conducted at the population level and adapted to a specific population's needs. If AI applications can reduce the number of patients with high-level symptoms, the demand on traditional mental health facilities can fall, benefiting the global mental health care system. It is foreseeable that government agencies could apply AI applications across different populations for prevention and improvement work to reduce at-risk populations, such as students, the elderly, and people with substance use disorders [68, 69]. Organizations such as schools and local communities can develop and implement specific AI-empowered intervention programs that meet the particular challenges of each demographic. For example, AI can provide personalized learning and coping mechanisms for students under academic and social pressure, support programs designed to help the elderly face loneliness and cognitive decline, and relapse-prevention tools for people with substance use disorders. The integration of AI tools into current health, educational, and social welfare systems enables these programs to provide adaptive support and interventions in real time [70].
This approach is directed not only at immediate risk reduction but also at building mental resilience in the long run, which should reduce general mental health problems and move care toward a more preventive paradigm. The following policy section builds on these findings to outline how regulatory frameworks can support the safe and equitable implementation of AI innovations.

4.5 Policy Implications

A contemporary policy architecture for AI in mental health care must protect service users while still allowing technology to evolve at pace. This balance can be struck by weaving three strands (patient safety, innovation incentives, and bias mitigation) into a single, adaptive regulatory fabric. First, regulators must insist on secure data handling, algorithmic transparency, routine clinical validation, and clear lines of human oversight so that new tools do not erode public trust or amplify harm. Second, they should create pro-innovation pathways (regulatory sandboxes, adaptive licensing, and rapid ethics review) that give researchers and companies room to experiment without waiting for blanket legislation to catch up. Third, every approval process must include explicit checks for demographic performance and inclusive training data so that diagnostic and therapeutic tools perform equitably across groups. Two national road maps show what this regulation-for-innovation ecosystem looks like in practice. New Zealand's Digital Mental Health and Addiction Service Delivery Framework (DMHAS) advocates a co-regulatory model in which authorities define outcome standards while innovators choose the technical means to meet them, employing instruments that range from voluntary codes and accreditation seals to binding legislation, each calibrated to the level of risk and the speed of technological change [71].
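The "explicit checks for demographic performance" called for above can be made concrete. The sketch below is a minimal illustration under our own assumptions (the function names, data layout, and the 0.1 recall-gap threshold are hypothetical, not part of any cited framework): it compares a screening model's sensitivity across demographic groups and flags large gaps.

```python
# Minimal sketch of a demographic-performance check for a screening model.
# Assumptions (ours, not from any cited framework): records are
# (group, truth, pred) triples with 1 = "needs referral", and a recall
# gap above `max_gap` between groups is flagged for review.

def recall_by_group(records):
    """Compute sensitivity (recall) per demographic group."""
    stats = {}  # group -> [true positives, false negatives]
    for group, truth, pred in records:
        counts = stats.setdefault(group, [0, 0])
        if truth == 1:
            counts[0 if pred == 1 else 1] += 1
    return {g: (tp / (tp + fn) if tp + fn else None)
            for g, (tp, fn) in stats.items()}

def flag_disparity(recalls, max_gap=0.1):
    """True if best- and worst-served groups differ by more than max_gap."""
    vals = [r for r in recalls.values() if r is not None]
    return len(vals) > 1 and (max(vals) - min(vals)) > max_gap
```

An approval pipeline could run such a check on held-out data at every model update and withhold deployment while the gap exceeds the agreed threshold.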
In a similar vein, the Mental Health Commission of Canada urges policymakers to evaluate AI systems at three translational checkpoints: whether an algorithm that performs well in silico retains its accuracy with real patients, whether it integrates into clinical workflows without adding cognitive burden for clinicians, and whether it scales equitably across regions and demographic groups [72]. Together, these examples suggest a constructive path forward. Policies should facilitate the rapid yet evidence-based migration of laboratory prototypes to bedside tools, provide clinicians with clear operational guidance so that AI augments rather than complicates practice, and embed continuous outcome monitoring so that accountability keeps pace with innovation. A progressive, layered framework, neither technology-neutral nor technology-prescriptive, can therefore foster creativity, safeguard patients, and ensure that AI becomes a feasible, trustworthy ally in mental health services.

4.6 Limitations

This review is subject to several constraints that warrant cautious interpretation of its findings. First, AI for mental health is a fast-moving field; studies released after our January 2024 cutoff, including work on the latest large language model generations, are not reflected here. Additionally, the search strategy may not have captured all relevant literature: we limited our search to English-language publications, so relevant studies in other databases or non-English sources may have been overlooked. Second, the heterogeneity of study designs, intervention types, outcome measures, and reporting practices precluded a formal risk-of-bias appraisal and any quantitative synthesis. We opted for a narrative, chart-based approach to preserve breadth, but this means we cannot provide pooled effect sizes or graded levels of evidence.
Although the first author screened and extracted data, the assignment of each paper to one of the clinical phases required subjective judgment, introducing a possibility of selection or interpretation bias. Third, many included studies examined prototypes or pilot deployments in controlled or single-site settings, often with convenience samples. Real-world implementation factors, such as integration into existing clinical workflows, sustainability, and user diversity, were rarely reported in depth. Consequently, the generalizability of reported benefits to routine practice remains uncertain, and future research should prioritize multi-site evaluations, transparent reporting of model development, and rigorous assessments of ethical, privacy, and equity impacts. Finally, as a scoping review, this study aimed to map the landscape rather than critically appraise study quality. Some included studies may have been preliminary investigations, used correlational designs, or relied on small sample sizes, and no formal quality assessment or risk-of-bias evaluation was conducted. Therefore, findings should be interpreted with caution, and future systematic reviews should incorporate structured quality appraisal to evaluate the robustness and reliability of the evidence base.

5 Conclusions

This scoping review synthesized findings from 36 studies on artificial intelligence-driven digital interventions across screening, treatment, monitoring, clinical education, and prevention in mental health care. Our mapping of chatbots, natural language processing, machine learning, deep learning, and large language models highlighted growing evidence of artificial intelligence's contributions to expanding access, enhancing symptom monitoring, and supporting personalized interventions.
At the same time, challenges such as algorithmic bias, data privacy risks, and integration barriers underscore the need for ethical design, transparent model development, and human oversight. While previous review studies have provided valuable insights into specific aspects of AI in mental health, such as diagnostic applications of ChatGPT, ethical and regulatory discussions, or evaluations of chatbots for anxiety treatment [73, 74, 75, 76, 77, 78], these have often focused on single technologies, clinical phases, or conditions. In contrast, the current scoping review offers a broader synthesis by mapping multiple AI modalities across five clinical phases. By linking conceptual insights with empirical outcomes, this review complements and extends prior work, providing an integrated reference for research, practice, and policy. Future research should prioritize multi-site evaluations, longitudinal studies in diverse populations, and rigorous assessments of safety, privacy, and equity impacts. Collaboration among clinicians, artificial intelligence developers, policymakers, and patients will be essential to ensure artificial intelligence systems are clinically effective, ethically sound, and socially equitable. By offering an empirical map of artificial intelligence applications across the mental health care continuum, this review provides a foundation for guiding research, practice, and policy toward responsible integration of artificial intelligence in mental health services.

Author Contributions

Conceptualization, Y.N. and F.J.; methodology, Y.N. and F.J.; formal analysis, Y.N. and F.J.; investigation, Y.N. and F.J.; resources, Y.N. and F.J.; data curation, Y.N. and F.J.; writing—original draft preparation, Y.N. and F.J.; writing—review and editing, Y.N. and F.J.; visualization, Y.N. and F.J.; supervision, F.J.; project administration, F.J.; funding acquisition, F.J.
All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors would like to thank the academic editors and reviewers for their valuable comments and suggestions.

Conflicts of Interest

The authors declare no conflict of interest.

References

[1] N.C. Coombs, W.E. Meriwether, J. Caringi, and S.R. Newcomer. Barriers to healthcare access among U.S. adults with mental health challenges: A population-based study. SSM Popul. Health, 21:1500847, 2021.
[2] B.H. Hidaka. Depression as a disease of modernity: Explanations for increasing prevalence. J. Affect. Disord., 140:205–214, 2012.
[3] A. Uutela. Economic crisis and mental health. Curr. Opin. Psychiatry, 23:127–130, 2010.
[4] J. Torous, K.J. Myrick, N. Rauseo-Ricupero, and J. Firth. Digital mental health and COVID-19: Using technology today to accelerate the curve on access and quality tomorrow. JMIR Ment. Health, 7:e18848, 2020.
[5] P.A. Arean. Here to stay: Digital mental health in a post-pandemic world—looking at the past, present, and future of teletherapy and telepsychiatry. Technol. Mind Behav., 2:e00073, 2021.
[6] E.A. Friis-Healy, G.A. Nagy, and S.H. Kollins. It is time to react: Opportunities for digital mental-health apps to reduce mental-health disparities in racially and ethnically minoritized groups. JMIR Ment. Health, 8:e25456, 2021.
[7] M.R. Prescott, S.J. Sagui-Henson, C.E.W. Chamberlain, C.C. Sweet, and M. Altman. Real-world effectiveness of digital mental-health services during the COVID-19 pandemic. PLoS ONE, 17:e0272162, 2022.
[8] T. Wu, S. He, J. Liu, S. Sun, K. Liu, Q.L. Han, and Y. Tang. A brief overview of ChatGPT: History, status quo and potential future development. IEEE/CAA J. Autom. Sin., 10:1122–1136, 2023.
[9] E.G. Lattie, C. Stiles-Shields, and A.K. Graham. An overview of digital mental-health services and recommendations for more accessible digital mental-health services. Nat. Rev. Psychol., 1:87–100, 2022.
[10] I. Adeshola and A.P. Adepoju. The opportunities and challenges of ChatGPT in education. Interact. Learn. Environ., 32:6159–6172, 2023.
[11] S.S. Biswas. Role of ChatGPT in public health. Ann. Biomed. Eng., 51:868–869, 2023.
[12] S. D'Alfonso. AI in mental health. Curr. Opin. Psychol., 36:112–117, 2020.
[13] S. Su, Y. Wang, W. Jiang, W. Zhao, R. Gao, Y. Wu, J. Yao, Y. Su, J. Zhang, K. Li, et al. Efficacy of artificial intelligence-assisted psychotherapy in patients with anxiety disorders: A prospective, national multicentre randomized controlled trial protocol. Front. Psychiatry, 13:799917, 2022.
[14] J. Sedlakova and M. Trachsel. Conversational artificial intelligence in psychotherapy: A new therapeutic tool or agent? Am. J. Bioeth., 23:4–13, 2022.
[15] P. Henson, H. Wisniewski, C. Hollis, M. Keshavan, and J. Torous. Digital mental-health apps and the therapeutic alliance: Initial review. BJPsych Open, 5:e15, 2019.
[16] G.N. Vilaza and D. McCashin. Is the automation of digital mental health ethical? Front. Digit. Health, 3:689736, 2021.
[17] K.I. Roumeliotis and N.D. Tselikas. ChatGPT and Open-AI models: A preliminary review. Future Internet, 15:192, 2023.
[18] E. Adamopoulou and L. Moussiades. An overview of chatbot technology. In Artificial Intelligence Applications and Innovations: AIAI 2019, pages 261–280. Springer, Cham, Switzerland, 2020.
[19] A. Bayani, A. Ayotte, and J.N. Nikiema. Transformer-based tool for automated fact-checking of online health information: Development study. JMIR Infodemiol., 5:e56831, 2025.
[20] C.N. Hang, P.D. Yu, S. Chen, C.W. Tan, and G. Chen. Mega: Machine-learning-enhanced graph analytics for infodemic-risk management. IEEE J. Biomed. Health Inform., 27:6100–6111, 2023.
[21] M. Rollwage, K. Juchems, J. Habicht, B. Carrington, T. Hauser, and R. Harper. Conversational AI facilitates mental-health assessments and is associated with improved recovery rates. medRxiv, page 2022.10.23.22881887, 2022.
[22] R. Lewis, C. Ferguson, C. Wills, N. Jones, and R.W. Picard. A recommender system for treatment personalisation in digital mental-health therapy from professionals. In ACM CHI '22 Extended Abstracts, page 3519840, New York, NY, USA, 2022. ACM.
[23] J.M. Liu, D. Li, H. Cao, T. Ren, Z. Liao, J. Wu, and I. Cha. A large-language-model-based system for mental-health support. arXiv, page 2309.15461, 2023.
[24] T. Shaik, X. Tao, W.I. Higgins, R. Gururajan, X. Zhou, and U.R. Acharya. Remote patient monitoring using AI: Current state, challenges, and limitations. Rev. Data Min. Knowl. Discov., 13:e1485, 2023.
[25] A.J. Trappey, A.P.C. Lin, K.K. Hsu, C. Trappey, and K.K. Tu. Development of an empathy-centric counselling chatbot system capable of sentimental-dialogue analysis. Proceedings, 10:990, 2022.
[26] D. Hadar-Shoval, Z. Elyoseph, and M. Lvovsky. The plasticity of ChatGPT's mentalizing abilities. Front. Psychiatry, 14:124397, 2023.
[27] A. Kapoor and S. Goel. Applications of conversational AI in mental health: A survey. In Proceedings of ICOEI 2022, pages 8–30, Tirunelveli, India, 2022. IEEE.
[28] J. Molanan, A. Visuri, S.A. Suryanarayana, A.A. Aloru, K. Yatani, and S. Hosio. Measuring the effect of mental-health-chatbot personality on user engagement. In Proceedings of MUM 2022, Lisbon, Portugal, 27–30 November 2022, pages 138–151, New York, NY, USA, 2022. ACM.
[29] S. Moulya and T.R. Pragathi. Mental-health assist and diagnosis conversational interface using logistic-regression model. J. Phys. Conf. Ser., 012099, 2023.
[30] A.C. Tricco, E. Lillie, W. Zarin, K.K. O'Brien, H. Colquhoun, D. Levac, D. Moher, M.D.J. Peters, T. Horsley, L. Weeks, et al. PRISMA extension for scoping reviews (PRISMA-ScR): Checklist and explanation. Ann. Intern. Med., 169:467–473, 2018.
[31] A. Thieme, M. Harratty, M. Lyons, J. Palacios, R.F. Marques, C. Morrison, and G. Doherty. Designing human-centred AI for mental health. ACM Trans. Comput. Hum. Interact., 35, 2023.
[32] S. Tutun, M.E. Johnson, A. Ahmed, A. Albizri, S. Irgli, I. Yesilkaya, E.N. Ucar, T. Sengun, and A. Harfouche. An AI-based decision-support system for predicting mental-health disorders. Inf. Syst. Front., 25:1261–1276, 2023.
[33] G.C. Park, J. Chung, S. Lee, D.C. Atkins, and T. Althoff. Effects of AI-chatbot emotional discourse on user satisfaction. Curr. Psychol., 42:28663–28673, 2023.
[34] A. Sharma, I.W. Lin, A.S. Miner, and D.C. Atkins. Human–AI collaboration enables empathic conversations. arXiv, page 2203.15144, 2022.
[35] A. Amant, M. Rizwan, A.R. Javed, M. Abdelhaq, R. Alsaqour, S. Pandya, and M. Uddin. Deep learning for depression detection from textual data. Electronics, 11:676, 2022.
[36] P. Rathnayaka, N. Mills, D. Burnett, V. De Silva, D. Alahakoon, and R. Gray. A mental-health chatbot with cognitive skills. Sensors, 22:3653, 2022.
[37] C. Siemon, R. Ahmad, H. Harms, and T. De Vreede. Requirements and solution approaches to personality-adaptive conversational agents in mental health care. Sustainability, 14:3832, 2022.
[38] C. Blease, A. Coarco, M. Annoni, J. Gaab, and C. Locher. Machine learning in clinical psychology education. Front. Public Health, 9:623088, 2021.
[39] M. Rollwage, K. Juchems, J. Habicht, B. Carrington, M. Stilianou, T. Hauser, and R. Harper. Using conversational AI to facilitate mental-health assessments. JMIR AI, 2:e44356, 2023.
[40] R. Fulmer, A. Jorrin, B. Gentile, L. Lakerink, and M. Rauws. Using psychological AI (Tess) to relieve symptoms of depression and anxiety. JMIR Ment. Health, 5:e64, 2018.
[41] E. Nichele, A. Lavorgna, and S.E. Middleton. Challenges in digital mental health moderation. Proc. Comput. Sci., 217, 2022.
[42] M. Chen, K. Shen, R. Wang, Y. Miao, Y. Jiang, H.W. Yang, T. Kao, H. Gao, and Z. Liu. A new perspective for mental health monitoring. ACM Trans. Internet Technol., 22:1–16, 2022.
[43] S. Jin, H. Choi, and K. Han. AI-augmented art psychotherapy. In Proceedings of CIKM 2022, Atlanta, GA, USA, 17–21 October 2022, pages 4089–4099, New York, NY, USA, 2022. ACM.
[44] R. Shao. An empathetic AI for mental health intervention: Conceptualizing and examining artificial empathy. In Empathy-Centric Design Workshop 2023, pages 1–6, New York, NY, USA, 2023. ACM.
[45] R.K. Maurya. Qualitative content analysis of ChatGPT's client simulation role-play for practising counselling skills. Couns. Psychother. Res., 24:614–630, 2023.
[46] Z. Elyoseph, D. Hadar-Shoval, K. Asraf, and M. Lvovsky. ChatGPT outperforms humans in emotional-awareness evaluations. Front. Psychol., 14:1199088, 2023.
[47] I. Levkovich and Z. Elyoseph. Identifying depression and its determinants with ChatGPT. Fam. Med. Community Health, 11:e002391, 2023.
[48] M. Danieli, T. Ciulli, S.M. Mousavi, and G. Riccardi. Conversational AI agent for a mental-health-care app. JMIR Form. Res., 5:e30053, 2021.
[49] M. Zhang, J. Scandiffo, S. Younus, T. Jeyakumar, I. Karsan, R. Charow, M. Salhia, and D. Wiljer. Adoption of AI in mental-health care—perspectives from professionals. JMIR Form. Res., 7:e78747, 2023.
[50] A.R. Wrightson-Hester, G. Anderson, J. Dunstan, P.M. McEvoy, C.J. Sutton, B. Myers, S. Egan, S. Tai, M. Johnston-Hollit, W. Chen, et al. An artificial therapist to support youth mental health. JMIR Hum. Factors, 9:e06849, 2022.
[51] A. Kleiman, A. Rosenfeld, and H. Rosemarin. ML-based routing of callers in a mental-health hotline. Isr. J. Health Policy Res., 11:25, 2022.
[52] T.F. Heston. Safety of artificial intelligence in addressing depression. Cureus, 15:e05079, 2023.
[53] P. Kayvan, K. Ahmed, A. Ibaida, Y. Miao, and B. Gu. Early detection of depression using a conversational AI bot. PLoS ONE, 18, 2023.
[54] T. Kannampalli, C.R. Roneberg, W. Wittels, V. Kumar, N. Lv, J.M. Smith, B.S. Gerber, E.A. Kringle, J.A. Johnson, P. Yu, et al. A virtual voice-based coach for problem-solving treatment. JMIR Form. Res., 6:e08092, 2022.
[55] M. Danieli, T. Ciulli, S.M. Mousavi, G. Silvestri, G. Barbato, L. Di Natale, and G. Riccardi. Assessing the impact of conversational AI for stress and anxiety in aging adults: Randomized controlled trial. JMIR Ment. Health, 9:e03867, 2022.
[56] I. Straw and C. Callison-Burch. AI in mental health and the biases of language-based models. PLoS ONE, 15:e0240376, 2020.
[57] F.M. Dawoodbhoy, J. Delaney, P. Cecula, J. Yu, I. Peacock, J. Tan, and B. Cox. AI in patient-flow management in mental-health units. JMIR Ment. Health, 7:e06993, 2021.
[58] S.A. Alowais, S.S. Alghandi, N. Alshehabany, T. Alqahtani, A.I. Alshaya, S.N. Almohareb, A. Aldaierim, M. Alrashed, K.B. Saleh, H.A. Badreldin, et al. AI in clinical practice: A review. BMC Med. Educ., 23:689, 2023.
[59] S.M.A. Rahman, S. Ibtisam, E. Bagzi, and T. Barai. The significance of machine learning in clinical-disease diagnosis: A review. J. Comput. Appl., 15:10–17, 2023.
[60] A. Lejeune, A. Le Glaz, P.A. Perron, J. Sehti, M. Walter, C. Lemey, and S. Berrouiguet. Artificial intelligence and suicide prevention: A systematic review. Eur. Psychiatry, 65:e19, 2022.
[61] A. Kruszyńska-Fischbach, S. Sysko-Romańczuk, T.M. Napórkowski, A. Napórkowska, and D. Kozakiewicz. How to prepare primary healthcare providers' services for digital transformation. Int. J. Environ. Res. Public Health, 19:3973, 2022.
[62] A. Hunter and A. Riger. The meaning of community in community mental health. J. Community Psychol., 14:55–71, 1986.
[63] H.R. Lawrence, A.X. Schneider, S.B. Rubin, M.J. Matric, D.J. McDuff, and M. Jones Bell. Opportunities and risks of LLMs in mental health care. arXiv, page 2403.14814, 2024.
[64] A.C. Timmons, J.B. Duong, N. Simo Fiallo, T. Lee, H.P.Q. Vo, J.S. Comer, L.C. Brewer, S.L. Frazier, T. Chaspari, et al. A call to action on assessing and mitigating bias in AI for mental health. Perspect. Psychol. Sci., 18:1062–1096, 2023.
[65] M. Holohan and A. Fiske. Exploring transference in AI-based psychotherapy applications. Front. Psychol., 12:720476, 2021.
[66] B. Lin, G. Cecchi, and D. Bouneffouf. Psychotherapy AI companion with reinforcement-learning recommendations. In Proceedings of the ACM Web Conference 2023, page 3587623, New York, NY, USA, 2023. ACM.
[67] A.J. Knight and N. Bidargaddi. Commonly available activity tracker apps and wearables as a mental health outcome indicator: A prospective observational cohort study among young adults with psychological distress. J. Affect. Disord., 236:31–36, 2018.
[68] W.R. Beardslee, P.L. Chien, and C.C. Bell. Prevention of mental disorders: A developmental perspective. Psychiatr. Serv., 62:247–254, 2011.
[69] A.E. Kazdin. Adolescent mental health: Prevention and treatment programs. Am. Psychol., 48:127–141, 1993.
[70] S. Garrido, C. Millington, D. Cheers, K. Boydell, E. Schubert, T. Meade, and Q.V. Nguyen. What works and what doesn't work? A systematic review of digital mental-health interventions for depression and anxiety in young people. Front. Psychiatry, 10:759, 2019.
[71] World Economic Forum. Global governance toolkit for digital mental health, 2021.
[72] Mental Health Commission of Canada. Artificial intelligence in mental health services: An environmental scan. Available online: https://mentalhealthcommission.ca (accessed on 10 April 2025).
[73] S. Volkmer, A. Meyer-Lindenberg, and E. Schwarz. Large language models in psychiatry: Opportunities and challenges. Psychiatry Res., 339:116026, 2024.
[74] X. Xian, A. Chang, Y.T. Xiang, and M.T. Liu. Debate and dilemmas regarding generative AI in mental health care: Scoping review. Interact. J. Med. Res., 13:e53672, 2024.
[75] Z. Guo, A. Lai, J. Thygesen, J. Farrington, T. Keen, and K. Li. Large language models for mental health applications: Systematic review. JMIR Ment. Health, 11:e57400, 2024.
[76] Z. Liu, Y. Bao, S. Zeng, L. Yang, X. Zhang, and Y. Wang. Large language models in psychiatry: Current applications, limitations, and future scope. Big Data Min. Anal., 7:1148–1168, 2024.
[77] N. Obradovich, S.S. Khalsa, W.U. Khan, J. Suh, R.H. Perlis, O. Ajilore, and M.P. Paulus. Opportunities and risks of large language models in psychiatry. NPP Digit. Psychiatry Neurosci., 2:8, 2024.
[78] C. Blease and A. Rodman. Generative artificial intelligence in mental healthcare: An ethical evaluation. Curr. Treat. Options Psychiatry, 12:5, 2025.
[79] S. Kolding, R.M. Lundin, L. Hansen, and S.D. Østergaard. Use of generative artificial intelligence (AI) in psychiatry and mental health care: A systematic review. Acta Neuropsychiatr., 37:e37, 2025.
[80] G. Holmes, B. Tang, S. Gupta, S. Venkatesh, H. Christensen, and A. Whitton. Applications of large language models in the field of suicide prevention: Scoping review. J. Med. Internet Res., 27:e63126, 2025.
