AI Literacy, Safety Awareness, and STEM Career Aspirations of Australian Secondary Students: Evaluating the Impact of Workshop Interventions

Notice: This research summary and analysis were automatically generated using AI technology. For absolute accuracy, please refer to the original arXiv source.

Deepfakes and other forms of synthetic media pose growing safety risks for adolescents, yet evidence on students’ exposure and related behaviours remains limited. This study evaluates the impact of Day of AI Australia’s workshop-based intervention designed to improve AI literacy and conceptual understanding among Australian secondary students (Years 7-10). Using a mixed-methods approach with pre- and post-intervention surveys (N=205 pre; N=163 post), we analyse changes in students’ ability to identify AI in everyday tools, their understanding of AI ethics, training, and safety, and their interest in STEM-related careers. Baseline data revealed notable synthetic media risks: 82.4% of students reported having seen deepfakes, 18.5% reported sharing them, and 7.3% reported creating them. Results show higher self-reported AI knowledge and confidence after the intervention, alongside improved recognition of AI in widely used platforms such as Netflix, Spotify, and TikTok. This pattern suggests a shift from seeing these tools as merely “algorithm-based” to recognising them as AI-driven systems. Students also reported increased interest in STEM careers post-workshop; however, effect sizes were small, indicating that sustained approaches beyond one-off workshops may be needed to influence longer-term aspirations. Overall, the findings support scalable AI literacy programs that pair foundational AI concepts with an explicit emphasis on synthetic media safety.


💡 Research Summary

This paper investigates the short‑term impact of a one‑day “Day of AI Australia” workshop on Australian secondary school students’ artificial intelligence (AI) literacy, synthetic‑media safety awareness, and interest in STEM (science, technology, engineering, and mathematics) careers. The study was conducted in three New South Wales government high schools with students from Years 7‑10 (approximately ages 12‑17). A total of 205 students completed a pre‑workshop questionnaire and 163 completed a post‑workshop questionnaire; the surveys were anonymous and paper‑based, preventing individual‑level pairing, so analyses compare independent samples before and after the intervention.

The questionnaire measured several constructs: demographics, AI knowledge (self‑rated on a 5‑point Likert scale), ability to identify AI in everyday tools (multiple‑choice), self‑reported understanding of AI functions, confidence in using AI, frequency of AI tool usage across 12 applications, motivations for using AI, sources of AI information, exposure to deepfakes, attitudes toward AI ethics, privacy, and bias, and STEM career interest. Reliability and validity of the scales were not detailed, but the authors used non‑parametric statistics because the response distributions were non‑normal.
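Because the anonymous surveys could not be paired, the pre/post comparison described above amounts to testing two independent samples of ordinal Likert ratings. A minimal sketch of that kind of analysis (not the authors' actual code; all data values here are invented for illustration) might look like this in Python with SciPy:

```python
# Illustrative sketch: non-parametric comparison of unpaired pre/post
# Likert ratings, as the summary describes. Data are hypothetical.
import numpy as np
from scipy import stats

pre = np.array([2, 3, 3, 2, 4, 3, 2, 3, 1, 3] * 10)   # hypothetical 5-point ratings
post = np.array([3, 4, 3, 3, 4, 4, 3, 3, 2, 4] * 10)

# A normality check motivates the non-parametric choice
_, p_norm = stats.shapiro(pre)

# Mann-Whitney U suits independent (unpaired) ordinal samples
u, p = stats.mannwhitneyu(pre, post, alternative="two-sided")

# Effect size r via the normal approximation of U (tie correction omitted)
n1, n2 = len(pre), len(post)
z = (u - n1 * n2 / 2) / np.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
r = abs(z) / np.sqrt(n1 + n2)
print(f"U={u:.1f}, p={p:.4f}, r={r:.2f}")
```

The effect-size formula r = |z| / sqrt(N) is the same convention the summary appears to use when it reports r values of 0.12-0.22 later on.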

Pre‑workshop findings revealed that social‑media platforms (Snapchat, TikTok, YouTube) were the most frequently used AI‑enhanced services, with daily or weekly engagement. Generative tools such as ChatGPT, Canva (Magic Write/Design), and Google Gemini were used weekly, while education‑specific AI tools (Khanmigo, EduChat) and companion bots (Replika) were rarely used. A Friedman test confirmed significant differences in usage frequency (Q = 667.724, p < .001). The primary motivations for using AI were schoolwork (65.9%), problem solving (39.5%), and fun (38.0%). When seeking information about AI, students relied heavily on social media (74.1%), followed by teachers (35.6%) and friends (34.6%). Notably, 82.4% of students reported having seen deepfakes, 18.5% had shared them, and 7.3% had created them. Only half knew how to report a deepfake, and confidence in handling a personal deepfake was low (34.1%).
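The Friedman test reported above compares each student's usage frequency across multiple tools, treating the tools as repeated measures. A hedged sketch of that procedure (hypothetical data and tool groupings, not the study's dataset) is:

```python
# Hypothetical sketch of the Friedman test the summary reports for
# per-student usage frequencies across AI tool categories.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_students = 50

# Each array holds one usage-frequency rating per student
# (0 = never ... 4 = daily) for an illustrative tool category.
social = rng.integers(2, 5, n_students)   # social media: frequent use
gen_ai = rng.integers(1, 4, n_students)   # generative tools: weekly-ish
edu    = rng.integers(0, 2, n_students)   # education-specific: rare

# Friedman test: do within-student ranks differ across categories?
stat, p = stats.friedmanchisquare(social, gen_ai, edu)
print(f"Friedman Q={stat:.2f}, p={p:.4g}")
```

With clearly separated usage patterns like these, the test statistic is large and p is far below .001, mirroring the direction of the reported result (though the study's Q = 667.724 reflects its 12 applications and full sample).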

The workshop comprised five sequential lessons: (1) what AI is and how it works, (2) AI ethics, (3) AI safety and data privacy, (4) AI‑generated misinformation, and (5) AI in careers and industries. Lessons combined short presentations, interactive activities, and discussion of real‑world examples.

Post‑workshop analysis showed statistically significant gains in students’ ability to identify AI in specific consumer platforms. Independent‑samples proportion tests indicated increases of 27.1% for Netflix (p < .001), 12.8% for TikTok (p < .001), and 16.0% for Spotify (p < .001). Recognition of AI in ChatGPT was already high, suggesting a ceiling effect. Self‑reported AI knowledge and confidence rose by roughly 0.3–0.4 points on the 5‑point scale, with effect sizes (r) ranging from 0.18 to 0.22, classified as small. STEM career interest also increased modestly (average rise of 0.2 points, r ≈ 0.12).
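An independent-samples proportion test of the kind described above can be computed directly from a pooled standard error. The sketch below uses hypothetical counts chosen only so that the pre/post gap matches the reported 27.1% increase for Netflix; the actual counts are not given in the summary:

```python
# Sketch of a two-sided z-test for the difference of two independent
# proportions. Counts below are hypothetical reconstructions, not the
# study's actual data.
import math
from scipy.stats import norm

def two_prop_ztest(x1, n1, x2, n2):
    """Pooled two-proportion z-test; returns (z, two-sided p)."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return z, 2 * norm.sf(abs(z))

# Hypothetical: ~40% of 205 pre-survey vs ~67% of 163 post-survey
# students recognising Netflix as AI-driven (a ~27-point gap).
z, p = two_prop_ztest(82, 205, 109, 163)
print(f"z={z:.2f}, p={p:.4f}")
```

With samples of this size, a 27-point difference is highly significant, consistent with the p < .001 values reported.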

Qualitative responses (open‑ended questions) were coded using NVivo’s auto‑code wizard, yielding 27, 18, and 24 initial codes for “what you liked”, “most useful part”, and “suggested improvements”, respectively. After thematic refinement, four high‑level themes emerged for likes (real‑world relevance, interactivity, collaborative discussion, and clear explanations), three for usefulness (hands‑on activities, ethical discussions, and career insights), and four for improvements (more time for practice, deeper technical content, stronger teacher facilitation, and broader tool exposure).

The authors interpret the findings as evidence that a single, well‑structured workshop can shift students’ conceptual framing of AI—from viewing platforms merely as “algorithm‑based” to recognising them as AI‑driven systems—and can modestly raise enthusiasm for STEM pathways. However, the modest effect sizes for career interest and safety behaviours suggest that lasting change likely requires sustained, curriculum‑embedded interventions rather than isolated events.

Limitations include the inability to track individual change due to anonymous, unpaired surveys; a sample confined to three NSW schools, limiting external validity; and the short duration of exposure (one day), which may not capture deeper learning or retention. The study also lacks a control group, making it difficult to rule out maturation or testing effects.

Future research directions proposed are: (1) longitudinal designs with repeated measures to assess retention and long‑term impact; (2) integration of teacher professional development to boost educator confidence in delivering AI content; (3) scaling the program across diverse socioeconomic and geographic contexts while addressing digital infrastructure gaps; and (4) expanding the curriculum to include more technical depth (e.g., model training, data set concepts) alongside ethical and safety components.

In sum, this paper contributes empirical data on Australian secondary students’ baseline AI exposure, demonstrates that a concise workshop can improve AI literacy and platform awareness, and underscores the need for ongoing, systemic AI education to foster robust safety practices and sustained STEM interest.

