The Role of Artificial Intelligence (AI) in Adaptive eLearning System (AES) Content Formation: Risks and Opportunities involved
Artificial Intelligence (AI) plays varying roles in supporting both existing and emerging technologies. In learning and tutoring, it plays a key role in Intelligent Tutoring Systems (ITS). The fusion of ITS with Adaptive Hypermedia and Multimedia (AHAM) forms the backbone of Adaptive eLearning Systems (AES), which provide personalized experiences to learners. This personalization is important because it tailors the delivery of learning modules to each learner's capacity and readiness. AES types vary, with Adaptive Web-Based eLearning Systems (AWBES) being the most popular type because of the wider access offered by web technology. The retrieval and aggregation of content for any eLearning system is critical and is determined by the relevance of the learning material to the needs of the learner. In this paper, we discuss the components of AES, the role of AI in AES content aggregation, and the possible risks and available opportunities.
💡 Research Summary
The paper provides a comprehensive overview of how artificial intelligence (AI) supports content formation in Adaptive eLearning Systems (AES) and examines the associated risks and opportunities. It begins by defining AES as systems that dynamically tailor learning materials, interaction styles, and user preferences to individual learners. According to Vassileva (2012), an AES must be adaptable to environmental changes, have modular components that can respond to external shifts, and include monitoring and control tools for continuous productivity improvement. These requirements are realized through the integration of Intelligent Tutoring Systems (ITS) and Adaptive Hypermedia and Multimedia (AHAM), as illustrated in the paper’s Figure 1.
AI’s primary contributions lie in ontology construction and knowledge retrieval. Ontologies formalize domain concepts and relationships, enabling semantic matching between learner profiles and educational resources. AI algorithms process real‑time learner data to select the most relevant content, making relevance a critical success factor for any eLearning platform. The literature review highlights several AI‑driven approaches: Fouad (2012) employed fuzzy clustering on web‑log data to extract learner‑interest terms; Lin et al. (2013) used decision‑tree classifiers within a Personalized Creativity Learning System; and Baradwaj & Pal (2011) applied decision trees to model student performance and generate tailored feedback. These methods demonstrate that data‑driven personalization can significantly improve content relevance and learning outcomes.
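The semantic-matching idea described above can be illustrated with a minimal sketch. The data structures, names, and weights below are assumptions for illustration only, not the implementation used in any of the cited systems: a learner profile maps ontology concepts to interest weights (e.g., mined from web logs), and each resource is tagged with the concepts it covers.

```python
# Hypothetical sketch of ontology-based relevance matching: score each
# resource by how strongly its concepts overlap the learner's interests.

def score_resource(profile, resource_concepts):
    """Sum the learner's interest weights over the concepts a resource covers."""
    return sum(profile.get(concept, 0.0) for concept in resource_concepts)

def rank_resources(profile, resources):
    """Return resources sorted by descending relevance to the learner profile."""
    return sorted(resources,
                  key=lambda r: score_resource(profile, r["concepts"]),
                  reverse=True)

# Learner profile: ontology concept -> interest weight (illustrative values).
profile = {"fractions": 0.9, "decimals": 0.6, "geometry": 0.1}

# Candidate learning resources tagged with ontology concepts.
resources = [
    {"id": "R1", "concepts": {"fractions", "decimals"}},
    {"id": "R2", "concepts": {"geometry"}},
    {"id": "R3", "concepts": {"fractions"}},
]

ranked = rank_resources(profile, resources)
print([r["id"] for r in ranked])  # → ['R1', 'R3', 'R2']
```

Real AES matching would operate over a full ontology with concept hierarchies and semantic similarity rather than flat keyword overlap, but the scoring-and-ranking shape is the same.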
Despite these advantages, the authors identify two major risks inherent in AI‑based autonomous content acquisition. The first concerns inadequate filtering of obscene, derogatory, or prohibited language. The paper cites social‑media practices where users bypass profanity filters by substituting characters with asterisks or exclamation marks (e.g., “sh!t”). If AI crawlers ingest such altered expressions without robust contextual analysis, learners, especially those for whom English is a second language, may mistakenly treat them as legitimate academic terminology. Microsoft’s chatbot “Tay,” which began producing extremist and misogynistic statements after exposure to unfiltered user input, serves as a cautionary example of how unchecked language models can propagate harmful content.
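One layer of defence against the character-substitution trick is to normalize common substitutions before checking a blocklist. The mapping and blocklist below are illustrative stand-ins, not a production filter; as the paper notes, robust filtering also requires contextual analysis beyond token matching.

```python
# Hedged sketch of substitution-aware profanity checking: map common
# character substitutions back to letters, then consult a blocklist.
# SUBSTITUTIONS and BLOCKLIST are illustrative, not exhaustive.

SUBSTITUTIONS = str.maketrans({
    "!": "i", "1": "i", "*": "i",  # "sh!t", "sh*t" -> "shit"
    "@": "a", "$": "s", "0": "o", "3": "e",
})

BLOCKLIST = {"shit"}  # stand-in for a real curated word list

def looks_prohibited(token: str) -> bool:
    """Return True if the token, after normalization, matches the blocklist."""
    normalized = token.lower().translate(SUBSTITUTIONS)
    return normalized in BLOCKLIST

print(looks_prohibited("sh!t"))   # True: "!" is normalized to "i"
print(looks_prohibited("shift"))  # False: a legitimate word passes
```

Because simple substitution maps produce false positives and miss novel obfuscations, such a check is only the first stage of the multi-layered pipeline the paper advocates.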
The second risk stems from the reliance on learners’ web‑log histories for content recommendation. According to SimilarWeb (2018), the top‑10 global websites by traffic are dominated by search engines, entertainment platforms, social networks, adult sites, and encyclopedias, with educational sites representing only a small fraction. Consequently, using raw web‑log data may lead AES to recommend non‑educational or even inappropriate material, undermining learning efficacy and potentially exposing students to misinformation.
To mitigate these threats, the paper proposes several opportunities. First, it advocates for multi‑layered ethical filtering pipelines that combine profanity detection, context‑aware semantic analysis, and human‑in‑the‑loop review. Second, it suggests prioritizing curated educational repositories and open‑educational‑resource (OER) platforms while treating general web activity as supplemental, low‑weight signals. Third, it emphasizes the importance of continuous human‑AI collaboration to validate automatically generated content before delivery.
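The second mitigation, treating general web activity as a supplemental, low-weight signal, can be sketched as a trust-weighted ranking. The source labels and weight values below are assumptions for illustration, not figures from the paper:

```python
# Illustrative sketch of source-priority weighting: content from curated
# educational repositories carries a high trust weight, while material
# surfaced from general web logs only weakly influences the ranking.

SOURCE_WEIGHTS = {
    "curated_oer": 1.0,        # curated open-educational-resource platforms
    "institutional_repo": 0.9, # vetted institutional repositories
    "general_web": 0.2,        # low-weight supplemental signal
}

def weighted_score(candidate):
    """Scale raw relevance by the trust weight of the content's source."""
    return candidate["relevance"] * SOURCE_WEIGHTS.get(candidate["source"], 0.0)

candidates = [
    {"id": "video-A", "source": "general_web", "relevance": 0.95},
    {"id": "lesson-B", "source": "curated_oer", "relevance": 0.70},
]

best = max(candidates, key=weighted_score)
print(best["id"])  # lesson-B: 0.70 * 1.0 outweighs 0.95 * 0.2
```

Under this scheme a highly "relevant" but untrusted web hit cannot displace a moderately relevant curated resource, which matches the paper's intent of keeping web-log signals supplemental.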
Beyond risk mitigation, the authors argue that AI can substantially reduce instructor workload by autonomously searching, retrieving, and updating knowledge bases, thereby enabling more frequent and precise personalization. As AI research advances, many of today’s challenges—such as bias, content safety, and relevance—are expected to be addressed through improved algorithms, better training data, and stronger governance frameworks. Future work should focus on quantifying risk exposure, developing meta‑learning strategies that balance safety with personalization, and establishing standardized evaluation metrics for AI‑driven AES.
In summary, AI‑enabled Adaptive eLearning offers powerful personalization capabilities but must be deployed with rigorous safeguards against inappropriate content and non‑educational data contamination. By integrating ethical filtering, curated data sources, and human oversight, AES can harness AI’s opportunities while minimizing its inherent risks.