Integrating Generative AI-enhanced Cognitive Systems in Higher Education: From Stakeholder Perceptions to a Conceptual Framework considering the EU AI Act
Many staff and students in higher education have adopted generative artificial intelligence (GenAI) tools in their work and study. GenAI is expected to enhance cognitive systems by enabling personalized learning and streamlining educational services. However, stakeholders' perceptions of GenAI in higher education remain divided, shaped by cultural, disciplinary, and institutional contexts. In addition, the EU AI Act requires universities to ensure regulatory compliance when deploying cognitive systems. These developments highlight the need for institutions to engage stakeholders and tailor GenAI integration to their needs while addressing their concerns. This study investigates how GenAI is perceived within the disciplines of Information Technology and Electrical Engineering (ITEE). Using a mixed-methods approach, we surveyed 61 staff and 37 students at the Faculty of ITEE, University of Oulu. The results reveal both shared and discipline-specific themes, including strong interest in programming support from GenAI and concerns over response quality, privacy, and academic integrity. Drawing on these insights, the study identifies a set of high-level requirements and proposes a conceptual framework for responsible GenAI integration. The discipline-specific requirements reinforce the importance of stakeholder engagement when integrating GenAI into higher education. The high-level requirements and the framework provide practical guidance for universities aiming to harness GenAI while addressing stakeholder concerns and ensuring regulatory compliance.
💡 Research Summary
This paper investigates how generative artificial intelligence (GenAI) is perceived within a technical faculty and proposes a responsible integration framework that aligns with stakeholder needs and the European Union AI Act. Using a mixed‑methods survey, the authors collected responses from 61 staff members and 37 students at the Faculty of Information Technology and Electrical Engineering (ITEE) at the University of Oulu. The quantitative and qualitative data reveal a nuanced picture: both groups are enthusiastic about practical applications such as programming assistance, automated feedback, and content creation, yet they share deep concerns about response accuracy, bias, privacy, copyright infringement, and especially threats to academic integrity. Staff additionally note that GenAI often produces lengthy but generic answers that lack domain‑specific depth, while students worry about over‑reliance eroding critical thinking and creativity. Awareness of the EU AI Act is limited, but respondents recognize the necessity of compliance with its high‑risk AI provisions (pre‑deployment risk assessments, transparency, human oversight, data governance).
From these insights the authors extract high‑level requirements: (1) transparent model explanations and verifiable outputs; (2) robust data‑protection and minimisation practices; (3) policies and monitoring mechanisms to safeguard academic honesty; (4) domain‑tailored functionalities (e.g., code autocompletion, circuit‑simulation support); and (5) continuous AI‑literacy training for both faculty and students.
Building on the requirements, the paper introduces a five‑pillar conceptual framework for integrating GenAI‑enhanced cognitive systems in higher education:
- Strategy & Policy – Define institutional AI vision, align with the EU AI Act, and set governance objectives.
- Governance & Regulation – Establish an AI ethics board, conduct risk classifications, enforce transparency reporting, and embed human‑in‑the‑loop controls.
- Technology & Infrastructure – Provide secure cloud/on‑premise environments, select appropriate models, implement quality‑monitoring pipelines, and enable customization for disciplinary needs.
- Education & Support – Deliver AI‑literacy curricula, create usage guidelines, and offer technical support services for faculty and students.
- Evaluation & Feedback – Measure learning outcomes, system performance, and compliance metrics; feed results back into iterative improvement cycles.
Each pillar is linked to the identified requirements and includes discipline-specific extensions for IT and Electrical Engineering (e.g., integration with GitHub Copilot-style tools and support for hardware design workflows). The authors outline a staged rollout: (i) stakeholder analysis and requirements gathering; (ii) regulatory and ethical review; (iii) pilot implementation with rigorous testing; (iv) campus-wide scaling and policy institutionalisation; and (v) ongoing monitoring and updates.
The study contributes both empirical evidence on GenAI attitudes in a technical university context and a practical, regulation‑aware roadmap for universities seeking to harness GenAI while mitigating risks. It underscores that successful adoption hinges on active stakeholder engagement, transparent governance, and alignment with emerging AI legislation, offering a template that can be adapted by other higher‑education institutions worldwide.