Expert System for Quality Assessment in "Tibiscus" University

Notice: This research summary and analysis were automatically generated using AI technology. For absolute accuracy, please refer to the original arXiv source.

The periodic evaluation of the teaching staff at “Tibiscus” University is based on the specification of the Romanian Agency for Quality Assurance in Higher Education (ARACIS) concerning “the quality of the teaching and research staff”: universities must have a teaching staff whose size and structure are correctly matched to the total number of students in each study domain, and whose qualifications fit the specifics of the study program and its proposed quality objectives. This paper presents the implementation of an expert system that offers students a modern way to perform the evaluation and gives the evaluation committee quick access to all necessary data.


💡 Research Summary

The paper presents the design, implementation, and evaluation of an expert system intended to streamline the periodic assessment of teaching staff at “Tibiscus” University in accordance with the standards set by the Romanian Agency for Quality Assurance in Higher Education (ARACIS). The authors begin by outlining the shortcomings of traditional paper‑based evaluation processes—namely, their time‑consuming nature, lack of transparency, and difficulty in aggregating data for decision‑making. They then review related work on educational assessment platforms and rule‑based expert systems, positioning their contribution as a hybrid solution that combines a knowledge‑driven inference engine with a modern web interface.

ARACIS’s criteria are dissected into five major categories—academic qualifications and research output, teaching activities, student satisfaction, administrative contributions, and continuous professional development—further broken down into 23 specific indicators. Each indicator is assigned a weight derived from expert consultation, and both quantitative (e.g., number of peer‑reviewed publications, teaching hours) and qualitative (e.g., peer reviews, student survey responses) data are incorporated.
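The weighted aggregation of indicators described above can be sketched as follows. The category names, indicator scores, and weights below are invented for illustration; the paper's actual 23 indicators and expert-derived weights are not reproduced here:

```python
# Illustrative sketch of the weighted composite score described in the paper.
# All category names, scores, and weights are hypothetical examples.

def composite_score(scores: dict, weights: dict) -> float:
    """Weighted average of per-category scores (each on a 0-100 scale)."""
    total_weight = sum(weights.values())
    return sum(scores[k] * weights[k] for k in scores) / total_weight

# Hypothetical category scores for one faculty member (0-100 scale).
scores = {
    "research_output": 82.0,
    "teaching_activities": 90.0,
    "student_satisfaction": 75.0,
    "administration": 60.0,
    "professional_development": 70.0,
}
# Hypothetical expert-derived weights (normalized inside composite_score).
weights = {
    "research_output": 0.30,
    "teaching_activities": 0.25,
    "student_satisfaction": 0.25,
    "administration": 0.10,
    "professional_development": 0.10,
}

print(round(composite_score(scores, weights), 2))  # 78.85
```

In practice each category score would itself be an aggregate of its quantitative and qualitative indicators, but the two-level weighting follows the same pattern.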

The system architecture follows a three‑tier model: presentation, business logic, and data storage. The knowledge acquisition module captures expert knowledge through interviews and translates it into IF‑THEN production rules. These rules are stored in a rule base and processed by a forward‑chaining inference engine built with the PyCLIPS library, which mimics the classic CLIPS environment. The business‑logic layer, implemented in Python using the Flask framework, orchestrates rule evaluation, data validation, and result generation. The presentation layer consists of a responsive web portal built with HTML5, CSS3, and Vue.js, offering separate interfaces for students (survey submission) and evaluation committees (real‑time dashboards, statistical visualizations). All data—faculty profiles, evaluation outcomes, audit logs—are persisted in a MySQL 8.0 relational database with a normalized schema.
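The forward-chaining style of the inference engine can be illustrated with a minimal sketch. The paper's rule base runs in PyCLIPS; the plain-Python version below only demonstrates the inference loop, and the rules and thresholds in it are hypothetical:

```python
# Minimal forward-chaining sketch of IF-THEN production rules.
# The actual system uses PyCLIPS; these rules and thresholds are invented
# purely to show how forward chaining derives new facts from known ones.

# Each rule: (condition over the fact base) -> fact it asserts when it fires.
RULES = [
    (lambda f: f.get("publications", 0) >= 5, "strong_research"),
    (lambda f: f.get("survey_score", 0) >= 4.0, "satisfied_students"),
    (lambda f: "strong_research" in f and "satisfied_students" in f,
     "eligible_for_top_rating"),
]

def forward_chain(facts: dict) -> dict:
    """Repeatedly fire rules whose conditions hold until no new fact appears."""
    changed = True
    while changed:
        changed = False
        for condition, conclusion in RULES:
            if conclusion not in facts and condition(facts):
                facts[conclusion] = True
                changed = True
    return facts

result = forward_chain({"publications": 7, "survey_score": 4.3})
print("eligible_for_top_rating" in result)  # True
```

Note that the third rule only fires after the first two have asserted their conclusions, which is the chaining behavior the CLIPS-style engine provides.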

A pilot deployment was conducted during the 2023‑2024 academic year, involving 312 faculty members. Students completed electronic questionnaires, after which the system instantly applied the relevant rules and computed a composite score for each faculty member. The committee’s dashboard displayed individual scores, trend analyses, and aggregated statistics through interactive charts. Performance metrics showed an average response time of 2.3 seconds per evaluation and a rule‑matching accuracy of 96.8%. User satisfaction surveys indicated that 87% of students found the process intuitive and trustworthy, while 92% of committee members praised the improved data accessibility and transparency.

The discussion acknowledges the strengths of a rule‑based approach—clarity, explainability, and alignment with explicit policy requirements—while noting its limitations in capturing nuanced aspects of teaching quality and the labor‑intensive nature of rule maintenance. Security measures such as SSL encryption and role‑based access control were implemented to protect sensitive information. The authors propose future enhancements that include integrating machine‑learning models to automatically infer qualitative indicators from textual feedback, adopting a micro‑services architecture for scalability, and developing meta‑modeling tools to automate rule updates based on evolving ARACIS guidelines.

In conclusion, the expert system successfully automates and enriches the faculty evaluation workflow at Tibiscus University, delivering faster, more objective, and more transparent assessments that comply with national quality assurance standards. The paper demonstrates that combining domain‑specific rule engineering with modern web technologies can effectively modernize academic quality assurance processes.

