Chatbot for admissions
Communication between potential students and a university department is currently handled manually and is very time-consuming. The opportunity to communicate with staff on a one-to-one basis is highly valued; however, with many hundreds of applications each year, one-to-one conversations are not feasible in most cases. Handling them requires a member of academic staff to spend several hours finding suitable answers and contacting each student, so reducing this cost and effort would be valuable. The project aims to reduce the burden on the head of admissions, and potentially other users, by developing a convincing chatbot. A suitable algorithm must be devised to search the set of data and find a potential answer; the program then replies to the user and provides a relevant web link if the user is not satisfied by the answer. Furthermore, a web interface is provided for both users and an administrator. The achievements of the project can be summarised as follows. To prepare the background of the project, a literature review was undertaken, together with an investigation of existing tools and consultation with the head of admissions. The requirements of the system were established, and a range of algorithms and tools was investigated, including keyword and template matching. An algorithm that combines keyword matching with string similarity has been developed, and a usable system based on the proposed algorithm has been implemented. The system was evaluated by keeping logs of questions and answers and through feedback from the potential students who used it.
💡 Research Summary
The paper addresses the growing administrative burden faced by university admissions offices, where hundreds of prospective students submit inquiries each year that must be answered manually by staff. Recognizing that one‑to‑one conversations are highly valued but impractical at scale, the authors set out to design, implement, and evaluate a conversational chatbot that can automatically provide relevant answers and, when necessary, direct users to appropriate web resources.

The development process began with a thorough requirements analysis involving interviews with the head of admissions and a review of existing documentation (application timelines, required documents, scholarship policies, program information, etc.). From this analysis, a taxonomy of common question categories was established, and a baseline set of answer templates was created. A literature review compared three major approaches: simple FAQ keyword matching, rule‑based expert systems, and modern deep‑learning question‑answering models. Considering constraints such as limited development budget, the need for real‑time responses, and the desire for interpretability, the authors opted for a hybrid algorithm that combines keyword matching with string‑similarity metrics.

Specifically, user input is first tokenized using a Korean morphological analyzer; key nouns, verbs, and adjectives are matched against a curated keyword dictionary. If the initial match score falls below a predefined threshold, the system computes Levenshtein distance and TF‑IDF‑based cosine similarity to refine the ranking. The answer with the highest combined score is returned; if the user indicates dissatisfaction, the chatbot automatically supplies a hyperlink to the relevant section of the university's admissions website. All question‑answer pairs and associated metadata are stored in a SQLite database, and an administrator web portal allows staff to add, edit, or delete entries without technical assistance.
The backend is built with Python Flask, while the frontend uses React to deliver a responsive chat interface that includes conversation history, quick‑feedback buttons, and optional escalation to human staff.
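The SQLite store behind the administrator portal might look roughly like this, using Python's built-in sqlite3 module; the table schema, column names, and sample entry are assumptions for illustration, not taken from the paper.

```python
import sqlite3

def init_db(path=":memory:"):
    """Create the question-answer table (schema is illustrative)."""
    conn = sqlite3.connect(path)
    conn.execute("""CREATE TABLE IF NOT EXISTS faq (
                        id       INTEGER PRIMARY KEY,
                        keywords TEXT NOT NULL,  -- comma-separated keyword list
                        question TEXT NOT NULL,
                        answer   TEXT NOT NULL,
                        link     TEXT            -- fallback admissions web link
                    )""")
    return conn

def add_entry(conn, keywords, question, answer, link=None):
    """The kind of call an 'add entry' form in the admin portal would make."""
    conn.execute(
        "INSERT INTO faq (keywords, question, answer, link) VALUES (?, ?, ?, ?)",
        (",".join(keywords), question, answer, link))
    conn.commit()

def all_entries(conn):
    """List stored question-answer pairs, e.g. for the admin overview page."""
    return conn.execute("SELECT question, answer FROM faq").fetchall()

conn = init_db()
add_entry(conn, ["deadline", "apply"], "When is the deadline?",
          "Applications close on 30 June.", "https://example.edu/admissions")
```

Keeping the entries in a plain table like this is what allows non-technical staff to maintain the knowledge base through web forms rather than code changes.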
Evaluation was conducted over a three‑month period during the 2025 fall admissions cycle. A total of 212 prospective students interacted with the system, generating 1,038 queries. Log analysis showed an average response latency of 1.2 seconds. The hybrid matching algorithm achieved a 78 % success rate in delivering satisfactory answers on the first attempt, and post‑interaction surveys indicated an overall user satisfaction rating of 84 %. Satisfaction was notably lower for complex, multi‑part questions, highlighting a limitation of the current approach. The authors also identified shortcomings related to handling misspellings, non‑standard phrasing, and the lack of multilingual support.
Future work will focus on integrating pre‑trained Transformer models such as BERT or KoGPT to improve contextual understanding and to enable more nuanced responses. The authors propose adding multimodal capabilities (image and voice input) and a human‑in‑the‑loop escalation mechanism for queries that exceed the chatbot’s confidence threshold. Additionally, they plan to implement an automated retraining pipeline that leverages logged interactions and user feedback to continuously refine the model. Finally, the paper suggests scaling the solution beyond admissions to other university administrative services, thereby creating a unified AI‑driven support ecosystem.