Computer Administering of the Psychological Investigations: Set-relational Representation

Notice: This research summary and analysis were generated automatically using AI technology. For complete accuracy, please refer to the original arXiv source.

Computer administration of a psychological investigation is the computer representation of the entire assessment procedure: test construction, test implementation, results evaluation, storage and maintenance of the resulting database, and its statistical processing, analysis, and interpretation. This article presents a mathematical description of psychological assessment with personality tests, based on set theory and relational algebra. It gives a relational data model needed to design a computer system that automates certain psychological assessments, and describes the finite sets, and the relations on them, required to construct a personality test. The model can be used to develop software that fully automates the whole process: test construction, test implementation, result evaluation, database storage, statistical processing, analysis, and interpretation. A software project for computer administration of personality psychological tests is proposed.


💡 Research Summary

The paper presents a formal, set‑theoretic and relational‑algebraic framework for fully computer‑administered psychological assessments, with a particular focus on personality tests. It begins by identifying the elementary entities involved in any psychometric instrument—items, respondents, scores, scales, sub‑scales—and models each as a finite set. Relationships among these sets (e.g., the triple (respondent, item, score)) are expressed as relations, i.e., subsets of Cartesian products. By grounding the model in set theory, the authors guarantee mathematical rigor and unambiguous definitions for every component of the testing process.
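The set-theoretic idea above can be made concrete in a few lines. This is a minimal sketch, not the paper's notation: the names `ITEMS`, `RESPONDENTS`, `SCORES`, and `responses` are illustrative, and the check simply verifies that a relation is a subset of the Cartesian product of the component sets.

```python
# Illustrative finite sets (names are assumptions, not from the paper).
ITEMS = {"i1", "i2", "i3"}        # test items
RESPONDENTS = {"r1", "r2"}        # people taking the test
SCORES = {0, 1, 2, 3}             # admissible scores

# A relation is a subset of RESPONDENTS x ITEMS x SCORES:
# each tuple records (who answered, which item, what score).
responses = {("r1", "i1", 2), ("r1", "i2", 0), ("r2", "i1", 3)}

def is_valid_relation(rel):
    """True iff every tuple lies in the Cartesian product of the sets."""
    return all(r in RESPONDENTS and i in ITEMS and s in SCORES
               for (r, i, s) in rel)

print(is_valid_relation(responses))
```

Because every component of the model is a finite set, validity of any stored relation can be checked mechanically in exactly this way.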

Using the basic operations of relational algebra—selection (σ), projection (π), join (⨝), and aggregation (γ)—the authors map the procedural steps of test administration onto database queries. When a participant answers an item, the response is inserted into the “response” relation. A join with the item‑weight relation links each answer to its prescribed weight, and an aggregation computes scale scores automatically. Because these operations correspond directly to standard SQL commands, the entire scoring pipeline can be executed inside a relational database management system without custom code.
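The join-then-aggregate scoring pipeline can be run inside any RDBMS, as the summary notes. Here is a hedged sketch using Python's built-in `sqlite3`; the table and column names (`response`, `item_weight`) are illustrative assumptions, not the paper's schema.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE response (respondent TEXT, item TEXT, answer INTEGER);
CREATE TABLE item_weight (item TEXT, scale TEXT, weight REAL);
""")
con.executemany("INSERT INTO response VALUES (?, ?, ?)",
                [("r1", "i1", 1), ("r1", "i2", 0), ("r1", "i3", 1)])
con.executemany("INSERT INTO item_weight VALUES (?, ?, ?)",
                [("i1", "extraversion", 2.0),
                 ("i2", "extraversion", 1.0),
                 ("i3", "neuroticism", 3.0)])

# Join (⨝) response with item_weight, then aggregate (γ) per scale.
rows = con.execute("""
    SELECT r.respondent, w.scale, SUM(r.answer * w.weight) AS score
    FROM response r JOIN item_weight w ON r.item = w.item
    GROUP BY r.respondent, w.scale
    ORDER BY w.scale
""").fetchall()
print(rows)  # [('r1', 'extraversion', 2.0), ('r1', 'neuroticism', 3.0)]
```

The single SQL statement is the entire scoring step: no custom scoring code is needed beyond the query itself.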

Test construction is handled through a hierarchy of relations that link scales to sub‑scales and sub‑scales to item groups. This hierarchical schema is normalized to eliminate redundancy, allowing new items or scales to be added simply by inserting new tuples into the appropriate set. The model therefore supports rapid prototyping of novel instruments and easy maintenance of existing ones.
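A normalized scale → sub-scale → item hierarchy of this kind might look as follows. This is a sketch under assumed table names; the paper describes the hierarchy abstractly, not this particular schema.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE scale (scale_id TEXT PRIMARY KEY);
CREATE TABLE sub_scale (sub_id TEXT PRIMARY KEY,
                        scale_id TEXT REFERENCES scale(scale_id));
CREATE TABLE item (item_id TEXT PRIMARY KEY,
                   sub_id TEXT REFERENCES sub_scale(sub_id),
                   text TEXT);
""")
# Extending an instrument is just inserting new tuples:
con.execute("INSERT INTO scale VALUES ('extraversion')")
con.execute("INSERT INTO sub_scale VALUES ('sociability', 'extraversion')")
con.execute("INSERT INTO item VALUES ('i1', 'sociability', 'I enjoy parties.')")

# All items reachable from a scale through the hierarchy:
rows = con.execute("""
    SELECT i.item_id FROM item i
    JOIN sub_scale s ON i.sub_id = s.sub_id
    WHERE s.scale_id = 'extraversion'
""").fetchall()
print(rows)  # [('i1',)]
```

Because each level of the hierarchy lives in its own table, adding a new scale, sub-scale, or item never duplicates existing data.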

Result storage is organized into several key relations: a “test‑history” relation records each respondent’s overall result together with a timestamp; a “scale‑score” relation stores the computed score for every scale; and a “statistical‑summary” view provides aggregate descriptors such as means, standard deviations, and confidence intervals. Because these are ordinary relational tables or views, longitudinal analyses, cohort comparisons, and individual progress tracking can be performed with straightforward SQL queries, facilitating both research and clinical applications.
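A statistical-summary view of the kind described can be sketched directly over an assumed `scale_score` table. SQLite has no built-in standard-deviation aggregate, so this illustration computes the population variance from `AVG` alone; all names are hypothetical.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE scale_score (respondent TEXT, scale TEXT, score REAL)")
con.executemany("INSERT INTO scale_score VALUES (?, ?, ?)",
                [("r1", "extraversion", 10.0),
                 ("r2", "extraversion", 14.0),
                 ("r3", "extraversion", 12.0)])

# Aggregate descriptors per scale, exposed as an ordinary view.
con.execute("""
CREATE VIEW statistical_summary AS
SELECT scale,
       COUNT(*)   AS n,
       AVG(score) AS mean,
       AVG(score * score) - AVG(score) * AVG(score) AS variance
FROM scale_score
GROUP BY scale
""")
print(con.execute("SELECT * FROM statistical_summary").fetchall())
```

Longitudinal or cohort queries then reduce to adding a `WHERE` clause or a join against the test-history relation.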

The implementation proposal recommends a conventional RDBMS as the core repository, a web‑based front‑end for item presentation and answer capture, and server‑side scripts (e.g., Python, Java) to orchestrate the relational‑algebra operations. Security measures—including encryption of stored responses, role‑based access control, and audit logging—are integrated into the design to meet ethical standards for psychological data.
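One server-side step in such an architecture might look like the hypothetical handler below: it records an incoming answer and refreshes the respondent's scale scores in a single transaction. All table and function names are assumptions made for this sketch, not part of the paper's proposal.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE response (respondent TEXT, item TEXT, answer INTEGER);
CREATE TABLE item_weight (item TEXT, scale TEXT, weight REAL);
CREATE TABLE scale_score (respondent TEXT, scale TEXT, score REAL);
""")
con.execute("INSERT INTO item_weight VALUES ('i1', 'extraversion', 2.0)")

def record_answer(respondent, item, answer):
    """Insert one answer, then recompute that respondent's scale scores."""
    with con:  # one atomic transaction
        con.execute("INSERT INTO response VALUES (?, ?, ?)",
                    (respondent, item, answer))
        con.execute("DELETE FROM scale_score WHERE respondent = ?",
                    (respondent,))
        con.execute("""
            INSERT INTO scale_score
            SELECT r.respondent, w.scale, SUM(r.answer * w.weight)
            FROM response r JOIN item_weight w ON r.item = w.item
            WHERE r.respondent = ?
            GROUP BY r.respondent, w.scale
        """, (respondent,))

record_answer("r1", "i1", 1)
print(con.execute("SELECT * FROM scale_score").fetchall())
```

Wrapping the insert and rescore in one transaction keeps the stored scores consistent with the raw responses even if a request fails mid-way.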

A pilot project applying the model to a Big‑Five‑style personality inventory demonstrates practical benefits: test administration time decreased by more than 30 %, data entry errors were virtually eliminated, and real‑time feedback became possible. The case study validates the claim that a set‑relational representation can automate the full lifecycle of a psychometric test—from item bank creation through scoring, storage, statistical analysis, and interpretation.

In conclusion, the authors argue that their mathematically grounded, relational data model offers a universal template for any psychological assessment. It ensures data integrity, supports extensibility, and enables full automation while preserving the flexibility needed for diverse test formats. Future work is suggested in the direction of adaptive testing, integration with machine‑learning‑driven item selection, and big‑data analytics to deliver personalized, data‑rich psychological evaluations.

