CoCalc as a Learning Tool for Neural Network Simulation in the Special Course "Foundations of Mathematical Informatics"
The role of neural network modeling in the content of the special course “Foundations of Mathematical Informatics” is discussed. The course was developed for students of technical universities — future IT specialists — and aims to bridge the gap between theoretical computer science and its applied branches: software, system, and computing engineering. CoCalc is justified as a learning tool for mathematical informatics in general and for neural network modeling in particular. Elements of a technique for using CoCalc in studying the topic “Neural networks and pattern recognition” within the special course are presented. Program code written in CoffeeScript implements the basic components of an artificial neural network: neurons, synaptic connections, activation functions (hyperbolic tangent, sigmoid, step) and their derivatives, methods for computing the network’s weights, etc. The application of the Kolmogorov–Arnold representation theorem to determining the architecture of multilayer neural networks is discussed. The implementation of a disjunctive logic element and the approximation of an arbitrary function by a three-layer neural network are given as examples. Based on the simulation results, conclusions are drawn about the limits within which the constructed networks retain their adequacy. A framework of topics for individual student research on artificial neural networks is proposed.
💡 Research Summary
The paper presents a comprehensive approach to teaching artificial neural network (ANN) simulation within the “Foundations of Mathematical Informatics” special course aimed at technical university students. Recognizing a gap between theoretical computer science and its practical applications, the authors propose using CoCalc—a cloud‑based, integrated environment that combines Sage worksheets, Jupyter notebooks, LaTeX workflow, automatic backup, and replication—as the primary learning platform. CoCalc’s support for multiple programming languages, including CoffeeScript, enables students to write, execute, and visualize neural‑network code directly in the browser without any local installation.
The curriculum is divided into four substantive modules: (1) Theory of Algorithms (complexity analysis, sorting, recursion), (2) Numerical Methods (linear/non‑linear equations, approximation, differential equations, optimization techniques), (3) Coding Theory (linear, cyclic, BCH, Reed‑Solomon codes), and (4) Cryptography (symmetric/asymmetric schemes, RSA, digital signatures). The neural‑network and pattern‑recognition component sits at the intersection of these modules, emphasizing mathematical modeling, the Kolmogorov‑Arnold representation theorem, and three‑layer network design.
Technically, the authors supply a complete CoffeeScript implementation of the ANN core. The code defines a Synapse class (connecting two neurons with a randomly initialized weight), three activation functions (hyperbolic tangent, sigmoid, step) each with forward and derivative methods, and a Neuron class that encapsulates learning rate, momentum, threshold, and output calculation. The network architecture follows the classic three‑layer schema (input‑hidden‑output) and can be customized by the student through the number of neurons per layer and the choice of activation function.
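The building blocks described above can be sketched as follows. The paper's original code is in CoffeeScript; this JavaScript port keeps the same structure (Synapse, activation functions with forward and derivative methods, a threshold Neuron), but the exact member names and initialization details are assumptions, not the authors' code:

```javascript
// Minimal sketch of the ANN building blocks: not the paper's original
// CoffeeScript, but an equivalent structure in JavaScript.

class Synapse {
  constructor(source, target) {
    this.source = source;
    this.target = target;
    this.weight = Math.random() * 2 - 1; // random init in [-1, 1]
  }
}

// Activation functions, each with a forward pass and its derivative.
const activations = {
  tanh: {
    forward: x => Math.tanh(x),
    derivative: x => 1 - Math.tanh(x) ** 2,
  },
  sigmoid: {
    forward: x => 1 / (1 + Math.exp(-x)),
    derivative: x => {
      const s = 1 / (1 + Math.exp(-x));
      return s * (1 - s);
    },
  },
  step: {
    forward: x => (x >= 0 ? 1 : 0),
    derivative: _x => 0, // not differentiable; 0 by convention
  },
};

class Neuron {
  constructor(activation = activations.sigmoid, threshold = 0) {
    this.activation = activation;
    this.threshold = threshold;
    this.incoming = []; // Synapse objects feeding this neuron
    this.output = 0;
  }
  addInput(synapse) {
    this.incoming.push(synapse);
  }
  // Weighted sum of inputs minus threshold, passed through activation.
  computeOutput() {
    const net = this.incoming.reduce(
      (sum, s) => sum + s.weight * s.source.output, 0) - this.threshold;
    this.output = this.activation.forward(net);
    return this.output;
  }
}
```

Wiring input neurons to an output neuron through Synapse objects and calling `computeOutput` on the output neuron reproduces a single forward pass; a training loop would then use each activation's `derivative` for backpropagation.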
The Kolmogorov‑Arnold theorem is invoked to justify that any continuous multivariate function can be approximated arbitrarily well by a three‑layer feed‑forward network. To illustrate this, the authors first implement discrete logical elements (AND, OR) using two input neurons and a single output neuron, adjusting weights either manually or via learning. Next, they demonstrate approximation of a non‑linear continuous function by training the three‑layer network on synthetic data, visualizing the convergence of the network output to the target function.
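The manual weight-setting route for the disjunctive (OR) element can be made concrete with a single threshold neuron. In this JavaScript sketch the weights and threshold are one workable choice, assumed for illustration rather than taken from the paper:

```javascript
// OR gate as a single threshold neuron: output = step(w1*x1 + w2*x2 - t).
// Weights w1 = w2 = 1 and threshold t = 0.5 are hand-picked so that the
// weighted sum crosses the threshold whenever at least one input is 1.
const step = x => (x >= 0 ? 1 : 0);
const orGate = (x1, x2) => step(1 * x1 + 1 * x2 - 0.5);

for (const [x1, x2] of [[0, 0], [0, 1], [1, 0], [1, 1]]) {
  console.log(`${x1} OR ${x2} = ${orGate(x1, x2)}`);
}
// Prints 0, 1, 1, 1 — the OR truth table.
```

An AND element needs only a higher threshold (e.g. t = 1.5 with the same weights), which is why these gates make a good first exercise before moving on to learned weights.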
Simulation results reveal two key findings. Small‑scale networks (2–3 inputs, 5–10 hidden neurons) reliably learn logical gates and simple function approximations with high accuracy and fast convergence. However, as input dimensionality grows or hidden layers become excessively large, training becomes unstable, prone to over‑fitting, and sensitive to hyper‑parameter choices (learning rate, weight initialization range). The authors therefore conclude that network complexity must be commensurate with problem dimensionality and data volume; excessive depth or width degrades performance rather than improving it.
From an educational perspective, the CoCalc‑based workflow bridges theory and practice. Students receive immediate visual feedback while coding, can collaborate in real time, and benefit from automatic backups that protect their work. Individual research projects—each student designing, training, and evaluating a custom ANN—promote autonomous problem‑solving and deepen understanding of both mathematical foundations and implementation details.
The paper also outlines future research directions: integration of mainstream deep‑learning libraries (PyTorch, TensorFlow) within CoCalc, exploitation of GPU acceleration for larger experiments, application of the developed ANN models to real datasets (e.g., image or speech recognition), and systematic assessment of learning outcomes through quantitative metrics. In sum, the study demonstrates that CoCalc is an effective, low‑cost platform for embedding neural‑network simulation into a mathematically rigorous informatics curriculum, preparing future IT specialists with both theoretical insight and hands‑on experience.