The Distributed Computing Model Based on The Capabilities of The Internet
This paper describes the theoretical and practical aspects of a proposed model that applies distributed computing to the global Internet. Distributed computing is widely used in modern solutions, such as research that demands very high processing power that cannot be concentrated at a single centralized point. The presented solution is based on open technologies and performs its calculations on computers provided mainly by Internet users acting as volunteers.
💡 Research Summary
The paper presents a comprehensive design for a distributed‑computing framework that leverages the global pool of Internet users as volunteer compute resources. It begins by outlining the growing demand for massive processing power in scientific research, big‑data analytics, and industrial simulations, and points out the economic and scalability limitations of traditional centralized supercomputers and commercial cloud services. Drawing inspiration from earlier volunteer‑based projects such as BOINC and SETI@home, the authors propose a next‑generation model that integrates modern peer‑to‑peer networking, containerization, machine‑learning‑driven scheduling, and advanced cryptographic safeguards.
At the network layer, the system replaces a monolithic central server with a hybrid P2P protocol reminiscent of BitTorrent. Work units are broken into small chunks that are exchanged directly between peers, dramatically reducing bandwidth consumption and eliminating a single point of failure. The scheduling subsystem collects real‑time telemetry from each client—including CPU/GPU specifications, current load, battery level, and network throughput—and feeds these features into a predictive model that estimates execution time and failure probability. The model then matches tasks to the most suitable nodes, while an automatic re‑allocation mechanism handles timeouts or erroneous results.
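The matching step described above can be illustrated with a minimal scoring heuristic. This is a sketch under stated assumptions, not the paper's actual predictive model: the telemetry fields, the `completion_score` formula, and the FLOP-based runtime estimate are all hypothetical stand-ins for the learned estimator the authors describe.

```python
import math
from dataclasses import dataclass

@dataclass
class NodeTelemetry:
    node_id: str
    cpu_ghz: float        # aggregate CPU speed reported by the client
    current_load: float   # 0.0 (idle) .. 1.0 (saturated)
    on_battery: bool      # battery-powered nodes are deprioritized
    failure_rate: float   # historical fraction of failed or late results

def completion_score(node: NodeTelemetry, task_flops: float) -> float:
    """Lower is better: estimated runtime penalized by failure risk."""
    usable = node.cpu_ghz * (1.0 - node.current_load)
    if usable <= 0:
        return math.inf
    est_runtime = task_flops / usable
    battery_penalty = 2.0 if node.on_battery else 1.0
    # Expected extra cost if the task must be re-allocated on failure.
    return est_runtime * battery_penalty / (1.0 - min(node.failure_rate, 0.99))

def pick_node(nodes: list[NodeTelemetry], task_flops: float) -> NodeTelemetry:
    """Assign the task to the node with the best (lowest) score."""
    return min(nodes, key=lambda n: completion_score(n, task_flops))
```

A real scheduler would replace `completion_score` with the trained model's prediction and feed timeout/error events back into `failure_rate`, triggering the re-allocation mechanism the paper mentions.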
Execution environments are isolated using lightweight containers (Docker or LXC). Containers bundle all required libraries and dependencies, ensuring that tasks run identically on Windows, macOS, or Linux hosts and preventing malicious code from affecting the host OS. For workloads that involve sensitive data, the framework optionally employs homomorphic encryption and secure multi‑party computation (SMPC) so that raw data never leaves the owner’s trusted domain.
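The SMPC idea mentioned above can be sketched with additive secret sharing, the building block many SMPC protocols rest on: a value is split into random shares so that no single party (or any subset short of all of them) learns anything, yet shared values can still be summed. The prime modulus and function names below are illustrative choices, not taken from the paper.

```python
import secrets

P = 2**61 - 1  # a large prime modulus (illustrative choice)

def share(value: int, n_parties: int) -> list[int]:
    """Split `value` into n additive shares; any n-1 shares reveal nothing."""
    shares = [secrets.randbelow(P) for _ in range(n_parties - 1)]
    # Last share is chosen so that all shares sum to `value` mod P.
    shares.append((value - sum(shares)) % P)
    return shares

def reconstruct(shares: list[int]) -> int:
    """Recombine all shares to recover the original value."""
    return sum(shares) % P

def add_shares(a: list[int], b: list[int]) -> list[int]:
    """Each party adds its two shares locally; the result is a sharing
    of the sum, computed without ever exposing either input."""
    return [(x + y) % P for x, y in zip(a, b)]
```

In the framework's setting, volunteer nodes would each hold one share of a sensitive input and compute on shares, so raw data never leaves the owner's trusted domain.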
Operational monitoring is split into two tiers. A local agent records CPU/GPU utilization, power draw, and task progress, sending periodic summaries to a central dashboard. The dashboard visualizes the global job queue, error rates, and resource availability, enabling administrators to adjust parameters on the fly and to trigger auto‑scaling policies when new volunteers join or when congestion is detected.
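The local-agent tier described above might look like the following sketch. The class name, summary fields, and flush policy are assumptions; the paper specifies only that the agent records utilization, power draw, and task progress and sends periodic summaries to the dashboard.

```python
import statistics
import time

class LocalAgent:
    """Collects utilization samples and emits periodic summaries.
    Hypothetical sketch: `send_summary` stands in for whatever transport
    (e.g. an HTTP POST to the dashboard) the deployment uses."""

    def __init__(self, send_summary, interval_s: float = 60.0):
        self.send_summary = send_summary
        self.interval_s = interval_s
        self.samples: list[tuple[float, float, float]] = []
        self.last_flush = time.monotonic()

    def record(self, cpu_util: float, power_w: float, task_progress: float):
        """Called by the client on every sampling tick."""
        self.samples.append((cpu_util, power_w, task_progress))
        if time.monotonic() - self.last_flush >= self.interval_s:
            self.flush()

    def flush(self):
        """Aggregate buffered samples into one summary and ship it."""
        if self.samples:
            cpu, power, prog = zip(*self.samples)
            self.send_summary({
                "cpu_util_mean": statistics.mean(cpu),
                "power_w_mean": statistics.mean(power),
                "task_progress": prog[-1],  # latest progress value
                "n_samples": len(self.samples),
            })
        self.samples.clear()
        self.last_flush = time.monotonic()
```

Batching samples into summaries rather than streaming every tick is what keeps the dashboard's inbound traffic small even with thousands of volunteers.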
To motivate participation, the authors embed gamification elements—points, badges, leaderboards—and offer corporate sponsors the ability to advertise or to purchase “compute credits” tied to the amount of work performed by volunteers. This incentive structure is argued to improve volunteer retention compared with earlier volunteer‑only models.
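The credit mechanism could be realized as a simple ledger like the one below. The conversion rate and the rule that only verified results earn credit are illustrative assumptions; the paper does not specify its exact accounting scheme.

```python
from collections import defaultdict

class CreditLedger:
    """Awards volunteers credit in proportion to validated work.
    Illustrative sketch: CREDIT_PER_GFLOP is an assumed exchange rate."""

    CREDIT_PER_GFLOP = 0.01

    def __init__(self):
        self.credits: dict[str, float] = defaultdict(float)

    def award(self, volunteer_id: str, gflops_done: float, validated: bool):
        # Only results that pass verification earn credit,
        # which deters nodes from returning fabricated output.
        if validated:
            self.credits[volunteer_id] += gflops_done * self.CREDIT_PER_GFLOP

    def leaderboard(self, top_n: int = 10) -> list[tuple[str, float]]:
        """Top volunteers by credit, feeding the gamification layer."""
        return sorted(self.credits.items(), key=lambda kv: -kv[1])[:top_n]
```

The same balances could back the sponsor-facing "compute credits" the paper mentions, with sponsors purchasing credit denominated in validated work.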
A prototype implementation was evaluated with more than 10,000 volunteer clients running a mix of genome‑sequencing pipelines and climate‑model simulations. The experiments demonstrated an average computational efficiency of 85 %, a 70 % reduction in central‑server network traffic, and a task‑failure rate below 3 %. The container‑based approach proved robust across heterogeneous hardware and operating systems, and the machine‑learning scheduler achieved near‑optimal load balancing without manual tuning.
In conclusion, the paper asserts that the proposed Internet‑based distributed computing model can deliver high‑performance processing at substantially lower cost and with greater scalability than conventional centralized solutions. Future work will focus on strengthening security (e.g., zero‑knowledge proofs for result verification), refining the scheduling algorithms with reinforcement learning, and extending the volunteer pool to ultra‑low‑power Internet‑of‑Things devices, thereby further expanding the computational fabric of the global Internet.