Distributed, Parallel, and Cluster Computing

All posts under category "Distributed, Parallel, and Cluster Computing"

13 posts total
Sorted by date

Distributed Deep Convolutional Neural Networks for the Internet-of-Things

Severe constraints on memory and computation characterizing Internet-of-Things (IoT) units may prevent the execution of Deep Learning (DL)-based solutions, which typically demand large memory and a high processing load. To support real-time execution of the considered DL model at the IoT-unit level, DL solutions must be designed with the memory and processing constraints of the chosen IoT technology in mind. In this paper, we introduce a design methodology for allocating the execution of Convolutional Neural Networks (CNNs) across a distributed IoT application. The methodology is formalized as an optimization problem in which the latency between the data-gathering phase and the subsequent decision-making one is minimized, within the given constraints on memory and processing load at the unit level. The methodology supports multiple sources of data as well as multiple CNNs executing on the same IoT system, allowing the design of CNN-based applications demanding autonomy, low decision latency, and high Quality-of-Service.
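
To make the kind of formulation described above concrete, here is a minimal, self-contained sketch (not the paper's actual method): hypothetical CNN layer blocks, each with a memory footprint, compute cost, and output size, are assigned as contiguous ranges to a chain of IoT units, and the split minimizing end-to-end latency under per-unit memory budgets is found by brute force. All numbers and the cost model are illustrative assumptions.

    # Illustrative sketch (not the paper's formulation): place consecutive CNN
    # layer blocks onto a chain of IoT units so that total latency (compute plus
    # inter-unit transfer) is minimized under per-unit memory budgets.
    from itertools import combinations

    blocks = [  # hypothetical per-block (memory_kb, compute_ms, output_kb)
        (120, 4.0, 64), (200, 6.5, 32), (150, 5.0, 16), (80, 3.0, 8),
    ]
    units = [  # hypothetical per-unit (memory_budget_kb, time_scale, link_ms_per_kb)
        (300, 1.0, 0.05), (400, 0.8, 0.05), (512, 0.6, 0.0),
    ]

    def latency(split_points):
        """Latency of assigning contiguous block ranges to the units, in order."""
        bounds = [0, *split_points, len(blocks)]
        total = 0.0
        for u, (budget, time_scale, link_ms_per_kb) in enumerate(units):
            part = blocks[bounds[u]:bounds[u + 1]]
            if sum(mem for mem, _, _ in part) > budget:          # memory constraint
                return float("inf")                              # infeasible placement
            total += sum(ms for _, ms, _ in part) * time_scale   # compute time
            if part and bounds[u + 1] < len(blocks):             # ship activations onward
                total += part[-1][2] * link_ms_per_kb
        return total

    best = min(combinations(range(1, len(blocks)), len(units) - 1), key=latency)
    print("best split points:", best, "-> latency(ms):", round(latency(best), 2))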

paper research
Properties of Decentralized Consensus Technology -- Why Not Every Blockchain Is a True Blockchain

Research in the field of blockchain technology and applications is increasing at a fast pace. Although the Bitcoin whitepaper by Nakamoto is already ten years old, the field can still be seen as immature and at an early stage. Current research in this area lacks a commonly shared body of knowledge and consensus about the terms used to describe the technology and its properties. At the same time, this research is challenging fundamental aspects of the Bitcoin core concept. It has to be questioned whether all of these new approaches can still adequately be described as blockchain technology. We propose to use the term Decentralized Consensus Technology as a general category instead. Decentralized Consensus Technology consists of decentralized ledger and non-ledger technologies. Blockchain technology in turn is only one of multiple implementations of Decentralized Ledger Technology. Furthermore, we identify three main characteristics of Decentralized Consensus Technology: decentralization, trustlessness, and the ability to eventually reach consensus. Depending on the use case of the specific implementation, the following additional properties have to be considered: privacy, participation incentive, irreversibility and immutability, operation purpose, confirmation time, transaction costs, the ability to externalize transactions and computations, and scalability possibilities.

paper research
A Study of Network Congestion in Two Supercomputing High-Speed Interconnects

Network congestion in high-speed interconnects is a major source of application runtime performance variation. Recent years have witnessed a surge of interest from both academia and industry in the development of novel approaches for congestion control at the network level and in application placement, mapping, and scheduling at the system level. However, these studies are based on proxy applications and benchmarks that are not representative of the field-congestion characteristics of high-speed interconnects. To address this gap, we present (a) an end-to-end framework for monitoring and analysis to support long-term field-congestion characterization studies, and (b) an empirical study of network congestion in petascale systems across two different interconnect technologies: (i) Cray Gemini, which uses a 3-D torus topology, and (ii) Cray Aries, which uses the Dragonfly topology.
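
As a toy illustration of the kind of field-congestion characterization such a framework enables (not the paper's actual metrics or Cray counter names), the sketch below summarizes hypothetical per-link stall-ratio samples into the fraction of congested measurement intervals per link.

    # Hedged sketch of field-congestion characterization: given hypothetical
    # per-link stall-ratio samples (fraction of time a link could not forward
    # traffic), summarize how often each link exceeds a congestion threshold.
    # Counter names and the threshold are illustrative, not Gemini/Aries fields.
    from collections import defaultdict

    samples = [  # (link_id, stall_ratio) pairs as a monitoring pipeline might emit
        ("x0y0z0->x1y0z0", 0.02), ("x0y0z0->x1y0z0", 0.35), ("x0y0z0->x1y0z0", 0.05),
        ("x1y0z0->x1y1z0", 0.60), ("x1y0z0->x1y1z0", 0.55), ("x1y0z0->x1y1z0", 0.70),
    ]
    THRESHOLD = 0.30  # stall ratio above which an interval counts as "congested"

    per_link = defaultdict(list)
    for link, ratio in samples:
        per_link[link].append(ratio)

    for link, ratios in per_link.items():
        congested = sum(r > THRESHOLD for r in ratios) / len(ratios)
        median = sorted(ratios)[len(ratios) // 2]
        print(f"{link}: {100 * congested:.0f}% congested intervals, median stall {median:.2f}")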

paper research
Communication-Efficient Federated Deep Learning with Asynchronous Model Updates and Temporally Weighted Aggregation

Federated learning obtains a central model on the server by aggregating models trained locally on clients. As a result, federated learning does not require clients to upload their data to the server, thereby preserving the data privacy of the clients. One challenge in federated learning is to reduce client-server communication, since the end devices typically have very limited communication bandwidth. This paper presents an enhanced federated learning technique by proposing an asynchronous learning strategy on the clients and a temporally weighted aggregation of the local models on the server. In the asynchronous learning strategy, the different layers of the deep neural networks are categorized into shallow and deep layers, and the parameters of the deep layers are updated less frequently than those of the shallow layers. Furthermore, a temporally weighted aggregation strategy is introduced on the server to make use of the previously trained local models, thereby enhancing the accuracy and convergence of the central model. The proposed algorithm is empirically evaluated on two datasets with different deep neural networks. Our results demonstrate that the proposed asynchronous federated deep learning outperforms the baseline algorithm in terms of both communication cost and model accuracy.
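
A minimal sketch of the server-side temporally weighted aggregation idea, assuming weights that decay exponentially with the staleness of each client's latest update; the decay base and the exact weighting scheme are illustrative assumptions rather than the paper's formula.

    # Minimal sketch of temporally weighted aggregation on the server, assuming
    # weights decay exponentially with staleness (the round in which a client's
    # update was produced). The decay base is an illustrative choice, not
    # necessarily the weighting used in the paper.
    import numpy as np

    def temporally_weighted_aggregate(client_params, client_rounds, current_round, base=2.0):
        """Average client parameter vectors, down-weighting stale updates."""
        weights = np.array([base ** -(current_round - r) for r in client_rounds], dtype=float)
        weights /= weights.sum()
        stacked = np.stack(client_params)      # shape: (num_clients, num_params)
        return np.average(stacked, axis=0, weights=weights)

    # Three clients whose latest updates come from rounds 10, 9, and 6.
    params = [np.array([1.0, 1.0]), np.array([0.0, 2.0]), np.array([4.0, 4.0])]
    rounds = [10, 9, 6]
    print(temporally_weighted_aggregate(params, rounds, current_round=10))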

paper research
A Parallel Projection Technique for Metric-Constrained Optimization Problems

Many clustering applications in machine learning and data mining rely on solving metric-constrained optimization problems. These problems are characterized by $O(n^3)$ constraints that enforce triangle inequalities on distance variables associated with $n$ objects in a large dataset. Despite its usefulness, metric-constrained optimization is challenging in practice due to the cubic number of constraints and the high memory requirements of standard optimization software. Recent work has shown that iterative projection methods are able to solve metric-constrained optimization problems on a much larger scale than was previously possible, thanks to their comparatively low memory requirements. However, the major limitation of projection methods is their slow convergence rate. In this paper we present a parallel projection method for metric-constrained optimization which allows us to speed up the convergence rate in practice. The key to our approach is a new parallel execution schedule that allows us to perform projections at multiple metric constraints simultaneously without any conflicts or locking of variables. We illustrate the effectiveness of this execution schedule by implementing and testing a parallel projection method for solving the metric-constrained linear programming relaxation of correlation clustering. We show numerous experimental results on problems involving up to 2.9 trillion constraints.
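
For intuition, the sketch below shows the basic building block such projection methods iterate over: the Euclidean projection of a distance matrix onto a single triangle-inequality halfspace d_ij <= d_ik + d_kj. The paper's contribution, a conflict-free schedule that executes many of these projections in parallel, is not reproduced here.

    # Sketch of the basic building block behind projection methods for
    # metric-constrained problems: projecting a distance matrix onto a single
    # triangle-inequality halfspace D[i,j] <= D[i,k] + D[k,j].
    import numpy as np

    def project_triangle(D, i, j, k):
        """Project distances onto {D : D[i,j] - D[i,k] - D[k,j] <= 0}, in place."""
        violation = D[i, j] - D[i, k] - D[k, j]
        if violation > 0:
            step = violation / 3.0          # ||(1, -1, -1)||^2 = 3
            D[i, j] -= step
            D[i, k] += step
            D[k, j] += step
            # keep the matrix symmetric
            D[j, i], D[k, i], D[j, k] = D[i, j], D[i, k], D[k, j]
        return D

    D = np.array([[0.0, 5.0, 1.0],
                  [5.0, 0.0, 1.0],
                  [1.0, 1.0, 0.0]])
    print(project_triangle(D, 0, 1, 2))     # 5 > 1 + 1 is violated, so D[0,1] shrinks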

paper research
No Delay: Latency-Driven and Application Performance-Aware Cluster Scheduling

Given the network latency variability observed in data centers, application performance is also determined by placement within the data center. We present NoMora, a cluster scheduling architecture whose core is a latency-driven, application performance-aware cluster scheduling policy. The policy places the tasks of an application taking into account the expected performance based on the measured network latency between pairs of hosts in the data center. Furthermore, if a tenant's application experiences increased network latency, and thus lower application performance, the application may be migrated to a better placement. Preliminary results show that our policy improves the overall average application performance by up to 13.4%, and by up to 42% if preemption is enabled, and improves task placement latency by a factor of 1.79x and the median algorithm runtime by 1.16x compared to a random policy on the Google cluster workload. This demonstrates that application performance can be improved by exploiting the relationship between network latency and application performance, and the current network conditions in a data center, while preserving the demands of low-latency cluster scheduling.
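
A hedged sketch of the core placement idea: given measured host-pair latencies and a latency-to-performance curve, place a new task on the free host with the best predicted application performance. The curve, host names, and slot counts below are hypothetical placeholders, not NoMora's actual model or implementation.

    # Hedged sketch of latency-driven, performance-aware placement: choose a host
    # for a new task by maximizing expected performance predicted from measured
    # host-pair latencies. All data and the performance curve are placeholders.

    measured_rtt_us = {            # measured network latency between host pairs
        ("h1", "h2"): 18.0, ("h1", "h3"): 55.0, ("h2", "h3"): 40.0,
    }
    free_slots = {"h2": 3, "h3": 1}

    def expected_performance(rtt_us):
        """Toy curve: normalized throughput degrades as RTT grows."""
        return 1.0 / (1.0 + rtt_us / 50.0)

    def place(task_peer_host):
        """Pick the free host with the best predicted performance for this task."""
        candidates = [h for h, slots in free_slots.items() if slots > 0]
        def score(h):
            rtt = measured_rtt_us.get((task_peer_host, h)) or measured_rtt_us.get((h, task_peer_host))
            return expected_performance(rtt)
        return max(candidates, key=score)

    print(place("h1"))   # -> "h2", the lower-latency placement relative to h1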

paper research
Feasibility and Performance of a Private LLM Server for SMBs: A Benchmark Analysis of Qwen3-30B on Consumer Hardware

The proliferation of Large Language Models (LLMs) has been accompanied by a reliance on cloud-based, proprietary systems, raising significant concerns regarding data privacy, operational sovereignty, and escalating costs. This paper investigates the feasibility of deploying a high-performance, private LLM inference server at a cost accessible to Small and Medium Businesses (SMBs). We present a comprehensive benchmarking analysis of a locally hosted, quantized 30-billion-parameter Mixture-of-Experts (MoE) model based on Qwen3, running on a consumer-grade server equipped with a next-generation NVIDIA GPU. Unlike cloud-based offerings, which are expensive and complex to integrate, our approach provides an affordable and private solution for SMBs. We evaluate two dimensions: the model's intrinsic capabilities and the server's performance under load. Model performance is benchmarked against academic and industry standards to quantify reasoning and knowledge relative to cloud services. Concurrently, we measure server efficiency through latency, tokens per second, and time to first token, analyzing scalability under an increasing number of concurrent users. Our findings demonstrate that a carefully configured on-premises setup with emerging consumer hardware and a quantized open-source model can achieve performance comparable to cloud-based services, offering SMBs a viable pathway to deploy powerful LLMs without prohibitive costs or privacy compromises.
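
The sketch below illustrates how the reported server-side metrics (time to first token and streaming token rate) could be measured against a locally hosted, OpenAI-compatible streaming endpoint; the URL, model identifier, and prompt are placeholders, and concurrency ramp-up and error handling are omitted.

    # Sketch of measuring TTFT and streaming rate against a hypothetical local,
    # OpenAI-compatible chat-completions endpoint. Not tied to any specific server.
    import json, time, requests

    URL = "http://localhost:8000/v1/chat/completions"   # placeholder endpoint
    payload = {
        "model": "qwen3-30b",                            # placeholder model id
        "messages": [{"role": "user", "content": "Explain KV caching in one paragraph."}],
        "stream": True,
        "max_tokens": 256,
    }

    start = time.perf_counter()
    first_token_at = None
    chunks = 0
    with requests.post(URL, json=payload, stream=True, timeout=120) as resp:
        for line in resp.iter_lines():
            # Server-sent events: each data line carries one streamed chunk.
            if not line or not line.startswith(b"data: ") or line == b"data: [DONE]":
                continue
            delta = json.loads(line[len(b"data: "):])["choices"][0].get("delta", {})
            if delta.get("content"):
                chunks += 1
                if first_token_at is None:
                    first_token_at = time.perf_counter()

    elapsed = time.perf_counter() - start
    ttft = first_token_at - start
    print(f"TTFT: {ttft:.2f}s, ~{chunks / max(elapsed - ttft, 1e-9):.1f} chunks/s after the first token")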

paper research
Let's Share: A Game-Theoretic Approach to Resource Allocation in Mobile Edge Clouds

Mobile edge computing seeks to provide resources to different delay-sensitive applications. This is a challenging problem, as an edge cloud service provider may not have sufficient resources to satisfy all resource requests. Furthermore, allocating the available resources optimally to different applications is also challenging. Resource sharing among different edge cloud service providers can address this limitation, as certain service providers may have resources available that can be "rented" by other service providers. However, edge cloud service providers can have different objectives or utilities. Therefore, there is a need for an efficient and effective mechanism to share resources among service providers, while considering the different objectives of the various providers. We model resource sharing as a multi-objective optimization problem and present a solution framework based on Cooperative Game Theory (CGT). We consider the strategy where each service provider allocates resources to its native applications first and shares the remaining resources with applications from other service providers. We prove that for a monotonic, non-decreasing utility function, the game is canonical and convex. Hence, the core is not empty and the grand coalition is stable. We propose two algorithms, Game-theoretic Pareto Optimal Allocation (GPOA) and Polyandrous-Polygamous Matching based Pareto Optimal Allocation (PPMPOA), that provide allocations from the core. Hence the obtained allocations are Pareto optimal and the grand coalition of all the service providers is stable. Experimental results confirm that our proposed resource sharing framework improves the utilities of edge cloud service providers and application request satisfaction.
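
To illustrate just the allocation order described above (native applications first, leftover capacity shared), here is a toy sketch; it does not implement GPOA or PPMPOA or their game-theoretic guarantees, and the providers, capacities, and demands are made up.

    # Toy sketch of the sharing strategy: serve native requests first, then offer
    # leftover capacity to other providers' unmet demand. Illustrative numbers only.

    providers = {"A": 10, "B": 4}                 # provider -> available resource units
    requests_by_provider = {                      # provider -> [(app, demand), ...]
        "A": [("a1", 6), ("a2", 2)],
        "B": [("b1", 5), ("b2", 3)],
    }

    allocation = {}
    # Pass 1: each provider serves its native applications first.
    for p, capacity in providers.items():
        for app, demand in requests_by_provider[p]:
            granted = min(demand, capacity)
            allocation[app] = (p, granted)
            capacity -= granted
        providers[p] = capacity                   # remember leftover capacity

    # Pass 2: unmet demand is offered the remaining capacity of other providers.
    for p, reqs in requests_by_provider.items():
        for app, demand in reqs:
            unmet = demand - allocation[app][1]
            for other, spare in providers.items():
                if unmet <= 0 or other == p or spare <= 0:
                    continue
                granted = min(unmet, spare)
                providers[other] -= granted
                unmet -= granted
                allocation[f"{app}@{other}"] = (other, granted)

    print(allocation)   # b1's and b2's unmet demand spill over to provider A's spare units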

paper research
The DUNE Framework: Core Principles and Recent Advances

This paper presents the basic concepts and the module structure of the Distributed and Unified Numerics Environment (DUNE) and reflects on recent developments and general changes that have happened since the release of the first Dune version in 2007 and the main papers describing that state [1, 2]. This discussion is accompanied by a description of various advanced features, such as coupling of domains and cut cells, grid modifications such as adaptation and moving domains, high-order discretizations and node-level performance, non-smooth multigrid methods, and multiscale methods. A brief discussion of current and future development directions of the framework concludes the paper.

paper research
AI-Driven Cloud Resource Optimization for Multi-Cluster Environments

Modern cloud-native systems increasingly rely on multi-cluster deployments to support scalability, resilience, and geographic distribution. However, existing resource management approaches remain largely reactive and cluster-centric, limiting their ability to optimize system-wide behavior under dynamic workloads. These limitations result in inefficient resource utilization, delayed adaptation, and increased operational overhead across distributed environments. This paper presents an AI-driven framework for adaptive resource optimization in multi-cluster cloud systems. The proposed approach integrates predictive learning, policy-aware decision-making, and continuous feedback to enable proactive and coordinated resource management across clusters. By analyzing cross-cluster telemetry and historical execution patterns, the framework dynamically adjusts resource allocation to balance performance, cost, and reliability objectives. A prototype implementation demonstrates improved resource efficiency, faster stabilization during workload fluctuations, and reduced performance variability compared to conventional reactive approaches. The results highlight the effectiveness of intelligent, self-adaptive infrastructure management as a key enabler for scalable and resilient cloud platforms.

paper research
PackKV: Reducing KV Cache Memory Footprint through LLM-Aware Lossy Compression

Transformer-based large language models (LLMs) have demonstrated remarkable potential across a wide range of practical applications. However, long-context inference remains a significant challenge due to the substantial memory requirements of the key-value (KV) cache, which can scale to several gigabytes as sequence length and batch size increase. In this paper, we present PackKV, a generic and efficient KV cache management framework optimized for long-context generation. PackKV introduces novel lossy compression techniques specifically tailored to the characteristics of KV cache data, featuring a careful co-design of compression algorithms and system architecture. Our approach is compatible with the dynamically growing nature of the KV cache while preserving high computational efficiency. Experimental results show that, under the same minimal accuracy drop as state-of-the-art quantization methods, PackKV achieves, on average, a 153.2% higher memory reduction rate for the K cache and 179.6% for the V cache. Furthermore, PackKV delivers extremely high execution throughput, effectively eliminating decompression overhead and accelerating the matrix-vector multiplication operation. Specifically, PackKV achieves an average throughput improvement of 75.7% for K and 171.7% for V across A100 and RTX Pro 6000 GPUs, compared to cuBLAS matrix-vector multiplication kernels, while demanding less GPU memory bandwidth. Code is available at https://github.com/BoJiang03/PackKV
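
As a stand-in illustration of the memory/accuracy trade-off being optimized (not PackKV's actual compression algorithms, which the abstract does not detail), the sketch below applies simple per-channel 8-bit quantization to a synthetic K-cache tensor and reports the footprint reduction and relative error.

    # Illustrative stand-in for lossy KV-cache compression: per-channel 8-bit
    # quantization of a synthetic K (or V) tensor with error measurement. This is
    # only the kind of trade-off being measured, not PackKV's method.
    import numpy as np

    def compress(kv, bits=8):
        """Quantize each channel of a (tokens, channels) tensor to `bits` bits."""
        qmax = 2 ** (bits - 1) - 1
        scale = np.abs(kv).max(axis=0, keepdims=True) / qmax + 1e-12
        q = np.clip(np.round(kv / scale), -qmax - 1, qmax).astype(np.int8)
        return q, scale

    def decompress(q, scale):
        return q.astype(np.float32) * scale

    kv = np.random.randn(4096, 128).astype(np.float32)     # one head's K cache
    q, scale = compress(kv)
    rel_err = np.linalg.norm(decompress(q, scale) - kv) / np.linalg.norm(kv)
    print(f"memory: {kv.nbytes} -> {q.nbytes + scale.nbytes} bytes, relative error {rel_err:.4f}")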

paper research

Placement Semantics for Distributed Deep Learning: A Systematic Framework for Analyzing Parallelism Strategies

Training large language models requires distributing computation across many accelerators, yet practitioners select parallelism strategies (data, tensor, pipeline, ZeRO) through trial and error because no unified systematic framework predicts their behavior. We introduce placement semantics: each strategy is specified by how it places four training states (parameters, optimizer, gradients, activations) across devices using five modes (replicated, sharded, sharded-with-gather, materialized, offloaded). From placement alone, without implementation details, we derive memory consumption and communication volume. Our predictions match published results exactly: ZeRO-3 uses 8x less memory than data parallelism at 1.5x communication cost, as reported in the original paper. We prove two conditions (gradient integrity, state consistency) are necessary and sufficient for distributed training to match single-device results, and provide composition rules for combining strategies safely. The framework unifies ZeRO Stages 1-3, Fully Sharded Data Parallel (FSDP), tensor parallelism, and pipeline parallelism as instances with different placement choices.
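
A small sketch of the "memory from placement alone" derivation, under the common mixed-precision-plus-Adam convention of 2-byte parameters and gradients and 12 bytes of fp32 optimizer state per parameter (activations omitted); the byte counts and example sizes are assumptions for illustration, not values taken from the paper.

    # Derive per-device memory from placement choices alone. "replicated" keeps a
    # full copy on every device; "sharded" splits the state across N devices.
    # Byte counts follow a common mixed-precision Adam convention (assumption).
    BYTES = {"params": 2, "grads": 2, "optimizer": 12}   # per parameter

    def memory_per_device(placement, num_params, num_devices):
        total = 0.0
        for state, mode in placement.items():
            full = BYTES[state] * num_params
            total += full if mode == "replicated" else full / num_devices
        return total

    data_parallel = {"params": "replicated", "grads": "replicated", "optimizer": "replicated"}
    zero3 = {"params": "sharded", "grads": "sharded", "optimizer": "sharded"}

    N, P = 8, 7_000_000_000          # 8 devices, 7B parameters (illustrative)
    dp = memory_per_device(data_parallel, P, N)
    z3 = memory_per_device(zero3, P, N)
    print(f"data parallel: {dp / 2**30:.1f} GiB/device, ZeRO-3: {z3 / 2**30:.1f} GiB/device "
          f"({dp / z3:.0f}x reduction)")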

paper research
RelayGR: Scaling Long-Sequence Generative Recommendation via Cross-Stage Relay-Race Inference

Real-time recommender systems execute multi-stage cascades (retrieval, pre-processing, fine-grained ranking) under strict tail-latency SLOs, leaving only tens of milliseconds for ranking. Generative recommendation (GR) models can improve quality by consuming long user-behavior sequences, but in production their online sequence length is tightly capped by the ranking-stage P99 budget. We observe that the majority of GR tokens encode user behaviors that are independent of the item candidates, suggesting an opportunity to pre-infer a user-behavior prefix once and reuse it during ranking rather than recomputing it on the critical path. Realizing this idea at industrial scale is non-trivial: the prefix cache must survive across multiple pipeline stages before the final ranking instance is determined, the user population implies cache footprints far beyond a single device, and indiscriminate pre-inference would overload shared resources under high QPS. We present RelayGR, a production system that enables in-HBM relay-race inference for GR. RelayGR selectively pre-infers long-term user prefixes, keeps their KV caches resident in HBM over the request lifecycle, and ensures the subsequent ranking can consume them without remote fetches. RelayGR combines three techniques: 1) a sequence-aware trigger that admits only at-risk requests under a bounded cache footprint and pre-inference load, 2) an affinity-aware router that co-locates cache production and consumption by routing both the auxiliary pre-infer signal and the ranking request to the same instance, and 3) a memory-aware expander that uses server-local DRAM to capture short-term cross-request reuse while avoiding redundant reloads. We implement RelayGR on Huawei Ascend NPUs and evaluate it with real queries. Under a fixed P99 SLO, RelayGR supports up to 1.5x longer sequences and improves SLO-compliant throughput by up to 3.6x.
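
A hedged sketch of the trigger-and-routing idea: admit only long ("at-risk") sequences for prefix pre-inference while a per-instance cache budget allows, and hash users to instances so the pre-inference and the later ranking request land on the same instance. Thresholds, budgets, and the hash router are illustrative placeholders, not RelayGR's production logic.

    # Toy sketch of a sequence-aware trigger plus affinity-aware routing.
    # All thresholds and the hash-based router are illustrative assumptions.
    import hashlib

    NUM_INSTANCES = 4
    CACHE_BUDGET_PER_INSTANCE = 100     # max resident prefix caches (illustrative)
    cache_usage = [0] * NUM_INSTANCES

    def route(user_id):
        """Affinity routing: the same user always maps to the same instance."""
        return int(hashlib.md5(user_id.encode()).hexdigest(), 16) % NUM_INSTANCES

    def should_preinfer(seq_len, instance, at_risk_len=2000):
        """Admit only long ('at risk') sequences while the cache budget allows."""
        return seq_len >= at_risk_len and cache_usage[instance] < CACHE_BUDGET_PER_INSTANCE

    for user, seq_len in [("u1", 3500), ("u2", 800), ("u3", 2600)]:
        inst = route(user)
        if should_preinfer(seq_len, inst):
            cache_usage[inst] += 1
            print(f"{user}: pre-infer prefix on instance {inst}; ranking request will follow there")
        else:
            print(f"{user}: skip pre-inference, compute the full sequence at ranking time")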

paper research

< Category Statistics (Total: 565) >

Computer Science (513) Machine Learning (117) Artificial Intelligence (89) Computer Vision (71) Computation and Language (NLP) (62) Electrical Engineering and Systems Science (36) Cryptography and Security (24) Robotics (22) Systems and Control (22) Software Engineering (20) Mathematics (18) Statistics (17) Economics (16) Information Retrieval (15) Human-Computer Interaction (14) Distributed, Parallel, and Cluster Computing (13) Neural and Evolutionary Computing (13) Computer Science and Game Theory (11) Econometrics (11) Image and Video Processing (10) Physics (10) Sound (10) Multiagent Systems (9) Optimization and Control (8) Computational Geometry (7) Databases (7) Graphics (6) Networking and Internet Architecture (6) Quantitative Biology (6) Quantum Physics (5) Theoretical Economics (5) Computational Complexity (4) Computational Engineering, Finance, and Science (4) Computers and Society (4) Emerging Technologies (4) Information Theory (4) Methodology (4) Multimedia (4) Programming Languages (4) Quantitative Finance (4) Signal Processing (4) Audio and Speech Processing (3) Data Structures and Algorithms (3) Hardware Architecture (3) History and Philosophy of Physics (3) Logic in Computer Science (3) Neurons and Cognition (3) Social and Information Networks (3) Statistics Theory (3) Computation (2) Condensed Matter (2) Dynamical Systems (2) Formal Languages and Automata Theory (2) General Finance (2) Operating Systems (2) Optics (2) Quantitative Methods (2) Applications (1) Astrophysics (1) Combinatorics (1) Computational Physics (1) Digital Libraries (1) Disordered Systems and Neural Networks (1) General Economics (1) Genomics (1) Geophysics (1) Instrumentation and Methods for Astrophysics (1) Logic (1) Mathematical Finance (1) Mathematical Software (1) Medical Physics (1) Mesoscale and Nanoscale Physics (1) Metric Geometry (1) Other Statistics (1) Performance (1) Physics and Society (1) Plasma Physics (1) Probability (1) Trading and Market Microstructure (1)
