Joint Network-and-Server Congestion in Multi-Source Traffic Allocation: A Convex Formulation and Price-Based Decentralization


This paper studies an important rate allocation problem that arises in many networked and distributed systems: steady-state traffic rate allocation from multiple sources to multiple service nodes when both (i) the access-path delay on each source-node route is rate-dependent (capacity-constrained) and convex, and (ii) each service node (also capacity-constrained) experiences a load-dependent queueing delay driven by aggregate load from all sources. We show that the resulting flow-weighted end-to-end delay minimization is a convex program, yielding a global system-optimal solution characterized by KKT conditions that equalize total marginal costs (a path marginal access term plus a node congestion price) across all utilized routes. This condition admits a Wardrop-type interpretation: for each source, all utilized options equalize total marginal cost, while any option with strictly larger total marginal cost receives no flow. Building on this structure, we develop a lightweight distributed pricing-based algorithm in which each service node locally computes and broadcasts a scalar congestion price from its observed aggregate load, while each source updates its traffic split by solving a small separable convex allocation problem under the advertised prices. Numerical illustrations demonstrate convergence of the distributed iteration to the centralized optimum and highlight the trade-offs induced by jointly modeling access and service congestion.
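The distributed pricing iteration described above can be sketched numerically. The following is a minimal illustration, not the paper's exact algorithm: it assumes M/M/1 delay forms, a damped best-response update at each source (the paper's precise update and step-size rules are not given here), and purely illustrative rates and capacities. Each server broadcasts its marginal congestion price, and each source re-splits its traffic by bisection on the common marginal cost.

```python
import numpy as np

# Hypothetical instance: 2 sources, 2 servers (all rates/capacities illustrative).
lam = np.array([1.0, 1.5])            # source rates lambda_i
mu_path = np.array([[4.0, 3.0],       # access capacities mu_ij
                    [3.0, 4.0]])
mu_srv = np.array([3.0, 3.0])         # server capacities mu_j

def source_split(lam_i, mu_i, p):
    """Solve min_x sum_j [x_j/(mu_ij - x_j) + p_j x_j]
    s.t. x >= 0, sum_j x_j = lam_i   (M/M/1 access delay).
    Marginal cost of option j at x_j is mu_ij/(mu_ij - x_j)^2 + p_j, so
    x_j(nu) = mu_ij - sqrt(mu_ij/(nu - p_j)) whenever nu > p_j + 1/mu_ij;
    bisect on the common marginal nu until the split sums to lam_i."""
    def x_of(nu):
        x = mu_i - np.sqrt(mu_i / np.maximum(nu - p, 1e-15))
        return np.clip(x, 0.0, None)
    lo = (p + 1.0 / mu_i).min()           # below this, all options get zero
    hi = (p + 1.0 / mu_i).max() + 1.0
    while x_of(hi).sum() < lam_i:         # grow until demand is covered
        hi *= 2.0
    for _ in range(200):                  # bisection on nu
        mid = 0.5 * (lo + hi)
        if x_of(mid).sum() < lam_i:
            lo = mid
        else:
            hi = mid
    return x_of(hi)

# Damped fixed-point iteration: servers broadcast prices, sources re-split.
x = lam[:, None] * np.ones(2) / 2.0       # start from an even split
alpha = 0.3                               # damping step (assumed, not from the paper)
for _ in range(500):
    L = x.sum(axis=0)                                  # aggregate server loads
    p = 1.0/(mu_srv - L) + L/(mu_srv - L)**2           # price: D_j + L_j D_j'
    best = np.vstack([source_split(lam[i], mu_path[i], p) for i in range(2)])
    x = (1 - alpha) * x + alpha * best

print(np.round(x, 3))
print(np.round(x.sum(axis=0), 3))
```

The fixed point equalizes each source's total marginal cost across utilized routes, the Wardrop-type condition stated in the abstract; damping is one simple way to stabilize the price/split feedback loop.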


💡 Research Summary

The paper tackles a fundamental traffic allocation problem that appears in many modern distributed systems: how to split fixed-rate traffic from multiple sources among several service nodes while jointly accounting for (i) rate-dependent access-path delays and (ii) load-dependent server queueing delays. The authors first formalize the system model. Each source $i \in \mathcal{I}$ generates a constant rate $\lambda_i$ that can be arbitrarily split across servers $j \in \mathcal{J}$ as $\lambda_{ij} \ge 0$. Flow-conservation constraints $\sum_j \lambda_{ij} = \lambda_i$ and capacity limits $\lambda_{ij} < \mu_{ij}$ and $\Lambda_j = \sum_i \lambda_{ij} < \mu_j$ are imposed.
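The feasible set defined by these constraints is easy to check programmatically. Below is a minimal sketch with illustrative (hypothetical) rates and capacities; `feasible` verifies flow conservation, nonnegativity, and the strict path and server capacity limits.

```python
import numpy as np

lam = np.array([1.0, 1.5])                      # source rates lambda_i (illustrative)
mu_path = np.array([[4.0, 3.0], [3.0, 4.0]])    # access capacities mu_ij
mu_srv  = np.array([3.0, 3.0])                  # server capacities mu_j

def feasible(x, tol=1e-9):
    """Check flow conservation and the strict capacity constraints:
    sum_j x_ij = lam_i,  x_ij >= 0,  x_ij < mu_ij,  sum_i x_ij < mu_j."""
    return (np.allclose(x.sum(axis=1), lam, atol=tol)
            and (x >= -tol).all()
            and (x < mu_path).all()
            and (x.sum(axis=0) < mu_srv).all())

x = np.array([[0.5, 0.5], [0.7, 0.8]])
print(feasible(x))   # this split satisfies all constraints
```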

Access-path delays $D_{ij}(\lambda_{ij})$ and server delays $D_j(\Lambda_j)$ are assumed to be continuous, twice differentiable, strictly increasing, convex, and to blow up as the respective capacities are approached (e.g., the M/M/1 forms $1/(\mu_{ij}-\lambda_{ij})$ and $1/(\mu_j-\Lambda_j)$). The end-to-end delay for traffic on route $(i,j)$ is $eD_{ij} = D_{ij}(\lambda_{ij}) + D_j(\Lambda_j)$. The objective is to minimize the flow-weighted sum of these delays:

$$\min_{\{\lambda_{ij}\}} \; \sum_{i\in\mathcal{I}} \sum_{j\in\mathcal{J}} \lambda_{ij}\,\bigl[D_{ij}(\lambda_{ij}) + D_j(\Lambda_j)\bigr].$$
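Under the M/M/1 forms above, the flow-weighted objective and the route marginal costs (the quantities equalized at the KKT optimum) simplify nicely: $D(x) + x\,D'(x) = \mu/(\mu-x)^2$ for $D(x)=1/(\mu-x)$. A minimal sketch with illustrative capacities:

```python
import numpy as np

mu_path = np.array([[4.0, 3.0], [3.0, 4.0]])   # access capacities mu_ij (illustrative)
mu_srv  = np.array([3.0, 3.0])                 # server capacities mu_j

def objective(x):
    """Flow-weighted end-to-end delay sum_ij x_ij [D_ij(x_ij) + D_j(L_j)].
    Note sum_ij x_ij D_j(L_j) = sum_j L_j D_j(L_j) with L_j = sum_i x_ij."""
    L = x.sum(axis=0)
    return (x / (mu_path - x)).sum() + (L / (mu_srv - L)).sum()

def route_marginals(x):
    """d/dx_ij of the objective: [D_ij + x_ij D_ij'] + [D_j + L_j D_j'],
    which for M/M/1 delays reduces to mu/(mu - load)^2 for each term."""
    L = x.sum(axis=0)
    access = mu_path / (mu_path - x)**2        # path marginal access term
    node   = mu_srv / (mu_srv - L)**2          # node congestion price
    return access + node

x = np.array([[0.5, 0.5], [0.7, 0.8]])
print(objective(x))
print(route_marginals(x))
```

At the system optimum, for each source the entries of `route_marginals` on utilized routes are equal, and any route with a strictly larger marginal carries no flow.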

