Combining Declarative and Linear Programming for Application Management in the Cloud-Edge Continuum
This work investigates the data-aware multi-service application placement problem in Cloud-Edge settings. We previously introduced EdgeWise, a hybrid approach that combines declarative programming with Mixed-Integer Linear Programming (MILP) to determine optimal placements that minimise operational costs and unnecessary data transfers. The declarative stage pre-processes infrastructure constraints to improve the efficiency of the MILP solver, achieving optimal placements in terms of operational costs with significantly reduced execution times. In this extended version, we improve the declarative stage with continuous reasoning, presenting EdgeWiseCR, which enables the system to reuse existing placements and reduce unnecessary recomputation and service migrations. In addition, we conduct an expanded experimental evaluation considering multiple applications, diverse network topologies, and large-scale infrastructures with dynamic failures. The results show that EdgeWiseCR achieves up to 65% faster execution compared to EdgeWise, while preserving placement stability under dynamic conditions.
💡 Research Summary
This paper addresses the data‑aware multi‑service application placement problem in Cloud‑Edge environments by extending the authors’ previous hybrid framework, EdgeWise, into a new system called EdgeWiseCR. EdgeWise combined a declarative pre‑processing stage, implemented in Prolog, with a Mixed‑Integer Linear Programming (MILP) optimizer. The declarative stage filtered out infeasible node‑component mappings based on hardware, software, security, and IoT constraints, thereby reducing the size of the MILP problem and achieving optimal placements with significantly lower execution times compared to a pure MILP approach.
EdgeWiseCR introduces a “continuous reasoning” mechanism that reuses existing placements across successive optimization cycles. Instead of recomputing the entire placement whenever the infrastructure or workload changes, the system records the previous mapping as a set of Prolog facts. When a change is detected (e.g., node failure, link latency increase, workload spike), only the affected components and their directly connected data flows are reconsidered; the rest of the placement remains fixed. This selective recomputation dramatically cuts unnecessary migrations and shortens the overall solving time.
The paper first presents a realistic use‑case, the “SpeakToMe” application, which implements a text‑to‑speech pipeline composed of six functions and five services. Detailed metadata for each component (software stack, architecture, hardware demand, monthly request volume, processing duration) and for each data flow (size, rate, security requirements, latency bound) are modeled as Prolog facts. The infrastructure is similarly described by facts about nodes (software capabilities, architecture, hardware capacity, security capabilities, attached IoT devices) and links (latency, bandwidth). A cost model is defined: for services, cost aggregates software licensing and hardware usage; for functions, cost includes computational expense proportional to request volume and duration, plus request‑processing fees. Unit costs are derived from public cloud pricing (AWS EC2, Lambda).
The declarative model encodes all constraints as logical rules, allowing the system to answer feasibility queries and to compute the cost of any candidate placement. Continuous reasoning adds rules that preserve previously selected mappings unless a constraint violation forces a change. The filtered candidate set is then fed to an MILP formulation where binary variables indicate whether a component is placed on a particular node. The objective minimizes total provisioning cost while penalizing the number of migrations relative to the prior placement. Constraints enforce architecture compatibility, hardware capacity, security policies, latency/bandwidth limits, and the immutability of components fixed by continuous reasoning.
The experimental evaluation covers three representative applications (speech synthesis, video streaming, smart factory) across three network topologies (cloud‑centric, edge‑centric, hybrid) and scales the infrastructure from 256 up to 2,048 nodes. Dynamic scenarios simulate node crashes, link degradations, and workload fluctuations. Results show that EdgeWiseCR achieves 58%–65% faster execution than the original EdgeWise, while incurring at most a 33% increase in total cost. More importantly, the number of service migrations and the cumulative service downtime drop by over 40%, demonstrating superior placement stability under dynamic conditions. The continuous reasoning stage alone accounts for the majority of the speed‑up, as it reduces the MILP variable count by roughly 70%.
The authors acknowledge limitations: maintaining previous placement state introduces memory overhead, and in environments with extremely rapid workload changes the benefit of selective recomputation diminishes. The cost model currently relies on static unit prices and does not capture energy consumption or spot‑market price dynamics. Future work is proposed in three directions: (1) enriching the cost model with energy‑aware and time‑varying pricing, (2) integrating reinforcement‑learning predictors to anticipate workload shifts and pre‑emptively adjust placements, and (3) extending the framework to multi‑cloud/multi‑edge federations with policy‑based negotiation mechanisms.
In summary, EdgeWiseCR demonstrates that a tightly coupled declarative‑preprocessing and MILP pipeline, augmented with continuous reasoning, can deliver near‑optimal, cost‑effective, and stable application placements in large‑scale, dynamic Cloud‑Edge continua, bridging the gap between fast heuristics and exact optimization.