SHARP-QoS: Sparsely-gated Hierarchical Adaptive Routing for joint Prediction of QoS

Notice: This research summary and analysis were automatically generated using AI technology. For absolute accuracy, please refer to the [Original Paper Viewer] below or the Original ArXiv Source.

Dependable service-oriented computing relies on multiple Quality of Service (QoS) parameters that are essential to assess service optimality. However, real-world QoS data are extremely sparse, noisy, and shaped by hierarchical dependencies arising from QoS interactions as well as geographical and network-level factors, making accurate QoS prediction challenging. Existing methods often predict each QoS parameter separately, requiring multiple similar models, which increases computational cost and leads to poor generalization. Although recent joint QoS prediction studies have explored shared architectures, they suffer from negative transfer due to loss scaling caused by inconsistent numerical ranges across QoS parameters, and they further struggle with inadequate representation learning, resulting in degraded accuracy. This paper presents a unified strategy for joint QoS prediction, called SHARP-QoS, that addresses these issues with three components. First, we introduce a dual mechanism to extract hierarchical features from both QoS and contextual structures via hyperbolic convolution formulated in the Poincaré ball. Second, we propose an adaptive feature-sharing mechanism that allows feature exchange across informative QoS and contextual signals; a gated feature fusion module supports dynamic feature selection among structural and shared representations. Third, we design an EMA-based loss-balancing strategy that enables stable joint optimization, thereby mitigating negative transfer. Evaluations on three datasets with two, three, and four QoS parameters demonstrate that SHARP-QoS outperforms both single- and multi-task baselines. Extensive studies show that our model effectively addresses major challenges, including sparsity, robustness to outliers, and cold-start, while maintaining moderate computational overhead, underscoring its capability for reliable joint QoS prediction.


💡 Research Summary

The paper introduces SHARP‑QoS, a unified framework for jointly predicting multiple Quality‑of‑Service (QoS) metrics in service‑oriented computing. Real‑world QoS data are typically sparse, noisy, and exhibit hierarchical dependencies arising from interactions among QoS attributes, geographic regions, and network‑level factors. Existing approaches either train a separate model per QoS metric—incurring high computational cost and poor generalization—or adopt multi‑task learning (MTL) but suffer from negative transfer due to disparate value ranges and insufficient representation of hierarchical structures.

SHARP‑QoS tackles these challenges with three core components. First, a Hierarchical Feature Extraction Block (HFEB) employs hyperbolic graph convolutional networks (HyGCN) on the Poincaré ball to capture latent hierarchies in both QoS‑specific interaction graphs and contextual graphs (region and autonomous system). By using Möbius addition, matrix‑vector multiplication, and wrapped activations, the model preserves the exponential expansion of hierarchical data with low distortion, outperforming Euclidean GCNs.
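The Poincaré-ball operations named above (Möbius addition, Möbius matrix–vector multiplication, and the exponential/logarithmic maps that wrap activations) follow standard closed forms. The sketch below is a minimal NumPy illustration of those standard formulas, not the paper's implementation; the curvature parameter `c` is assumed positive, giving a ball of curvature `-c`.

```python
import numpy as np

def mobius_add(x, y, c=1.0):
    """Möbius addition x ⊕_c y on the Poincaré ball of curvature -c."""
    xy, x2, y2 = np.dot(x, y), np.dot(x, x), np.dot(y, y)
    num = (1 + 2 * c * xy + c * y2) * x + (1 - c * x2) * y
    den = 1 + 2 * c * xy + c ** 2 * x2 * y2
    return num / den

def exp0(v, c=1.0):
    """Exponential map at the origin: tangent vector -> point in the ball."""
    n = np.linalg.norm(v)
    if n < 1e-12:
        return v
    return np.tanh(np.sqrt(c) * n) * v / (np.sqrt(c) * n)

def log0(x, c=1.0):
    """Logarithmic map at the origin: ball point -> tangent vector."""
    n = np.linalg.norm(x)
    if n < 1e-12:
        return x
    return np.arctanh(np.sqrt(c) * n) * x / (np.sqrt(c) * n)

def mobius_matvec(M, x, c=1.0):
    """Möbius matrix-vector product: log to tangent space, multiply, exp back.

    A hyperbolic GCN layer applies this per node, then a wrapped activation
    exp0(sigma(log0(.))) so every intermediate stays inside the ball.
    """
    return exp0(M @ log0(x, c), c)
```

Because aggregation and activation are routed through `log0`/`exp0`, all representations remain inside the unit ball, which is what lets the model embed tree-like QoS and context hierarchies with low distortion.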

Second, a Feature Sharing and Fusion Block (FSFB) introduces a dual feature‑exchange mechanism inspired by Sub‑Network Routing (SNR). Separate sub‑networks process QoS‑specific and context‑specific features; cross‑routing matrices allow information flow across different QoS tasks, while a gated fusion module dynamically selects the most relevant combination of structural, shared, and task‑specific representations. This design enables effective parameter sharing without the over‑use of experts that plagues prior MoE‑based MTL methods.
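A gated fusion step of the kind described can be sketched as below. This is a hypothetical minimal form, with an assumed sigmoid gate over the concatenated inputs and an illustrative weight matrix `W_g`; the paper's FSFB also includes SNR-style routing matrices between sub-networks, which are omitted here for brevity.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gated_fusion(h_struct, h_shared, W_g):
    """Elementwise gate choosing between structural and shared features.

    g close to 1 keeps the structural (hyperbolic) representation;
    g close to 0 favors the cross-task shared representation.
    """
    g = sigmoid(W_g @ np.concatenate([h_struct, h_shared]))
    return g * h_struct + (1.0 - g) * h_shared

rng = np.random.default_rng(0)
d = 8
h_struct = rng.standard_normal(d)   # from the hierarchical extraction block
h_shared = rng.standard_normal(d)   # routed in from other tasks' sub-networks
W_g = rng.standard_normal((d, 2 * d)) * 0.1
fused = gated_fusion(h_struct, h_shared, W_g)
```

Because the gate is a convex combination per dimension, each fused coordinate stays between its two inputs, so a task can ignore unhelpful shared signal dimension by dimension rather than all-or-nothing.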

Third, an Exponential Moving Average (EMA)‑based loss balancing strategy smooths short‑term fluctuations in each task’s loss, generating adaptive weights that mitigate the dominance of tasks with larger numeric scales. The overall loss is a weighted sum of EMA‑stabilized task losses, ensuring stable joint optimization and faster convergence.
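The EMA-based balancing idea can be sketched as follows. The inverse-magnitude weighting used here is an assumption for illustration (it equalizes the weighted contribution of tasks with very different numeric scales); the paper's exact weighting formula may differ.

```python
import numpy as np

class EMALossBalancer:
    """Weight per-task losses by their EMA-smoothed magnitudes (sketch).

    Smoothing removes short-term fluctuations; down-weighting large-scale
    tasks keeps one QoS metric (e.g. throughput vs. reliability) from
    dominating the joint objective.
    """

    def __init__(self, n_tasks, decay=0.9, eps=1e-8):
        self.decay = decay
        self.eps = eps
        self.ema = np.zeros(n_tasks)
        self.initialized = False

    def step(self, losses):
        losses = np.asarray(losses, dtype=float)
        if not self.initialized:
            self.ema = losses.copy()     # bootstrap EMA with first losses
            self.initialized = True
        else:
            self.ema = self.decay * self.ema + (1 - self.decay) * losses
        # Inverse-magnitude weights, normalized to sum to the task count.
        w = 1.0 / (self.ema + self.eps)
        w = w * len(w) / w.sum()
        return float(np.sum(w * losses)), w
```

On the first step, a task whose loss is 100x larger receives a 100x smaller weight, so both tasks contribute equally to the gradient; thereafter the EMA adapts the weights smoothly as training progresses.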

The pipeline consists of preprocessing (graph construction and initial feature extraction via non‑negative matrix factorization and auto‑encoders), HFEB, FSFB, and a Joint QoS Prediction Module (JQPM) that predicts each QoS matrix by inner‑product of user and service embeddings produced by the fused features. All hyperbolic operations stay in the Poincaré space, while sharing and prediction happen in Euclidean space.
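The final prediction step described for the JQPM (inner product of user and service embeddings per QoS matrix) reduces to the following. The `masked_mae` helper is an assumed evaluation utility for sparse QoS matrices, not part of the paper's stated pipeline.

```python
import numpy as np

def predict_qos(U, S):
    """Predict one QoS matrix as inner products of user/service embeddings.

    In SHARP-QoS this is repeated per QoS parameter, with U and S taken
    from the fused (Euclidean) representations.
    """
    return U @ S.T

def masked_mae(Q_true, Q_pred, mask):
    """MAE over observed entries only (mask == 1 where a value was measured)."""
    return np.abs((Q_true - Q_pred) * mask).sum() / mask.sum()

rng = np.random.default_rng(1)
n_users, n_services, d = 4, 5, 3
U = rng.standard_normal((n_users, d))
S = rng.standard_normal((n_services, d))
Q_hat = predict_qos(U, S)   # shape (n_users, n_services)
```

Since real QoS matrices are mostly unobserved, both training loss and test error are computed only over measured entries, which is what the mask encodes.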

Experiments on three benchmark datasets—WS‑DREAM‑2T (2 QoS), small‑3T (3 QoS), and gRPC‑4T (4 QoS)—show that SHARP‑QoS consistently outperforms strong baselines, including single‑task MF/MLP/GCN models and recent joint approaches such as PMT, DW‑A, and MoE‑based methods. Gains range from 4 % to 7 % absolute improvement in MAE/RMSE. The model demonstrates robustness to extreme sparsity (≥90 % missing entries), outliers (up to 20 % noisy observations), and cold‑start scenarios (new users/services). Ablation studies confirm that removing any of the three components degrades performance by 2–3 percentage points, highlighting their complementary roles. Sensitivity analysis indicates stable performance across curvature initializations, routing gate dimensions, and EMA decay rates. Computationally, SHARP‑QoS adds modest overhead from hyperbolic operations but reduces overall parameters and FLOPs compared to existing multi‑task architectures.

Limitations include reliance on static graphs (no real‑time topology updates) and potential instability when learning curvature parameters. Future work may explore dynamic hyperbolic graphs and meta‑learning for curvature initialization.

In summary, SHARP‑QoS presents a novel combination of hyperbolic hierarchical representation, adaptive cross‑task feature sharing, and EMA‑driven loss balancing, delivering accurate, robust, and efficient joint QoS prediction suitable for modern service recommendation and orchestration systems.

