Splitwise: Adaptive Edge-Cloud LLM Partitioning with DRL

Reading time: 3 minutes
...

📝 Original Paper Info

- Title: Splitwise Collaborative Edge-Cloud Inference for LLMs via Lyapunov-Assisted DRL
- ArXiv ID: 2512.23310
- Date: 2025-12-29
- Authors: Abolfazl Younesi, Abbas Shabrang Maryan, Elyas Oustad, Zahra Najafabadi Samani, Mohsen Ansari, Thomas Fahringer

📝 Abstract

Deploying large language models (LLMs) on edge devices is challenging due to their limited memory and power resources. Cloud-only inference reduces device burden but introduces high latency and cost. Static edge-cloud partitions optimize a single metric and struggle when bandwidth fluctuates. We propose Splitwise, a novel Lyapunov-assisted deep reinforcement learning (DRL) framework for fine-grained, adaptive partitioning of LLMs across edge and cloud environments. Splitwise decomposes transformer layers into attention heads and feed-forward sub-blocks, exposing more partition choices than layer-wise schemes. A hierarchical DRL policy, guided by Lyapunov optimization, jointly minimizes latency, energy consumption, and accuracy degradation while guaranteeing queue stability under stochastic workloads and variable network bandwidth. Splitwise also guarantees robustness via partition checkpoints with exponential backoff recovery in case of communication failures. Experiments on Jetson Orin NX, Galaxy S23, and Raspberry Pi 5 with GPT-2 (1.5B), LLaMA-7B, and LLaMA-13B show that Splitwise reduces end-to-end latency by 1.4x-2.8x and cuts energy consumption by up to 41% compared with existing partitioners. It lowers the 95th-percentile latency by 53-61% relative to cloud-only execution, while maintaining accuracy and modest memory requirements.
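
For context, the "Lyapunov-assisted" objective mentioned in the abstract usually refers to the standard drift-plus-penalty technique. Below is a minimal sketch of that objective, assuming a quadratic Lyapunov function over per-task queues and weights α, β, γ on the latency, energy, and accuracy-degradation costs; the paper's exact queue and weight definitions may differ.

```latex
% Sketch only: generic drift-plus-penalty objective from Lyapunov optimization.
% Q_i(t): backlog of task queue i in slot t; L(t), E(t), A(t): latency, energy,
% and accuracy-degradation costs of the chosen partition; V > 0 trades
% cost minimization against queue stability. Definitions here are assumptions.
\begin{aligned}
  \Theta(t) &= \tfrac{1}{2}\sum_i Q_i(t)^2 && \text{(quadratic Lyapunov function)}\\
  \Delta(t) &= \mathbb{E}\bigl[\Theta(t+1) - \Theta(t) \mid \mathbf{Q}(t)\bigr] && \text{(conditional drift)}\\
  \min_{\text{partition}}\; &\Delta(t) + V\,\mathbb{E}\bigl[\alpha L(t) + \beta E(t) + \gamma A(t) \mid \mathbf{Q}(t)\bigr] && \text{(drift-plus-penalty)}
\end{aligned}
```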

💡 Summary & Analysis

1. **Fine-Grained Sub-Layer Partitioning** Splitwise splits each transformer layer into individual attention heads and feed-forward sub-blocks, so the edge device and the cloud can each hold exactly the pieces they run best; this exposes many more placement choices than layer-wise schemes (a toy partition plan is sketched after this list).
  1. Lyapunov-Assisted Hierarchical DRL
    A hierarchical deep reinforcement learning policy, guided by Lyapunov optimization, selects partitions that jointly minimize latency, energy consumption, and accuracy degradation while keeping request queues stable under stochastic workloads and fluctuating bandwidth.

  2. Robustness and Measured Gains
    Partition checkpoints with exponential backoff recovery (illustrated in the second sketch below) keep inference going through communication failures. Across Jetson Orin NX, Galaxy S23, and Raspberry Pi 5 running GPT-2 (1.5B), LLaMA-7B, and LLaMA-13B, Splitwise reduces end-to-end latency by 1.4x-2.8x, cuts energy consumption by up to 41%, and lowers 95th-percentile latency by 53-61% relative to cloud-only execution.
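
To make the sub-layer granularity concrete, here is a minimal Python sketch of a partition plan in which every attention head and every feed-forward sub-block gets its own edge-or-cloud placement. All class, field, and function names are illustrative assumptions, not the authors' implementation.

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Placement targets for a sub-block (illustrative constants).
EDGE, CLOUD = "edge", "cloud"

@dataclass
class LayerPartition:
    """Placement of one transformer layer, split below layer granularity.

    Each attention head and each feed-forward sub-block is placed
    independently, which is what gives a sub-layer partitioner more
    choices than a layer-wise one.
    """
    head_placement: List[str]   # one entry per attention head
    ffn_placement: List[str]    # one entry per feed-forward sub-block

@dataclass
class PartitionPlan:
    layers: Dict[int, LayerPartition] = field(default_factory=dict)

    def bytes_over_link(self, activation_bytes_per_head: int) -> int:
        """Rough per-token traffic proxy: head activations that sit on the
        opposite side of their layer's FFN must cross the edge-cloud link."""
        total = 0
        for part in self.layers.values():
            ffn_side = part.ffn_placement[0] if part.ffn_placement else EDGE
            total += sum(activation_bytes_per_head
                         for side in part.head_placement if side != ffn_side)
        return total

# Example: keep half of layer 0's heads on the edge device, offload the rest.
plan = PartitionPlan()
plan.layers[0] = LayerPartition(
    head_placement=[EDGE] * 6 + [CLOUD] * 6,   # 12 heads, split 6/6
    ffn_placement=[CLOUD, CLOUD],              # both FFN sub-blocks in the cloud
)
print(plan.bytes_over_link(activation_bytes_per_head=4096))
```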

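The "partition checkpoints with exponential backoff recovery" mentioned above follow a familiar fault-tolerance pattern. The sketch below shows that pattern in Python, assuming a hypothetical send_fn that raises ConnectionError on link failure; the checkpoint contents themselves are specific to the paper and omitted here.

```python
import random
import time

def send_with_backoff(send_fn, payload, max_retries=5, base_delay=0.1):
    """Retry a failing edge-to-cloud transfer with exponential backoff.

    `send_fn` is any callable that raises ConnectionError on failure;
    `payload` would be the intermediate activations saved at the last
    partition checkpoint. This mirrors the recovery behaviour described
    in the abstract only at a high level.
    """
    for attempt in range(max_retries):
        try:
            return send_fn(payload)
        except ConnectionError:
            # Back off 0.1s, 0.2s, 0.4s, ... plus jitter, then retry from
            # the last successful checkpoint rather than restarting inference.
            delay = base_delay * (2 ** attempt) + random.uniform(0, base_delay)
            time.sleep(delay)
    raise RuntimeError("link unavailable; fall back to edge-only or cloud-only execution")
```
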


A Note of Gratitude

The copyright of this content belongs to the respective researchers. We deeply appreciate their hard work and contribution to the advancement of human civilization.
