Bridging On-Device and Cloud LLMs for Collaborative Reasoning: A Unified Methodology for Local Routing and Post-Training
Device-cloud collaboration holds promise for deploying large language models (LLMs), leveraging lightweight on-device models for efficiency while relying on powerful cloud models for superior reasoning. A central challenge in this setting is determining, for each incoming query, whether it should be processed locally or offloaded to the cloud. Existing approaches typically rely on external routers, which often struggle to determine difficulty from the prompt itself, especially for tasks involving complex reasoning. Motivated by this limitation, we propose enabling on-device LLMs to decide internally whether to invoke cloud assistance at inference time, with this capability instilled through reinforcement learning based post-training. Casting on-device LLM post-training as a reward maximization problem, we design hierarchical rewards to encourage local problem solving and judicious cloud offloading. To solve the resulting problem, we develop an algorithm featuring a group-level policy gradient that stabilizes optimization, together with adaptive prompt filtering that provides complementary learning signals to mitigate policy collapse (i.e., exclusive local execution or exclusive cloud offloading). Extensive experiments on on-device-scale LLaMA and Qwen models across multiple reasoning benchmarks show that our method consistently outperforms baselines and significantly narrows the gap to full cloud LLMs.
💡 Research Summary
The paper tackles the practical problem of deploying large language models (LLMs) in a device‑cloud setting, where a lightweight on‑device model handles most queries but may need to offload difficult ones to a powerful cloud model. Existing solutions separate the two stages: (1) fine‑tune the on‑device model for the target task, often using reinforcement learning (RL) methods such as GRPO, and (2) train an external binary router that decides, based solely on the prompt, whether to invoke the cloud model. This two‑stage pipeline suffers from several drawbacks. The router lacks access to the model’s internal confidence, making it unreliable for complex reasoning tasks where surface‑level prompt features do not reflect difficulty. Moreover, maintaining a separate router adds engineering overhead and duplicates computation because the router must often run a full reasoning pass to make its decision.
To overcome these limitations, the authors propose a unified RL‑based post‑training framework that embeds routing directly into the on‑device LLM’s behavior. The on‑device model is trained to output either a complete answer or a special “need‑help” token that triggers a deterministic call to the cloud LLM. The training objective is a constrained reward maximization problem:
max_θ 𝔼_{x∼D} 𝔼_{y∼π_θ(·|x)} [ r(x, y) ],

where r(x, y) denotes the (hierarchical) reward assigned to the on-device model's response y for prompt x.
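To make the inference-time mechanism concrete, here is a minimal sketch of the routing behavior the summary describes: the on-device model either produces a full answer or emits a special "need-help" token, which triggers a deterministic call to the cloud model. The token string, the heuristic stand-in models, and all function names below are illustrative assumptions, not the paper's actual identifiers.

```python
# Hypothetical sketch of token-triggered device-cloud routing.
# The sentinel token and both model stubs are assumptions for illustration.

NEED_HELP_TOKEN = "<need_help>"  # assumed special token learned during RL post-training


def on_device_generate(query: str) -> str:
    """Stand-in for the on-device LLM: answers locally or defers.

    A real model would decide internally (via its learned policy) whether
    to emit the need-help token; this stub uses a toy keyword heuristic.
    """
    if "prove" in query.lower():
        return NEED_HELP_TOKEN
    return f"local answer to: {query}"


def cloud_generate(query: str) -> str:
    """Stand-in for the powerful cloud LLM."""
    return f"cloud answer to: {query}"


def route(query: str) -> tuple[str, str]:
    """Run the on-device model first; offload only on the sentinel token."""
    local_output = on_device_generate(query)
    if local_output.strip() == NEED_HELP_TOKEN:
        return ("cloud", cloud_generate(query))
    return ("device", local_output)
```

Note that, unlike an external router, the offloading decision here is produced by the same forward pass that would otherwise produce the answer, so no separate router model or duplicated reasoning pass is needed.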