A Joint Time and Energy-Efficient Federated Learning-based Computation Offloading Method for Mobile Edge Computing

Notice: This research summary and analysis were automatically generated using AI technology. For absolute accuracy, please refer to the original ArXiv source.

Computation offloading at low time and energy cost is crucial for resource-limited mobile devices. This paper proposes an offloading decision-making model using federated learning. Based on the task type and the user input, the proposed model first predicts whether the task is computationally intensive. If it is, the model then uses the network parameters to predict whether to offload the task or execute it locally, and the task is handled according to that prediction. The proposed method is implemented in a real-time environment, and the experimental results show that it achieves above 90% prediction accuracy in offloading decision-making. The results also show that the proposed offloading method reduces the response time and energy consumption of the user device by ~11-31% for computationally intensive tasks. A partial computation offloading method for federated learning is also proposed and implemented, in which devices that are unable to analyse their large number of data samples offload a part of their local datasets to the edge server. Cryptography is used for secure data transmission. The experimental results show that encryption and decryption increase the total time by only 0.05-0.16%, and that the proposed partial computation offloading method achieves a prediction accuracy of above 98% for the global model.


💡 Research Summary

The paper addresses the critical problem of deciding, in a mobile edge computing (MEC) environment, whether a computational task should be executed locally on a resource‑constrained device or offloaded to an edge server (or further to the cloud). To this end, the authors propose a two‑stage decision‑making framework called FLDec that leverages federated learning (FL) to build models without sharing raw user data.

In the first stage, the framework determines if a given task is computationally intensive. The authors employ a Multi‑Layer Perceptron (MLP) as the underlying classifier because it handles nonlinear decision boundaries well and can be trained with relatively few samples, which is typical for mobile devices. Input features include the task type (e.g., matrix multiplication, sorting, searching) and quantitative parameters such as matrix dimensions or data size. The MLP is trained collaboratively across devices; each client trains a local copy on its own task logs and periodically sends model updates to an edge server that aggregates them into a global model.
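The aggregation step described above can be sketched minimally. The function below implements the standard sample-weighted averaging rule of federated averaging (FedAvg) over flat weight vectors; the paper does not publish its aggregation code, so treat this as an illustrative stand-in, with made-up client weights and sample counts.

```python
def fedavg(client_updates):
    """Aggregate client model weights, weighted by local sample count.

    client_updates: list of (weights, n_samples) pairs, where weights is a
    flat list of floats representing one client's local model parameters.
    """
    total = sum(n for _, n in client_updates)
    dim = len(client_updates[0][0])
    global_weights = [0.0] * dim
    for weights, n in client_updates:
        for i, w in enumerate(weights):
            # Each client contributes in proportion to its local data volume.
            global_weights[i] += w * n / total
    return global_weights

# Three hypothetical clients with different data volumes send local weights.
updates = [([1.0, 2.0], 100), ([3.0, 4.0], 300), ([5.0, 6.0], 100)]
print(fedavg(updates))  # sample-weighted mean of each parameter
```

The edge server would run this after each round and broadcast the result back to the clients as the new global model.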

If the task is deemed intensive, the second stage decides whether to offload it. Here, a Long Short‑Term Memory (LSTM) network is used because network conditions (uplink/downlink traffic, latency, throughput) evolve over time and exhibit temporal correlations. Each device feeds recent network metrics into the LSTM, which outputs a binary offload/local decision. Again, the LSTM model is trained in a federated manner, ensuring that no raw network traces leave the device. The edge server aggregates updates and redistributes the improved global model to all participants.
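Putting the two stages together, the overall decision flow can be sketched as below. The `is_intensive` and `should_offload` functions are simple stand-ins for the trained MLP and LSTM predictors, and the thresholds are illustrative assumptions, not values from the paper.

```python
def is_intensive(task_type, size):
    """Stage 1 stand-in for the federated MLP classifier."""
    heavy_types = {"matrix_multiplication", "sorting"}
    return task_type in heavy_types and size >= 500  # illustrative threshold

def should_offload(latency_ms, uplink_mbps):
    """Stage 2 stand-in for the federated LSTM network-condition model."""
    return latency_ms < 50 and uplink_mbps > 5  # illustrative threshold

def decide(task_type, size, latency_ms, uplink_mbps):
    if not is_intensive(task_type, size):
        return "local"    # light tasks never leave the device
    if should_offload(latency_ms, uplink_mbps):
        return "offload"  # intensive task and a healthy network
    return "local"        # intensive task but a poor network

print(decide("matrix_multiplication", 1000, 20, 10))  # offload
print(decide("searching", 1000, 20, 10))              # local
```

In the paper both predictors are learned models updated through federated rounds; only the two-stage control flow is fixed.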

Beyond pure decision‑making, the authors recognize that some devices may possess large local datasets that are too big to be processed entirely on‑device. To accommodate this, they introduce a secure partial computation offloading scheme named FedOff. In FedOff, a device splits its dataset into two subsets: one remains local for model training, while the other is encrypted with a symmetric key unique to the device and transmitted to the edge server. The server decrypts the received portion, performs additional training, and stores the resulting model update. During each FL round, the server aggregates both the locally generated updates and the stored update from the offloaded data, then broadcasts the new global model. The encryption/decryption overhead is reported to be only 0.05–0.16 % of total execution time, indicating negligible impact on real‑time performance.
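The FedOff data path (split, encrypt, transmit, decrypt) can be sketched as follows. The paper specifies only that a per-device symmetric key is used; the SHA-256-based XOR keystream below is a toy stand-in for whatever cipher the authors employ, and the 60/40 split fraction is an assumption for illustration.

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    """Toy keystream: SHA-256 of key||counter. NOT production cryptography."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def xor_cipher(key: bytes, data: bytes) -> bytes:
    """Symmetric encrypt/decrypt by XOR with the keystream (self-inverse)."""
    ks = keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

def split_dataset(samples, local_fraction=0.6):
    """Keep a fraction on-device; the remainder is offloaded to the edge."""
    cut = int(len(samples) * local_fraction)
    return samples[:cut], samples[cut:]

samples = [f"sample_{i}".encode() for i in range(10)]
local_part, offload_part = split_dataset(samples)

device_key = b"per-device-secret"
encrypted = [xor_cipher(device_key, s) for s in offload_part]  # sent to edge
decrypted = [xor_cipher(device_key, c) for c in encrypted]     # edge decrypts
assert decrypted == offload_part
```

Because XOR with a keystream is its own inverse, the same function serves for encryption on the device and decryption at the edge server; a real deployment would use an authenticated cipher instead.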

Experimental evaluation was conducted on a prototype consisting of actual mobile devices and an edge server. The key findings are:

  • Prediction Accuracy – The FLDec model achieved >90 % accuracy in correctly classifying tasks as intensive or not, and >90 % accuracy in the subsequent offload/local decision.
  • Performance Gains – For computationally intensive tasks, the proposed offloading strategy reduced response time and device energy consumption by 11 % to 31 % compared with purely local execution.
  • Partial FL Effectiveness – FedOff attained >98 % accuracy for the global model despite the dataset being split and encrypted, demonstrating that privacy‑preserving partial offloading does not degrade model quality.
  • Scalability – The authors present a detailed computational complexity analysis, showing that model initialization, local training, aggregation, and update exchange scale linearly with the number of participating devices and model size.

The paper’s contributions can be summarized as follows:

  1. Two‑stage FL‑based decision framework that jointly considers task computational intensity and dynamic network conditions.
  2. Use of LSTM for sequential network data, enabling more reliable offloading decisions under fluctuating bandwidth and latency.
  3. Secure partial computation offloading (FedOff) that allows devices with large datasets to contribute to federated learning without exposing raw data, using lightweight symmetric‑key encryption.
  4. Comprehensive real‑world evaluation demonstrating significant latency and energy savings, high decision accuracy, and negligible cryptographic overhead.

Limitations noted by the authors include the potential increase in initial latency due to multiple FL rounds before the global model stabilizes, and the management overhead of symmetric keys when scaling to thousands of devices. Future work is suggested in the direction of reducing the number of required FL rounds through more efficient aggregation (e.g., hierarchical FL, compression techniques) and automating key distribution and rotation to maintain security at scale.

Overall, the study provides a practical, privacy‑preserving solution for dynamic offloading in MEC, bridging the gap between theoretical federated learning concepts and concrete performance improvements for battery‑limited mobile devices.

