A Selective Homomorphic Encryption Approach for Faster Privacy-Preserving Federated Learning

Notice: This research summary and analysis were automatically generated using AI technology. For full accuracy, please refer to the original arXiv source.

Federated learning (FL) has emerged as a critical approach for privacy-preserving machine learning in healthcare, allowing collaborative model training across decentralized medical datasets without exchanging clients’ data. However, current security implementations for these systems face a fundamental trade-off: rigorous cryptographic protections such as fully homomorphic encryption (FHE) impose prohibitive computational overhead, while lightweight alternatives risk data leakage through model updates. To address this issue, we present FAS (Fast and Secure Federated Learning), a novel approach that strategically combines selective homomorphic encryption, differential privacy, and bitwise scrambling to achieve robust security without compromising practical usability. Our approach eliminates the need for model pretraining phases while dynamically protecting high-risk model parameters through layered encryption and obfuscation. We implemented FAS using the Flower framework and evaluated it on a cluster of eleven physical machines. Our approach was up to 90% faster than applying FHE to the full set of model weights, and it eliminates the computational overhead required by competitors such as FedML-HE and MaskCrypt, running up to 1.5× faster than these methods while achieving comparable security. Experimental evaluations on medical imaging datasets confirm that FAS provides resistance to gradient inversion attacks comparable to conventional FHE while preserving diagnostic model accuracy. These results position FAS as a practical solution for latency-sensitive healthcare applications where both privacy preservation and computational efficiency are required.


💡 Research Summary

The paper introduces FAS (Fast and Secure Federated Learning), a novel privacy‑preserving framework designed for federated learning (FL) in healthcare where both data confidentiality and low latency are essential. Traditional security approaches for FL fall into two extremes: fully homomorphic encryption (FHE) offers strong cryptographic guarantees but incurs prohibitive computational overhead, while lightweight methods such as differential privacy (DP) reduce latency at the cost of model utility and may still leak information through gradient updates.

FAS reconciles this trade‑off by combining three lightweight mechanisms in a layered fashion:

  1. Selective Homomorphic Encryption – only a fixed, uniformly sampled subset of model weights (e.g., 10 % of parameters) is encrypted using a homomorphic scheme (the authors reference CKKS). The encrypted portion can be aggregated on the server without decryption, preserving the exact arithmetic benefits of FHE for the most sensitive parameters while cutting encryption cost by up to 90 % compared with encrypting the entire model.

  2. Differential Noise Injection – the remaining unencrypted parameters receive calibrated Laplace noise according to ε‑DP principles. This noise makes it difficult for an adversary to isolate individual contributions, thereby mitigating membership inference attacks while having a minimal impact on overall accuracy because it is applied to low‑sensitivity weights.

  3. Bitwise Scrambling – a lightweight cryptographic key permutes the bits of the unencrypted parameters after noise addition. This non‑linear transformation destroys statistical patterns that could be exploited by gradient inversion attacks, providing an additional obfuscation layer without significant computational burden.

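Taken together, the three layers can be sketched in a few lines of NumPy. Everything below is an illustrative reconstruction, not the authors' code: the function names and default parameters are assumptions, the "encrypted" partition is returned as plaintext where a real implementation would produce CKKS ciphertexts (e.g. via a library such as TenSEAL), and a keyed XOR mask stands in for the paper's bit-level scrambling.

```python
import numpy as np

def protect_update(weights, enc_ratio=0.10, epsilon=1.0, sensitivity=1e-3, key=42):
    """Sketch of the three FAS layers applied to a flattened weight vector.

    Illustrative only: `encrypted_part` is a placeholder for real CKKS
    ciphertexts, and the keyed XOR mask approximates bitwise scrambling.
    """
    rng = np.random.default_rng(key)
    n = weights.size

    # 1) Selective encryption: uniformly sample a fixed fraction of indices
    #    (e.g. 10%) whose weights would be CKKS-encrypted.
    enc_mask = np.zeros(n, dtype=bool)
    enc_mask[rng.choice(n, size=int(n * enc_ratio), replace=False)] = True
    encrypted_part = weights[enc_mask].copy()   # stand-in for ciphertexts

    # 2) Calibrated Laplace noise on the remaining plaintext weights,
    #    with scale = sensitivity / epsilon as in epsilon-DP.
    plain = weights[~enc_mask].copy()
    plain += rng.laplace(scale=sensitivity / epsilon, size=plain.size)

    # 3) Bitwise scrambling: XOR the float64 bit patterns with a keyed
    #    pseudorandom keystream (reversible only with the same key).
    scrambled = plain.view(np.uint64) ^ _keystream(plain.size, key)

    return encrypted_part, scrambled, enc_mask

def _keystream(size, key):
    gen = np.random.default_rng(key + 1)
    return gen.integers(0, np.iinfo(np.uint64).max, size=size,
                        dtype=np.uint64, endpoint=True)

def unscramble(scrambled, key=42):
    """Invert layer 3; the Laplace noise from layer 2 remains by design."""
    return (scrambled ^ _keystream(scrambled.size, key)).view(np.float64)
```

In this sketch, only a holder of `key` can invert the scrambling layer; the server would aggregate `encrypted_part` homomorphically and treat `scrambled` as opaque data.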
The authors implemented FAS on the Flower FL framework and evaluated it on a cluster of eleven physical machines using three medical imaging datasets (Kidney, Lung, COVID‑19) and several CNN architectures (MobileNetV2, EfficientNet‑B0). The evaluation focused on three axes: computational efficiency, security against model inversion, and model utility.
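The server-side aggregation of the encrypted subset relies only on the additive homomorphism of the ciphertexts. The mock below is not the paper's implementation (a real deployment would use a CKKS library); it simply illustrates the control flow: the server averages ciphertext objects it cannot read, and only a key holder recovers the result.

```python
from dataclasses import dataclass

@dataclass
class MockCipher:
    """Toy stand-in for a CKKS ciphertext: supports addition and scalar
    multiplication without the aggregator reading the plaintext."""
    _payload: float  # hidden state; a real ciphertext is opaque polynomial data

    def __add__(self, other):
        return MockCipher(self._payload + other._payload)

    def __mul__(self, scalar):
        return MockCipher(self._payload * scalar)

def encrypt(x):   # client side, before upload
    return MockCipher(x)

def decrypt(c):   # client side, after aggregation
    return c._payload

# Server: average the encrypted weight subsets of three clients
# without ever calling decrypt().
client_updates = [[encrypt(w) for w in ws]
                  for ws in ([0.1, 0.2], [0.3, 0.4], [0.5, 0.6])]
n = len(client_updates)
aggregated = [sum(col, MockCipher(0.0)) * (1.0 / n)
              for col in zip(*client_updates)]

print([round(decrypt(c), 3) for c in aggregated])  # → [0.3, 0.4]
```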

Key results:

  • Speed: Compared with full‑model FHE, FAS reduced training time by up to 90 % (e.g., MobileNetV2 training dropped from 610 minutes to 52 minutes). Against state‑of‑the‑art selective‑encryption baselines FedML‑HE and MaskCrypt, FAS achieved a 1.5× speedup (e.g., EfficientNet‑B0 69 min vs. 99 min).
  • Security: Using MSSIM (structural similarity) and VIFP (visual information fidelity) as proxies for inversion resistance, FAS attained MSSIM ≈ 58 % and VIFP scores that stabilized within five communication rounds, outperforming FedML‑HE (52 %) and MaskCrypt (55 %).
  • Utility: Diagnostic accuracy on the imaging tasks remained on par with full‑FHE models, with less than 0.2 % degradation, demonstrating that the added noise and scrambling do not materially harm performance.

A notable advantage of FAS is the elimination of any pre‑training or per‑round mask recalibration phases required by FedML‑HE and MaskCrypt. This simplification enables consistent privacy guarantees from the very first FL round and reduces overall system complexity.

Limitations and future work: The current implementation relies on manually chosen encryption ratios and DP budgets; an automated, adaptive scheme (e.g., meta‑learning driven sensitivity estimation) could further optimize the balance between speed and security. Additionally, the paper does not detail key management for the scrambling layer, which would be critical for long‑term deployments and potential quantum‑resistant extensions.

In summary, FAS provides a practical, high‑performance alternative to full homomorphic encryption for privacy‑preserving federated learning in latency‑sensitive healthcare applications. By encrypting only the most critical parameters and protecting the rest with calibrated noise and bitwise scrambling, it achieves near‑FHE security while dramatically reducing computational overhead, thereby moving FL closer to real‑world clinical deployment.

