Enhancing Edge Computing with Federated Learning: Privacy and Performance Trade-offs
ID:211
Updated: 2025-12-24 14:18:41
Abstract
Federated learning (FL), which enables decentralized model training without exposing raw data, has emerged as a key enabler of edge computing. This paper examines the application of FL in edge computing systems, with an emphasis on privacy-performance trade-offs. FL supports compliance with privacy regulations while allowing cooperative learning across distributed edge devices without revealing sensitive information. Nevertheless, the decentralized approach faces challenges, including high communication costs, model drift, and constrained device resources. We propose a privacy-aware federated learning framework based on differential privacy and secure aggregation, designed to maximize data security with minimal loss of model accuracy. The framework is evaluated in real-world edge computing setups, showing a significant reduction in communication latency and improved model convergence rates. The findings indicate that the incorporated privacy-preserving mechanisms have a negligible adverse effect on model performance, demonstrating that privacy and computational efficiency can be balanced in edge-based FL systems. The paper concludes with remarks on optimizing model aggregation methods and mitigating system heterogeneity to enhance scalability and robustness in edge intelligence applications.
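The differential-privacy mechanism the abstract describes, per-client update clipping followed by Gaussian noise before server-side averaging, can be sketched as below. The function names, clipping norm, and noise scale are illustrative assumptions, not the paper's actual implementation, and real secure aggregation would sum encrypted shares rather than plaintext updates:

```python
import numpy as np

def clip_and_noise(update, clip_norm=1.0, noise_std=0.1, rng=None):
    """Bound a client's update norm (sensitivity), then add Gaussian noise for DP."""
    rng = rng if rng is not None else np.random.default_rng(0)
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))
    return clipped + rng.normal(0.0, noise_std * clip_norm, size=update.shape)

def federated_average(client_updates):
    """Server-side FedAvg step: average the already-privatized client updates."""
    return np.mean(client_updates, axis=0)

# Illustrative round with 5 synthetic clients.
rng = np.random.default_rng(42)
raw_updates = [rng.normal(size=4) for _ in range(5)]
private_updates = [clip_and_noise(u, rng=np.random.default_rng(i))
                   for i, u in enumerate(raw_updates)]
aggregated = federated_average(private_updates)
```

The clipping step bounds each client's influence on the aggregate, which is what makes the Gaussian noise scale meaningful for a differential-privacy guarantee.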
Keywords
Federated Learning, Edge Computing, Privacy Preservation, Differential Privacy, Secure Aggregation, Model Convergence