Dynamic Noise Injection for Gradient Leakage Resistance in Federated Learning
ID: 28 | View protection: Participant Only | Updated: 2025-11-19 09:19:53 | Views: 73 | Oral (In-person)


Abstract
Federated learning enables collaborative machine learning on decentralized data, but it faces a critical privacy threat from gradient leakage attacks, which can reconstruct sensitive user data from shared model updates. While differential-privacy defenses such as static noise injection are common, they often yield a poor privacy-utility trade-off by adding noise indiscriminately, thereby degrading model accuracy. Conventional dynamic methods also fall short, as they typically fail to adapt to the fine-grained, contextual dynamics of local training. To overcome these limitations, we propose FedDynaNoise, a novel privacy-preserving framework that introduces a triple-adaptive noise injection mechanism. The noise level is dynamically calibrated based on three key factors: the training round, the layer-wise gradient sensitivity, and the prediction entropy. This multi-faceted approach ensures that the privacy budget is used efficiently and effectively. We conducted a comprehensive evaluation of FedDynaNoise on four image classification benchmarks against gradient inversion attacks. Our experiments show that FedDynaNoise provides robust privacy protection, achieving a high reconstruction mean squared error of approximately 0.65, a significant improvement over the static-noise and conventional dynamic-noise baselines, which reach approximately 0.13 and 0.31, respectively. Remarkably, this strong defense is achieved with minimal impact on model utility: FedDynaNoise reaches a test accuracy of 93.9%, only a slight decrease from the 95.1% of a non-private model. Our work demonstrates that FedDynaNoise offers a superior privacy-utility balance, presenting a practical and effective solution for building more secure, trustworthy, and accurate federated learning systems.
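The abstract names three adaptation factors but not the exact calibration rule. The sketch below is one plausible form of such a triple-adaptive scale: the function name `dynamic_noise_std`, the multiplicative combination, and the linear round decay are all illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def dynamic_noise_std(base_std, round_idx, total_rounds, layer_grad, probs, eps=1e-12):
    """Illustrative triple-adaptive noise scale (hypothetical combination;
    the paper's exact calibration rule is not given in the abstract)."""
    # 1) Round factor: decay noise as training progresses,
    #    on the assumption that later-round gradients leak less.
    round_factor = 1.0 - round_idx / total_rounds          # in (0, 1]
    # 2) Layer-wise sensitivity: scale with the layer's gradient norm.
    sensitivity = np.linalg.norm(layer_grad)
    # 3) Prediction entropy: uncertain predictions receive more noise.
    entropy = -np.sum(probs * np.log(probs + eps))
    return base_std * (0.5 + round_factor) * sensitivity * (1.0 + entropy)

def noisy_gradient(layer_grad, std, rng=None):
    # Gaussian perturbation of the shared update, as in
    # Gaussian-mechanism differential-privacy noise injection.
    rng = rng or np.random.default_rng(0)
    return layer_grad + rng.normal(0.0, std, size=layer_grad.shape)
```

Under this sketch, the noise scale shrinks across rounds and grows with both gradient magnitude and predictive uncertainty, which is one way to spend the privacy budget where reconstruction risk is highest.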
Keywords
Adaptive Noise, Differential Privacy, Edge Computing, Gradient Inversion, Privacy Utility
Speaker
Kundjanasith Thonglek
Kasetsart University

Important Dates
  • Conference date: December 29–31, 2025
  • Draft paper submission deadline: December 16, 2025
  • Presentation submission deadline: December 30, 2025
  • Registration deadline: December 30, 2025

Sponsored By

United Societies of Science

Organized By

Zarqa University

Contact info