Optimizing Differential Privacy: The Role of Model Parallelism and Iteration Subsampling
ID: 199    Updated: 2025-12-24 14:17:18

Start Time: 2025-12-29 18:00

Duration: 15 min

Session: [S3] Track 3: Privacy, Security for Networks


Abstract
Guaranteeing data privacy in machine learning is a difficult problem, especially in federated or distributed learning settings. Differential privacy (DP) is a common remedy: it introduces noise into the training process, but excessive noise impairs model performance. This paper investigates an alternative strategy that exploits structured randomness in model parallelism and iteration subsampling to strengthen privacy without compromising accuracy. We introduce a coherent framework that systematically combines model partitioning, where each client updates only a subset of the model parameters, with balanced iteration subsampling, where each data point participates in a fixed number of training rounds. Our analysis provides privacy amplification guarantees for both mechanisms, showing that these structured randomization methods yield substantially stronger privacy than traditional Poisson subsampling or independent dropout. We also verify the approach empirically on deep learning models, demonstrating better trade-offs between model utility and privacy protection. By reducing the dependence on high noise levels, the proposed solution offers a scalable and efficient privacy-preserving machine learning approach. The paper contributes to the wider field of secure AI through its treatment of optimization in differential privacy, balancing privacy, computational efficiency, and model accuracy.
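As a rough illustration of the two ingredients the abstract describes, DP noise injection and balanced iteration subsampling, the following is a minimal sketch. It is not the authors' implementation; the function names, parameters, and the DP-SGD-style clip-and-noise step are assumptions made for exposition.

```python
import numpy as np

def balanced_subsample(n_points, n_rounds, k, rng):
    """Assign each data point to exactly k of n_rounds training rounds.
    Unlike independent Poisson sampling, every point participates in a
    fixed (balanced) number of rounds."""
    schedule = [[] for _ in range(n_rounds)]
    for i in range(n_points):
        # choose k distinct rounds for point i
        rounds = rng.choice(n_rounds, size=k, replace=False)
        for r in rounds:
            schedule[r].append(i)
    return schedule

def dp_noisy_update(per_example_grads, clip_norm, noise_multiplier, rng):
    """Clip each per-example gradient to clip_norm, sum, add Gaussian
    noise scaled to the clipping bound (a DP-SGD-style noisy update)."""
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    total = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return (total + noise) / len(per_example_grads)

rng = np.random.default_rng(0)
schedule = balanced_subsample(n_points=100, n_rounds=10, k=3, rng=rng)
```

In a federated variant, each client would additionally apply the update only to its assigned partition of the parameter vector, which is the model-partitioning side of the framework.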
Keywords
Differential Privacy, Model Parallelism, Federated Learning, Iteration Subsampling, Privacy Amplification, Machine Learning Security
Speaker
Kanchan Yadav
GLA University, Mathura

Important Dates
  • Conference date: 2025-12-29 to 2025-12-31
  • Presentation submission deadline: 2025-12-30
  • Draft paper submission deadline: 2026-02-10
  • Registration deadline: 2026-02-10

Sponsored By

United Societies of Science

Organized By

Zarqa University

Contact info

USS WeChat Official Account

USSsociety

Please scan the QR code to follow the WeChat official account.