Optimizing Differential Privacy: The Role of Model Parallelism and Iteration Subsampling
Abstract
Guaranteeing data privacy in machine learning is difficult, especially in federated and distributed learning settings. Differential privacy (DP) is a standard approach to this problem: it introduces calibrated noise into the training process. Excessive noise, however, impairs model performance. This paper investigates an alternative strategy that exploits structured randomness in model parallelism and iteration subsampling to strengthen privacy without compromising accuracy. We introduce a coherent framework that systematically combines model partitioning, in which each client updates only a subset of the model parameters, with balanced iteration subsampling, in which each data point participates in a fixed number of training rounds. Our analysis provides privacy amplification guarantees for both mechanisms, showing that these structured randomization methods yield substantially stronger privacy than traditional Poisson subsampling or independent dropout. We also empirically validate the approach on deep learning models, demonstrating improved trade-offs between model utility and privacy protection. By reducing the dependence on high noise levels, the proposed method offers a scalable and efficient approach to privacy-preserving machine learning. The paper contributes to the broader area of secure AI by optimizing differential privacy to balance privacy, computational efficiency, and model accuracy.
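To make the two mechanisms in the abstract concrete, the following Python sketch shows one way they could be realized. It is a minimal illustration under our own assumptions, not the authors' implementation; the function names balanced_iteration_schedule and partition_parameters, and all parameter values, are hypothetical.

import numpy as np

def balanced_iteration_schedule(n_points, n_rounds, k, rng):
    """Assign each data point to exactly k of n_rounds training rounds.

    Unlike Poisson subsampling, where participation per round is an
    independent coin flip, every point here appears in a fixed number
    of rounds; this is the 'balanced' structure the abstract refers to.
    """
    schedule = [[] for _ in range(n_rounds)]
    for point in range(n_points):
        # Sample k distinct rounds for this point, without replacement.
        rounds = rng.choice(n_rounds, size=k, replace=False)
        for r in rounds:
            schedule[r].append(point)
    return schedule

def partition_parameters(n_params, n_clients, rng):
    """Randomly split parameter indices into disjoint blocks, one per
    client, so each client updates only its own part of the model."""
    perm = rng.permutation(n_params)
    return np.array_split(perm, n_clients)

# Hypothetical usage with illustrative sizes.
rng = np.random.default_rng(0)
schedule = balanced_iteration_schedule(n_points=1000, n_rounds=50, k=5, rng=rng)
blocks = partition_parameters(n_params=10_000, n_clients=8, rng=rng)

In a DP training loop, each round r would then clip and noise gradients only for the points in schedule[r], and each client would apply its noisy update only to the indices in its own block from blocks.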
Keywords
Differential Privacy, Model Parallelism, Federated Learning, Iteration Subsampling, Privacy Amplification, Machine Learning Security