
Optimizing Differential Privacy: The Role of Model Parallelism and Iteration Subsampling

Speakers: Kanchan Yadav

Track: Track 3: Privacy, Security for Networks


Abstract

Guaranteeing data privacy in machine learning is a difficult problem, especially in federated and distributed learning settings. Differential privacy (DP) is a common approach: it introduces noise into the training process, but excessive noise impairs model performance. This paper investigates an alternative strategy that exploits structured randomness in model parallelism and iteration subsampling to strengthen privacy without compromising accuracy. We introduce a coherent framework that systematically combines model partitioning, in which each client updates only a subset of the model parameters, with balanced iteration subsampling, in which each data point participates in a fixed number of training rounds. Our analysis provides privacy amplification guarantees for both mechanisms, showing that these structured randomization methods yield substantially stronger privacy than traditional Poisson subsampling or independent dropout. We also empirically validate the framework on deep learning models, demonstrating better trade-offs between model utility and privacy protection. By reducing the dependence on high noise levels, the proposed solution offers a scalable and efficient approach to privacy-preserving machine learning. The paper contributes to the broader goal of secure AI through its treatment of optimization under differential privacy, balancing privacy, computational efficiency, and model accuracy.
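To make the contrast concrete, the sketch below illustrates the two structured randomization ideas named in the abstract: balanced iteration subsampling (each point appears in exactly k of T rounds) versus standard Poisson subsampling (independent per-round inclusion, so a point's participation count varies), plus a simple disjoint parameter partition across clients. All function names and parameter choices here are illustrative assumptions, not the paper's actual construction or privacy accounting.

```python
import random

def poisson_subsample(n_points, n_rounds, q, rng):
    """Standard Poisson subsampling: each point joins each round
    independently with probability q, so its total number of
    participations is random (Binomial(n_rounds, q))."""
    rounds = [[] for _ in range(n_rounds)]
    for i in range(n_points):
        for t in range(n_rounds):
            if rng.random() < q:
                rounds[t].append(i)
    return rounds

def balanced_subsample(n_points, n_rounds, k, rng):
    """Balanced iteration subsampling (illustrative sketch): each point
    participates in exactly k of the n_rounds training rounds, chosen
    uniformly at random, so per-point exposure is deterministically bounded."""
    rounds = [[] for _ in range(n_rounds)]
    for i in range(n_points):
        for t in rng.sample(range(n_rounds), k):
            rounds[t].append(i)
    return rounds

def partition_parameters(n_params, n_clients):
    """Model partitioning (illustrative sketch): split parameter indices
    into disjoint, near-equal blocks so each client updates only its share."""
    base, extra = divmod(n_params, n_clients)
    blocks, start = [], 0
    for c in range(n_clients):
        size = base + (1 if c < extra else 0)
        blocks.append(list(range(start, start + size)))
        start += size
    return blocks

rng = random.Random(0)
rounds = balanced_subsample(n_points=100, n_rounds=10, k=3, rng=rng)
counts = [0] * 100
for r in rounds:
    for i in r:
        counts[i] += 1
# Under balanced subsampling every point appears in exactly k rounds,
# which is the structural property the amplification analysis exploits.
assert all(c == 3 for c in counts)
```

The key design difference: with Poisson subsampling a worst-case point may appear in many rounds, forcing conservative noise calibration, while the balanced scheme caps each point's participation at k by construction.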

Speakers

Kanchan Yadav
GLA University, Mathura

Details

Type
Online
Model
OFFLINE
Language
EN
Timezone
UTC+8
Views
372
Likes
13