A sample paper review I attempted during my Ph.D. interview!

Unleashing the Tiger: Inference Attacks on Split Learning

Split learning; image courtesy: ResearchGate
Examples of inference of private training instances; image courtesy: the original paper
The evaluation datasets, however, share some limitations:
  • MNIST: single channel and low entropy
  • Fashion-MNIST: single channel and low entropy
  • Omniglot: high variance across classes, but still single channel
  • CelebA: a large number of samples, but every instance is a face sharing the same global features (eyes, nose, mouth, etc.)
Architecture of the client’s network; image courtesy: the original paper
Open problems and future research directions:
  1. Differential Privacy: Explore novel Differential Privacy (DP) techniques and analyze the tradeoff between record-level and participant-level DP. One major problem in DP is the constant tug-of-war between data privacy and model utility. Interesting open problems include tackling this tradeoff through ensemble approaches, generating differentially private datasets with GANs, and developing data-aware/gradient-aware DP. We could also try adding skewed (non-normal) noise in a controlled way to explore whether it can prevent such adversarial attacks; a minimal sketch of this idea follows this list.
  2. Making the attack data-agnostic: The attack currently depends on knowledge of the client’s data distribution. Future research could look into developing similar approaches without that dependency.
  3. Exploring other defense mechanisms: Develop novel defenses such as selective gradient sharing and feature anonymization to counter this attack. We could also apply the principles of disentangled autoencoders to produce a highly uncorrelated forward vector, so that the adversary observes less informative gradients. Combining Differential Privacy with techniques like knowledge distillation, network pruning, or model quantization could help us 1) achieve the desired privacy without compromising much model utility, 2) understand exactly what information is present in the forward vector, and 3) prevent such adversarial attacks. Novel gradient clipping mechanisms could also reduce the attack’s effect; a sketch of one such mechanism follows this list. Finally, the attack’s effectiveness should be evaluated against recent defenses like DISCO [3], Prediction Purification [4], and Shredder [5].
  4. Exploring other metrics for quantitative evaluation: The paper uses MSE to assess how similar the original and reconstructed images are, but to quantify the results more completely we could also use PSNR, SSIM, the Inception score [2], and CNN-based re-identification, yielding both subjective and objective evaluations; a sketch of these metrics follows this list. Similarly, other GAN loss functions, such as the minimax loss, could be explored.
  5. Zero-shot learning: Explore zero-shot learning for quicker convergence of the adversary’s network.
  6. Unsupervised learning: During the property inference attack, unsupervised learning methods could be used to find other unintended feature leakages and reverse-engineer better defense mechanisms.
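To make the skewed-noise idea in point 1 concrete, here is a minimal sketch (not from the paper) in which the client perturbs its forward activations before sending them to the server. The scale and skewness values are illustrative assumptions, and calibrating the noise to a formal (ε, δ) privacy budget is deliberately omitted.

```python
# Hypothetical sketch: perturb the client's forward activations ("smashed
# data") with zero-centered noise before they leave the client. skewness=0
# gives the usual Gaussian noise; a nonzero value draws from a re-centered
# skew-normal distribution, illustrating the non-normal-noise idea.
import torch
from scipy.stats import skewnorm

def perturb_activations(z: torch.Tensor, scale: float = 0.1,
                        skewness: float = 0.0) -> torch.Tensor:
    if skewness == 0.0:
        noise = torch.randn_like(z) * scale                  # Gaussian baseline
    else:
        raw = skewnorm.rvs(a=skewness, scale=scale, size=tuple(z.shape))
        raw -= skewnorm.mean(a=skewness, scale=scale)        # re-center to zero mean
        noise = torch.as_tensor(raw, dtype=z.dtype)
    return z + noise

# Usage: z_noisy = perturb_activations(client_net(x), scale=0.05, skewness=4.0)
```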
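For the gradient clipping direction in point 3, here is a minimal sketch of per-example clipping applied to the gradient the server returns for the smashed data. Since this gradient is the adversary’s channel for steering the client network, bounding its norm limits that influence. The threshold is an assumed hyperparameter, not a value from the paper.

```python
# Hypothetical sketch: clip, per example, the L2 norm of the gradient the
# client receives for its smashed data, before backpropagating it through
# the client network.
import torch

def clip_smashed_grad(grad: torch.Tensor, max_norm: float = 1.0) -> torch.Tensor:
    flat = grad.flatten(start_dim=1)                        # (batch, features)
    norms = flat.norm(dim=1, keepdim=True).clamp(min=1e-12)
    factor = (max_norm / norms).clamp(max=1.0)              # only shrink, never grow
    return (flat * factor).view_as(grad)

# Usage in the client's backward pass:
#   smashed.backward(clip_smashed_grad(server_grad))
```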
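And for point 4, a small sketch of reporting PSNR and SSIM alongside MSE, using scikit-image’s reference implementations. It assumes the original and reconstructed images are float arrays scaled to [0, 1].

```python
# Sketch: compare an original image x with its reconstruction x_hat using
# MSE, PSNR, and SSIM. Pass channel_axis=-1 for HWC color images such as
# CelebA, or leave it None for grayscale images such as MNIST.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def reconstruction_metrics(x: np.ndarray, x_hat: np.ndarray,
                           channel_axis=None) -> dict:
    return {
        "mse":  float(np.mean((x - x_hat) ** 2)),
        "psnr": peak_signal_noise_ratio(x, x_hat, data_range=1.0),
        "ssim": structural_similarity(x, x_hat, data_range=1.0,
                                      channel_axis=channel_axis),
    }
```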
Comments on the paper:
  • The abstract is clear, but it omits the paper’s prominent quantitative results.
  • Though the paper is novel, the introduction fails to capture the essence of the work: recovering the exact private training instances, as opposed to prototypical examples.
  • Using the Wasserstein loss to train the discriminator is a great approach, but it would have been good if the authors had justified choosing this loss function over the alternatives (a minimal sketch of the loss follows this list).
  • While the study appears sound, the authors have missed citations in a few places, including the seminal paper in this field.
  • The authors could provide the reconstruction error for the client-side attack carried out in the split learning framework.
  • Though the authors report the number of iterations the network takes to converge, reporting the time to convergence would be more practical: per-iteration time varies from dataset to dataset, and a time metric would also help evaluate network latency.
  • The flow of gradients from the server to the client, especially in the client-side attack under the private-label scenario, needs a pictorial depiction to improve the flow and readability of the content.
  • Using the Omniglot dataset is a good way to demonstrate few-shot learning.
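For reference, here is a minimal sketch of the Wasserstein (critic) loss mentioned above, in PyTorch. The Lipschitz constraint the loss requires (via weight clipping or a gradient penalty) is omitted for brevity.

```python
# Sketch of the WGAN objective: the critic f maximizes f(real) - f(fake)
# (so we minimize the negation), while the generator pushes the critic's
# score on fake samples up.
import torch

def critic_loss(f_real: torch.Tensor, f_fake: torch.Tensor) -> torch.Tensor:
    return f_fake.mean() - f_real.mean()   # minimized by the critic

def generator_loss(f_fake: torch.Tensor) -> torch.Tensor:
    return -f_fake.mean()                  # minimized by the generator
```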
