ABSTRACT
Due to the high training costs of deep learning, model developers often rent cloud GPU servers to achieve better efficiency. However, this practice raises privacy concerns. An adversarial party may be interested in 1) personally identifiable information encoded in the training data and the learned models, 2) misusing the sensitive models for its own benefit, or 3) launching model inversion (MIA) and generative adversarial network (GAN) attacks to reconstruct replicas of training data (e.g., sensitive images). Learning from encrypted data seems impractical due to the large volume of training data and the expense of deep learning algorithms, while differential-privacy-based approaches have to make significant trade-offs between privacy and model quality. We investigate the use of image disguising techniques to protect both data and model privacy. Our preliminary results show that, surprisingly, with block-wise permutation and transformations, disguised images still yield reasonably well-performing deep neural networks (DNNs). The disguised images are also resilient to the deep-learning-enhanced visual discrimination attack and provide an extra layer of protection against MIA and GAN attacks.
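To make the block-wise disguising idea concrete, the following is a minimal sketch of disguising a single image: the image is split into fixed-size tiles, the tile positions are shuffled with a secret permutation, and each tile is altered with a secret per-block transformation. The block size, the sign-flip mask used as the per-block transformation, and the function name are illustrative assumptions for this sketch, not the paper's exact construction.

```python
import numpy as np

def disguise_image(img, key_perm, key_signs, block=4):
    """Sketch of block-wise image disguising.

    img       : H x W array, with H and W divisible by `block`
    key_perm  : secret permutation of the block indices
    key_signs : secret +/-1 mask applied to every block
                (an assumed stand-in for the per-block transformation)
    """
    h, w = img.shape
    # Split the image into non-overlapping block x block tiles.
    tiles = [img[r:r + block, c:c + block]
             for r in range(0, h, block)
             for c in range(0, w, block)]
    # Shuffle tile positions with the secret permutation and
    # transform each tile with the secret sign mask.
    tiles = [tiles[i] * key_signs for i in key_perm]
    # Reassemble the disguised image from the permuted, transformed tiles.
    out = np.zeros_like(img)
    idx = 0
    for r in range(0, h, block):
        for c in range(0, w, block):
            out[r:r + block, c:c + block] = tiles[idx]
            idx += 1
    return out

# Example: disguise a 28 x 28 MNIST-style image with a random secret key.
rng = np.random.default_rng(0)
img = rng.random((28, 28))
n_blocks = (28 // 4) ** 2
key_perm = rng.permutation(n_blocks)
key_signs = rng.choice([-1.0, 1.0], size=(4, 4))
disguised = disguise_image(img, key_perm, key_signs)
```

A DNN would then be trained directly on such disguised images; the key (permutation plus per-block transformation) stays with the data owner, so the cloud server never sees the original images or a model tied to them.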