Design and Realization of Efficient and Secured Privacy Preserving Algorithm for Public Data
The framework is not immune to overfitting, which may occur when the training data contains additional noise or is insufficient. To address this, the central aggregator can invoke the model evaluation function on the testing data set, and the task publisher and the central aggregator should keep the test data unreleased during the model update phase.
In this report, we have used a differential privacy technique and implemented it with existing machine learning algorithms to provide privacy for the training data. In particular, we wrapped the existing optimizers into their differentially private counterparts using TensorFlow and the MNIST data set, which allowed us to test the differential privacy method, tune the hyperparameters introduced by differentially private machine learning, and measure the privacy guarantee using the analysis tools included in TensorFlow Privacy. Through the
advancement of data collection and retrieval technologies, the risk of privacy leakage for individuals inevitably arises whenever data is released or exchanged to extract valuable insights and knowledge; such problems should be extensively explored and resolved on a technological basis. As for future work, many privacy-preserving techniques could be studied, such as finding other possible privacy breaches in anonymization techniques. Also, the
integration of anonymous authentication mechanisms into well-known social media and email applications such as Facebook, Twitter, and Gmail could be explored.
Many of the proposed methods can be widely studied and are helpful, but in the end privacy preservation is realized through the knowledge and awareness of each individual.
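The optimizer wrapping summarized above follows the core DP-SGD recipe: clip each per-example gradient to a fixed L2 norm, then add calibrated Gaussian noise before averaging. The following NumPy sketch illustrates only that core step under simplified assumptions; the function `dp_sgd_step` and its parameters are our own illustration, not the TensorFlow Privacy optimizer API.

```python
import numpy as np

def dp_sgd_step(per_example_grads, l2_norm_clip, noise_multiplier, rng):
    """Illustrative DP-SGD aggregation step: clip each per-example gradient
    to l2_norm_clip, sum the clipped gradients, add Gaussian noise scaled by
    noise_multiplier * l2_norm_clip, and average over the batch."""
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale the gradient down only if its L2 norm exceeds the clip bound.
        clipped.append(g * min(1.0, l2_norm_clip / (norm + 1e-12)))
    summed = np.sum(clipped, axis=0)
    # Gaussian noise calibrated to the clipping bound (sensitivity).
    noise = rng.normal(0.0, noise_multiplier * l2_norm_clip, size=summed.shape)
    return (summed + noise) / len(per_example_grads)

rng = np.random.default_rng(0)
# Simulated per-example gradients for a batch of 8 examples, 4 parameters each.
grads = [rng.normal(size=4) * 10 for _ in range(8)]
noisy_avg = dp_sgd_step(grads, l2_norm_clip=1.0, noise_multiplier=1.1, rng=rng)
```

In TensorFlow Privacy this clip-and-noise step is performed inside the differentially private optimizer itself, so the training loop stays unchanged; the hyperparameters to tune correspond to `l2_norm_clip` and `noise_multiplier` above, plus the number of microbatches.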