V. CONCLUSION
In this paper, we first introduced the architecture of collaborative deep learning and the problem of privacy leakage. We then analyzed how commonly used privacy-preserving technologies are applied in the two phases of collaborative deep learning, along with their advantages and disadvantages. Although secure multi-party computation and homomorphic encryption can achieve a high level of both privacy and accuracy, they impose high computational and communication overhead on users. A more practical and efficient approach is differential privacy, in which users insert random noise into their data before sending it to the server; however, the added noise reduces the accuracy of the model. Compared with traditional machine learning, applying privacy-preserving technology to collaborative deep learning requires taking many factors into account, such as the hardware performance of user devices, transmission costs, and time constraints. When organizations holding large amounts of sensitive data, such as hospitals or banks, act as users, homomorphic encryption is required to guarantee the security of the model. When a large number of individuals with weak computing power act as users, differential privacy is required to keep the model efficient. Each privacy-preserving technology has its own characteristics, and a growing number of studies focus on providing a reasonable trade-off between data privacy and utility by combining secure multi-party computation, homomorphic encryption, and differential privacy.
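As a minimal illustration of the differential-privacy approach described above, the following Python sketch shows how a user might clip and perturb a local gradient before uploading it to the server. The function name, clipping bound, and the (epsilon, delta) values are illustrative assumptions, not a prescription from any particular scheme discussed in this paper.

import numpy as np

def privatize_gradient(grad, clip_norm=1.0, epsilon=1.0, delta=1e-5):
    # Clip the gradient so its L2 norm is at most clip_norm,
    # bounding the sensitivity of a single user's contribution.
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / (norm + 1e-12))
    # Gaussian-mechanism noise scale for (epsilon, delta)-differential privacy.
    sigma = clip_norm * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return clipped + np.random.normal(0.0, sigma, size=grad.shape)

# Each user perturbs its local gradient before sending it to the server.
noisy_grad = privatize_gradient(np.random.randn(10))

Stronger noise (a smaller epsilon) yields better privacy but, as noted above, further degrades the accuracy of the aggregated model.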
As the performance of mobile devices improves, more and more applications use collaborative deep learning to provide users with useful personalized services. Investigating privacy-preserving technology in complex mobile environments is the next step in our work.