Learning Rate Clipping Gradient Optimization Privacy Protection Scheme for Federated Learning
Abstract
In federated learning, an attacker can reconstruct the training data from shared model gradients, threatening the privacy of the training set and causing privacy leakage. To protect data privacy, differential privacy is introduced into federated learning. However, during neural network training, a learning rate that is too large causes gradients to explode and fail to converge, while one that is too small makes convergence excessively slow; both reduce learning accuracy. To address these problems, this paper proposes a gradient optimization algorithm with an adaptive learning rate (the CAdabelief algorithm). The algorithm introduces dynamic clipping bounds on the learning rate of the neural network, adjusting the learning rate dynamically so that it approaches an ideal value and stabilizes. The CAdabelief algorithm is then incorporated into a federated learning differential privacy framework, yielding a learning rate clipping gradient optimization privacy protection scheme for federated learning. Experiments on the MNIST data set show that, under the same privacy budget, the CAdabelief algorithm achieves higher training accuracy than the commonly used SGD, Adam, and Adabelief algorithms.
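To make the clipping idea concrete, below is a minimal NumPy sketch of one optimizer step combining AdaBelief's gradient-deviation variance estimate with dynamic bounds on the per-parameter step size. The bound schedule (AdaBound-style bounds converging to a `final_lr`) and all hyperparameter names here are illustrative assumptions; the abstract does not specify the paper's exact bound functions or constants.

```python
import numpy as np

def cadabelief_step(param, grad, m, s, t, lr=1e-3, final_lr=0.1,
                    gamma=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One hypothetical CAdabelief-style update: AdaBelief moments
    plus dynamic clipping bounds on the adaptive step size."""
    # AdaBelief moments: s tracks how far grad deviates from its EMA m.
    m = beta1 * m + (1 - beta1) * grad
    s = beta2 * s + (1 - beta2) * (grad - m) ** 2 + eps
    # Bias correction, as in Adam/AdaBelief.
    m_hat = m / (1 - beta1 ** t)
    s_hat = s / (1 - beta2 ** t)
    # Raw adaptive step size (per parameter).
    step = lr / (np.sqrt(s_hat) + eps)
    # Dynamic bounds (assumed AdaBound-style): both converge to final_lr,
    # so early steps are clipped and late training behaves like SGD.
    lower = final_lr * (1 - 1 / (gamma * t + 1))
    upper = final_lr * (1 + 1 / (gamma * t))
    step = np.clip(step, lower, upper)
    param = param - step * m_hat
    return param, m, s
```

Because the lower and upper bounds tighten toward `final_lr` as the step count `t` grows, the effective learning rate can neither explode early in training nor vanish late, which is the stabilizing behavior the abstract attributes to the clipping dynamic boundary.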