• Indexed in the Wanfang Data Digitized Periodicals Database
  • Indexed in the Chinese Science and Technology Periodicals Database (Southwest Information Center, Ministry of Science and Technology)
  • Indexed in full text by China Journal Net and China Academic Journals (CD-ROM Edition)

Learning Rate Clipping Gradient Optimization Privacy Protection Scheme for Federated Learning

MENG Xiangqian, LIU Tengfei, XIE Rongna

Citation: MENG Xiangqian, LIU Tengfei, XIE Rongna. Learning Rate Clipping Gradient Optimization Privacy Protection Scheme for Federated Learning[J]. Journal of Beijing Electronic Science and Technology Institute, 2024, 32(4): 46-54.


Funding: 

National Key Research and Development Program of China (2017YFB0801803).

Author information:

    MENG Xiangqian (1997-), male, corresponding author, graduate student; research interests: cyberspace security; E-mail: sdsdpxmxq@126.com. LIU Tengfei (1994-), male, M.S.; research interests: computer application technology; E-mail: 1149069821@qq.com. XIE Rongna (1976-), female, Ph.D., professor; research interests: network and system security, access control, and cryptographic engineering.

  • CLC number: TN919

Learning Rate Clipping Gradient Optimization Privacy Protection Scheme for Federated Learning

  • Abstract: In federated learning, an attacker can recover the training dataset from shared model gradients, threatening the privacy of the training data. To protect data privacy, differential privacy has been introduced into federated learning. During neural network training, however, an overly large learning rate causes gradients to explode and fail to converge, while an overly small learning rate makes convergence too slow; both reduce training accuracy. To address these problems, this paper proposes a gradient optimization algorithm with an adaptive learning rate (the CAdabelief algorithm). The algorithm introduces dynamic bounds that clip the learning rate during training, adjusting it toward an ideal value so that it stabilizes. Incorporating CAdabelief into a differentially private federated learning framework, we propose a learning rate clipping gradient optimization privacy protection scheme for federated learning and evaluate it on the MNIST dataset. Under the same privacy budget, CAdabelief achieves higher training accuracy than the commonly used SGD, Adam, and Adabelief algorithms.
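The full algorithm is not reproduced on this page, but the core idea the abstract describes, clipping the adaptive per-coordinate learning rate of an AdaBelief-style update between dynamic bounds that tighten over time, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the bound schedule (in the style of AdaBound's dynamic bounds) and all function and hyperparameter names here are assumptions.

```python
import numpy as np

def cadabelief_step(param, grad, m, s, t,
                    lr=1e-3, final_lr=0.1, gamma=1e-3,
                    beta1=0.9, beta2=0.999, eps=1e-8):
    """One parameter update combining the AdaBelief second moment
    (variance of the gradient around its own EMA) with a dynamic
    clip on the effective per-coordinate learning rate."""
    m = beta1 * m + (1 - beta1) * grad                   # EMA of gradients
    s = beta2 * s + (1 - beta2) * (grad - m) ** 2 + eps  # EMA of gradient "belief" residual
    m_hat = m / (1 - beta1 ** t)                         # bias correction
    s_hat = s / (1 - beta2 ** t)
    # Dynamic bounds: wide early (adaptive behavior), tightening toward
    # final_lr as t grows (SGD-like behavior), so the step size neither
    # explodes nor collapses.
    lower = final_lr * (1 - 1 / (gamma * t + 1))
    upper = final_lr * (1 + 1 / (gamma * t))
    step_size = np.clip(lr / (np.sqrt(s_hat) + eps), lower, upper)
    param = param - step_size * m_hat
    return param, m, s
```

In a differentially private setting such as the one the abstract describes, each client's gradient would additionally be norm-clipped and perturbed with noise before this update is applied.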

Publication history
  • Available online: 2025-01-16
