Abstract
Data privacy must be protected in distributed federated learning (FL) to ensure that sensitive information is not leaked. In this paper, we propose a two-stage differential privacy (DP) framework for FL based on edge intelligence, which provides different levels of privacy preservation according to the sensitivity of the data. In the first stage, the user terminal applies the randomized response mechanism to perturb the original feature data for desensitization, allowing users to self-regulate their level of privacy preservation. In the second stage, the edge server adds noise to the local models to further guarantee model privacy. Finally, the model updates are aggregated in the cloud. To evaluate the training accuracy and convergence of the proposed end-edge-cloud FL framework, extensive experiments are conducted on a real electrocardiogram (ECG) signal dataset, with a bidirectional long short-term memory (BiLSTM) neural network adopted to train the classification model. The effect of different combinations of feature perturbation and noise addition on model accuracy is analyzed under different privacy budgets and parameters. The experimental results demonstrate that the proposed privacy-preserving framework achieves good accuracy and convergence while ensuring privacy.
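The two stages described in the abstract can be illustrated with a minimal sketch. The function names, parameters, and the choice of the standard randomized response rule (keep a binary value with probability e^ε/(e^ε+1)) and the classical Gaussian mechanism for noising model weights are illustrative assumptions, not the paper's exact construction:

```python
import math
import random

def randomized_response(bit: int, epsilon: float) -> int:
    """Stage 1 (user terminal, assumed rule): perturb one binary feature
    with epsilon-local DP. The true bit is kept with probability
    e^eps / (e^eps + 1), otherwise it is flipped."""
    p_keep = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return bit if random.random() < p_keep else 1 - bit

def gaussian_perturb(weights, epsilon: float, delta: float, sensitivity: float):
    """Stage 2 (edge server, assumed mechanism): add Gaussian noise to
    local model weights, using the standard calibration
    sigma = sqrt(2 ln(1.25/delta)) * S / eps."""
    sigma = math.sqrt(2.0 * math.log(1.25 / delta)) * sensitivity / epsilon
    return [w + random.gauss(0.0, sigma) for w in weights]

# Toy end-to-end flow: desensitize features locally, then noise the model
# before it leaves the edge for cloud aggregation.
features = [1, 0, 1, 1, 0]
perturbed_features = [randomized_response(b, epsilon=1.0) for b in features]
local_weights = [0.12, -0.34, 0.56]
noised_weights = gaussian_perturb(local_weights, epsilon=1.0,
                                  delta=1e-5, sensitivity=1.0)
```

A larger ε in either stage means less perturbation and hence higher utility but weaker privacy, which is the trade-off the paper's experiments explore.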
Original language | English |
---|---|
Number of pages | 12 |
Journal | IEEE Journal of Biomedical and Health Informatics |
Early online date | 18 Aug 2023 |
DOIs | |
Publication status | E-pub ahead of print - 18 Aug 2023 |
Bibliographical note
This work was supported by the National Natural Science Foundation of China under Grant 61872138, the Natural Science Foundation of Hunan Province under Grant 2021JJ30278, and the Scientific Research Fund of the Hunan Provincial Education Department under Grant 22B0497.
Keywords
- differential privacy
- edge computing
- federated learning
- smart healthcare