Robust Partial Share Federated Learning Algorithm against Model Poisoning Attack 


Vol. 48, No. 11, pp. 1387-1398, Nov. 2023
DOI: 10.7840/kics.2023.48.11.1387


  Abstract

The exponential growth of decentralized data sources has propelled federated learning to the forefront of research. This approach enables models to be trained across multiple devices without direct data exchange. Nevertheless, conventional federated learning struggles with heterogeneous data distributions among clients and remains susceptible to Byzantine attacks. To address these challenges, we propose a novel partial share algorithm. This algorithm partitions each local model into a personalized component and a shared component, allowing clients to build personalized models tailored to their local data. At the same time, it preserves robustness against potential attacks by exposing only the shared portion of the local model. Through an extensive series of experiments, we evaluate the performance of the proposed algorithm in terms of both personalization and robustness against attacks.
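The partition described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the toy parameter vectors, the client data, and the choice of a coordinate-wise median as the Byzantine-robust aggregator are all assumptions for exposition. The key idea shown is that each client's personalized parameters never leave the device; only the shared portion is sent to the server for aggregation.

```python
def robust_aggregate(shared_updates):
    # Coordinate-wise median over the clients' shared-layer parameters.
    # The median is one common Byzantine-robust choice; the paper's exact
    # aggregation rule may differ.
    aggregated = []
    for i in range(len(shared_updates[0])):
        values = sorted(u[i] for u in shared_updates)
        mid = len(values) // 2
        if len(values) % 2:
            aggregated.append(values[mid])
        else:
            aggregated.append((values[mid - 1] + values[mid]) / 2)
    return aggregated

def local_update(client):
    # Placeholder for local training: nudge the shared weights toward the
    # client's (synthetic) local optimum. The personalized part would be
    # trained here too, but it is never transmitted.
    lr = 0.5
    client["shared"] = [w + lr * (t - w)
                        for w, t in zip(client["shared"], client["target"])]

def federated_round(clients, global_shared):
    # One round: broadcast the shared part, train locally, then aggregate
    # only the shared updates. Personalized parameters stay on-device.
    shared_updates = []
    for c in clients:
        c["shared"] = list(global_shared)   # receive global shared part
        local_update(c)                     # local training step
        shared_updates.append(c["shared"])  # expose shared part only
    return robust_aggregate(shared_updates)

# Three clients; the third submits a poisoned (model poisoning) update.
clients = [
    {"shared": [0.0, 0.0], "personal": [1.0], "target": [1.0, 1.0]},
    {"shared": [0.0, 0.0], "personal": [2.0], "target": [1.2, 0.8]},
    {"shared": [0.0, 0.0], "personal": [3.0], "target": [9.0, -9.0]},  # attacker
]
new_shared = federated_round(clients, [0.0, 0.0])
print(new_shared)  # median suppresses the attacker's extreme update
```

With the mean, the attacker's update would dominate the aggregate; the median keeps the new shared parameters close to the honest clients' updates, illustrating the robustness the abstract refers to.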



  Cite this article

[IEEE Style]

H. Park, M. Kim, M. Kwon, "Robust Partial Share Federated Learning Algorithm against Model Poisoning Attack," The Journal of Korean Institute of Communications and Information Sciences, vol. 48, no. 11, pp. 1387-1398, 2023. DOI: 10.7840/kics.2023.48.11.1387.

[ACM Style]

Heewon Park, Miru Kim, and Minhae Kwon. 2023. Robust Partial Share Federated Learning Algorithm against Model Poisoning Attack. The Journal of Korean Institute of Communications and Information Sciences, 48, 11, (2023), 1387-1398. DOI: 10.7840/kics.2023.48.11.1387.

[KICS Style]

Heewon Park, Miru Kim, Minhae Kwon, "Robust Partial Share Federated Learning Algorithm against Model Poisoning Attack," The Journal of Korean Institute of Communications and Information Sciences, vol. 48, no. 11, pp. 1387-1398, 11. 2023. (https://doi.org/10.7840/kics.2023.48.11.1387)