Model Pruning Enables Localized and Efficient Federated Learning for Yield Forecasting and Data Sharing

Research output: Working paper › Preprint


Abstract

Federated Learning (FL) offers a decentralized approach to model training in the agri-food sector, with the potential for improved machine learning performance while preserving the safety and privacy of individual farms or data silos. However, the conventional FL approach has two major limitations. First, heterogeneous data across individual silos can cause the global model to perform well for some clients but not others, because update directions from different clients may conflict once aggregated. Second, it is inefficient: large models incur high communication costs when exchanged between clients and the server. This paper proposes a new technical solution that applies network pruning to client models and aggregates the pruned models. The method tailors local models to their respective data distributions, mitigating the data heterogeneity present in agri-food data, and produces more compact models that require less data to transmit. We experiment with a soybean yield forecasting dataset and find that this approach improves inference performance by 15.5% to 20% compared to FedAvg, while reducing local model sizes by up to 84% and the data volume communicated between the clients and the server by 57.1% to 64.7%.
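
The paper's code is not reproduced here. As a rough illustration of the idea sketched in the abstract, the NumPy snippet below shows one plausible federated round: magnitude-based pruning of each client's weights, followed by a masked averaging step on the server. The function names, the 80% sparsity level, and the masked-average aggregation rule are assumptions made for illustration, not the authors' exact method.

import numpy as np

def magnitude_prune(weights, sparsity=0.8):
    # Illustrative magnitude pruning: zero out the smallest-magnitude weights
    # and return the pruned weights plus a binary mask (1 = kept, 0 = pruned).
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy(), np.ones_like(weights)
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = (np.abs(weights) > threshold).astype(weights.dtype)
    return weights * mask, mask

def aggregate_pruned(client_weights, client_masks):
    # Assumed aggregation rule: each weight is averaged only over the clients
    # that kept it, so sparse local models do not drag shared weights to zero.
    stacked_w = np.stack(client_weights)
    stacked_m = np.stack(client_masks)
    counts = stacked_m.sum(axis=0)
    summed = stacked_w.sum(axis=0)
    return np.where(counts > 0, summed / np.maximum(counts, 1), 0.0)

# Toy round with three simulated clients holding 4x4 weight matrices.
rng = np.random.default_rng(0)
clients = [rng.normal(size=(4, 4)) for _ in range(3)]
pruned, masks = zip(*(magnitude_prune(w, sparsity=0.8) for w in clients))
global_update = aggregate_pruned(list(pruned), list(masks))
print(global_update)

In a sketch like this, only the retained (non-zero) weights and their masks would need to be communicated, which is the mechanism by which pruning can reduce both local model size and the data volume exchanged with the server.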
Original language: English
Publisher: arXiv
Number of pages: 31
DOIs
Publication status: Published - 19 Apr 2023

Bibliographical note

31 pages, 4 figures, 4 tables

Version History

[v1] Wed, 19 Apr 2023

Keywords

  • cs.LG

  • "Maxwell" HPC for Research

    Katie Wilde (Manager) & Andrew Phillips (Manager)

    Research Facilities: Facility
