Evaluating the Efficacy of Segment Anything Model for Delineating Agriculture and Urban Green Spaces in Multiresolution Aerial and Spaceborne Remote Sensing Images

Baoling Gui, Anshuman Bhardwaj*, Lydia Sam

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Segmentation of Agricultural Remote Sensing Images (ARSIs) is a pivotal component in the intelligent development of agricultural information technology. Similarly, quick and effective delineation of urban green spaces (UGSs) in high-resolution images is increasingly needed as input to various urban simulation models. Numerous segmentation algorithms exist for ARSIs and UGSs; however, a model with exceptional generalization capability and accuracy remains elusive. Notably, the recently released Segment Anything Model (SAM) from META AI is gaining significant recognition in various domains for segmenting conventional images, yielding commendable results. Nevertheless, SAM's application to ARSI and UGS segmentation has been relatively limited. ARSIs and UGSs exhibit distinct image characteristics, such as prominent boundaries, larger frame sizes, and extensive data types and volumes. There is presently a dearth of research on how SAM handles the various ARSI and UGS image types and whether it can deliver superior segmentation outcomes. Thus, as a novel attempt, in this paper we evaluate SAM's compatibility with a wide array of ARSI and UGS image types. The data acquisition platforms comprise both aerial and spaceborne sensors, and the study sites encompass most regions of the United States, with images of varying resolutions and frame sizes. Notably, SAM's segmentation performance is significantly influenced by image content, and its stability and accuracy vary across images of different resolutions and sizes. In general, however, our findings indicate that resolution has minimal impact on the effectiveness of conditional SAM-based segmentation, which maintains an overall segmentation accuracy above 90%. In contrast, SAM's unsupervised segmentation approach exhibits performance issues on low-resolution images, with around 55% of images at 3 m and coarser resolutions yielding lower accuracy. Frame size exerts a more substantial influence: as image size increases, the accuracy of the unsupervised segmentation method decreases rapidly, and the conditional segmentation method also degrades to some extent. Additionally, SAM's segmentation efficacy diminishes considerably for images with unclear edges and minimal color distinctions. Consequently, we propose enhancing SAM by augmenting its training dataset and fine-tuning its hyperparameters to meet the demands of ARSI and UGS image segmentation. Leveraging the multispectral nature and extensive data volumes of remote sensing images, secondary development of SAM can harness its formidable segmentation potential to elevate the overall standard of ARSI and UGS image segmentation.
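The abstract reports per-image segmentation accuracy without defining the metric in this record. As an illustrative sketch only (not the authors' evaluation code), overall pixel accuracy and intersection-over-union (IoU) for a binary predicted mask against a reference mask can be computed as:

```python
import numpy as np

def mask_scores(pred: np.ndarray, ref: np.ndarray) -> tuple[float, float]:
    """Return (overall pixel accuracy, IoU) for two binary masks of equal shape."""
    pred = pred.astype(bool)
    ref = ref.astype(bool)
    accuracy = float((pred == ref).mean())        # fraction of matching pixels
    inter = np.logical_and(pred, ref).sum()
    union = np.logical_or(pred, ref).sum()
    iou = float(inter / union) if union else 1.0  # two empty masks count as a perfect match
    return accuracy, iou

# Toy example: a 4x4 patch where the prediction misses one foreground pixel.
ref = np.array([[1, 1, 0, 0],
                [1, 1, 0, 0],
                [0, 0, 0, 0],
                [0, 0, 0, 0]])
pred = ref.copy()
pred[1, 1] = 0
acc, iou = mask_scores(pred, ref)  # acc = 15/16 = 0.9375, iou = 3/4 = 0.75
```

The threshold quoted in the abstract (accuracy above 90%) corresponds to the first value; IoU is the stricter of the two, since background pixels do not inflate it.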

Original language: English
Article number: 414
Number of pages: 32
Journal: Remote Sensing
Volume: 16
Issue number: 2
DOIs
Publication status: Published - 20 Jan 2024

Data Availability Statement

The remote sensing data used for this study are publicly available, and the download web links have been provided. This was an evaluation study, and no new data have been created.

Keywords

  • aerial imaging
  • agriculture
  • Landsat
  • remote sensing
  • Segment Anything Model (SAM)
  • Sentinel-2
  • unmanned aerial vehicles (UAVs)
  • urban green spaces
  • vegetation

