Progressive Multi-Scale Fusion Network for RGB-D Salient Object Detection

Guangyu Ren, Yanchun Xie, Tianhong Dai*, Tania Stathaki

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review


Abstract

Salient object detection (SOD) aims to locate the most significant object within a given image. In recent years, great progress has been made in applying SOD to many vision tasks. The depth map can provide additional spatial priors and boundary cues to boost performance. Combining depth information with image data from standard visual cameras has been widely adopted in recent SOD works; however, introducing depth information through a suboptimal fusion strategy may negatively affect SOD performance. In this paper, we discuss the advantages of the so-called progressive multi-scale fusion method and propose a mask-guided feature aggregation module (MGFA). The proposed framework can effectively combine features from the two modalities and, furthermore, alleviate the impact of erroneous depth features, which inevitably arise from variations in depth quality. We further introduce a mask-guided refinement module (MGRM) to complement the high-level semantic features and suppress irrelevant features introduced by multi-scale fusion, leading to an overall refinement of the detection. Experiments on five challenging benchmarks demonstrate that the proposed method outperforms 11 state-of-the-art methods under different evaluation metrics.
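To make the mask-guided fusion idea concrete, the following is a minimal PyTorch sketch of one plausible single-scale variant: a reliability mask predicted from the RGB features gates the depth features before the two modalities are fused, attenuating erroneous depth responses. The module name MaskGuidedFusion and its exact layer design are assumptions for illustration only; the paper's actual MGFA and MGRM designs are not reproduced here.

# A minimal sketch of mask-guided RGB-D feature fusion, assuming PyTorch.
# The module design is hypothetical; the paper's MGFA may differ.
import torch
import torch.nn as nn

class MaskGuidedFusion(nn.Module):
    """Fuses RGB and depth features at one scale, gating the depth
    branch with a mask predicted from the RGB features so that
    unreliable depth responses are attenuated before fusion."""

    def __init__(self, channels: int):
        super().__init__()
        # Predict a per-pixel reliability mask in [0, 1] from RGB features.
        self.mask_head = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )
        # Merge the concatenated modalities back to `channels` feature maps.
        self.fuse = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, rgb_feat: torch.Tensor, depth_feat: torch.Tensor) -> torch.Tensor:
        mask = self.mask_head(rgb_feat)      # (B, 1, H, W) reliability mask
        gated_depth = depth_feat * mask      # suppress noisy depth features
        return self.fuse(torch.cat([rgb_feat, gated_depth], dim=1))

# Usage: fuse 64-channel features from the two backbone streams.
fusion = MaskGuidedFusion(channels=64)
rgb = torch.randn(2, 64, 56, 56)
depth = torch.randn(2, 64, 56, 56)
out = fusion(rgb, depth)   # -> torch.Size([2, 64, 56, 56])

In a progressive multi-scale setting, one such block would be applied per backbone stage, with each fused output passed upward to the next scale; the refinement stage (MGRM in the paper) would then reuse the mask to filter irrelevant features accumulated during fusion.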
Original language: English
Article number: 103529
Number of pages: 9
Journal: Computer Vision and Image Understanding
Volume: 223
Early online date: 24 Aug 2022
DOIs
Publication status: Published - 1 Oct 2022

Keywords

  • Multi-scale fusion
  • Mask guided
  • Salient object detection
