Motion-supervised Co-part Segmentation

  • Aliaksandr Siarohin*
  • Subhankar Roy*
  • Stéphane Lathuilière
  • Sergey Tulyakov
  • Elisa Ricci*
  • Nicu Sebe*

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Published conference contribution

14 Citations (Scopus)

Abstract

Recent co-part segmentation methods mostly operate in a supervised learning setting, which requires a large amount of annotated data for training. To overcome this limitation, we propose a self-supervised deep learning method for co-part segmentation. Unlike previous works, our approach develops the idea that motion information inferred from videos can be leveraged to discover meaningful object parts. To this end, our method relies on pairs of frames sampled from the same video. The network learns to predict part segments together with a representation of the motion between two frames, which permits reconstruction of the target image. Through extensive experimental evaluation on publicly available video sequences, we demonstrate that our approach produces improved segmentation maps with respect to previous self-supervised co-part segmentation approaches.
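The self-supervised objective described in the abstract — predict part masks and per-part motion from a frame pair, then score them by how well they reconstruct the target frame — can be illustrated with a minimal toy sketch. This is not the authors' implementation: the soft masks and the per-part integer translations stand in for the outputs of a hypothetical segmentation and motion network, and a simple photometric L2 loss stands in for the full reconstruction objective.

```python
import numpy as np

def soft_masks(logits):
    # Turn per-part logits of shape (K, H, W) into soft assignment masks
    # via a softmax over the K parts (masks sum to 1 at each pixel).
    e = np.exp(logits - logits.max(axis=0, keepdims=True))
    return e / e.sum(axis=0, keepdims=True)

def reconstruct(source, masks, shifts):
    # Warp the source frame once per part with that part's (integer)
    # translation, then composite the warped copies weighted by the masks.
    # Real methods would use differentiable warping with richer motion models.
    out = np.zeros_like(source)
    for mask, (dy, dx) in zip(masks, shifts):
        warped = np.roll(np.roll(source, dy, axis=0), dx, axis=1)
        out += mask * warped
    return out

def reconstruction_loss(target, recon):
    # Photometric L2 loss; in an end-to-end setting, its gradients would
    # drive both the segmentation and the motion predictions.
    return float(np.mean((target - recon) ** 2))
```

With a single part whose mask covers the frame and whose translation matches the true motion between the two frames, the reconstruction is exact and the loss is zero; any mask/motion mismatch raises the loss, which is the training signal that makes meaningful parts emerge.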

Original language: English
Title of host publication: Proceedings of ICPR 2020 - 25th International Conference on Pattern Recognition
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 9650-9657
Number of pages: 8
ISBN (Electronic): 9781728188089
DOIs
Publication status: Published - 2020
Externally published: Yes
Event: 25th International Conference on Pattern Recognition, ICPR 2020 - Virtual, Milan, Italy
Duration: 10 Jan 2021 - 15 Jan 2021

Conference

Conference: 25th International Conference on Pattern Recognition, ICPR 2020
Country/Territory: Italy
City: Virtual, Milan
Period: 10/01/21 - 15/01/21

Bibliographical note

Publisher Copyright:
© 2020 IEEE
