Abstract
Recent co-part segmentation methods mostly operate in a supervised learning setting, which requires a large amount of annotated data for training. To overcome this limitation, we propose a self-supervised deep learning method for co-part segmentation. Unlike previous works, our approach develops the idea that motion information inferred from videos can be leveraged to discover meaningful object parts. To this end, our method relies on pairs of frames sampled from the same video. The network learns to predict part segments together with a representation of the motion between the two frames, which permits reconstruction of the target image. Through extensive experimental evaluation on publicly available video sequences, we demonstrate that our approach produces improved segmentation maps with respect to previous self-supervised co-part segmentation approaches.
| Original language | English |
|---|---|
| Title of host publication | Proceedings of ICPR 2020 - 25th International Conference on Pattern Recognition |
| Publisher | Institute of Electrical and Electronics Engineers Inc. |
| Pages | 9650-9657 |
| Number of pages | 8 |
| ISBN (Electronic) | 9781728188089 |
| DOIs | |
| Publication status | Published - 2020 |
| Externally published | Yes |
| Event | 25th International Conference on Pattern Recognition, ICPR 2020 - Virtual, Milan, Italy<br>Duration: 10 Jan 2021 → 15 Jan 2021 |
Conference
| Conference | 25th International Conference on Pattern Recognition, ICPR 2020 |
|---|---|
| Country/Territory | Italy |
| City | Virtual, Milan |
| Period | 10/01/21 → 15/01/21 |
Bibliographical note
Publisher Copyright: © 2020 IEEE