Abstract
Contrastive instance discrimination methods outperform supervised learning in downstream tasks such as image classification and object detection. However, these methods rely heavily on data augmentation during representation learning, which can lead to suboptimal results if not implemented carefully. A common augmentation technique in contrastive learning is random cropping followed by resizing. This can degrade the quality of representation learning when the two random crops contain distinct semantic content. To tackle this issue, we introduce LeOCLR (Leveraging Original Images for Contrastive Learning of Visual Representations), a framework that employs a novel instance discrimination approach and an adapted loss function. This method prevents the loss of important semantic features caused by mapping different object parts during representation learning. Our experiments demonstrate that LeOCLR consistently improves representation learning across various datasets, outperforming baseline models. For instance, LeOCLR surpasses MoCo-v2 by 5.1% on ImageNet-1K in linear evaluation and outperforms several other methods on transfer learning and object detection tasks.
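The abstract does not give the adapted loss itself, but the family of methods it builds on (e.g. MoCo-v2) trains with an InfoNCE-style contrastive objective: embeddings of two views of the same image are pulled together while embeddings of other images are pushed apart. The sketch below is an illustrative, generic InfoNCE loss in NumPy, not LeOCLR's actual adapted loss; the function name `info_nce_loss` and the temperature value are assumptions for illustration. In LeOCLR's setting, `positives` could be embeddings of the original (uncropped) images rather than a second random crop, which is the intuition behind avoiding mismatched semantic content between views.

```python
import numpy as np

def info_nce_loss(anchors, positives, temperature=0.2):
    """Generic InfoNCE contrastive loss (illustrative, not LeOCLR's exact loss).

    Row i of `anchors` is matched with row i of `positives`; all other rows
    in `positives` act as negatives for that anchor.
    """
    # L2-normalize so dot products are cosine similarities
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature                    # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)       # numerical stability
    log_softmax = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Diagonal entries correspond to the anchor/positive pairs
    return -np.mean(np.diag(log_softmax))

# Matched pairs yield a lower loss than randomly paired embeddings
rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
loss_matched = info_nce_loss(z, z)
loss_random = info_nce_loss(z, rng.normal(size=(8, 16)))
```

Under this objective, a crop containing a different object part than its paired view still gets pulled toward it, which is the failure mode the paper targets by anchoring views to the original image.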
Field | Value
---|---
Original language | English
Article number | 3016
Pages (from-to) | 1-16
Number of pages | 16
Journal | Transactions on Machine Learning Research
Publication status | Published - 15 Oct 2024
Bibliographical note
We would like to thank the University of Aberdeen's High Performance Computing facility for enabling this work and the anonymous reviewers for their constructive feedback.
Keywords
- Machine learning
- Deep learning
- Self-supervised learning