Simplifying Open-Set Video Domain Adaptation with Contrastive Learning

Giacomo Zara* (Corresponding Author), Victor Guilherme Turrisi da Costa, Subhankar Roy, Paolo Rota, Elisa Ricci

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

In an effort to reduce annotation costs in action recognition, unsupervised video domain adaptation methods have been proposed that aim to adapt a predictive model from a labelled dataset (i.e., source domain) to an unlabelled dataset (i.e., target domain). In this work we address a more realistic scenario, called open-set video domain adaptation (OUVDA), where the target dataset contains "unknown" semantic categories that are not shared with the source. The challenge lies in aligning the shared classes of the two domains while separating them from the unknown ones. We propose to address OUVDA with a unified contrastive learning framework that learns discriminative and well-clustered features. We also propose a video-oriented temporal contrastive loss that enables our method to better cluster the feature space by exploiting the temporal information freely available in video data. We show that a discriminative feature space facilitates better separation of the unknown classes, and thereby allows us to use a simple similarity-based score to identify them. We conduct a thorough experimental evaluation on multiple OUVDA benchmarks and demonstrate the effectiveness of our proposed method against the prior art.
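To make the two mechanisms named in the abstract concrete, below is a minimal PyTorch sketch of (1) a temporal contrastive loss in which two clips sampled from the same video form a positive pair (an NT-Xent-style formulation), and (2) a similarity-based score that flags target samples as "unknown" when their maximum cosine similarity to the source class prototypes is low. All function names, tensor shapes, and the threshold are illustrative assumptions, not the authors' actual implementation.

```python
# Hypothetical sketch: temporal contrastive loss + similarity-based
# unknown rejection, as described at a high level in the abstract.
import torch
import torch.nn.functional as F


def temporal_contrastive_loss(z_a, z_b, temperature=0.1):
    """NT-Xent loss over two batches of clip embeddings.

    z_a, z_b: (N, D) embeddings of two clips drawn from the same N videos;
    row i of z_a and row i of z_b form a positive pair, and all other
    rows in the combined batch act as negatives.
    """
    z_a = F.normalize(z_a, dim=1)
    z_b = F.normalize(z_b, dim=1)
    n = z_a.size(0)
    z = torch.cat([z_a, z_b], dim=0)              # (2N, D)
    sim = z @ z.t() / temperature                 # (2N, 2N) cosine logits
    sim.fill_diagonal_(float('-inf'))             # exclude self-similarity
    # The positive of sample i is sample i+N (and vice versa).
    targets = torch.cat([torch.arange(n) + n, torch.arange(n)]).to(z.device)
    return F.cross_entropy(sim, targets)


def unknown_scores(target_feats, class_prototypes):
    """Max cosine similarity of each target feature to source prototypes.

    Low scores indicate likely "unknown" (target-private) samples.
    """
    t = F.normalize(target_feats, dim=1)          # (M, D)
    p = F.normalize(class_prototypes, dim=1)      # (C, D)
    return (t @ p.t()).max(dim=1).values          # (M,)


if __name__ == "__main__":
    # Random tensors stand in for clip features from a video backbone.
    z_a, z_b = torch.randn(8, 128), torch.randn(8, 128)
    print(temporal_contrastive_loss(z_a, z_b))
    scores = unknown_scores(torch.randn(16, 128), torch.randn(10, 128))
    known_mask = scores > 0.5                     # illustrative threshold
```

The threshold on the similarity score would in practice be tuned on validation data or set adaptively; the sketch only shows how a well-clustered feature space makes a single cosine-similarity comparison sufficient to separate shared from unknown classes.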
Original language: English
Article number: 103953
Number of pages: 10
Journal: Computer Vision and Image Understanding
Volume: 241
Early online date: 16 Feb 2024
Publication status: Published - 1 Apr 2024
Externally published: Yes

Bibliographical note

We acknowledge the support of the MUR PNRR project FAIR - Future AI Research (PE00000013), funded by NextGenerationEU. E.R. is partially supported by the PRECRISIS project, funded by the EU Internal Security Fund (ISFP-2022-TFI-AG-PROTECT-02-101100539), by the EU project SPRING (No. 871245), and by the PRIN project LEGO-AI (Prot. 2020TA3K9N). The work was carried out in the Vision and Learning joint laboratory of FBK and UNITN, and supported by the Caritro Deep Learning lab of the ProM facility.

Data Availability Statement

The data used in the experimental evaluation are publicly available and were used as provided by their original releasers, following the experimental settings declared in previous works.

Keywords

  • Open-set video domain adaptation
  • Video action recognition
  • Contrastive learning
