Visual Transformer for Task-aware Active Learning

Razvan Caramalau*, Binod Bhattarai, Tae-Kyun Kim

*Corresponding author for this work

Research output: Working paper › Preprint

Abstract

Pool-based sampling in active learning (AL) represents a key framework for annotating informative data when dealing with deep learning models. In this paper, we present a novel pipeline for pool-based Active Learning. Unlike most previous works, our method exploits the accessible unlabelled examples during training to estimate their correlation with the labelled examples. Another contribution of this paper is to adapt the Visual Transformer as a sampler in the AL pipeline. The Visual Transformer models non-local visual concept dependencies between labelled and unlabelled examples, which is crucial for identifying the most influential unlabelled examples. Moreover, whereas existing methods train the learner and the sampler in a multi-stage manner, we propose to train them jointly in a task-aware manner, which shapes the latent space to serve two separate tasks: one that classifies the labelled examples, and one that distinguishes labelled from unlabelled examples. We evaluated our work on five challenging classification and detection benchmarks, viz. CIFAR10, CIFAR100, FashionMNIST, RaFD, and Pascal VOC 2007. Our extensive empirical and qualitative evaluations demonstrate the superiority of our method over existing approaches.
Original language: English
Publisher: arXiv
Number of pages: 12
Publication status: Published - 7 Jun 2021

Publication series

Name: arXiv preprint arXiv:2106.03801

Bibliographical note

Code availability: https://github.com/razvancaramalau/Visual-Transformer-for-Task-aware-Active-Learning
