S-JEA: Stacked Joint Embedding Architectures for Self-Supervised Visual Representation Learning

Alzbeta Manova, Aiden Durrant, Georgios Leontidis*

*Corresponding author for this work

Research output: Working paper › Preprint


Abstract

The recent emergence of Self-Supervised Learning (SSL) as a fundamental paradigm for learning image representations has demonstrated, and continues to demonstrate, high empirical success across a variety of tasks. However, most SSL approaches fail to learn embeddings that capture hierarchical semantic concepts in a separable and interpretable way. In this work, we aim to learn highly separable, hierarchical semantic representations by stacking Joint Embedding Architectures (JEAs), where each higher-level JEA takes as input the representations produced by the JEA below it. This yields a representation space in which higher-level JEAs exhibit distinct sub-categories of semantic concepts (e.g., the model and colour of vehicles). We empirically show that representations from stacked JEAs perform on par with those of a traditional JEA of comparable parameter count, and we visualise the representation spaces to validate the semantic hierarchies.
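The stacking idea in the abstract — a higher-level JEA consuming the representations of the level below, with one self-supervised agreement objective per level — can be illustrated with a minimal PyTorch sketch. The MLP encoders, layer sizes, and the bare negative-cosine loss below are assumptions for illustration only; the paper's actual backbones, objective, and collapse-prevention mechanism (e.g., stop-gradient with a predictor head) are not reproduced here.

```python
# Minimal sketch of stacked JEAs, assuming simple MLP encoders and a
# negative-cosine agreement loss. Collapse prevention is omitted for brevity.
import torch
import torch.nn as nn
import torch.nn.functional as F

class JEA(nn.Module):
    """One joint-embedding level: an encoder producing a representation
    that is passed up the stack, plus a projector used by the SSL loss."""
    def __init__(self, in_dim, rep_dim, proj_dim):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, rep_dim), nn.ReLU(), nn.Linear(rep_dim, rep_dim)
        )
        self.projector = nn.Sequential(
            nn.Linear(rep_dim, proj_dim), nn.ReLU(), nn.Linear(proj_dim, proj_dim)
        )

    def forward(self, x):
        rep = self.encoder(x)       # fed to the next level of the stack
        proj = self.projector(rep)  # compared across augmented views
        return rep, proj

def agreement(p1, p2):
    # Embeddings of two augmented views of the same input should agree.
    return -F.cosine_similarity(p1, p2, dim=-1).mean()

low = JEA(in_dim=3 * 32 * 32, rep_dim=512, proj_dim=128)   # level 1
high = JEA(in_dim=512, rep_dim=512, proj_dim=128)          # level 2

# Two augmented views of the same (flattened) image batch.
view1, view2 = torch.randn(8, 3 * 32 * 32), torch.randn(8, 3 * 32 * 32)

rep1, p1 = low(view1)
rep2, p2 = low(view2)
_, hp1 = high(rep1)   # higher level consumes lower-level representations
_, hp2 = high(rep2)

# One agreement term per level; higher levels can then separate finer
# sub-categories (e.g., vehicle model and colour) in their own space.
loss = agreement(p1, p2) + agreement(hp1, hp2)
loss.backward()
```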
Original language: English
Publisher: ArXiv
Pages: 1-9
Number of pages: 9
DOIs
Publication status: Published - 19 May 2023

Keywords

  • Deep Learning
  • Self-Supervised Learning
  • Computer vision

  • "Maxwell" HPC for Research

    Katie Wilde (Manager) & Andrew Phillips (Manager)

    Research Facilities: Facility
