Image Synthesis with a Convolutional Capsule Generative Adversarial Network

Cher Bass, Tianhong Dai, Benjamin Billot, Kai Arulkumaran, Antonia Creswell, Claudia Clopath, Vincenzo De Paola, Anil Anthony Bharath

Research output: Chapter in Book/Report/Conference proceeding › Published conference contribution

Abstract

Machine learning for biomedical imaging often suffers from a lack of labelled training data. One solution is to use generative models to synthesise more data. To this end, we introduce CapsPix2Pix, which combines convolutional capsules with the pix2pix framework to synthesise images conditioned on class segmentation labels. We apply our approach to a new biomedical dataset of cortical axons imaged by two-photon microscopy, as a method of data augmentation for small datasets. We evaluate performance both qualitatively and quantitatively. Quantitative evaluation is performed by using image data generated by either CapsPix2Pix or pix2pix to train a U-net on a segmentation task, then testing it on real microscopy data. Our method quantitatively performs as well as pix2pix, with an order of magnitude fewer parameters. Additionally, CapsPix2Pix is far more capable of synthesising images with different appearances but the same underlying geometry. Finally, qualitative analysis of the features learned by CapsPix2Pix suggests that individual capsules capture diverse and often semantically meaningful groups of features, covering structures such as synapses and axons, as well as noise.
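To make the architecture concrete, the sketch below shows a simplified convolutional capsule layer of the kind CapsPix2Pix builds its generator from, written in PyTorch. This is an illustrative reduction, not the published model: the capsule dimensions, the `ConvCapsule` and `squash` names, and the omission of dynamic routing are all assumptions made for brevity.

```python
# Minimal sketch of a convolutional capsule layer, assuming PyTorch.
# Simplified relative to the paper: no dynamic routing, arbitrary sizes.
import torch
import torch.nn as nn


def squash(x, dim, eps=1e-8):
    # Capsule nonlinearity: scales each pose vector to a length in [0, 1)
    # while preserving its orientation.
    sq_norm = (x ** 2).sum(dim=dim, keepdim=True)
    return (sq_norm / (1.0 + sq_norm)) * x / torch.sqrt(sq_norm + eps)


class ConvCapsule(nn.Module):
    """A convolutional capsule layer: each output capsule is a feature map
    whose channels form a pose vector, produced by a shared convolution
    over all input capsules (routing omitted for brevity)."""

    def __init__(self, in_caps, in_dim, out_caps, out_dim, kernel_size=3):
        super().__init__()
        self.out_caps, self.out_dim = out_caps, out_dim
        self.conv = nn.Conv2d(in_caps * in_dim, out_caps * out_dim,
                              kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        # x: (batch, in_caps, in_dim, H, W); fold capsules into channels.
        b, c, d, h, w = x.shape
        out = self.conv(x.reshape(b, c * d, h, w))
        out = out.reshape(b, self.out_caps, self.out_dim, h, w)
        return squash(out, dim=2)  # squash each capsule's pose vector


# As in pix2pix, the generator is conditioned on the segmentation label map;
# here a one-channel label map is treated as a single input capsule.
labels = torch.rand(4, 1, 1, 64, 64)   # (batch, caps, dim, H, W)
layer = ConvCapsule(in_caps=1, in_dim=1, out_caps=4, out_dim=8)
print(layer(labels).shape)             # torch.Size([4, 4, 8, 64, 64])
```

Stacking such layers in an encoder-decoder generator, with a conventional convolutional discriminator, yields a conditional GAN in the pix2pix mould while sharing parameters across capsules, which is consistent with the order-of-magnitude parameter saving reported above.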
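The quantitative evaluation described in the abstract can likewise be sketched as a short protocol: train a segmentation network only on GAN-synthesised (image, label) pairs, then score it on held-out real microscopy data. The tiny stand-in networks, the random label maps, and the Dice helper below are placeholders for illustration, not the authors' code.

```python
# Sketch of the evaluation protocol: U-net trained on synthetic data,
# tested on real data. All models and tensors here are placeholders.
import torch
import torch.nn as nn


def dice_score(pred, target, eps=1e-8):
    # Dice overlap between a binary prediction and the ground-truth mask.
    inter = (pred * target).sum()
    return (2 * inter + eps) / (pred.sum() + target.sum() + eps)


# Stand-ins for the trained generator and the U-net (assumed interfaces:
# label map in, image out; image in, soft mask out).
generator = nn.Sequential(nn.Conv2d(1, 1, 3, padding=1), nn.Sigmoid())
unet = nn.Sequential(nn.Conv2d(1, 1, 3, padding=1), nn.Sigmoid())
optimiser = torch.optim.Adam(unet.parameters(), lr=1e-4)
bce = nn.BCELoss()

# 1) Train the segmenter purely on synthetic pairs: take label maps,
#    synthesise matching images, and supervise with the same labels.
for _ in range(100):
    label_maps = (torch.rand(8, 1, 64, 64) > 0.9).float()
    with torch.no_grad():
        synthetic = generator(label_maps)   # image conditioned on labels
    loss = bce(unet(synthetic), label_maps)
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()

# 2) Test on real microscopy data (random placeholder tensors here).
real_image = torch.rand(1, 1, 64, 64)
real_mask = (torch.rand(1, 1, 64, 64) > 0.9).float()
with torch.no_grad():
    pred = (unet(real_image) > 0.5).float()
print("Dice on real data:", dice_score(pred, real_mask).item())
```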
Original language: English
Title of host publication: Proceedings of The 2nd International Conference on Medical Imaging with Deep Learning
Editors: Gozde Unal, Tom Vercauteren
Publisher: PMLR
Pages: 39-62
Number of pages: 24
Volume: 102
Publication status: Published - 1 Aug 2019

Bibliographical note

© 2019 C. Bass, T. Dai, B. Billot, K. Arulkumaran, A. Creswell, C. Clopath, V. De Paola & A.A. Bharath.
