A Stable Variational Autoencoder for Text Modelling

Ruizhe Li*, Xiao Li, Chenghua Lin, Matthew Collinson, Rui Mao

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Published conference contribution

16 Citations (Scopus)

Abstract

The Variational Autoencoder (VAE) is a powerful method for learning representations of high-dimensional data. However, VAEs can suffer from an issue known as latent variable collapse (or KL loss vanishing), where the posterior collapses to the prior and the model ignores the latent codes in generative tasks. This issue is particularly prevalent when employing VAE-RNN architectures for text modelling (Bowman et al., 2016). In this paper, we present a simple architecture called the holistic regularisation VAE (HR-VAE), which can effectively avoid latent variable collapse. Compared to existing VAE-RNN architectures, we show that our model achieves a much more stable training process and generates text of significantly better quality.
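
For context, the sketch below (not the authors' code) is a minimal PyTorch VAE-RNN in the style of Bowman et al. (2016), assuming a GRU encoder/decoder, a standard Gaussian prior, and illustrative hyperparameters. It shows the ELBO being optimised; latent variable collapse corresponds to the KL term in elbo_loss shrinking towards zero, so the decoder stops using the latent code z.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SentenceVAE(nn.Module):
    """Minimal VAE-RNN for text (illustrative, not the HR-VAE model itself)."""
    def __init__(self, vocab_size, emb_dim=256, hid_dim=512, z_dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.to_mu = nn.Linear(hid_dim, z_dim)
        self.to_logvar = nn.Linear(hid_dim, z_dim)
        self.z_to_h = nn.Linear(z_dim, hid_dim)
        self.decoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, x):
        e = self.embed(x)                          # (batch, seq, emb_dim)
        _, h = self.encoder(e)                     # final encoder hidden state
        mu, logvar = self.to_mu(h[-1]), self.to_logvar(h[-1])
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterisation trick
        h0 = torch.tanh(self.z_to_h(z)).unsqueeze(0)            # init decoder from z
        dec, _ = self.decoder(e, h0)               # teacher-forced decoding (simplified)
        return self.out(dec), mu, logvar

def elbo_loss(logits, targets, mu, logvar, pad_idx=0):
    # Reconstruction term: token-level cross-entropy.
    rec = F.cross_entropy(logits.transpose(1, 2), targets,
                          ignore_index=pad_idx, reduction="sum")
    # KL(q(z|x) || N(0, I)); posterior collapse shows up as this term vanishing.
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kl
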
Original language: English
Title of host publication: Proceedings of the 12th International Conference on Natural Language Generation
Place of Publication: Tokyo, Japan
Publisher: Association for Computational Linguistics (ACL)
Pages: 594-599
Number of pages: 6
Edition: W19-8673
ISBN (Electronic): 9781950737949
DOIs
Publication status: Published - 30 Nov 2019
Event: The 12th International Conference on Natural Language Generation (INLG 2019) - National Museum of Emerging Science and Innovation (Miraikan), Tokyo, Japan
Duration: 29 Oct 2019 - 1 Nov 2019

Publication series

Name: INLG 2019 - 12th International Conference on Natural Language Generation, Proceedings of the Conference

Conference

Conference: The 12th International Conference on Natural Language Generation (INLG 2019)
Country/Territory: Japan
City: Tokyo
Period: 29/10/19 - 1/11/19

Bibliographical note

Acknowledgement
This work is supported by an award from the UK Engineering and Physical Sciences Research Council (Grant number: EP/P011829/1).