Unity by Diversity: Improved Representation Learning in Multimodal VAEs
Published in Advances in Neural Information Processing Systems 38 (NeurIPS 2024), 2024
Variational Autoencoders for multimodal data hold promise for many tasks in data analysis, such as representation learning, conditional generation, and imputation. Current architectures share the encoder output, the decoder input, or both across modalities to learn a shared representation, which imposes hard constraints on the model. In this work, we show that a better latent representation can be obtained by replacing these hard constraints with a soft constraint. We propose a new mixture-of-experts prior that softly guides each modality’s latent representation towards a shared aggregate posterior. This approach yields a superior latent representation and allows each encoding to better preserve information from its uncompressed original features. In extensive experiments on multiple benchmark datasets and a challenging real-world neuroscience dataset, we show improved learned latent representations and imputation of missing data modalities compared to existing methods.
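The abstract's soft constraint can be read as a KL-style regularizer that pulls each modality's encoder posterior towards a mixture-of-experts aggregate of all modality posteriors. Below is a minimal sketch of that idea, assuming diagonal Gaussian encoders and a simple Monte Carlo estimate; the function name `soft_moe_regularizer` and its arguments are illustrative assumptions, not the authors' released code.

```python
# Minimal sketch (not the authors' implementation): Monte Carlo estimate of a
# soft constraint KL( q_m(z|x_m) || (1/M) * sum_m' q_m'(z|x_m') ) that nudges
# each modality's Gaussian posterior towards the mixture-of-experts aggregate.
import torch
from torch.distributions import Normal


def soft_moe_regularizer(mus, logvars, num_samples=1):
    """mus, logvars: lists of [batch, latent_dim] tensors, one per modality."""
    num_modalities = len(mus)
    posteriors = [Normal(mu, (0.5 * lv).exp()) for mu, lv in zip(mus, logvars)]

    reg = 0.0
    for q_m in posteriors:
        z = q_m.rsample((num_samples,))        # [S, batch, latent_dim]
        log_q_m = q_m.log_prob(z).sum(-1)      # log q_m(z), shape [S, batch]
        # Log-density of the mixture-of-experts aggregate: (1/M) * sum_m' q_m'(z)
        log_mix = torch.logsumexp(
            torch.stack([q.log_prob(z).sum(-1) for q in posteriors], dim=0),
            dim=0,
        ) - torch.log(torch.tensor(float(num_modalities)))
        # Monte Carlo KL(q_m || mixture), averaged over samples and batch
        reg = reg + (log_q_m - log_mix).mean()

    return reg / num_modalities
```

In a full multimodal VAE objective, a term like this would typically be added to the per-modality reconstruction losses with a weighting coefficient, replacing the hard parameter-sharing constraints mentioned above with a soft pull towards the shared aggregate.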
Keywords: Multimodal Learning; Representation Learning; Variational Autoencoders; Contrastive Learning; VampPrior; Text-to-Image Generation; Neuroscience
Recommended citation: Sutter T. M., Meng Y., Agostini A., Chopard D., Fortin N., Vogt J. E., Shahbaba B., Mandt S. (2024). "Unity by Diversity: Improved Representation Learning in Multimodal VAEs." arXiv preprint arXiv:2403.05300, 2024. https://arxiv.org/pdf/2403.05300.pdf