Learning Stable Representations with Full Encoder

25 Mar 2021 · Zhouzheng Li, Kun Feng

While the beta-VAE family aims to find disentangled representations and recover human-interpretable generative factors, much as ICA does in the linear domain, we propose the Full Encoder, a novel unified autoencoder framework that serves as a non-linear counterpart to PCA. The idea is to first train an autoencoder with a single latent variable, then progressively introduce additional latent variables to refine the reconstruction. The Full Encoder is also a latent-variable predictive model whose acquired latent variables are stable and robust: they learn the same representation regardless of the network's initial state. The Full Encoder can be used to determine the degrees of freedom of a simple non-linear system, and it can be useful for data compression or anomaly detection. It can also be combined with the beta-VAE framework to rank generative factors by importance, providing further insight for non-linear system analysis. These qualities make the Full Encoder useful for analyzing real-life industrial non-linear systems. To validate the approach, we created a toy dataset from a custom-made non-linear system and compared the Full Encoder's properties with those of the VAE and beta-VAE.
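
The progressive training scheme described above can be sketched in a few lines of PyTorch. This is a minimal illustration under our own assumptions, not the authors' implementation: the MLP architecture, the masking of inactive latents, and all names (FullEncoder, train_progressively, n_active) are hypothetical.

    # Minimal sketch of progressive latent-variable training, NOT the paper's
    # reference code. Architecture, masking scheme, and all names are assumptions.
    import torch
    import torch.nn as nn

    class FullEncoder(nn.Module):
        def __init__(self, input_dim, max_latents, hidden=64):
            super().__init__()
            self.max_latents = max_latents
            self.encoder = nn.Sequential(
                nn.Linear(input_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, max_latents),
            )
            self.decoder = nn.Sequential(
                nn.Linear(max_latents, hidden), nn.ReLU(),
                nn.Linear(hidden, input_dim),
            )

        def forward(self, x, n_active):
            z = self.encoder(x)
            # Zero out latents beyond the first n_active, so early stages train
            # with fewer degrees of freedom and later stages only refine.
            mask = torch.zeros_like(z)
            mask[:, :n_active] = 1.0
            return self.decoder(z * mask), z

    def train_progressively(model, batches, epochs_per_stage=50, lr=1e-3):
        """Train with one latent first, then unmask one more latent per stage."""
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        loss_fn = nn.MSELoss()
        for n_active in range(1, model.max_latents + 1):
            for _ in range(epochs_per_stage):
                for x in batches:  # each x: tensor of shape (batch, input_dim)
                    recon, _ = model(x, n_active)
                    loss = loss_fn(recon, x)
                    opt.zero_grad()
                    loss.backward()
                    opt.step()

One point the abstract leaves open is whether latents from earlier stages are frozen when a new one is introduced; the sketch simply keeps all parameters trainable at every stage.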
