Submitted by AutoModerator t3_xznpoh in MachineLearning
grid_world t1_isc9zs1 wrote
Variational autoencoders: automatic latent dimensionality selection
For a given dataset (say, CIFAR-10), if you intentionally make the latent space dimensionality large, e.g. 1000-d, I am assuming that during learning the model will automatically not use the dimensions it doesn't need to optimize the reconstruction and KL-divergence losses. Consequently, the posteriors over these unused dimensions will be equal to, or very close to, the standard Gaussian prior. Is my hand-wavy intuition correct? And if so, are there any research papers that demonstrate this?
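Not from the thread itself, but here is a minimal sketch of how one might check this premise empirically. For a Gaussian encoder, the KL term decomposes per dimension, so dimensions whose average KL against the N(0, 1) prior is near zero are exactly the "unused" ones the question describes. The tensors `mu` and `logvar` below stand in for a trained encoder's outputs (here faked with random data), and the 0.01 cutoff is an arbitrary threshold chosen for illustration:

```python
import torch

def per_dim_kl(mu: torch.Tensor, logvar: torch.Tensor) -> torch.Tensor:
    """Average KL(q(z_i | x) || N(0, 1)) for each latent dimension.

    mu, logvar: (batch, latent_dim) Gaussian encoder outputs.
    Returns a (latent_dim,) tensor; dimensions with KL ~ 0 match the
    prior and carry no information about x, i.e. they are "unused".
    """
    # Closed-form KL between N(mu, sigma^2) and N(0, 1), per dimension:
    # 0.5 * (mu^2 + sigma^2 - log sigma^2 - 1)
    kl = 0.5 * (mu.pow(2) + logvar.exp() - logvar - 1.0)
    return kl.mean(dim=0)

# Toy stand-in for encoder outputs: dims 0-9 are "active" (posterior
# deviates from the prior), the remaining 990 collapse to N(0, 1).
batch, d = 256, 1000
mu = torch.zeros(batch, d)
logvar = torch.zeros(batch, d)           # sigma = 1 everywhere
mu[:, :10] = torch.randn(batch, 10) * 3  # active dims encode information

kl = per_dim_kl(mu, logvar)
active = (kl > 0.01).sum().item()        # 0.01: arbitrary "near zero" cutoff
print(f"{active} of {d} dimensions have non-negligible KL")
```

Run on a real trained VAE (replacing the fake `mu`/`logvar` with encoder outputs over the dataset), this counts how many of the 1000 dimensions the model actually uses; the rest have collapsed to the prior as the question conjectures.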