InfoVAE Connections to the Current Loss Structure
This note formalizes the relationship between the InfoVAE objective (Zhao et al., 2019) and the current disentanglement loss stack, showing that the existing infrastructure already supports InfoVAE-style training through independent tuning of existing hyperparameters.
The Two Roles of the KL Term
The standard ELBO's KL term simultaneously serves two distinct roles:
- Mutual information penalty: reduces the mutual information $I_q(x; z)$, discouraging the posterior $q_\phi(z \mid x)$ from encoding information about the data in $z$. This hurts disentanglement by pushing the posterior toward the prior and erasing strategy information.
- Marginal matching: pushes the aggregate posterior $q_\phi(z)$ toward the prior $p(z)$, ensuring that the prior is a good approximation of the marginal posterior for generation. This helps generation quality.
These roles conflict: reducing the KL improves generation-time prior quality but simultaneously discourages the posterior from being informative.
The InfoVAE Decomposition
InfoVAE separates these roles explicitly. The per-example KL, averaged over the data distribution $p_D(x)$, decomposes as:

$$\mathbb{E}_{p_D(x)}\big[D_{\mathrm{KL}}(q_\phi(z \mid x) \,\|\, p(z))\big] = I_q(x; z) + D_{\mathrm{KL}}(q_\phi(z) \,\|\, p(z))$$

where $q_\phi(z) = \mathbb{E}_{p_D(x)}[q_\phi(z \mid x)]$ is the aggregate posterior.
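The decomposition can be checked numerically in the discrete case. A minimal NumPy sketch on a toy problem (3 data points, 4 latent values, uniform prior; all names are illustrative):

```python
import numpy as np

# Toy discrete setup: rows of q_z_given_x are per-example posteriors q(z|x).
rng = np.random.default_rng(0)
q_z_given_x = rng.dirichlet(np.ones(4), size=3)
p_x = np.full(3, 1 / 3)   # empirical data distribution
p_z = np.full(4, 1 / 4)   # uniform prior

q_z = p_x @ q_z_given_x   # aggregate posterior q(z)

kl = lambda p, q: np.sum(p * np.log(p / q))
per_example_kl = np.sum(p_x * np.array([kl(row, p_z) for row in q_z_given_x]))
mutual_info = np.sum(p_x * np.array([kl(row, q_z) for row in q_z_given_x]))
marginal_kl = kl(q_z, p_z)

# E_x[KL(q(z|x) || p(z))] = I_q(x; z) + KL(q(z) || p(z))
assert np.isclose(per_example_kl, mutual_info + marginal_kl)
```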
InfoVAE replaces the standard ELBO with an objective that weights the two terms independently:

$$\mathcal{L}_{\mathrm{InfoVAE}} = \mathbb{E}_{q_\phi(z \mid x)}[\log p_\theta(x \mid z)] - \beta \, D_{\mathrm{KL}}(q_\phi(z \mid x) \,\|\, p(z)) - \lambda \, D(q_\phi(z) \,\|\, p(z))$$

where $D$ is a divergence measure (e.g., MMD) and $\beta$, $\lambda$ are hyperparameters.
Key regimes:
- $\beta = 1$, $\lambda = 0$: standard VAE (MI penalty active).
- $\beta = 0$, $\lambda > 0$: remove the MI penalty, keep only marginal matching via $D(q_\phi(z) \,\|\, p(z))$. This is the "MMD-VAE" regime.
- $\beta > 1$, $\lambda > 0$: $\beta$-VAE with extra marginal regularization.
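The regimes above differ only in how two weights are set on the same three loss statistics. A hedged sketch (the function and the dummy loss values are illustrative, not the project's API):

```python
def infovae_loss(recon_nll, per_example_kl, marginal_div, beta, marginal_weight):
    """Two-weight InfoVAE-style objective (to be minimized).

    beta scales the per-example KL; marginal_weight scales an explicit
    divergence between the aggregate posterior and the prior.
    """
    return recon_nll + beta * per_example_kl + marginal_weight * marginal_div

# Same dummy statistics, different regimes:
elbo = infovae_loss(1.0, 0.5, 0.2, beta=1.0, marginal_weight=0.0)     # standard VAE
mmd_vae = infovae_loss(1.0, 0.5, 0.2, beta=0.0, marginal_weight=1.0)  # MMD-VAE regime
```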
Mapping to Current Project Infrastructure
The current loss structure already has two independent controls that map onto the InfoVAE decomposition:
- `beta` corresponds to $\beta$: weight on the per-example KL term $D_{\mathrm{KL}}(q_\phi(z \mid x) \,\|\, p(z))$. Controls the MI penalty.
- `router_marginal_kl_to_prior_weight` corresponds to $\lambda$: weight on the marginal matching term $D_{\mathrm{KL}}(q_\phi(z) \,\|\, p(z))$. Controls how well the prior covers the latent space.
The existing infrastructure thus supports InfoVAE-style objectives by independently tuning these two weights. No code changes are required.
The InfoVAE Regime
The specific regime of interest is $\beta \approx 0$ combined with a strong marginal matching weight $\lambda$.
Effect of $\beta \approx 0$: Removes the MI penalty. The posterior is free to encode maximum information about $x$ without being pushed toward the prior. This should prevent the observed sampled-z probe decay during training, which is consistent with the KL term eroding strategy information that the posterior had learned.
Effect of strong marginal matching: Keeps the aggregate posterior $q_\phi(z)$ close to the uniform prior. This ensures that at generation time, sampling explores all latent values, not just a collapsed subset.
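For a discrete latent with a uniform prior, the marginal matching term can be computed from a batch of posteriors. A minimal NumPy sketch (function name and the `1e-12` stabilizer are illustrative):

```python
import numpy as np

def marginal_kl_to_uniform(q_z_given_x):
    """KL(q(z) || Uniform(K)) where q(z) is the batch-averaged posterior.

    q_z_given_x: (batch, K) array of per-example posterior probabilities.
    Near-zero values mean the aggregate posterior covers all K latent
    values, so prior sampling at generation time explores every slot.
    """
    q_z = q_z_given_x.mean(axis=0)              # aggregate posterior over the batch
    k = q_z_given_x.shape[1]
    return float(np.sum(q_z * np.log(q_z * k + 1e-12)))

# A collapsed aggregate posterior scores high; a spread-out one near zero.
collapsed = np.tile([0.97, 0.01, 0.01, 0.01], (8, 1))
spread = np.tile(np.eye(4), (2, 1))             # each latent value used equally
```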
Prior Experimental Gap
The Mar19 sparse loss sweep tested marginal KL weights at 0.1, 0.5, 1.0, but always alongside a fixed, non-negligible $\beta$. The interaction of very low $\beta$ with strong marginal matching was not explored. This is precisely the InfoVAE regime.
Compound Intervention: InfoVAE + Inter-Latent JSD
The InfoVAE regime and the inter-latent JSD loss (see
inter_latent_divergence.typ) address complementary failure modes:
- InfoVAE addresses "how much information flows through $z$": by removing the MI penalty, the posterior is free to encode strategy information without being pushed toward the prior.
- Inter-latent JSD addresses "what kind of information flows through $z$": by maximizing divergence between latent-conditioned distributions, the model is incentivized to use $z$ for strategy-discriminating information specifically, not just any predictive partition.
The combined objective for the discrete case (with $\beta = 0$ and a weight $\gamma$ on the JSD term) can be written as:

$$\mathcal{L} = \mathbb{E}_{q_\phi(z \mid x)}[\log p_\theta(x \mid z)] - \lambda \, D_{\mathrm{KL}}(q_\phi(z) \,\|\, p(z)) + \gamma \, \mathrm{JSD}\big(p_\theta(x \mid z = 1), \ldots, p_\theta(x \mid z = K)\big)$$
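The inter-latent JSD term follows directly from its definition as mixture entropy minus mean component entropy. A NumPy sketch (the helper is illustrative, not the project's implementation):

```python
import numpy as np

def jsd(dists, weights=None):
    """Jensen-Shannon divergence among K distributions (rows of `dists`).

    Large JSD means the latent-conditioned distributions disagree,
    i.e. z carries strategy-discriminating information.
    """
    dists = np.asarray(dists, dtype=float)
    k = len(dists)
    w = np.full(k, 1 / k) if weights is None else np.asarray(weights)
    mix = w @ dists
    entropy = lambda p: -np.sum(p * np.log(p + 1e-12))
    return entropy(mix) - np.sum(w * np.array([entropy(p) for p in dists]))

identical = jsd([[0.5, 0.5], [0.5, 0.5]])   # z uninformative: JSD is 0
disjoint = jsd([[1.0, 0.0], [0.0, 1.0]])    # maximally distinct: JSD is log 2
```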
MMD Alternative for Continuous Latents
For continuous latent variables, the marginal KL may not fully capture the aggregate posterior's distributional properties (it only matches first and second moments via the moment-matching approximation).
InfoVAE suggests replacing the marginal KL with Maximum Mean Discrepancy (MMD):

$$\mathrm{MMD}^2(q_\phi(z), p(z)) = \mathbb{E}_{z, z' \sim q_\phi}[k(z, z')] - 2\, \mathbb{E}_{z \sim q_\phi,\, z' \sim p}[k(z, z')] + \mathbb{E}_{z, z' \sim p}[k(z, z')]$$

where $k(\cdot, \cdot)$ is a positive-definite kernel (e.g., Gaussian or inverse multiquadratic).
Implementation: compute kernel-based MMD on minibatch samples. For a batch of $n$ posterior samples $z_1, \ldots, z_n \sim q_\phi(z)$ and $n$ prior samples $\tilde{z}_1, \ldots, \tilde{z}_n \sim p(z)$, the U-statistic estimator

$$\widehat{\mathrm{MMD}}^2 = \frac{1}{n(n-1)} \sum_{i \neq j} k(z_i, z_j) + \frac{1}{n(n-1)} \sum_{i \neq j} k(\tilde{z}_i, \tilde{z}_j) - \frac{2}{n^2} \sum_{i, j} k(z_i, \tilde{z}_j)$$

is unbiased.
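A NumPy sketch of the U-statistic estimator with a Gaussian kernel (the kernel choice and unit bandwidth are illustrative assumptions):

```python
import numpy as np

def mmd_unbiased(z_q, z_p, bandwidth=1.0):
    """Unbiased (U-statistic) squared-MMD estimate with a Gaussian kernel.

    z_q: (n, d) posterior samples; z_p: (m, d) prior samples.
    """
    def kernel(a, b):
        sq = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
        return np.exp(-sq / (2 * bandwidth ** 2))

    k_qq, k_pp, k_qp = kernel(z_q, z_q), kernel(z_p, z_p), kernel(z_q, z_p)
    n, m = len(z_q), len(z_p)
    # Dropping the diagonal from the within-sample terms keeps the estimate unbiased.
    term_qq = (k_qq.sum() - np.trace(k_qq)) / (n * (n - 1))
    term_pp = (k_pp.sum() - np.trace(k_pp)) / (m * (m - 1))
    return term_qq + term_pp - 2 * k_qp.mean()

rng = np.random.default_rng(0)
same = mmd_unbiased(rng.normal(size=(256, 2)), rng.normal(size=(256, 2)))
shifted = mmd_unbiased(rng.normal(size=(256, 2)), rng.normal(3.0, 1.0, size=(256, 2)))
```

`same` is near zero (both batches drawn from the same standard normal), while `shifted` is clearly positive, which is the behavior a marginal matching penalty needs.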
This is a natural extension when the continuous CVAE experiments move to the InfoVAE regime.
References
- Zhao, Song, Ermon. "InfoVAE: Balancing Learning and Inference in Variational Autoencoders." AAAI 2019.
- Tolstikhin, Bousquet, Gelly, Schölkopf. "Wasserstein Auto-Encoders." ICLR 2018. (related MMD-based approach)