What We Don't C: Representations for scientific discovery beyond VAEs

Reading time: 1 minute

📝 Original Info

  • Title: What We Don’t C: Representations for scientific discovery beyond VAEs
  • ArXiv ID: 2511.09433
  • Date: 2025-11-12
  • Authors: Not available (the provided source text does not include author information)

📝 Abstract

Accessing information in learned representations is critical for scientific discovery in high-dimensional domains. We introduce a novel method based on latent flow matching with classifier-free guidance that disentangles latent subspaces by explicitly separating information included in conditioning from information that remains in the residual representation. Across three experiments -- a synthetic 2D Gaussian toy problem, colored MNIST, and the Galaxy10 astronomy dataset -- we show that our method enables access to meaningful features of high-dimensional data. Our results highlight a simple yet powerful mechanism for analyzing, controlling, and repurposing latent representations, providing a pathway toward using generative models for scientific exploration of what we don't capture, consider, or catalog.
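The abstract's core mechanism, classifier-free guidance applied to a flow-matching model, can be illustrated with a minimal sketch. Below, two hand-written linear velocity fields stand in for a trained conditional/unconditional model pair; the guided velocity extrapolates from the unconditional prediction toward the conditional one. All names (`v_cond`, `v_uncond`, `CLASS_MEANS`, the toy dynamics) are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Hypothetical stand-ins for a trained flow-matching model (assumption,
# not the paper's architecture): the conditional field flows toward a
# class-specific mean, the unconditional field toward the global mean.
CLASS_MEANS = {0: np.array([-2.0, 0.0]), 1: np.array([2.0, 0.0])}
GLOBAL_MEAN = np.array([0.0, 0.0])

def v_cond(x, t, label):
    # Straight-line (linear-interpolation) flow toward the class mean.
    return CLASS_MEANS[label] - x

def v_uncond(x, t):
    return GLOBAL_MEAN - x

def cfg_velocity(x, t, label, guidance_scale=2.0):
    # Classifier-free guidance: start from the unconditional velocity and
    # extrapolate in the direction of the conditional one.
    vu = v_uncond(x, t)
    vc = v_cond(x, t, label)
    return vu + guidance_scale * (vc - vu)

def sample(label, guidance_scale=2.0, steps=100, seed=0):
    # Integrate dx/dt = v(x, t) with forward Euler from t = 0 to t = 1,
    # starting from a standard-Gaussian prior sample.
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(2)
    dt = 1.0 / steps
    for i in range(steps):
        t = i * dt
        x = x + dt * cfg_velocity(x, t, label, guidance_scale)
    return x
```

With `guidance_scale > 1` the guided flow overshoots the class mean away from the global mean, which is the usual trade-off of stronger conditioning at the cost of sample diversity; the paper's contribution concerns what the conditioning *excludes*, i.e. the residual subspace left behind by this separation.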


Reference

This content is AI-processed based on open access ArXiv data.
