LouvreSAE: Sparse Autoencoders for Interpretable and Controllable Style Transfer
Reading time: 2 minutes
📝 Original Info
- Title: LouvreSAE: Sparse Autoencoders for Interpretable and Controllable Style Transfer
- ArXiv ID: 2512.18930
- Date: 2025-12-22
- Authors: Not specified in the provided paper data.
📝 Abstract
Artistic style transfer in generative models remains a significant challenge, as existing methods often introduce style only via model fine-tuning, additional adapters, or prompt engineering, all of which can be computationally expensive and may still entangle style with subject matter. In this paper, we introduce a training- and inference-light, interpretable method for representing and transferring artistic style. Our approach leverages an art-specific Sparse Autoencoder (SAE) on top of latent embeddings of generative image models. Trained on artistic data, our SAE learns an emergent, largely disentangled set of stylistic and compositional concepts, corresponding to style-related elements pertaining to brushwork, texture, and color palette, as well as semantic and structural concepts. We call it LouvreSAE and use it to construct style profiles: compact, decomposable steering vectors that enable style transfer without any model updates or optimization. Unlike prior concept-based style transfer methods, our method requires no fine-tuning, no LoRA training, and no additional inference passes, enabling direct steering of artistic styles from only a few reference images. We validate our method on ArtBench10, achieving or surpassing existing methods on style evaluations (VGG Style Loss and CLIP Score Style) while being 1.7-20x faster and, critically, interpretable.
💡 Deep Analysis
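The abstract's pipeline — encode latents with a sparse autoencoder, average the sparse codes of a few reference images into a compact "style profile", then steer generation by adding the decoded style direction — can be sketched as follows. This is a minimal illustration with NumPy, not the paper's implementation: the SAE weights here are random stand-ins, and all function names (`sae_encode`, `style_profile`, `steer`) and the top-k truncation heuristic are assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
d_latent, d_sparse = 64, 256  # toy dimensions; the real SAE is much larger

# Hypothetical trained SAE weights (random stand-ins here).
W_enc = rng.normal(0, 0.1, (d_latent, d_sparse))
b_enc = np.zeros(d_sparse)
W_dec = rng.normal(0, 0.1, (d_sparse, d_latent))

def sae_encode(z):
    # ReLU encoder yields a sparse, non-negative vector of concept activations.
    return np.maximum(z @ W_enc + b_enc, 0.0)

def style_profile(ref_latents, top_k=16):
    # Average sparse codes over a few reference images, then keep only the
    # top-k most active features as the compact, decomposable style profile.
    codes = np.stack([sae_encode(z) for z in ref_latents]).mean(axis=0)
    profile = np.zeros_like(codes)
    idx = np.argsort(codes)[-top_k:]
    profile[idx] = codes[idx]
    return profile

def steer(z, profile, strength=1.0):
    # Add the decoded style direction to a content latent:
    # no fine-tuning, no LoRA, no extra inference passes.
    return z + strength * (profile @ W_dec)

refs = [rng.normal(size=d_latent) for _ in range(3)]  # few reference images
profile = style_profile(refs)
z_styled = steer(rng.normal(size=d_latent), profile, strength=0.8)
print(z_styled.shape)  # (64,)
```

Because the profile lives in the SAE's concept space, individual entries can be inspected or zeroed out, which is what makes the steering interpretable and decomposable.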
📄 Full Content
Reference
This content is AI-processed based on open access ArXiv data.