IBNorm: Information-Bottleneck Inspired Normalization for Representation Learning
📝 Original Info
- Title: IBNorm: Information-Bottleneck Inspired Normalization for Representation Learning
- ArXiv ID: 2510.25262
- Date: 2025-10-29
- Authors: Not provided in the source abstract (no author information available)
📝 Abstract
Normalization is fundamental to deep learning, but existing approaches such as BatchNorm, LayerNorm, and RMSNorm are variance-centric, enforcing zero mean and/or unit variance to stabilize training without controlling how representations capture task-relevant information. We propose IB-Inspired Normalization (IBNorm), a simple yet powerful family of methods grounded in the Information Bottleneck principle. IBNorm introduces bounded compression operations that encourage embeddings to preserve predictive information while suppressing nuisance variability, yielding more informative representations while retaining the stability and compatibility of standard normalization. Theoretically, we prove that IBNorm achieves a higher IB value and tighter generalization bounds than variance-centric methods. Empirically, IBNorm consistently outperforms BatchNorm, LayerNorm, and RMSNorm across large-scale language models (LLaMA, GPT-2) and vision models (ResNet, ViT), with mutual information analysis confirming superior information bottleneck behavior. Code will be released publicly.
💡 Deep Analysis
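The abstract describes IBNorm only at a high level (a "bounded compression" step layered on top of standard normalization), and the paper's actual formulation is not included on this page. The sketch below is therefore only a minimal PyTorch illustration of the general idea, not the authors' method: the tanh squashing, the learnable temperature, and the class name `BoundedCompressionNorm` are assumptions made for illustration, contrasted against a plain variance-centric RMSNorm.

```python
# Illustrative sketch only: the exact IBNorm operation is not given in the
# text above. "BoundedCompressionNorm" is a hypothetical variant that adds a
# bounded (saturating) squashing step after RMS scaling, as one way a
# "bounded compression" of embeddings could be realized.
import torch
import torch.nn as nn


class RMSNorm(nn.Module):
    """Standard variance-centric normalization: rescale features to unit RMS."""

    def __init__(self, dim: int, eps: float = 1e-6):
        super().__init__()
        self.eps = eps
        self.weight = nn.Parameter(torch.ones(dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        rms = torch.rsqrt(x.pow(2).mean(dim=-1, keepdim=True) + self.eps)
        return self.weight * (x * rms)


class BoundedCompressionNorm(nn.Module):
    """Hypothetical IB-inspired variant (NOT the paper's IBNorm).

    After RMS scaling, a tanh with a learnable temperature bounds each
    activation, capping how much any single coordinate can vary, while the
    learnable output gain keeps the layer expressive.
    """

    def __init__(self, dim: int, eps: float = 1e-6):
        super().__init__()
        self.eps = eps
        self.weight = nn.Parameter(torch.ones(dim))
        self.temperature = nn.Parameter(torch.ones(1))  # assumed, for illustration

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        rms = torch.rsqrt(x.pow(2).mean(dim=-1, keepdim=True) + self.eps)
        z = x * rms
        # Bounded compression step: the saturating nonlinearity suppresses
        # nuisance variability; the temperature controls how hard it compresses.
        return self.weight * torch.tanh(z / self.temperature.clamp_min(self.eps))


if __name__ == "__main__":
    x = torch.randn(4, 16, 64)                      # (batch, tokens, hidden)
    print(RMSNorm(64)(x).shape)                     # torch.Size([4, 16, 64])
    print(BoundedCompressionNorm(64)(x).shape)      # torch.Size([4, 16, 64])
```

The saturating step is only one possible reading of "bounded compression"; the released code should be consulted for the operation the authors actually use.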
📄 Full Content
Reference
This content is AI-processed based on open access ArXiv data.