Rethinking Test-Time Training: Tilting The Latent Distribution For Few-Shot Source-Free Adaptation
Often, constraints arise in deployment settings where even lightweight parameter updates (e.g., parameter-efficient fine-tuning) could induce model shift or tuning instability. We study test-time adaptation of foundation models for few-shot classification under a completely frozen-model regime in which, additionally, no upstream data are accessible. We propose arguably the first training-free inference method that adapts predictions to the new task by performing a change of measure over the latent embedding distribution induced by the encoder. Using task-similarity scores derived from a small labeled support set, exponential tilting reweights the latent distribution in a KL-optimal manner without modifying any model parameters. Empirically, the method consistently competes with parameter-update-based methods across multiple benchmarks and shot regimes, while operating under strictly stronger constraints. These results demonstrate the viability of inference-level distributional correction for test-time adaptation even with a fully frozen model pipeline.
💡 Research Summary
The paper tackles the problem of test‑time adaptation (TTA) under the most restrictive setting: both the feature encoder and the classifier head are completely frozen, no gradients or optimizer states are allowed at inference, and only a few labeled examples (few‑shot support set) are available for the downstream task. Instead of updating model parameters, the authors propose to adapt predictions by re‑weighting the probability distribution over latent representations induced by the frozen encoder.
Formally, let \(P_0(z)\) denote the empirical distribution of latent vectors \(z = f(x)\) produced by the fixed encoder on the available data. Given a scalar task-relevant score function \(s(z)\) built from the support set, they pose a variational problem: find the distribution \(P(z)\) that satisfies an expectation constraint \(\mathbb{E}_{P}[s(z)] = \tau\) (for some target level \(\tau\); the original text is truncated here) while remaining as close as possible to \(P_0\) in KL divergence. By a standard argument, the KL-optimal solution is the exponentially tilted distribution \(P_\lambda(z) \propto e^{\lambda s(z)}\, P_0(z)\), where the tilt parameter \(\lambda\) is chosen so that the constraint holds.
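The tilting step can be sketched numerically. The snippet below is a minimal illustration, not the paper's implementation: it assumes latent embeddings from a frozen encoder, uses cosine similarity to a support-set prototype as a stand-in for the task-relevance score \(s(z)\), and reweights the empirical latent distribution via \(w_i \propto e^{\lambda s(z_i)}\). The names (`proto`, `lam`) and the choice of score are illustrative assumptions.

```python
import numpy as np

def exponential_tilt_weights(scores, lam):
    """KL-optimal reweighting of an empirical distribution:
    w_i proportional to exp(lam * s(z_i)), normalized to sum to 1."""
    logits = lam * np.asarray(scores, dtype=float)
    logits -= logits.max()  # subtract max for numerical stability
    w = np.exp(logits)
    return w / w.sum()

# Toy latents standing in for frozen-encoder outputs (hypothetical data).
rng = np.random.default_rng(0)
Z = rng.normal(size=(6, 2))

# Illustrative score s(z): cosine similarity to a single support prototype.
proto = np.array([1.0, 0.0])
scores = Z @ proto / (np.linalg.norm(Z, axis=1) * np.linalg.norm(proto))

# lam = 0 recovers the untilted (uniform empirical) weights;
# larger lam concentrates mass on latents most similar to the support set.
w = exponential_tilt_weights(scores, lam=2.0)
```

In practice \(\lambda\) would be tuned so the tilted expectation \(\sum_i w_i\, s(z_i)\) hits the target \(\tau\), e.g. by a one-dimensional root-finding step; no encoder or classifier parameter is touched.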