Subliminal Effects in Your Data: A General Mechanism via Log-Linearity
Training modern large language models (LLMs) has become a veritable smorgasbord of algorithms and datasets designed to elicit particular behaviors, making it critical to develop techniques to understand the effects of datasets on a model’s properties. This is exacerbated by recent experiments showing that datasets can transmit signals that are not directly observable from individual datapoints, posing a conceptual challenge for dataset-centric understandings of LLM training and suggesting that a fundamental account of such phenomena is missing. Towards understanding such effects, and inspired by recent work on the linear structure of LLMs, we uncover a general mechanism through which hidden subtexts can arise in generic datasets. We introduce Logit-Linear-Selection (LLS), a method that prescribes how to select subsets of a generic preference dataset to elicit a wide range of hidden effects. We apply LLS to discover subsets of real-world datasets such that models trained on them exhibit behaviors ranging from holding specific preferences, to responding to prompts in a language not present in the dataset, to taking on a different persona. Crucially, the effect of a selected subset persists across models with varying architectures, supporting the generality of the mechanism.
💡 Research Summary
The paper tackles a puzzling phenomenon observed in large language model (LLM) training: datasets can embed “subliminal” signals that are not obvious from any individual example, yet after fine‑tuning the model exhibits systematic behaviors aligned with those hidden signals. The authors propose a unified theoretical and algorithmic framework built on the concept of log‑linearity—the empirical observation that the log‑probabilities produced by modern LLMs are approximately low‑rank and can be expressed as a linear inner product between a system‑prompt embedding ψ(s) and a joint prompt‑response embedding ϕ(p, r). Formally, for a model M,
log Pr_M(r | s, p) ≈ ⟨ψ(s), ϕ(p, r)⟩,

i.e., the log-probability that M produces response r given system prompt s and prompt p is approximated by the inner product of the two embeddings.
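The selection idea implied by this log-linear model can be sketched in a few lines. The snippet below is an illustrative toy, not the paper's actual LLS algorithm: it assumes we already have joint prompt-response embeddings ϕ(p, r) for each datapoint and a target behavior direction ψ_target, and it simply keeps the datapoints whose embeddings align best with that direction, so the selected subset as a whole points toward the target behavior even when no single datapoint is an outlier. The function name `logit_linear_select` and the random embeddings are hypothetical.

```python
import numpy as np

def logit_linear_select(phi, psi_target, k):
    """Toy sketch of subset selection under a log-linear model.

    Under log Pr_M(r | s, p) ≈ <psi(s), phi(p, r)>, fine-tuning on a
    subset whose mean embedding aligns with psi_target should shift the
    model toward the target behavior. Here we score each datapoint by
    its inner product with psi_target and keep the top-k indices.

    phi        : (n, d) array of joint prompt-response embeddings
    psi_target : (d,) target behavior direction
    k          : number of datapoints to keep
    """
    scores = phi @ psi_target          # per-datapoint alignment
    return np.argsort(scores)[::-1][:k]  # indices of the k best-aligned

# Synthetic demonstration with random embeddings (assumed, for illustration).
rng = np.random.default_rng(0)
n, d = 100, 8
psi_target = rng.normal(size=d)
psi_target /= np.linalg.norm(psi_target)
phi = rng.normal(size=(n, d))

idx = logit_linear_select(phi, psi_target, k=10)
# The selected subset's mean embedding is more aligned with the target
# direction than the full dataset's mean, even though each selected
# datapoint individually looks like an ordinary sample.
sel_align = phi[idx].mean(axis=0) @ psi_target
all_align = phi.mean(axis=0) @ psi_target
print(sel_align > all_align)
```

The key property this illustrates is that the "subliminal" signal lives in the aggregate: filtering by a direction in embedding space leaves every individual example looking unremarkable while biasing the subset's mean, which is exactly the kind of dataset-level effect the log-linear view makes selectable.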