Rethinking Personalization in Large Language Models at the Token Level

Notice: This research summary and analysis were automatically generated using AI technology. For absolute accuracy, please refer to the original arXiv source.

With large language models (LLMs) now performing strongly across diverse tasks, there is growing demand for them to personalize outputs for individual users. Personalization is typically framed as an additional layer on top of a base NLP task, requiring model responses to meet user-specific needs while still accomplishing the underlying task. From a token-level perspective, different tokens in a response contribute to personalization to varying degrees. Tokens with higher personalization relevance should therefore receive greater emphasis when developing personalized LLMs. However, accurately estimating such personalization degrees remains challenging. To address this challenge, we propose PerContrast, a self-contrast method that estimates each output token’s dependence on user-specific information through causal intervention. Building on this mechanism, we develop the PerCE loss, which adaptively upweights tokens with higher estimated personalization degrees during training via a bootstrap procedure, enabling the model to alternate between estimating and optimizing these tokens. Experiments on multiple LLMs demonstrate that PerCE substantially improves personalization performance with minimal additional cost, achieving average gains of over 10% and up to 68.04% on the LongLaMP dataset, along with strong cross-task and cross-scenario transferability. These results highlight the importance of token-level personalization modeling and establish token-aware training as a simple yet effective paradigm for advancing personalized LLMs.
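The PerCE loss described above can be sketched as a token-weighted cross-entropy, where tokens with a higher estimated personalization degree receive larger weight. The following is a minimal, hypothetical sketch in pure Python: the `1 + alpha * max(score, 0)` weighting and the mean-normalization of weights are illustrative assumptions, not the paper's exact formulation.

```python
import math

def softmax_ce(logits, target):
    """Cross-entropy of one token given its logit row (numerically stable)."""
    m = max(logits)
    logz = m + math.log(sum(math.exp(l - m) for l in logits))
    return logz - logits[target]

def perce_loss(logit_rows, targets, per_scores, alpha=1.0):
    """Hypothetical PerCE-style loss: up-weight tokens whose estimated
    personalization degree is positive, then average.

    logit_rows : list of per-token logit rows, shape (T, V)
    targets    : list of T target token ids
    per_scores : list of T estimated personalization degrees (e.g. PIR)
    """
    ce = [softmax_ce(row, t) for row, t in zip(logit_rows, targets)]
    # assumed weighting scheme: never down-weight below the base loss
    w = [1.0 + alpha * max(s, 0.0) for s in per_scores]
    mean_w = sum(w) / len(w)
    w = [wi / mean_w for wi in w]  # normalize so the overall loss scale is kept
    return sum(wi * ci for wi, ci in zip(w, ce)) / len(ce)
```

With all scores at zero this reduces to plain token-averaged cross-entropy, which makes the weighting easy to sanity-check in isolation.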


💡 Research Summary

The paper tackles a fundamental yet overlooked aspect of personalized large language models (LLMs): the fact that not all output tokens contribute equally to personalization. While most prior work treats the entire response as a monolithic unit, the authors argue that certain tokens—such as stylistic words in writing or trait‑revealing words in dialogue—carry far more user‑specific information than others. Ignoring this heterogeneity can dilute the learning signal for the most important tokens and limit personalization performance.

To address this, the authors introduce two tightly coupled components: PerContrast and PerCE.

PerContrast is a self‑contrast method that quantifies the "personalization degree" of each token. For a given token \(y_i\) in a response, the method computes the log‑probability of that token under two contexts: (1) the full prompt containing the user persona \(p_u\) and the query \(x\), and (2) a counterfactual prompt where the persona information is removed (or masked). The difference, called the Personal Influence Ratio (PIR), is defined as

\[
\mathrm{PIR}(y_i) = \log P_\theta\bigl(y_i \mid p_u, x, y_{<i}\bigr) - \log P_\theta\bigl(y_i \mid x, y_{<i}\bigr).
\]
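The self-contrast estimate above amounts to scoring the same response under two prompts and taking the per-token difference. A minimal sketch, assuming the per-token log-probabilities from the two forward passes (with and without the persona block) have already been collected into lists:

```python
def personal_influence_ratio(logp_with_persona, logp_without_persona):
    """Token-level PIR: log P(y_i | persona, x, y_<i) - log P(y_i | x, y_<i).

    Inputs are per-token log-probabilities of the same response scored
    under the full prompt and under the persona-masked prompt.
    """
    return [a - b for a, b in zip(logp_with_persona, logp_without_persona)]

# toy illustration (made-up numbers): stylistic tokens gain probability
# when the persona is present, so their PIR is large
logp_full   = [-1.2, -0.3, -2.0, -0.8]
logp_masked = [-1.2, -1.5, -2.1, -2.6]
scores = personal_influence_ratio(logp_full, logp_masked)
```

Tokens whose probability is unchanged by the persona get a PIR near zero, while persona-dependent tokens score high and are the ones PerCE would emphasize during training.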

