Title: PARAN: Persona-Augmented Review ANswering system on Food Delivery Review Dataset
ArXiv ID: 2512.10148
Date: 2025-12-10
Authors: Moonsoo Park (Lemong Research, Seoul, Republic of Korea) – kd.mpark10@gmail.com; Jeongseok Yun (Lemong Research, Seoul, Republic of Korea) – jeongseok.yun@lemong.ai; Bohyung Kim (Lemong Research, Seoul, Republic of Korea) – bohyung@lemong.ai
📝 Abstract
Personalized review response generation presents a significant challenge in domains where user information is limited, such as food delivery platforms. While large language models (LLMs) offer powerful text generation capabilities, they often produce generic responses when lacking contextual user data, reducing engagement and effectiveness. In this work, we propose a two-stage prompting framework that infers both explicit (e.g., user-stated preferences) and implicit (e.g., demographic or stylistic cues) personas directly from short review texts. These inferred persona attributes are then incorporated into the response generation prompt to produce user-tailored replies. To encourage diverse yet faithful generations, we adjust decoding temperature during inference. We evaluate our method using a real-world dataset collected from a Korean food delivery app, and assess its impact on precision, diversity, and semantic consistency. Our findings highlight the effectiveness of persona-augmented prompting in enhancing the relevance and personalization of automated responses without requiring model fine-tuning.
📄 Full Content
Index Terms—Persona-augmented prompting, Large language models, Prompt engineering, Food delivery platforms, Text generation
I. INTRODUCTION
In real-world applications, it is often difficult to obtain sufficient background information or contextual signals about users. This challenge is particularly salient in online platforms such as food delivery apps, where interactions with users are limited. As a result, providing appropriate and personalized responses to user reviews becomes difficult, and manually responding to every review is time-consuming and costly. While many prior approaches rely on structured user metadata or historical interaction logs, our framework operates under a more realistic constraint: inferring user personas solely from short, sparse review texts, without access to any auxiliary user information.
With the recent advancement of large language models (LLMs), automated text generation systems are increasingly being deployed in a wide range of domains [1]–[3], including review response generation, social media comment automation, and counseling chatbots. However, in scenarios with limited user information, LLMs tend to produce generic responses across users [4]–[6], which can negatively affect user engagement and service satisfaction.
Previous studies [1], [7] have shown that timely and relevant responses to user reviews can positively influence customer satisfaction and even drive sales. More importantly, incorporating user-specific traits or preferences, such as whether a customer prioritizes delivery speed versus food quantity, can lead to responses that better resonate with individual users [8]. In this work, we collect user review data from a Korean food delivery platform and investigate whether LLMs can infer both explicit persona factors (e.g., stated preferences) and implicit attributes (e.g., age, gender) from short review texts, and use these inferences to generate personalized responses. Without relying on model fine-tuning, we design a two-stage prompting strategy: the LLM first infers a likely explicit and implicit persona from the review, which is then incorporated into the final response generation prompt.
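The two-stage strategy can be read as two chained LLM calls. The sketch below is a minimal illustration under stated assumptions, not the authors' implementation: the OpenAI client, the model name, and the exact prompt wording are all placeholders, since the paper does not publish its prompts or its serving setup.

```python
# Minimal sketch of the two-stage prompting pipeline described above.
# The client, model name, and prompt wording are illustrative assumptions.
from openai import OpenAI

client = OpenAI()       # assumes OPENAI_API_KEY is set in the environment
MODEL = "gpt-4o-mini"   # placeholder model name


def infer_persona(review: str, temperature: float = 0.7) -> str:
    """Stage 1: infer explicit and implicit persona attributes from a review."""
    prompt = (
        "Read the following food delivery review and infer the reviewer's persona.\n"
        "List explicit attributes (stated preferences, e.g., delivery speed vs. "
        "food quantity) and implicit attributes (e.g., likely age group, gender, "
        "writing style).\n\n"
        f"Review: {review}"
    )
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,
    )
    return resp.choices[0].message.content


def generate_response(review: str, persona: str, temperature: float = 0.7) -> str:
    """Stage 2: generate an owner reply conditioned on the inferred persona."""
    prompt = (
        "You are the owner of a restaurant on a food delivery platform.\n"
        f"Customer persona (inferred): {persona}\n"
        f"Customer review: {review}\n"
        "Write a polite, personalized reply that addresses the persona's priorities."
    )
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,
    )
    return resp.choices[0].message.content


review = "Food came fast but the portion was tiny."
reply = generate_response(review, persona=infer_persona(review))
```

The `temperature` argument threaded through both stages is the decoding knob discussed next: higher values trade some faithfulness for more diverse persona inferences and replies.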
One of the key challenges is that food delivery reviews are typically short and sparse, offering limited cues for persona inference. To mitigate this limitation, we adjust the temperature parameter during inference to encourage the LLM to leverage its world knowledge and generate more diverse responses. We empirically evaluate how this approach affects the precision, diversity, and answer consistency of the generated responses. To capture these objectives, we measure precision with n-gram overlap metrics (ROUGE-2, BLEU, METEOR), diversity with lexical variation (Distinct-2), and answer consistency with embedding-based semantic similarity (BERTScore).
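Among these metrics, Distinct-2 is simple enough to compute directly: it is the number of unique bigrams divided by the total number of bigrams across the generated responses. A minimal sketch, assuming naive whitespace tokenization (the paper does not specify its tokenizer, and Korean text would typically need morphological segmentation):

```python
from typing import Iterable


def distinct_2(responses: Iterable[str]) -> float:
    """Distinct-2: ratio of unique bigrams to total bigrams over all responses.

    Whitespace tokenization is an assumption for illustration; the paper
    does not state which tokenizer it uses.
    """
    all_bigrams = []
    for text in responses:
        tokens = text.split()
        all_bigrams.extend(zip(tokens, tokens[1:]))
    if not all_bigrams:
        return 0.0
    return len(set(all_bigrams)) / len(all_bigrams)


print(distinct_2(["thank you for your kind review",
                  "thank you for ordering again"]))  # higher = more diverse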
II. RELATED WORK
A. Persona Integration in Language Models
A growing body of research has explored the integration of persona information into large language models (LLMs) to enhance text generation. Early approaches often relied on manually constructed persona profiles (e.g., a set of predefined traits or background sentences), which were provided to the model as additional input to guide generation, particularly in dialogue systems [9]. More recent work leverages prompt engineering or fine-tuning techniques to condition generation on persona representations with personal traits [10], [11]. Unlike these studies, our work does not assume any prior persona information and instead infers persona traits directly from short review texts.