ReFORM: Review-aggregated Profile Generation via LLM with Multi-Factor Attention for Restaurant Recommendation


Authors: Moonsoo Park, Seulbeen Je, Donghyeon Park

Moonsoo Park¹, Seulbeen Je², and Donghyeon Park² (✉)
¹ University of Southern California, USA, moonsoo@usc.edu
² Sejong University, South Korea, powerjsv@sju.ac.kr, parkdh@sejong.ac.kr

Abstract. In recommender systems, large language models (LLMs) have gained popularity for generating descriptive summarization to improve recommendation robustness, along with Graph Convolution Networks. However, existing LLM-enhanced recommendation studies mainly rely on the internal knowledge of LLMs about item titles while neglecting the importance of various factors influencing users' decisions. Although information reflecting the various decision factors of each user is abundant in reviews, few studies have actively exploited such insights for recommendation. To address these limitations, we propose ReFORM: a Review-aggregated Profile Generation via LLM with Multi-FactOr Attentive RecoMmendation framework. Specifically, we first generate factor-specific user and item profiles from reviews using an LLM to capture a user's preferences over items and an item's evaluations by users. Then, we propose a Multi-Factor Attention to highlight the most influential factors in each user's decision-making process. In this paper, we conduct experiments on two restaurant datasets of varying scales, demonstrating ReFORM's robustness and superior performance over state-of-the-art baselines. Furthermore, in-depth analyses validate the effectiveness of the proposed modules and provide insights into the sources of personalization. Our source code and datasets are available at https://github.com/m0onsoo/ReFORM.

Keywords: Recommender System · Large Language Model · Content-based Recommendation.
1 Introduction

Recommender systems play a vital role in filtering vast amounts of information, offering personalized content recommendations across domains such as e-commerce [12], movies [25], and social networks [10]. Collaborative filtering (CF), which models user-item interactions to predict future preferences, serves as the foundation of many recommender systems. Notably, Graph Convolution Networks (GCNs) have emerged as a powerful paradigm in collaborative filtering, enabling the aggregation of high-order collaborative signals by representing users and items as nodes in a bipartite graph [29,5,13,33,37]. Nevertheless, these methods still rely exclusively on interaction data, overlooking valuable external feedback (e.g., ratings and user reviews) that offers richer decision reasoning.

Fig. 1. An example of capturing different users' personal preferences and restaurants' prominent characteristics based on their reviews. By matching a user's preferences and a restaurant's characteristics, our ReFORM framework provides more personalized and robust recommendations.

Mining hidden factors from review text helps overcome the limitations of interaction-based recommender systems [17,21], as reviews offer detailed context on user preferences and item attributes that are absent in binary interactions. By leveraging these insights, models can build more expressive and personalized representations [23]. Recent efforts increasingly exploit LLMs [2,27] for advanced text understanding and generation, extracting deeper insights from interaction history and user reviews. Specifically, existing LLM-enhanced recommendation methods (e.g., KAR [34], RLMRec [23]) primarily focus on aligning general semantic features with graph structures or open-world knowledge. However, they still lack the granular, personalized modeling of diverse decision factors that are vividly expressed in individual user reviews.
To overcome the aforementioned challenges, we propose ReFORM: a Review-aggregated Profile Generation via LLM with Multi-FactOr Attentive RecoMmendation framework that 1) generates factor-specific user and item profiles exclusively from reviews through Review-aggregated Profile Generation (RPG) and 2) captures nuanced factor preferences from the profiles through the Multi-Factor Attention (MFA) mechanism.

As illustrated in Figure 1, ReFORM aims to identify the specific factor preferences driving user decisions. It consists of two main components. First, Review-aggregated Profile Generation (RPG) utilizes LLMs to distill unstructured reviews into explicit, factor-specific profiles for both users and items. Second, rather than treating all factors equally, the Multi-Factor Attention (MFA) mechanism dynamically highlights the most influential factors in each user's decision-making process by computing attention across user-item interactions. Finally, these factor-attentive embeddings are integrated with standard graph node embeddings for precise and personalized recommendations.

Extensive experiments validate that our two main approaches, Review-aggregated Profile Generation (RPG) and Multi-Factor Attention (MFA), outperform state-of-the-art baselines in both performance and robustness. In addition, we conduct in-depth analyses, including a factor-wise ablation to quantify the contribution of each factor and noise injection experiments in the RPG stage to assess the importance of authentic user reviews and LLM-based profiling.

The major contributions are summarized as follows:

– We propose ReFORM, a novel LLM-enhanced recommendation framework. It uniquely combines Review-aggregated Profile Generation to extract factor-specific profiles from reviews, and a Multi-Factor Attention mechanism to dynamically capture the most influential decision factors for each user.
– Through extensive experiments, we demonstrate that ReFORM significantly outperforms existing baselines, validating its effectiveness in improving recommendation performance.

– We empirically validate the core design of ReFORM through factor ablation and review-noise injection studies, demonstrating that the selected factors and review-derived profiles capture essential personalized signals for effective recommendation.

2 Related Work

2.1 Graph-based Collaborative Filtering

Graph-based collaborative filtering (CF) has been widely adopted to model relationships between users and items through interactions, leveraging graph convolution networks to improve recommendation accuracy. Building on this line of research, methods such as NGCF [29], GCCF [5], and LightGCN [13] have pushed the boundaries of graph-based collaborative filtering. However, these methods still face challenges due to the sparsity and noise in implicit feedback data. To address these challenges, self-supervised learning has been introduced to mitigate the limitations of GCNs and enhance graph-based CF models [33,37]. Still, graph-based recommendation systems primarily focus on aggregating structural relationships, making it difficult to leverage text-based insights that explain the reasons behind user-item interactions. While models like PinSage [36], MMGCN [32], and GRCN [31] incorporate multi-modal features for user preference modeling, they are limited by their reliance on item-level information. In real-world scenarios, however, users make decisions based on diverse contextual factors, highlighting the need to capture user-level information as well.

Prior studies [17,21] have shown that contextual information from reviews improves recommendation quality. Motivated by these findings, we extract decisive textual factors and integrate them into GCNs at both the user and item levels to enhance personalization.
2.2 Large Language Models for Recommendation

Large language models (LLMs) have gained significant interest in recommendation systems (RS) due to their advanced text understanding and generation capabilities.

Fig. 2. The overall framework of ReFORM. (i) Review-aggregated Profile Generation (RPG) constructs factor-specific user and item profiles from reviews based on a domain-adapted prompt. (ii) Graph Representation Learning captures high-order interactions. (iii) Multi-Factor Attention (MFA) highlights the most influential factors in each user's decision-making process for better recommendation results.

Several studies have explored the use of LLMs as inference models by fine-tuning language models for recommendation tasks [38,7,4]. For instance, P5 [11] reformulates user interaction data into natural language sequences, enabling the training of language models for recommendations. However, directly employing LLMs as recommenders shows limited performance compared to traditional RSs [8,16,18].
Beyond using LLMs as stand-alone recommendation systems, some studies [20,30,1,19] have employed LLMs to generate enhanced text features for traditional recommender systems. Notably, KAR [34] used scenario-specific factors to deduce user preferences from generic metadata (e.g., movie titles or genres), effectively integrating open-world knowledge into recommender systems. Despite this advancement, relying heavily on the internal knowledge of LLMs may fail to capture the nuanced aspects of personal user experiences (e.g., atmosphere, companion) found in user reviews. Also, RLMRec [23] shows its robustness in integrating semantic representations from metadata and reviews with various GCN methods. However, treating all semantic information equally may introduce noise and degrade recommendation quality [26] for certain users. In this paper, to mitigate these challenges, we extract domain-adapted factors solely from reviews through Review-aggregated Profile Generation (RPG) using an LLM. These factor-specific profiles are further modeled through Multi-Factor Attention (MFA) to finely adjust the importance of factors for each individual.

3 Methodology

3.1 Overview

The proposed framework is illustrated in Figure 2. It begins with Review-aggregated Profile Generation (RPG), which uses a large language model (LLM) to generate user and item profiles enriched with aggregated reviews and domain-adapted factor prompts. The model then learns from both graph-based representations and profile-based representations to enhance recommendation performance. Our Multi-Factor Attention (MFA) mechanism utilizes the pre-generated profiles of RPG to capture factor importance from both the user-to-items and item-to-users perspectives. Finally, the two representations are integrated to produce Top-K recommendation results.
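As a concrete illustration of the final integration step of this overview, the sketch below is a toy under stated assumptions: the array shapes and the `reform_topk` name are ours, not the released implementation. It fuses a graph embedding with a factor-attentive embedding and ranks items by inner product:

```python
import numpy as np

def reform_topk(e_g_u, e_a_u, e_g_items, e_a_items, k=2):
    """Toy sketch of the overview's last stage: concatenate the graph-based
    and profile-based representations, then score all items for one user."""
    e_u = np.concatenate([e_g_u, e_a_u])                      # final user embedding
    e_items = np.concatenate([e_g_items, e_a_items], axis=1)  # (num_items, 2d)
    scores = e_items @ e_u                                    # inner-product matching
    return np.argsort(-scores)[:k]                            # Top-K item indices
```

With small toy embeddings, this returns the indices of the k highest-scoring items for the given user.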
3.2 Graph Representation Learning

The proposed framework leverages the propagation rule of LightGCN [13], extracting node-level representations through graph convolution without feature transformation or nonlinear activation. During propagation over the L layers, the model aggregates information from neighboring nodes to refine the graph embeddings e^g. This process captures high-order collaborative signals while maintaining computational simplicity, as illustrated in Figure 2-(ii). Here, N_u and N_i denote the neighbor node sets of user u and item i.

e_u^{(l+1)} = \sum_{i \in N_u} \frac{1}{\sqrt{|N_u|}\sqrt{|N_i|}} e_i^{(l)}, \qquad e_i^{(l+1)} = \sum_{u \in N_i} \frac{1}{\sqrt{|N_i|}\sqrt{|N_u|}} e_u^{(l)}    (1)

e_u^g = \sum_{l=1}^{L} e_u^{(l)}, \qquad e_i^g = \sum_{l=1}^{L} e_i^{(l)}    (2)

3.3 Review-aggregated Profile Generation

Restaurant-scenario factors. People choose restaurants based on a variety of criteria. To define the key factors involved in restaurant recommendation, we systematically selected restaurant-scenario factors based on prior studies [17,34]. Specifically, we consider cuisine type, flavor, atmosphere, price, time, waiting, and companion, which reflect the characteristics of the restaurant domain.

Factor-specific Profile Generation. In the Review-aggregated Profile Generation (RPG) step, we reform the reviews into detailed factor-specific user/item profiles using a large language model (LLM). We first construct a domain-adapted prompt (Figure 2-(i)) that illustrates m domain-specific factors (cuisine type, flavor, atmosphere, etc.). We then query the GPT-4o mini [22] model with the aggregated reviews to generate factor-specific profiles.

We sample N = 100 reviews as input for the LLM. For user-level profile construction, we randomly sample reviews, assuming that individual users tend to express consistent preferences throughout their reviews. In contrast, for item-level profile construction, longer reviews are prioritized during sampling.
This strategy is motivated by the observation that longer texts tend to exhibit higher lexical coverage and lower distributional variance [6], enabling the LLM to capture more comprehensive and nuanced item characteristics. For all the sampled reviews r_j, we provide the domain-adapted prompt to generate descriptions for each factor m. The final user/item profile P_{u/i} is constructed by aggregating the descriptions across all M factors:

P_{u/i} = \{ f_m \}_{m=1}^{M}, \qquad f_m = \mathrm{LLM}(\mathrm{Prompt}_m, \{ r_j \}_{j=1}^{N})    (3)

where f_m represents the description generated for factor m.

Figure 2-(i) illustrates an example of the prompt and the actual response generated by the LLM. To generate a profile of a specific user, a sampled subset of the user's reviews and a prompt containing descriptions of the factors are provided to guide the LLM in understanding the user's factor-specific preferences. For instance, in the case of cuisine type, the instruction "The specific types of food offered appeal to users' preferences and dietary restrictions." is included in the prompt, prompting the LLM to extract relevant preferences such as Greek and Seafood from the user's reviews. By enabling factor-level preference inference directly from user-written reviews, RPG allows for the construction of more personalized and fine-grained user profiles, ensuring a robust and efficient means of capturing personal preferences for further recommendation tasks.

3.4 Multi-Factor Attention

The profile information generated by the LLM demonstrates high quality; however, assuming equal importance across all factors hinders the model's ability to capture user preferences in detail. For more effective recommendations, it is essential to identify which factors a user prioritizes during the decision-making process to ensure better personalization.
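Before turning to the attention mechanism, the RPG procedure of Sec. 3.3 (sampling plus Eq. (3)) can be sketched as follows. This is a hedged illustration: the `llm` callable stands in for the GPT-4o mini query, and the prompt wording is a placeholder, not the authors' actual prompt.

```python
import random

# The seven restaurant-scenario factors from Sec. 3.3.
FACTORS = ["cuisine type", "flavor", "atmosphere", "price",
           "time", "waiting", "companion"]

def sample_reviews(reviews, n=100, by_length=False):
    """Users: random sample; items: prioritize longer reviews (Sec. 3.3)."""
    if by_length:
        return sorted(reviews, key=len, reverse=True)[:n]
    return random.sample(reviews, min(n, len(reviews)))

def generate_profile(reviews, llm, n=100, by_length=False):
    """Eq. (3): one LLM-generated description f_m per factor m."""
    sampled = sample_reviews(reviews, n, by_length)
    prompt_body = "\n".join(sampled)
    return {
        m: llm(f"You are a helpful restaurant review analyst. "
               f"Describe the '{m}' preferences expressed in these reviews:\n"
               f"{prompt_body}")
        for m in FACTORS
    }
```

Plugging in a real LLM client for `llm` would yield the factor-keyed profile dictionary shown in Figure 2-(i).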
To this end, we propose a Multi-Factor Attention (MFA) to model factor-level importance, as illustrated in Figure 3.

Profile Encoding. We first encode the user and item profiles, generated by our previous RPG process, using BERT [9]. Each factor is transformed into a textual embedding separately to form the profile matrices X_{P_u}, Y_{P_i} ∈ R^{M×d}. Here, M and d denote the number of factors and the dimension size of the embedding.

Multi-level Cross Attention. Typical attention mechanisms assume K = V, where the encoder's hidden states serve as keys K and values V, and the decoder's hidden states act as queries Q [3]. In our MFA process, we assume Q = V. That is, user representations serve as the query Q, while item representations serve as the key K. The value V, again derived from user representations, helps refine the nuanced preferences for each specific factor.

Q_u = X_{P_u} W_u^Q, \quad K_i = Y_{P_i} W_i^K, \quad V_u = X_{P_u} W_u^V    (4)

Q_i = X_{P_i} W_i^Q, \quad K_u = Y_{P_u} W_u^K, \quad V_i = X_{P_i} W_i^V    (5)

Fig. 3. Multi-Factor Attention (MFA) Architecture.

For simplicity, we describe the process from the user's perspective in this section and Figure 3; however, the reverse (items as Q and V, and users as K) is also modeled. The user profile matrix X_{P_u} is transformed into the query and value representations via the learned projection matrices W_u^Q and W_u^V, respectively, while the item profile matrix Y_{P_i} is transformed into key representations via W_i^K. The MFA equation is denoted as:

\mathrm{MFA}(Q, K_{1:n}, V) = \max_{i=1}^{n}\left(\mathrm{softmax}\left(\frac{Q K_i^{\top}}{\sqrt{d^{*}}}\right)\right) V    (6)

Here, we apply cross attention [28] to Q, K_{1:n}, V (all of dimension d*) to model the importance of each factor influencing a user's choices on certain items. Specifically, if users tend to choose restaurants based on cuisine type, MFA assigns greater weight to that factor. At each epoch, we randomly select n items from the user's behavior history and calculate attention weights between the user query Q_u and the item keys K_i^{1:n} [39]. Calculating attention with a single key may bias the model toward a user's sudden decisions, potentially misrepresenting their genuine preferences. To improve robustness, we compare multiple keys to achieve consistent and less biased attention outputs.

Attention Map Pooling. Inspired by the concept of squeeze-and-excitation [14,15], which enhances feature representation by capturing critical contextual information through the compression of each feature map, we adopt a simplified approach. Our MFA method aggregates attention maps across multiple keys by applying max pooling to activate specific factors of the values. Max pooling maximizes the focus on the dominant factors that users are interested in. The aggregated attention map between the query and multiple keys is trained to strongly reflect the user's preferred factors, ensuring that relevant factors are emphasized and effectively integrated into the value representation. The resulting factor-weighted values form a matrix of size M × d*, reflecting the number of factors M and the embedding dimension d*. To obtain the final Multi-Factor Attentive embedding that captures factor-specific preferences, we average these weighted value matrices:

e_u^a = \frac{1}{M} \sum_{m=1}^{M} \mathrm{MFA}(Q_u, K_i^{1:n}, V_u), \qquad e_i^a = \frac{1}{M} \sum_{m=1}^{M} \mathrm{MFA}(Q_i, K_u^{1:n}, V_i)    (7)

Consequently, MFA enables the model to discern which factors the user prioritizes and which ones are less relevant during their decision-making. For item modeling, the roles of users and items are reversed, treating items as queries and values and users as keys, to capture item-centric factor importance.

3.5 Aligning GCN and MFA-Profile Representations

To integrate the Multi-Factor Attentive embeddings and the graph embeddings for recommendation, we adopt a straightforward concatenation approach, where [;] denotes concatenation. Despite its simplicity, this method effectively combines the user-item collaborative signals captured by the GCN and the semantic preferences represented in the weighted profile embeddings. To ensure seamless integration, the vector dimensions of the two representations are set to be identical. Finally, the matching scores are computed from the inner product of the final user and item embeddings for Top-K recommendations.

e_u = [e_u^g ; e_u^a], \quad e_i = [e_i^g ; e_i^a], \quad \hat{y}_{ui} = e_u^{\top} e_i    (8)

For model training, we optimize the Bayesian Personalized Ranking (BPR) loss [24], a pairwise ranking loss that encourages observed interactions to have higher scores than unobserved ones.

\mathcal{L}_{\mathrm{BPR}} = - \sum_{(u,i,j) \in O} \ln \sigma(\hat{y}_{ui} - \hat{y}_{uj}) + \lambda \lVert \Theta \rVert_2^2    (9)

Here, O = {(u, i, j) | (u, i) ∈ R⁺, (u, j) ∈ R⁻} denotes the pairwise training data. R⁺ and R⁻ indicate the observed and unobserved interactions, respectively; σ(·) is the sigmoid function. Additionally, λ controls the L2 regularization, and Θ = {E^(0), W^Q, W^K, W^V} denotes all trainable parameters, where E^(0) are the embeddings of the 0-th GCN layer.
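To make the mechanics concrete, here is a minimal NumPy sketch of Eqs. (6), (7), and (9). It is a toy under stated assumptions: the projected matrices Q, K, V of Eqs. (4)-(5) are taken as given arrays, and the function names are ours, not those of the released code.

```python
import numpy as np

def softmax(x, axis=-1):
    z = x - x.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def mfa(Q, Ks, V):
    """Eq. (6): factor-wise cross attention against n sampled keys,
    aggregated over keys by max pooling on the attention maps.

    Q, V: (M, d) query/value matrices; Ks: (n, M, d) stacked key matrices."""
    d = Q.shape[1]
    maps = np.stack([softmax(Q @ K.T / np.sqrt(d)) for K in Ks])  # (n, M, M)
    pooled = maps.max(axis=0)  # max pooling over the n attention maps
    return pooled @ V          # (M, d) factor-weighted values

def mfa_embedding(Q, Ks, V):
    """Eq. (7): average the factor-weighted value matrix over the M factors."""
    return mfa(Q, Ks, V).mean(axis=0)  # (d,)

def bpr_loss(pos_scores, neg_scores, lam=0.0, params=None):
    """Eq. (9): pairwise BPR loss with optional L2 regularization."""
    sig = 1.0 / (1.0 + np.exp(-(pos_scores - neg_scores)))
    reg = lam * sum(np.sum(p ** 2) for p in (params or []))
    return float(-np.sum(np.log(sig)) + reg)
```

Replacing `maps.max(axis=0)` with `maps.mean(axis=0)` gives the average-pooling variant (ReFORM w/ AP) compared in the ablation study of Sec. 4.4.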
4 Experiments

Here, we conduct experiments to address the following research questions:

– RQ1: How does the proposed framework perform compared to state-of-the-art baseline methods?
– RQ2: How do different hyperparameters impact the overall performance of the framework?
– RQ3: How do the individual components of the framework contribute to its overall performance?
– RQ4: Do the selected factors demonstrate validity in the context of restaurant recommendation?
– RQ5: Does LLM-based profiling from user reviews provide discriminative information for capturing user preferences?

4.1 Experimental Setting

Datasets. We evaluate on two public datasets providing abundant textual feedback for profile construction: Yelp and Google Restaurants (GR) [35]. Following [23], we apply 10-core (Yelp) and 5-core (GR) filtering and partition the data into training, validation, and test sets at a 3:1:1 ratio.

Evaluation Protocols and Metrics. We measure top-K recommendation performance using Recall@K and NDCG@K (K ∈ {10, 20}) under an all-ranking strategy [13,33]. Results are averaged over five random seed runs, and statistical significance is assessed through paired t-tests against the best baseline, reporting the corresponding p-values.

Baselines. We compare ReFORM with four groups of baselines: (i) GCN Methods: GCCF [5] and LightGCN [13]; (ii) Self-supervised GCN Methods: SGL [33] and SimGCL [37]; (iii) Contents-based GCN Methods: MMGCN [32] and GRCN [31]; and (iv) LLM-Enhanced GCN: RLMRec [23]. For fair comparison, we use RLMRec's best-performing contrastive learning variant with an identical backbone model.

Implementation Details. Models are optimized using the Adam optimizer with a 1×10⁻³ learning rate, a 4096 batch size, and early stopping on validation. All models use an embedding size of 256, and text-based baselines share our semantic representations.
For the Multi-Factor Attention module, we tune the number of attention keys n ∈ {1, 2, 3, 4, 5}.

4.2 Overall Performance (RQ1)

Table 1. Performance comparison between our framework and baselines on two datasets. The best results are highlighted in bold, and the second bests are underscored. The relative improvements compared to the best baselines are indicated as Improv.

                              Yelp                            Google Restaurants
Baseline          R@10    R@20    N@10    N@20      R@10    R@20    N@10    N@20
Graph Convolution Networks (GCN) Methods
GCCF              0.0390  0.0643  0.0331  0.0414    0.0570  0.0925  0.0351  0.0457
LightGCN          0.0421  0.0702  0.0357  0.0450    0.0574  0.0931  0.0357  0.0464
Self-supervised GCN Methods
SGL               0.0485  0.0806  0.0409  0.0516    0.0583  0.0932  0.0362  0.0468
SimGCL            0.0389  0.0645  0.0331  0.0416    0.0607  0.0973  0.0377  0.0487
Contents-based GCN Methods
MMGCN             0.0416  0.0714  0.0340  0.0441    0.0158  0.0268  0.0092  0.0126
GRCN              0.0509  0.0877  0.0427  0.0551    0.0486  0.0796  0.0300  0.0394
LLM-Enhanced GCN Methods
RLMRec            0.0597  0.0973  0.0505  0.0630    0.0598  0.0962  0.0371  0.0480
ReFORM            0.0650  0.1062  0.0546  0.0683    0.0685  0.1088  0.0424  0.0545
p-value           3.4e-4  3.2e-5  3.0e-4  4.9e-5    9.0e-4  1.0e-4  1.6e-3  9.0e-4
Improv.           8.88%   9.15%   8.12%   8.41%     12.85%  11.82%  12.47%  11.91%

Our proposed framework demonstrates consistent and significant improvements over all baseline methods, as shown in Table 1. By incorporating our two main methodologies, Review-aggregated Profile Generation (RPG) and Multi-Factor Attention (MFA), our model effectively captures detailed user and item preferences at a granular level, resulting in superior recommendation performance.

Integrating factor-specific generated profiles substantially enhances recommendation performance. LLM-enhanced methods such as RLMRec and ReFORM, which leverage rich profile information, deliver superior recommendation performance compared to Graph Convolution Network (GCN) methods. ReFORM achieves improvements of 51.3% and 16.7% in Recall@20 over its LightGCN backbone on Yelp and Google Restaurants (GR), respectively. Compared with SGL and SimGCL, which learn expressive representations through self-supervised learning, these methods still rely solely on internal collaborative signals. In contrast, ReFORM actively integrates external information to directly capture granular user- and item-specific details. Since SGL, SimGCL, and ReFORM all utilize LightGCN as their backbone, performance gaps directly reflect the effectiveness of their representation enhancement strategies. Notably, ReFORM's Recall@20 gains over LightGCN far exceed those of the best SSL method (14.8% on Yelp and 4.4% on GR), clearly underscoring the advantage of integrating rich external information.

Contents-based GCN methods, MMGCN and GRCN, outperform the GCN methods on Yelp but lag behind LLM-enhanced models by relying solely on item-level features without modeling user-side semantics. Unexpectedly, these methods perform poorly on the GR dataset; we assume it does not provide sufficient interaction and item features due to its small number of reviews. In contrast, ReFORM effectively models the correlation between interactions and textual preferences even on a small dataset by utilizing features at both the user and item levels. Furthermore, ReFORM outperforms RLMRec, a recent LLM-enhanced GCN approach.
In particular, ReFORM achieves improvements over RLMRec of 9.2% and 13% in Recall@20 on the Yelp and GR datasets, respectively. While RLMRec effectively integrates textual information into the graph structure through contrastive learning, it lacks the ability to differentiate the importance of diverse semantic signals. In contrast, ReFORM combines graph-based collaborative signals with Multi-Factor Attentive profiles, effectively adjusting user and item factor preferences over interacted decisions.

Fig. 4. Influence of the number of keys n of Multi-Factor Attention.

Fig. 5. Ablation study on the components of the ReFORM framework.

4.3 Hyperparameter Analysis (RQ2)

In Figure 4, the number n of keys K used in the Multi-Factor Attention (MFA) of Eq. (6) controls the tradeoff between bias and overfitting. On the Yelp dataset, we observe that MFA achieves the best performance at n = 3. We believe that using fewer keys (e.g., 1 or 2) introduces high bias, whereas using more keys (e.g., 4 or 5) results in overfitting. On the Google Restaurants (GR) dataset, MFA achieves the best performance at n = 1, and performance declines as the number of keys increases. We assume that this is because GR has fewer average user interactions than Yelp, which makes the data sparse. Sparse records increase the likelihood of sampling the same keys repeatedly across epochs, which leads to overfitting and ultimately poor performance.

4.4 Ablation Study (RQ3)

To comprehensively analyze the contributions of key components in the proposed framework, we conduct an ablation study.
The results, presented in Figure 5, demonstrate the imp ortance of the framew ork’s key mo dules. In Multi-F actor A ttention (MF A), attention maps from n k eys are aggregated using max p o oling to accentuate factor imp ortance. W e replaced max p o oling with a verage p o oling ( R eFORM w/ AP ) to compare the aggregation metho ds. As shown in Figure 5, R eFORM w/ AP demonstrates slightly lo wer p erformance compared to R eFORM , suggesting that av erage p o oling may dilute the emphasis on dominant preferences. This finding highlights the efficacy of max p o oling in preserving strong factor preference during the MF A pro cess. Multi-F actor Atten tion (MF A) adjusts influential factors at b oth the user and item lev els to optimize their representations. W e replaced this mechanism with a simpler MLP , denoted as w/o MF A , to remov e the factor preference adjustmen t step. This substitution led to a significant p erformance drop, highlighting the essen tial role of MF A in adjusting factor preferences for improv ed representa- tion learning. Notably , on the Go ogle Restaurants dataset, remo ving MF A even degrades performance b elow that of the base mo del, LightGCN . This suggests that using factor-sp ecific information without prop er adjustment may introduce noise, negativ ely impacting the learning pro cess [26]. These findings highlight 12 M. Park et al. T able 2. Ablation study on factor imp ortance for Y elp and Go ogle Restaurants datasets. ∆ (%) denotes the p erformance change compared to ReFORM. 
Method           | Yelp R@20 | Δ(%)   | Yelp N@20 | Δ(%)   | GR R@20 | Δ(%)   | GR N@20 | Δ(%)
ReFORM           | 0.1062    | -      | 0.0683    | -      | 0.1088  | -      | 0.0545  | -
w/o cuisine type | 0.0982    | -7.53  | 0.0634    | -7.17  | 0.0932  | -14.34 | 0.0468  | -14.13
w/o flavor       | 0.1032    | -2.82  | 0.0664    | -2.78  | 0.1004  | -7.72  | 0.0505  | -7.34
w/o atmosphere   | 0.1055    | -0.66  | 0.0679    | -0.59  | 0.1030  | -5.33  | 0.0519  | -4.77
w/o price        | 0.1060    | -0.19  | 0.0681    | -0.29  | 0.1069  | -1.75  | 0.0538  | -1.28
w/o companion    | 0.1061    | -0.09  | 0.0682    | -0.15  | 0.1078  | -0.92  | 0.0544  | -0.18
w/o time         | 0.1044    | -1.69  | 0.0671    | -1.76  | 0.1081  | -0.64  | 0.0545  | 0.00
w/o waiting      | 0.1060    | -0.19  | 0.0682    | -0.15  | 0.1086  | -0.18  | 0.0549  | +0.73

the critical role of the MFA mechanism in capturing nuanced user and item preferences effectively.

5 In-depth Analysis of ReFORM

5.1 Validation of Selected Factors (RQ4)

In this section, we conduct an in-depth analysis of the selected factors in the restaurant recommendation setting. To assess the importance of each selected factor, we conduct an ablation study in which the text-derived profile embedding corresponding to each factor is masked in turn during inference. Specifically, for each factor, we replace the corresponding slice in the user and item profile embeddings with a zero vector, thereby removing that factor's contribution to the recommendation process. We then evaluate the absolute and relative changes in performance on the test set. The results are summarized in Table 2, which quantifies the contribution of each factor to recommendation performance. Notably, ablating cuisine type and flavor causes the most severe performance drops across both datasets (e.g., R@20 drops by 14.34% and 7.72% on Google Restaurants, respectively), demonstrating their critical roles in capturing personalized preferences.
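The masking protocol used in this ablation can be sketched as follows. This is an illustrative snippet that assumes factor slices are concatenated in a fixed order within the profile embedding; the function name and layout are our assumptions, not the released code.

```python
import numpy as np

def mask_factor(profile: np.ndarray, factor_idx: int, dim_per_factor: int) -> np.ndarray:
    """Zero out one factor's slice (e.g. 'cuisine type') in a profile
    embedding built by concatenating factor-specific embeddings, so that
    this factor contributes nothing at inference time. Assumed layout:
    slice i occupies dimensions [i * dim_per_factor, (i+1) * dim_per_factor).
    """
    masked = profile.copy()  # leave the original embedding untouched
    start = factor_idx * dim_per_factor
    masked[..., start:start + dim_per_factor] = 0.0
    return masked
```

Evaluating the model with each factor masked in turn, and comparing against the unmasked run, yields the Δ(%) columns of Table 2.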
Interestingly, atmosphere shows a marked impact on Google Restaurants (R@20: -5.33%) but a limited effect on Yelp (-0.66%), indicating subtle platform-specific differences in user priorities. Conversely, removing time and waiting yields minimal reductions or even slight improvements. This implies that these peripheral factors are less informative and may occasionally introduce noise, diluting the model's focus. Overall, these results validate the effectiveness of the selected factors, providing actionable insights for both future model design and real-world system deployment.

Table 3. Impact of noise ratio in RPG profile generation on the Google Restaurants dataset. Δ(%) denotes the relative performance change compared to the original ReFORM (noise ratio = 0).

Noise ratio | R@20   | Δ(%)   | N@20   | Δ(%)
0           | 0.1088 | –      | 0.0545 | –
0.5         | 0.0962 | -11.58 | 0.0481 | -11.74
1.0         | 0.0915 | -15.90 | 0.0457 | -16.15

5.2 Noise Review Injection in RPG (RQ5)

To evaluate the robustness of Review-aggregated Profile Generation (RPG), we systematically introduce irrelevant reviews (noise) into the profile construction for the Google Restaurants dataset. Here, a noise ratio of 0 uses only the target user's authentic reviews, while 1.0 relies entirely on randomly sampled reviews from other users. As Table 3 shows, performance degrades monotonically as the noise ratio increases. Even partial contamination (ratio = 0.5) significantly impairs recommendation quality, and complete noise (ratio = 1.0) leads to substantial drops of up to 16.15% in NDCG@20. This confirms that authentic user reviews provide crucial personalized signals that the LLM-powered RPG effectively extracts, underscoring the necessity of high-quality input data for profile generation.

6 Conclusion

In this paper, we propose ReFORM, a novel recommendation framework that leverages a large language model to distinguish individual preferences.
ReFORM generates factor-specific user/item profiles solely from reviews via Review-aggregated Profile Generation (RPG). By introducing Multi-Factor Attention (MFA), our framework captures the factor-specific preferences of users and items, providing an understanding of which factors drive decision-making. We demonstrate the robustness of our framework by evaluating ReFORM on restaurant datasets of different sizes, where it consistently outperforms state-of-the-art baselines. Moving forward, we plan to extend ReFORM's capabilities by exploring other domains. We also aim to apply advanced graph modeling (e.g., contrastive learning) and other recommendation paradigms (e.g., sequential recommendation) to deepen our insights into user behavior and improve the practical usability of the framework.

References

1. Acharya, A., Singh, B., Onoe, N.: LLM based generation of item-description for recommendation system. In: Proceedings of the 17th ACM Conference on Recommender Systems. pp. 1204–1207. RecSys '23, Association for Computing Machinery, New York, NY, USA (2023)
2. Achiam, J., Adler, S., Agarwal, S., Ahmad, L., Akkaya, I., Aleman, F.L., Almeida, D., Altenschmidt, J., Altman, S., Anadkat, S., et al.: GPT-4 technical report. arXiv preprint arXiv:2303.08774 (2023)
3. Bahdanau, D.: Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473 (2014)
4. Bao, K., Zhang, J., Zhang, Y., Wang, W., Feng, F., He, X.: TALLRec: An effective and efficient tuning framework to align large language model with recommendation. In: Proceedings of the 17th ACM Conference on Recommender Systems. pp. 1007–1014. RecSys '23, Association for Computing Machinery, New York, NY, USA (2023)
5. Chen, L., Wu, L., Hong, R., Zhang, K., Wang, M.: Revisiting graph based collaborative filtering: A linear residual graph convolutional network approach.
Proceedings of the AAAI Conference on Artificial Intelligence 34, 27–34 (2020)
6. Chujo, K., Utiyama, M.: Understanding the role of text length, sample size and vocabulary size in determining text coverage. Reading in a Foreign Language 17, 1–22 (2005)
7. Cui, Z., Ma, J., Zhou, C., Zhou, J., Yang, H.: M6-Rec: Generative pretrained language models are open-ended recommender systems (2022)
8. Dai, S., Shao, N., Zhao, H., Yu, W., Si, Z., Xu, C., Sun, Z., Zhang, X., Xu, J.: Uncovering ChatGPT's capabilities in recommender systems. In: Proceedings of the 17th ACM Conference on Recommender Systems. pp. 1126–1132. RecSys '23, Association for Computing Machinery, New York, NY, USA (2023)
9. Devlin, J., Chang, M.W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Burstein, J., Doran, C., Solorio, T. (eds.) Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers). pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (Jun 2019)
10. Fan, W., Ma, Y., Li, Q., He, Y., Zhao, E., Tang, J., Yin, D.: Graph neural networks for social recommendation. In: The World Wide Web Conference. pp. 417–426. WWW '19, Association for Computing Machinery, New York, NY, USA (2019)
11. Geng, S., Liu, S., Fu, Z., Ge, Y., Zhang, Y.: Recommendation as language processing (RLP): A unified pretrain, personalized prompt & predict paradigm (P5). In: Proceedings of the 16th ACM Conference on Recommender Systems. pp. 299–315. RecSys '22, Association for Computing Machinery, New York, NY, USA (2022)
12. He, R., McAuley, J.: Ups and downs: Modeling the visual evolution of fashion trends with one-class collaborative filtering. In: Proceedings of the 25th International Conference on World Wide Web. pp. 507–517.
WWW '16, International World Wide Web Conferences Steering Committee (2016)
13. He, X., Deng, K., Wang, X., Li, Y., Zhang, Y., Wang, M.: LightGCN: Simplifying and powering graph convolution network for recommendation. In: Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval. pp. 639–648. SIGIR '20, Association for Computing Machinery, New York, NY, USA (2020)
14. Hu, J., Shen, L., Albanie, S., Sun, G., Vedaldi, A.: Gather-excite: Exploiting feature context in convolutional neural networks. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. pp. 9423–9433. NIPS'18, Curran Associates Inc., Red Hook, NY, USA (2018)
15. Hu, J., Shen, L., Sun, G.: Squeeze-and-excitation networks. In: 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 7132–7141 (2018)
16. Kang, W.C., Ni, J., Mehta, N., Sathiamoorthy, M., Hong, L., Chi, E., Cheng, D.Z.: Do LLMs understand user preferences? Evaluating LLMs on user rating prediction (2023)
17. Li, Y., Nie, J., Zhang, Y., Wang, B., Yan, B., Weng, F.: Contextual recommendation based on text mining. In: Proceedings of the 23rd International Conference on Computational Linguistics: Posters. pp. 692–700. COLING '10, Association for Computational Linguistics, USA (2010)
18. Lin, J., Dai, X., Xi, Y., Liu, W., Chen, B., Zhang, H., Liu, Y., Wu, C., Li, X., Zhu, C., Guo, H., Yu, Y., Tang, R., Zhang, W.: How can recommender systems benefit from large language models: A survey. ACM Trans. Inf. Syst. 43(2) (Jan 2025)
19. Liu, Q., Chen, N., Sakai, T., Wu, X.M.: ONCE: Boosting content-based recommendation with both open- and closed-source large language models. In: Proceedings of the 17th ACM International Conference on Web Search and Data Mining. pp. 452–461. WSDM '24, Association for Computing Machinery, New York, NY, USA (2024)
20.
Lyu, H., Jiang, S., Zeng, H., Xia, Y., Wang, Q., Zhang, S., Chen, R., Leung, C., Tang, J., Luo, J.: LLM-Rec: Personalized recommendation via prompting large language models. In: Duh, K., Gomez, H., Bethard, S. (eds.) Findings of the Association for Computational Linguistics: NAACL 2024. pp. 583–612. Association for Computational Linguistics, Mexico City, Mexico (Jun 2024)
21. McAuley, J., Leskovec, J.: Hidden factors and hidden topics: Understanding rating dimensions with review text. In: Proceedings of the 7th ACM Conference on Recommender Systems. pp. 165–172. RecSys '13, Association for Computing Machinery, New York, NY, USA (2013)
22. OpenAI: GPT-4o mini: Advancing cost-efficient intelligence (2024)
23. Ren, X., Wei, W., Xia, L., Su, L., Cheng, S., Wang, J., Yin, D., Huang, C.: Representation learning with large language models for recommendation. In: Proceedings of the ACM Web Conference 2024. pp. 3464–3475. WWW '24, Association for Computing Machinery, New York, NY, USA (2024)
24. Rendle, S., Freudenthaler, C., Gantner, Z., Schmidt-Thieme, L.: BPR: Bayesian personalized ranking from implicit feedback. In: Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence. pp. 452–461. UAI '09, AUAI Press, Arlington, Virginia, USA (2009)
25. Sarwar, B., Karypis, G., Konstan, J., Riedl, J.: Item-based collaborative filtering recommendation algorithms. In: Proceedings of the 10th International Conference on World Wide Web. pp. 285–295. WWW '01, Association for Computing Machinery, New York, NY, USA (2001)
26. Tian, Y., Zhang, C., Guo, Z., Zhang, X., Chawla, N.: Learning MLPs on graphs: A unified view of effectiveness, robustness, and efficiency. In: The Eleventh International Conference on Learning Representations (2023)
27.
Touvron, H., Lavril, T., Izacard, G., Martinet, X., Lachaux, M.A., Lacroix, T., Rozière, B., Goyal, N., Hambro, E., Azhar, F., Rodriguez, A., Joulin, A., Grave, E., Lample, G.: LLaMA: Open and efficient foundation language models (2023)
28. Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, L., Polosukhin, I.: Attention is all you need. In: Proceedings of the 31st International Conference on Neural Information Processing Systems. pp. 6000–6010. NIPS'17, Curran Associates Inc., Red Hook, NY, USA (2017)
29. Wang, X., He, X., Wang, M., Feng, F., Chua, T.S.: Neural graph collaborative filtering. In: Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval. pp. 165–174. SIGIR'19, Association for Computing Machinery, New York, NY, USA (2019)
30. Wei, W., Ren, X., Tang, J., Wang, Q., Su, L., Cheng, S., Wang, J., Yin, D., Huang, C.: LLMRec: Large language models with graph augmentation for recommendation. In: Proceedings of the 17th ACM International Conference on Web Search and Data Mining. pp. 806–815. WSDM '24, Association for Computing Machinery, New York, NY, USA (2024)
31. Wei, Y., Wang, X., Nie, L., He, X., Chua, T.S.: Graph-refined convolutional network for multimedia recommendation with implicit feedback. In: Proceedings of the 28th ACM International Conference on Multimedia. pp. 3541–3549. MM '20, Association for Computing Machinery, New York, NY, USA (2020)
32. Wei, Y., Wang, X., Nie, L., He, X., Hong, R., Chua, T.S.: MMGCN: Multi-modal graph convolution network for personalized recommendation of micro-video. In: Proceedings of the 27th ACM International Conference on Multimedia. pp. 1437–1445. MM '19, Association for Computing Machinery, New York, NY, USA (2019)
33. Wu, J., Wang, X., Feng, F., He, X., Chen, L., Lian, J., Xie, X.: Self-supervised graph learning for recommendation.
In: Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval. pp. 726–735. SIGIR '21, Association for Computing Machinery, New York, NY, USA (2021)
34. Xi, Y., Liu, W., Lin, J., Cai, X., Zhu, H., Zhu, J., Chen, B., Tang, R., Zhang, W., Yu, Y.: Towards open-world recommendation with knowledge augmentation from large language models. In: Proceedings of the 18th ACM Conference on Recommender Systems. pp. 12–22. RecSys '24, Association for Computing Machinery, New York, NY, USA (2024)
35. Yan, A., He, Z., Li, J., Zhang, T., McAuley, J.: Personalized showcases: Generating multi-modal explanations for recommendations. In: Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval. pp. 2251–2255. SIGIR '23, Association for Computing Machinery, New York, NY, USA (2023)
36. Ying, R., He, R., Chen, K., Eksombatchai, P., Hamilton, W.L., Leskovec, J.: Graph convolutional neural networks for web-scale recommender systems. In: Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. pp. 974–983. KDD '18, Association for Computing Machinery, New York, NY, USA (2018)
37. Yu, J., Yin, H., Xia, X., Chen, T., Cui, L., Nguyen, Q.V.H.: Are graph augmentations necessary? Simple graph contrastive learning for recommendation. In: Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval. pp. 1294–1303. SIGIR '22, Association for Computing Machinery, New York, NY, USA (2022)
38. Zhang, J., Xie, R., Hou, Y., Zhao, X., Lin, L., Wen, J.R.: Recommendation as instruction following: A large language model empowered recommendation approach. ACM Trans. Inf. Syst. (Dec 2024), just accepted
39. Zhou, G., Devlin, J.: Multi-vector attention models for deep re-ranking. In: Moens, M.F., Huang, X., Specia, L., Yih, S.W.t. (eds.)
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing. pp. 5452–5456. Association for Computational Linguistics, Online and Punta Cana, Dominican Republic (Nov 2021)
