MiniRec: Data-Efficient Reinforcement Learning for LLM-based Recommendation

Notice: This research summary and analysis were generated automatically with AI assistance. For full accuracy, please refer to the original arXiv paper.

The integration of reinforcement learning (RL) into large language models (LLMs) has opened new opportunities for recommender systems by eliciting reasoning and improving user preference modeling. However, RL-based LLM recommendation faces significant efficiency challenges, making full-data training costly. Existing data selection methods define sample value based on learnability or representativeness, yet their loss-, gradient-, or dataset-coverage-driven criteria often misalign with RL learning dynamics, resulting in suboptimal performance. To address this, we propose MiniRec, a data selection framework tailored for RL-based LLM recommendation. MiniRec evaluates sample learnability using the key RL signal – reward – pruning samples that are too easy (consistently high reward) or too difficult (consistently low reward). It assesses representativeness by aligning sample gradients with an approximation of the “ideal” global RL optimization trajectory, selecting the samples that drive the core model updates, and it additionally enforces diversity to reduce redundancy. Combined with a curriculum learning strategy that proceeds from easy to hard samples, MiniRec significantly reduces training cost while largely preserving performance. Extensive experiments demonstrate MiniRec’s effectiveness, highlighting the importance of reward-aligned, trajectory-informed data selection in RL-based LLM recommendation.


💡 Research Summary

MiniRec tackles the efficiency bottleneck of reinforcement‑learning (RL)‑driven recommendation systems that employ large language models (LLMs). While RL methods such as Group‑based Reinforced Policy Optimization (GRPO) can endow LLMs with reasoning capabilities—learning an “input → reason → output” pipeline—they require generating multiple trajectories per training example, leading to high GPU memory consumption and long training times. Existing data‑selection or pruning techniques, which rely on loss/gradient magnitude (learning‑signal driven) or dataset coverage (similarity driven), are misaligned with RL dynamics: loss‑based scores can be inflated by low‑reward samples, and coverage‑based methods ignore the reasoning component that RL seeks to teach.

MiniRec proposes a unified data‑selection framework that evaluates each sample along three dimensions specifically designed for RL‑LLM recommendation:

  1. Learnability (L) – Measured directly from the reward signal. A lightweight proxy model estimates the average reward of each sample. Samples with excessively high rewards (trivially easy) or persistently low rewards (hard to learn) receive low scores, while medium‑difficulty samples receive higher scores. This reward‑based filtering aligns sample importance with the RL objective of reward maximization.

  2. Representativeness (R) – Defined via alignment with an approximated “ideal” optimization trajectory. MiniRec computes a global second‑order gradient that approximates the direction from the initial policy to the final updated policy after full‑data training. For each sample, the cosine similarity between its gradient and this ideal direction is calculated; higher similarity indicates that the sample drives the core policy updates and therefore is representative of the reasoning learning process.

  3. Diversity (D) – Dynamically adjusts a sample’s value based on its similarity to already selected samples, preventing the over‑selection of highly similar yet important items. This ensures that the final subset covers a broader portion of the user‑item space.
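The three dimensions above can be sketched as simple scoring functions. This is a minimal illustration, not the paper's implementation: the reward thresholds, the triangular shape of the learnability score, and the use of embeddings for the diversity term are all assumptions made for concreteness.

```python
import numpy as np

def cos_sim(a, b):
    """Cosine similarity between two vectors (0 if either is zero)."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def learnability(avg_reward, low=0.2, high=0.8):
    """Reward-based learnability: medium-difficulty samples score highest;
    samples outside the (low, high) reward band are pruned (score 0).
    The band and the triangular profile are illustrative choices."""
    if avg_reward <= low or avg_reward >= high:
        return 0.0
    mid = (low + high) / 2.0
    return 1.0 - abs(avg_reward - mid) / (mid - low)

def representativeness(sample_grad, ideal_direction):
    """Alignment of a sample's gradient with the approximated 'ideal'
    global optimization direction (initial policy -> final policy)."""
    return cos_sim(sample_grad, ideal_direction)

def diversity(sample_emb, selected_embs):
    """Bonus that shrinks as the sample resembles already-selected ones."""
    if not selected_embs:
        return 1.0
    return 1.0 - max(cos_sim(sample_emb, e) for e in selected_embs)
```

Note that `diversity` is conditioned on the current selection, which is why the paper describes it as a dynamic adjustment rather than a static score.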

The three scores are combined into a unified value function V(x|S) = λ·L(x) + (1−λ)·R(x) + D(x|S), where λ balances learnability against representativeness and S is the set of already-selected samples. Because D(x|S) depends on the evolving selection, samples are scored and picked iteratively until the target subset size is reached.
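Since D(x|S) changes as S grows, the selection is naturally a greedy loop that rescores the remaining pool each round. The sketch below assumes precomputed per-sample L and R scores and an embedding field; the dict field names are illustrative, not from the paper.

```python
import numpy as np

def cos_sim(a, b):
    """Cosine similarity between two vectors (0 if either is zero)."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def select_subset(samples, budget, lam=0.5):
    """Greedily pick samples by V(x|S) = lam*L + (1-lam)*R + D(x|S).
    Each sample dict carries precomputed 'L' and 'R' scores and an
    embedding 'emb' (field names are assumptions for this sketch)."""
    pool = list(samples)
    selected, selected_embs = [], []
    while pool and len(selected) < budget:
        def value(s):
            # Diversity bonus shrinks as s resembles already-chosen samples.
            d = 1.0 if not selected_embs else 1.0 - max(
                cos_sim(s["emb"], e) for e in selected_embs)
            return lam * s["L"] + (1 - lam) * s["R"] + d
        best_i = max(range(len(pool)), key=lambda i: value(pool[i]))
        best = pool.pop(best_i)
        selected.append(best)
        selected_embs.append(best["emb"])
    return selected
```

With two near-duplicate high-value samples in the pool, the loop picks one of them first, after which the duplicate's diversity bonus drops to zero and a dissimilar lower-scored sample can overtake it.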

Beyond static selection, MiniRec incorporates a curriculum learning schedule. The chosen subset is partitioned into K batches ordered from easy to hard (according to reward difficulty). Training proceeds sequentially: early epochs expose the model only to easy samples, stabilizing policy updates; later epochs gradually introduce harder samples, allowing the model to exploit richer reward signals. This curriculum mitigates early instability and improves final generalization.
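The curriculum schedule amounts to sorting the selected subset by difficulty and slicing it into K batches. A minimal sketch, assuming average reward is the difficulty proxy (higher reward = easier) and that batches are consumed in order during training:

```python
def curriculum_batches(selected, k):
    """Partition the selected subset into k batches ordered easy -> hard,
    using each sample's average reward as the difficulty proxy
    (the 'avg_reward' field name is an assumption for this sketch)."""
    # Easy samples (high reward) come first.
    ordered = sorted(selected, key=lambda s: -s["avg_reward"])
    size = -(-len(ordered) // k)  # ceiling division for even-ish batches
    return [ordered[i:i + size] for i in range(0, len(ordered), size)]
```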

Experimental Evaluation
MiniRec was evaluated on several real‑world recommendation datasets (e.g., MovieLens, Amazon reviews) using the Gemma‑2‑2b‑it LLM as the base policy. Baselines included random sampling, K‑means clustering in embedding space, and prior loss/gradient‑based pruning methods. Evaluation metrics were NDCG and Hit Ratio at cutoffs 5, 10, and 20.

Key findings:

  • Using only 30‑50 % of the original training data, MiniRec achieved performance within 0.5‑2 % of the full‑data baseline across all metrics.
  • K‑means coverage performed worse than random sampling, confirming that conventional representativeness does not align with RL objectives.
  • Reward‑based learnability alone outperformed loss‑based pruning by 5‑7 % in terms of data efficiency.
  • Adding representativeness (gradient alignment) and diversity further reduced training time by roughly 40 % while preserving accuracy.
  • Curriculum learning contributed an additional 1‑2 % boost in NDCG, especially on the hardest test sets.

Analysis
The study demonstrates that reward signals are a more faithful proxy for sample usefulness in RL than loss or gradient magnitude, because GRPO’s loss is derived from relative rewards and can be noisy. Aligning sample gradients with the global optimization direction captures the “reasoning” component that RL seeks to teach, offering a principled way to select representative samples without explicitly modeling intermediate reasoning steps. Diversity control prevents redundancy, ensuring that the selected subset remains informative.

Conclusion and Future Work
MiniRec provides a practical, RL‑aware data‑selection pipeline that dramatically cuts computational cost for LLM‑based recommendation without sacrificing recommendation quality. Future directions include (1) adaptive tuning of the proxy reward estimator, (2) multi‑objective selection that simultaneously accounts for fairness or novelty, and (3) online extensions where the selection mechanism updates continuously as new user interactions arrive.

