Post-Training LLMs as Better Decision-Making Agents: A Regret-Minimization Approach

Reading time: 2 minutes
...

📝 Original Info

  • Title: Post-Training LLMs as Better Decision-Making Agents: A Regret-Minimization Approach
  • ArXiv ID: 2511.04393
  • Date: 2025-11-06
  • Authors: Chanwoo Park, Ziyang Chen, Asuman Ozdaglar, Kaiqing Zhang

📝 Abstract

Large language models (LLMs) are increasingly deployed as "agents" for decision-making (DM) in interactive and dynamic environments. Yet, since they were not originally designed for DM, recent studies show that LLMs can struggle even in basic online DM problems, failing to achieve low regret or an effective exploration-exploitation tradeoff. To address this, we introduce Iterative Regret-Minimization Fine-Tuning (Iterative RMFT), a post-training procedure that repeatedly distills low-regret decision trajectories back into the base model. At each iteration, the model rolls out multiple decision trajectories, selects the k lowest-regret ones, and fine-tunes itself on them. Unlike prior methods that (a) distill action sequences from known DM algorithms or (b) rely on manually crafted chain-of-thought templates, our approach leverages the regret metric to elicit the model's own DM ability and reasoning rationales. This reliance on model-generated reasoning avoids rigid output engineering and provides more flexible, natural-language training signals. Empirical results show that Iterative RMFT improves LLMs' DM performance across diverse models, from Transformers with numerical input/output to open-weight LLMs and advanced closed-weight models like GPT-4o mini. Its flexibility in output and reasoning formats enables generalization across tasks with varying horizons, action spaces, reward processes, and natural-language contexts. Finally, we provide theoretical insight showing that a single-layer Transformer trained under this paradigm can act as a no-regret learner in a simplified setting. Overall, Iterative RMFT offers a principled and general post-training framework for enhancing LLMs' decision-making capabilities.
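
For readers unfamiliar with the metric, "regret" here is the standard online-learning quantity: the gap between the reward the agent actually collected and the reward the best fixed action would have collected in hindsight. The sketch below states the usual definition; the paper's exact formulation may differ in details (e.g., taking expectations over reward noise), so treat this as background rather than a quote from the paper.

```latex
% Standard cumulative-regret definition (assumed background, not quoted from the paper):
% a_t is the action chosen at round t, r_t(a) the reward of action a at round t,
% and \mathcal{A} the action set.
\mathrm{Regret}(T) \;=\; \max_{a \in \mathcal{A}} \sum_{t=1}^{T} r_t(a) \;-\; \sum_{t=1}^{T} r_t(a_t)
```

A "no-regret" learner, as referenced in the theoretical result above, is one whose regret grows sublinearly in T, i.e., Regret(T)/T → 0.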

💡 Deep Analysis

[Figure 1]
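
The abstract's description of the training loop maps onto a simple select-and-distill procedure. Below is a minimal sketch of one such round on a toy K-armed Bernoulli bandit, written purely to illustrate the "roll out, keep the k lowest-regret trajectories, fine-tune on them" structure. All function names and parameters (`rollout`, `epsilon_greedy_policy`, `fine_tune`, `n_rollouts`, `k`) are illustrative assumptions, not the paper's implementation, and the epsilon-greedy policy is a stand-in for the LLM agent, whose real rollouts include natural-language reasoning.

```python
import random

def rollout(policy, arm_means, horizon):
    """Roll out one decision trajectory; return (trajectory, cumulative regret)."""
    history, total_reward = [], 0.0
    for _ in range(horizon):
        arm = policy(history)
        reward = 1.0 if random.random() < arm_means[arm] else 0.0
        history.append((arm, reward))
        total_reward += reward
    # Regret measured against always pulling the best arm in expectation.
    regret = max(arm_means) * horizon - total_reward
    return history, regret

def epsilon_greedy_policy(history, n_arms=3, eps=0.2):
    """Stand-in for the LLM agent; the real agent decides via natural-language reasoning."""
    if not history or random.random() < eps:
        return random.randrange(n_arms)
    totals = [[0.0, 0] for _ in range(n_arms)]
    for arm, reward in history:
        totals[arm][0] += reward
        totals[arm][1] += 1
    return max(range(n_arms),
               key=lambda a: totals[a][0] / totals[a][1] if totals[a][1] else 0.0)

def fine_tune(policy, trajectories):
    """Placeholder for the distillation step: supervised fine-tuning of the model
    on the selected low-regret trajectories (and their reasoning traces)."""
    pass

def iterative_rmft_round(policy, arm_means, horizon=100, n_rollouts=16, k=4):
    """One round: sample trajectories, keep the k with lowest regret, fine-tune on them."""
    results = [rollout(policy, arm_means, horizon) for _ in range(n_rollouts)]
    results.sort(key=lambda pair: pair[1])        # ascending regret
    selected = [traj for traj, _ in results[:k]]  # low-regret demonstrations
    fine_tune(policy, selected)
    return [reg for _, reg in results[:k]]

if __name__ == "__main__":
    kept = iterative_rmft_round(epsilon_greedy_policy, arm_means=[0.2, 0.5, 0.8])
    print("Lowest-regret rollouts kept this round:", [round(r, 1) for r in kept])
```

In the actual method, the fine-tuning step distills the selected trajectories, together with the model's own reasoning text, back into the base LLM, and the whole round is repeated iteratively.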


Reference

This content is AI-processed based on open-access ArXiv data.
