📝 Original Info
- Title: Generalization of RLVR Using Causal Reasoning as a Testbed
- ArXiv ID: 2512.20760
- Date: 2025-12-23
- Authors: Brian Lu¹, Hongyu Zhao², Shuo Sun³, Hao Peng⁴, Rui Ding⁵, Hongyuan Mei⁶ (¹Johns Hopkins University; ²University of Maryland, College Park; ³National University of Singapore; ⁴University of Illinois at Urbana-Champaign; ⁵Microsoft Research Asia; ⁶Toyota Technological Institute at Chicago)
📝 Abstract
Reinforcement learning with verifiable rewards (RLVR) has emerged as a promising paradigm for post-training large language models (LLMs) on complex reasoning tasks. Yet, the conditions under which RLVR yields robust generalization remain poorly understood. This paper provides an empirical study of RLVR generalization in the setting of probabilistic inference over causal graphical models. This setting offers two natural axes along which to examine generalization: (i) the level of the probabilistic query -- associational, interventional, or counterfactual -- and (ii) the structural complexity of the query, measured by the size of its relevant subgraph. We construct datasets of causal graphs and queries spanning these difficulty axes and fine-tune Qwen-2.5-Instruct models using RLVR or supervised fine-tuning (SFT). We vary both the model scale (3B-32B) and the query level included in training. We find that RLVR yields stronger within-level and across-level generalization than SFT, but only for specific combinations of model size and training query level. Further analysis shows that RLVR's effectiveness depends on the model's initial reasoning competence. With sufficient initial competence, RLVR improves an LLM's marginalization strategy and reduces errors in intermediate probability calculations, producing substantial accuracy gains, particularly on more complex queries. These findings show that RLVR can improve specific causal reasoning subskills, with its benefits emerging only when the model has sufficient initial competence.
💡 Deep Analysis
📄 Full Content
Generalization of RLVR Using Causal Reasoning as a Testbed
Brian Lu¹∗, Hongyu Zhao², Shuo Sun³, Hao Peng⁴, Rui Ding⁵, Hongyuan Mei⁶
¹Johns Hopkins University, ²University of Maryland, College Park, ³National University of Singapore, ⁴University of Illinois at Urbana-Champaign, ⁵Microsoft Research Asia, ⁶Toyota Technological Institute at Chicago
∗Correspondence to: zlu39@jhu.edu.
ABSTRACT
Reinforcement learning with verifiable rewards (RLVR) has emerged as a promising paradigm for post-training large language models (LLMs) on complex reasoning tasks. Yet, the conditions under which RLVR yields robust generalization remain poorly understood. This paper provides an empirical study of RLVR generalization in the setting of probabilistic inference over causal graphical models. This setting offers two natural axes along which to examine generalization: (i) the level of the probabilistic query—associational, interventional, or counterfactual—and (ii) the structural complexity of the query, measured by the size of its relevant subgraph. We construct datasets of causal graphs and queries spanning these difficulty axes and fine-tune Qwen-2.5-Instruct models using RLVR or supervised fine-tuning (SFT). We vary both the model scale (3B-32B) and the query level included in training. We find that RLVR yields stronger within-level and across-level generalization than SFT, but only for specific combinations of model size and training query level. Further analysis shows that RLVR's effectiveness depends on the model's initial reasoning competence. With sufficient initial competence, RLVR improves an LLM's marginalization strategy and reduces errors in intermediate probability calculations, producing substantial accuracy gains, particularly on more complex queries. These findings show that RLVR can improve specific causal reasoning subskills, with its benefits emerging only when the model has sufficient initial competence.
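To make the second difficulty axis concrete, here is a minimal sketch of one plausible notion of a query's relevant subgraph: the query variables together with all of their ancestors in the DAG. Whether this matches the paper's exact definition is an assumption; the `parents` encoding and the helper name `ancestral_subgraph` are ours, for illustration only.

```python
def ancestral_subgraph(parents: dict[str, list[str]], query_vars: set[str]) -> set[str]:
    """Return the query variables plus all of their ancestors.

    `parents` maps each node to the list of its parents in the DAG.
    """
    relevant = set(query_vars)
    stack = list(query_vars)
    while stack:
        node = stack.pop()
        for parent in parents.get(node, []):
            if parent not in relevant:
                relevant.add(parent)
                stack.append(parent)
    return relevant


# Example DAG: v1 -> v2, v1 -> v3, v2 -> v3. A query about v3 touches all three
# variables (relevant subgraph of size 3); a query about v2 touches only two.
parents = {"v1": [], "v2": ["v1"], "v3": ["v1", "v2"]}
print(len(ancestral_subgraph(parents, {"v3"})))  # 3
print(len(ancestral_subgraph(parents, {"v2"})))  # 2
```

Under this notion, queries whose answers depend on larger ancestral sets require longer chains of marginalization, which is what makes them structurally harder.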
1 INTRODUCTION
Reinforcement learning with verifiable rewards (RLVR) (Lambert et al., 2025; DeepSeek-AI et al., 2025) is a promising paradigm for post-training large language models (LLMs) on complex reasoning tasks. RLVR leverages automatic correctness signals from domains equipped with reliable verifiers, and has enabled substantial progress in mathematical problem solving (Shao et al., 2024; Lambert et al., 2025; DeepSeek-AI et al., 2025), formal theorem proving (Xin et al., 2024; Ren et al., 2025; Wang et al., 2025), code generation (Le et al., 2022; Liu & Zhang, 2025), and biomedical and chemistry applications (Biomni & Sky RL, 2025; Narayanan et al., 2025). Despite rapid progress across diverse domains, the conditions under which LLMs trained with RLVR exhibit reliable generalization beyond their training data remain poorly understood.
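To make "automatic correctness signals from a reliable verifier" concrete for the probabilistic-inference task studied here, the sketch below checks a model completion's final ANSWER line against the solver's reference distribution within a numerical tolerance. The function name, the ANSWER format, the tolerance, and the binary 1.0/0.0 scores are assumptions for illustration; the paper's actual reward appears to include partial credit (the overview figure below shows a 0.2 score), which this sketch omits.

```python
import re

def answer_reward(completion: str, reference: list[float], atol: float = 1e-2) -> float:
    """Return 1.0 if the final 'ANSWER: [...]' matches the reference distribution, else 0.0."""
    match = re.search(r"ANSWER:\s*\[([^\]]*)\]", completion)
    if match is None:
        return 0.0  # no parseable answer line
    try:
        predicted = [float(x) for x in match.group(1).split(",")]
    except ValueError:
        return 0.0  # malformed numbers inside the brackets
    if len(predicted) != len(reference):
        return 0.0  # wrong number of entries
    return 1.0 if all(abs(p - r) <= atol for p, r in zip(predicted, reference)) else 0.0

# Example usage with the reference answer shown in the overview figure below.
completion = "... p(v3=1|do(v1)=1) = 0.78\nANSWER: [0.78, 0.22]"
print(answer_reward(completion, [0.78, 0.22]))  # 1.0
```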
Recent work has begun to examine the generalization behavior of reinforcement-learning fine-tuning (RL) relative to supervised fine-tuning (SFT) or hybrid approaches (Chu et al., 2025; Chen et al., 2025; Swamy et al., 2025; Qiu et al., 2025). Particularly relevant is Chu et al. (2025), which evaluates the generalization of RLVR and SFT on novel variants of text and visual reasoning tasks. Our work differs from this prior work by focusing on a challenging and essential task: causal inference.

Causal inference provides a structured setting for examining RLVR generalization, because its three levels of inference—associational, interventional, and counterfactual, known collectively as the causal ladder (Bareinboim et al., 2022; Pearl & Mackenzie, 2018)—form a hierarchy that supports both within- and across-level generalization tests. CLadder (Jin et al., 2023) covers this hierarchy
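As a concrete illustration of the three rungs for a treatment X and an outcome Y (the notation below is ours and only indicative; the paper's example queries ask for full marginal distributions rather than single probabilities):

```latex
% Illustrative query forms for the three levels of the causal ladder.
\begin{align*}
\text{Associational:}  &\quad p(Y = y \mid X = x) \\
\text{Interventional:} &\quad p(Y = y \mid \mathrm{do}(X = x)) \\
\text{Counterfactual:} &\quad p(Y_{X = x} = y \mid X = x',\, Y = y')
\end{align*}
```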
[Overview figure: task formulation, RL training, and data generation. The input is a causal graph specification (variables v1, v2, v3, ...; values [0, 1]; edges such as v1 -> v2, v2 -> v3; and a CPT parametrization, e.g., p(v2|v1=0) = [0.3, 0.7], p(v2|v1=1) = [0.8, 0.2]) together with a marginal-inference question such as "What's the marginal distribution of v3 if we intervene to set v1 to 1?". The expected output is a thought process with derivations and calculation steps (modify the graph, write the formula, substitute values, compute the answer) followed by a final answer, e.g., "After all calculations, p(v3=1|do(v1)=1) = 0.78; ANSWER: [0.78, 0.22]". Sampled answers are scored against the reference answer to compute rewards: the depicted samples receive 1.0 for a correct derivation, 0.2 for a flawed one (no graph modification, a digression, a wrongly copied CPT), and 0.0 for an incoherent output with no calculations. The data generation pipeline comprises a distribution over DAGs, a mechanism sampler, a graph sampler, a query sampler, and a solver module that produces reference answers.]
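The worked example in the figure can be reproduced with a few lines of truncated-factorization arithmetic. In the sketch below, p(v2 | v1) is taken from the figure, while the CPT for v3 is hypothetical (the figure elides it); its values are chosen only so the computation lands on the figure's reference answer p(v3=1 | do(v1=1)) = 0.78. The edge set v1 -> v2, v1 -> v3, v2 -> v3 is inferred from the parametrization p(v3 | v1, v2).

```python
# Interventional marginal via truncated factorization: under do(v1=1), drop the
# mechanism of v1, clamp v1=1, and sum out v2:
#   p(v3=1 | do(v1=1)) = sum_{v2} p(v2 | v1=1) * p(v3=1 | v1=1, v2)

# p(v2 | v1), indexed as [p(v2=0 | v1), p(v2=1 | v1)]  (values from the figure)
p_v2_given_v1 = {0: [0.3, 0.7], 1: [0.8, 0.2]}

# p(v3=1 | v1, v2), keyed by (v1, v2)  (hypothetical values; the figure omits them)
p_v3_is_1 = {(1, 0): 0.9, (1, 1): 0.3}

def p_v3_do_v1(v1: int) -> float:
    """Return p(v3=1 | do(v1=v1)) by marginalizing over v2."""
    return sum(p_v2 * p_v3_is_1[(v1, v2)]
               for v2, p_v2 in enumerate(p_v2_given_v1[v1]))

print(round(p_v3_do_v1(1), 4))  # 0.8*0.9 + 0.2*0.3 = 0.78, matching the reference answer
```

This is exactly the kind of marginalization-plus-arithmetic subskill that, per the abstract, RLVR improves once the base model has sufficient initial competence.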
This content is AI-processed based on open access ArXiv data.