Generating Verifiable Chains of Thought from Execution Traces
Teaching language models to reason about code execution remains a fundamental challenge. While Chain-of-Thought (CoT) prompting has shown promise, current synthetic training data suffers from a critical weakness: the reasoning steps are often plausible-sounding explanations generated by teacher models, not verifiable accounts of what the code actually does. This creates a troubling failure mode in which models learn to mimic superficially convincing but logically flawed reasoning patterns. We address this by grounding CoT generation directly in program execution traces. Our pipeline instruments code to capture its dynamic behavior, then narrates the resulting execution traces into natural-language, factually grounded rationales that are verifiable by design. This execution-grounded approach ensures that every reasoning step reflects what the program computes, eliminating logical hallucinations at the source. We evaluate our method on code reasoning tasks as well as code generation and explanation tasks from HumanEval. Models trained on our bi-directional trace-grounded data achieve substantial improvements on reasoning tasks, with gains of up to 30 points on output prediction and 28 points on input prediction over base models, alongside competitive code generation and explanation performance.

https://github.ibm.com/IBM-Research-AI/Verified-Code-CoT
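To make the instrument-then-narrate idea concrete, here is a minimal Python sketch of one way such a pipeline stage could work; it is not the paper's actual implementation, and the names (`trace_execution`, `narrate`, `running_sum`) are illustrative. It uses `sys.settrace` to record the local-variable state at every executed line, then renders that raw trace as a line-by-line natural-language account whose every statement is backed by an observed program state.

```python
import sys
from types import FrameType

def trace_execution(func, *args):
    """Run func(*args) and record (line_number, locals snapshot) at each step."""
    events = []

    def tracer(frame: FrameType, event: str, arg):
        # Record only 'line' events inside the target function's code object.
        if frame.f_code is func.__code__ and event == "line":
            events.append((frame.f_lineno, dict(frame.f_locals)))
        return tracer  # keep tracing nested frames

    sys.settrace(tracer)
    try:
        result = func(*args)
    finally:
        sys.settrace(None)  # always detach the tracer
    return result, events

def narrate(events):
    """Turn the recorded trace into a grounded, line-by-line narration."""
    lines = []
    for lineno, state in events:
        vars_desc = ", ".join(f"{k} = {v!r}" for k, v in state.items())
        lines.append(f"At line {lineno}, the local state is: {vars_desc or 'empty'}.")
    return "\n".join(lines)

# Example: trace a toy function and print the narration of its execution.
def running_sum(xs):
    total = 0
    for x in xs:
        total += x
    return total

result, events = trace_execution(running_sum, [1, 2, 3])
print(narrate(events))
print(f"The function returns {result!r}.")
```

Because each narrated step is derived mechanically from a captured frame state rather than generated by a teacher model, the resulting rationale can be re-verified at any time by simply re-running the instrumented program.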