EngTrace: A Symbolic Benchmark for Verifiable Process Supervision of Engineering Reasoning
Reading time: 2 minutes
📝 Original Info
- Title: EngTrace: A Symbolic Benchmark for Verifiable Process Supervision of Engineering Reasoning
- ArXiv ID: 2511.01650
- Date: 2025-11-03
- Authors: Not specified in the provided paper metadata.
📝 Abstract
Large Language Models (LLMs) are increasingly entering specialized, safety-critical engineering workflows governed by strict quantitative standards and immutable physical laws, making rigorous evaluation of their reasoning capabilities imperative. However, existing benchmarks such as MMLU, MATH, and HumanEval assess isolated cognitive skills, failing to capture the physically grounded reasoning central to engineering, where scientific principles, quantitative modeling, and practical constraints must converge. To enable verifiable process supervision in engineering, we introduce EngTrace, a symbolic benchmark comprising 90 templates across three major engineering branches, nine core domains and 20 distinct areas. Through domain-aware parameterization, we generate 1,350 unique, contamination-resistant test cases to stress-test generalization. Moving beyond outcome matching, we introduce a verifiable two-stage evaluation framework that uses a tiered protocol to validate intermediate reasoning traces alongside final answers through automated procedural checks and a heterogeneous AI Tribunal. Our evaluation of 24 leading LLMs reveals a distinct trade-off between numeric precision and trace fidelity, identifying a complexity cliff where abstract mathematical pre-training fails to translate into the integrative reasoning required for advanced engineering tasks.
💡 Deep Analysis
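The paper itself does not publish its template code, but the idea of domain-aware parameterization can be sketched as follows: a symbolic template with physically plausible parameter ranges is instantiated repeatedly (90 templates × 15 instances would yield the 1,350 cases reported), and each instance carries a programmatically verifiable ground truth. The template below (Ohm's law) and all names in it are hypothetical illustrations, not taken from EngTrace.

```python
import random

def make_ohms_law_case(rng: random.Random) -> dict:
    """Hypothetical template: instantiate a circuit problem, I = V / R.

    Parameters are sampled within physically plausible ranges, so each
    instance is unique (contamination-resistant) yet has an exact,
    automatically checkable answer.
    """
    voltage = round(rng.uniform(5.0, 48.0), 1)       # source voltage, volts
    resistance = round(rng.uniform(10.0, 470.0), 1)  # resistance, ohms
    return {
        "prompt": (
            f"A {resistance} ohm resistor is connected across a "
            f"{voltage} V source. Compute the current in amperes."
        ),
        "ground_truth": voltage / resistance,  # exact symbolic answer
    }

def generate_cases(template, n: int, seed: int = 0) -> list:
    """Generate n parameterized instances of one symbolic template."""
    rng = random.Random(seed)  # seeded for reproducible benchmark builds
    return [template(rng) for _ in range(n)]

# 15 instances per template x 90 templates would give 1,350 cases.
cases = generate_cases(make_ohms_law_case, 15)
```

Verifying a model's final answer against `ground_truth` covers only the outcome-matching stage; the paper's second stage additionally scores the intermediate reasoning trace via procedural checks and an AI Tribunal, which this sketch does not attempt to reproduce.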
Reference
This content is AI-processed based on open access ArXiv data.