Skill-Based Autonomous Agents for Material Creep Database Construction
The advancement of data-driven materials science is currently constrained by a fundamental bottleneck: the vast majority of historical experimental data remains locked within the unstructured text and rasterized figures of legacy scientific literature. Manual curation of this knowledge is prohibitively labor-intensive and prone to human error. To address this challenge, we introduce an autonomous, agent-based framework powered by Large Language Models (LLMs) designed to excavate high-fidelity datasets from scientific PDFs without human intervention. By deploying a modular “skill-based” architecture, the agent orchestrates complex cognitive tasks, including semantic filtering, multi-modal information extraction, and physics-informed validation. We demonstrate the efficacy of this framework by constructing a physically self-consistent database for material creep mechanics, a domain characterized by complex graphical trajectories and heterogeneous constitutive models. Applying the pipeline to 243 publications, the agent achieved a verified extraction success rate exceeding 90% for graphical data digitization. Crucially, we introduce a cross-modal verification protocol, demonstrating that the agent can autonomously align visually extracted data points with textually extracted constitutive parameters ($R^2 > 0.99$), ensuring the physical self-consistency of the database. This work not only provides a critical resource for investigating time-dependent deformation across diverse material systems but also establishes a scalable paradigm for autonomous knowledge acquisition, paving the way for the next generation of self-driving laboratories.
💡 Research Summary
The paper addresses the critical data bottleneck in data‑driven materials science, where most historical experimental results are locked in unstructured text and rasterized figures. To overcome this, the authors develop an autonomous, skill‑based agent framework powered by a large language model (LLM) that can mine scientific PDFs without human intervention. The system follows a five‑stage pipeline: (1) literature collection, (2) automated screening, (3) multimodal information extraction, (4) physics‑informed formula validation, and (5) structured storage with a web interface.
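The five stages compose naturally as a sequential orchestration in which each stage consumes the output of the previous one. The sketch below illustrates that control flow only; all function names and bodies are hypothetical placeholders, not the authors' implementation, and the LLM calls are stubbed out.

```python
# Illustrative sketch of the five-stage pipeline; every stage is a stub.

def collect(query):
    """Stage 1: gather candidate PDFs via keyword queries (stubbed)."""
    return [f"paper_{i}.pdf" for i in range(3)]

def screen(papers):
    """Stage 2: LLM screening -- keep only papers with both experimental
    creep data and explicit constitutive equations (stubbed)."""
    return [p for p in papers if p != "paper_1.pdf"]  # pretend one paper fails screening

def extract(paper):
    """Stage 3: multimodal extraction of textual parameters and digitized curves (stubbed)."""
    return {"source": paper, "params": {"A": 2.0e-4, "n": 0.45}, "points": [(1.0, 2.0e-4)]}

def validate(record):
    """Stage 4: physics-informed checks -- completeness, relevance, integrity (stubbed)."""
    return bool(record.get("params")) and bool(record.get("points"))

def store(db, record):
    """Stage 5: structured storage keyed by source document (stubbed)."""
    db[record["source"]] = record

db = {}
for paper in screen(collect("creep constitutive equation")):
    record = extract(paper)
    if validate(record):
        store(db, record)
```

Because each stage has a narrow, well-typed interface, a failing record can be rejected at any stage without disturbing the rest of the batch, which matches the paper's modular "skill" design.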
In the collection stage, a broad set of PDFs is gathered using keyword queries. The screening stage employs the LLM to read full texts and retain only papers that contain both experimental creep data and explicit constitutive equations, discarding purely theoretical or unrelated works. The core of the framework is the multimodal extraction engine. Textual extraction parses material composition, experimental conditions (temperature, stress), and the exact mathematical form of the creep law, normalizing variable names and units. Visual extraction identifies plots, determines axis scales, and digitizes data points from relevant creep curves while filtering out supplementary or ambiguous series.
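The digitization step described above hinges on axis calibration: once the agent knows which pixel positions correspond to the axis endpoints and their data values, every detected point can be mapped from pixel space to physical units. The helper below is a minimal sketch of that mapping (the function name and calibration scheme are assumptions, not the authors' code); it handles both linear and logarithmic axes, the latter by interpolating in log space.

```python
import math

def pixel_to_data(px, px_min, px_max, val_min, val_max, log_scale=False):
    """Map a pixel coordinate to a data value using two axis calibration
    points (the pixel positions and values of the axis endpoints)."""
    frac = (px - px_min) / (px_max - px_min)
    if log_scale:
        # Interpolate in log10 space, then exponentiate back to data units.
        lo, hi = math.log10(val_min), math.log10(val_max)
        return 10 ** (lo + frac * (hi - lo))
    return val_min + frac * (val_max - val_min)

# Hypothetical calibration: x-axis spans pixels 100..500 for 0..1000 hours (linear).
t = pixel_to_data(300, 100, 500, 0.0, 1000.0)
# Hypothetical y-axis spans pixels 400..50 (image y grows downward)
# for strains 1e-5..1e-1 on a log scale.
eps = pixel_to_data(225, 400, 50, 1e-5, 1e-1, log_scale=True)
```

Note that the pixel endpoints may run in either direction (image y-coordinates typically grow downward), which the linear fraction handles automatically.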
A novel physics‑guardrail skill validates each extracted entry on three fronts: completeness of the equation, physical relevance to time‑dependent deformation, and mathematical integrity (dimensional homogeneity). Cross‑modal consistency is enforced by requiring a high coefficient of determination (R² > 0.99) between parameters derived from the text and the digitized curves. Only entries passing all checks are serialized into a relational database that stores bibliographic metadata (including DOI) and the extracted creep data. An interactive web portal allows users to filter entries by material, temperature, and stress; visualize strain‑time curves in real time; and export data in standard formats for downstream machine‑learning tasks.
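The cross-modal consistency check amounts to evaluating the text-derived constitutive law at the digitized time points and computing R² against the digitized strains. The sketch below assumes, purely for illustration, a Norton-type power law ε(t) = A·tⁿ with A and n taken from the text; the paper covers heterogeneous constitutive models, so this specific law and the synthetic 1%-noise data are hypothetical.

```python
def r_squared(y_true, y_pred):
    """Coefficient of determination between digitized points and model predictions."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((yt - yp) ** 2 for yt, yp in zip(y_true, y_pred))
    ss_tot = sum((yt - mean) ** 2 for yt in y_true)
    return 1.0 - ss_res / ss_tot

# Illustrative creep law: strain(t) = A * t**n, with A and n parsed from the text.
A, n = 2.0e-4, 0.45
times = [1, 10, 50, 100, 500, 1000]                      # hours (synthetic)
noise = [0.01, -0.005, 0.008, -0.01, 0.004, -0.002]      # ~1% digitization error
digitized = [A * t ** n * (1 + e) for t, e in zip(times, noise)]
predicted = [A * t ** n for t in times]

consistent = r_squared(digitized, predicted) > 0.99      # guardrail threshold
```

An entry whose text-derived parameters cannot reproduce its own digitized curve to this tolerance is rejected, which is what makes the resulting database physically self-consistent rather than merely transcribed.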
The framework was applied to 243 creep‑mechanics publications. Graph digitization succeeded for more than 90% of the identified curves, and the cross‑modal verification consistently achieved R² > 0.99, demonstrating physical self‑consistency. A human‑verified subset of 20 papers was used to benchmark the screening module, yielding Precision ≈ 0.95, Recall ≈ 0.92, F1 ≈ 0.94, and Accuracy ≈ 0.94, substantially outperforming traditional rule‑based or manual curation approaches.
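For reference, the four screening metrics follow directly from a binary confusion matrix over the human-verified subset. The confusion counts below are hypothetical, chosen only to demonstrate the arithmetic on a 20-paper benchmark; they are not the authors' actual counts.

```python
def screening_metrics(tp, fp, fn, tn):
    """Precision, recall, F1, and accuracy from a binary confusion matrix."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return precision, recall, f1, accuracy

# Hypothetical counts for a 20-paper benchmark (illustrative only):
# 12 relevant papers kept, 1 irrelevant kept, 1 relevant missed, 6 irrelevant discarded.
p, r, f1, acc = screening_metrics(tp=12, fp=1, fn=1, tn=6)
```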
Beyond the specific case study, the authors argue that the modular skill‑based architecture—comprising instruction injection, scoped tool sets, and hard output constraints—provides a scalable paradigm for autonomous knowledge acquisition across scientific domains that require complex multimodal reasoning. By integrating LLM reasoning with domain‑specific validation, the system mitigates hallucination risks and delivers high‑fidelity datasets, paving the way for self‑driving laboratories and accelerated AI‑for‑Science research.