Automated Extraction of Multicomponent Alloy Data Using Large Language Models for Sustainable Design
The design of sustainable materials requires access to materials performance and sustainability data from the literature in an organized, structured, and automated manner. Natural language processing approaches, particularly large language models (LLMs), have been explored for materials data extraction from the literature, yet often suffer from limited accuracy or narrow scope. In this work, an LLM-based pipeline is developed to accurately extract alloy-related information from both textual descriptions and tabular data across the literature on high-entropy (or multicomponent) alloys (HEAs). Specifically, two databases are constructed: one from the literature text (37,711 entries), consisting of alloy composition, processing conditions, characterization methods, and reported properties; and one from the literature tables (148,069 entries), consisting of property names, values, and units. The pipeline enhances materials-domain sensitivity through prompt engineering and retrieval-augmented generation and achieves F1-scores of 0.83 for textual extraction and 0.88 for tabular extraction, surpassing or matching existing approaches. Application of the pipeline to over 10,000 articles yields the largest publicly available multicomponent alloy database and reveals compositional and processing-property trends. The database is further employed for sustainability-aware materials selection in three application domains, namely lightweighting, soft magnetics, and corrosion resistance, identifying multicomponent alloy candidates with more sustainable production while maintaining or exceeding benchmark performance. The developed pipeline can be easily generalized to other classes of materials and can assist in the development of comprehensive, accurate, and usable databases for sustainable materials design.
💡 Research Summary
The paper presents a comprehensive, large‑scale pipeline that uses large language models (LLMs) to automatically extract alloy‑related information from both narrative text and tables in the scientific literature on high‑entropy (multicomponent) alloys (HEAs). Recognizing that existing materials databases are fragmented across unstructured formats, the authors design a two‑stage extraction workflow that couples sophisticated prompt engineering with retrieval‑augmented generation (RAG) to achieve high accuracy while remaining fully automated.
In the first stage (Query Set 1, QS1), only abstract and experimental‑section paragraphs are processed. Each LLM call receives a composite prompt consisting of: (1) system instructions that cast the model as a “materials‑data extraction expert,” (2) domain definitions that clarify alloy composition notation (e.g., Al₀.₃Cu₀.₇, AlₓCu₁₋ₓ), processing methods, and typical measurement conditions, (3) a curated set of 98 few‑shot examples annotated by experts, and (4) chain‑of‑thought formatting that forces the model to reason step‑by‑step. RAG is employed to dynamically select the most relevant few‑shot examples for each paragraph, improving both precision and recall. QS1 extracts alloy system names, compositions, processing routes, characterization techniques, and reported property names, but deliberately omits numerical values.
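The RAG step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: a bag-of-words cosine similarity stands in for a dense embedding retriever, and the annotated examples are invented (the actual pipeline draws from the 98 expert-curated few-shot examples).

```python
import math
from collections import Counter

def _vec(text):
    """Bag-of-words vector (a stand-in for a dense sentence embedding)."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def select_few_shots(paragraph, examples, k=3):
    """Return the k annotated examples most similar to the input paragraph."""
    pv = _vec(paragraph)
    ranked = sorted(examples, key=lambda ex: cosine(pv, _vec(ex["text"])), reverse=True)
    return ranked[:k]

# Invented annotated examples for illustration only.
examples = [
    {"text": "AlCoCrFeNi was produced by arc melting",
     "labels": {"processing": "arc melting"}},
    {"text": "Hardness was measured by Vickers indentation",
     "labels": {"characterization": "Vickers hardness"}},
    {"text": "The thin film was deposited by magnetron sputtering",
     "labels": {"processing": "magnetron sputtering"}},
]
shots = select_few_shots("The CoCrFeMnNi alloy was prepared by arc melting",
                         examples, k=2)
```

The selected `shots` are then inlined into the composite prompt alongside the system instructions and domain definitions, so each paragraph sees the most relevant annotated precedents.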
The second stage (Query Set 2, QS2) targets tables, which contain the bulk of quantitative data. All tables, captions, and footnotes are first converted to CSV, then fed to the LLM with a prompt that incorporates the property list generated from QS1 (≈ 350 distinct properties). This list guides the model to focus on relevant columns, while an “Others” bucket captures any out‑of‑vocabulary entries. QS2 extracts alloy composition, processing and testing conditions, property name, numeric value, and units. Post‑processing normalizes units, resolves composition aliases, and removes duplicates, yielding a clean, machine‑readable dataset.
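The post-processing stage can be sketched as below. The unit-conversion factors and composition aliases here are illustrative assumptions, not the paper's actual mappings:

```python
# Illustrative lookup tables; the real pipeline's mappings are not published here.
UNIT_FACTORS = {("GPa", "MPa"): 1000.0, ("MPa", "MPa"): 1.0}
ALIASES = {"Cantor alloy": "CoCrFeMnNi", "equiatomic CoCrFeMnNi": "CoCrFeMnNi"}

def normalize(record, target_unit):
    """Convert a record's value to the target unit when a factor is known."""
    factor = UNIT_FACTORS.get((record["unit"], target_unit))
    if factor is None:
        return record  # leave unknown units untouched
    return {**record, "value": record["value"] * factor, "unit": target_unit}

def clean(records, target_unit="MPa"):
    """Normalize units, resolve composition aliases, and drop duplicates."""
    seen, out = set(), []
    for r in records:
        r = normalize(dict(r), target_unit)
        r["alloy"] = ALIASES.get(r["alloy"], r["alloy"])
        key = (r["alloy"], r["property"], round(r["value"], 6), r["unit"])
        if key not in seen:
            seen.add(key)
            out.append(r)
    return out

rows = [
    {"alloy": "Cantor alloy", "property": "yield strength", "value": 1.2, "unit": "GPa"},
    {"alloy": "CoCrFeMnNi", "property": "yield strength", "value": 1200.0, "unit": "MPa"},
]
cleaned = clean(rows)  # the two rows collapse into one normalized record
```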
Performance is benchmarked against two expert‑annotated review articles. Text extraction attains precision 0.81, recall 0.86 (F1 ≈ 0.83); table extraction reaches precision 0.98, recall 0.81 (F1 ≈ 0.88). These figures surpass earlier GPT‑3‑based efforts and demonstrate that LLMs, when guided by domain‑specific prompts and RAG, can reliably handle the complex, context‑dependent relationships typical of materials science literature.
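These scores follow the standard harmonic-mean definition of F1, which can be checked directly:

```python
def f1(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

text_f1 = f1(0.81, 0.86)   # ~0.834, matching the reported 0.83
table_f1 = f1(0.98, 0.81)  # ~0.887 from the rounded P/R values
```

Note that the tabular F1 computed from the rounded precision and recall comes out slightly above the reported 0.88, which presumably reflects the unrounded underlying values.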
Applying the pipeline to a corpus of 10,829 accessible papers (selected from ~18,000 Web of Science hits using “multicomponent alloys” or “high entropy alloy”) yields two databases containing 37,711 (text) and 148,069 (table) records, the largest publicly available HEA dataset to date. The authors then couple this database with sustainability indicators that capture supply-risk, environmental burden, and socio-economic factors. Multi-objective optimization is performed for three application domains: lightweight structural alloys, soft magnetic materials, and corrosion-resistant alloys. In each case, candidate alloys are identified that improve sustainability metrics (often >20 % reduction in carbon footprint or critical material use) while meeting or exceeding benchmark performance of current commercial alloys.
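The core of such a multi-objective screen can be sketched as a Pareto-front filter: keep alloys for which no other candidate is at least as good on both axes and strictly better on one. The candidate data below is invented for illustration; the actual optimization uses the extracted database and the sustainability indicators.

```python
def pareto_front(candidates):
    """candidates: (name, performance, burden) tuples.
    Maximize performance, minimize burden; return non-dominated names."""
    front = []
    for name, perf, burden in candidates:
        dominated = any(
            p2 >= perf and b2 <= burden and (p2 > perf or b2 < burden)
            for _, p2, b2 in candidates
        )
        if not dominated:
            front.append(name)
    return front

# Hypothetical candidates: (name, yield strength in MPa, kg CO2-eq per kg).
cands = [
    ("A", 500, 10.0),  # strongest, but highest footprint
    ("B", 480, 6.0),   # slightly weaker, much lower footprint
    ("C", 450, 9.0),   # dominated by B on both axes
]
front = pareto_front(cands)
```

Candidates A and B survive the screen; C is pruned because B beats it on both objectives, mirroring how benchmark-matching but more sustainable alloys are surfaced.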
The discussion highlights remaining challenges: extracting phase‑specific compositions in multiphase alloys, handling non‑standard symbols or abbreviations, and retrieving quantitative information embedded in figures. Cost considerations are also addressed; while proprietary LLM APIs can be expensive at this scale, the authors argue that open‑source models combined with efficient prompt design can mitigate expenses.
Finally, the paper emphasizes the generality of the approach. By simply redefining domain prompts and updating the property list, the same pipeline can be adapted to polymers, ceramics, or composite systems. The curated database is made publicly accessible through an interactive web portal (Alloy Tattvasar), providing the broader community with a ready‑to‑use resource for data‑driven, sustainability‑aware materials design.