MAU-GPT: Enhancing Multi-type Industrial Anomaly Understanding via Anomaly-aware and Generalist Experts Adaptation
As industrial manufacturing scales, automating fine-grained product image analysis has become critical for quality control. However, existing approaches are hindered by limited dataset coverage and poor model generalization across diverse and complex anomaly patterns. To address these challenges, we introduce MAU-Set, a comprehensive dataset for Multi-type industrial Anomaly Understanding. It spans multiple industrial domains and features a hierarchical task structure, ranging from binary classification to complex reasoning. Alongside this dataset, we establish a rigorous evaluation protocol to facilitate fair and comprehensive model assessment. Building upon this foundation, we further present MAU-GPT, a domain-adapted multimodal large model specifically designed for industrial anomaly understanding. It incorporates a novel AMoE-LoRA mechanism that unifies anomaly-aware and generalist experts adaptation, enhancing both understanding and reasoning across diverse defect classes. Extensive experiments show that MAU-GPT consistently outperforms prior state-of-the-art methods across all domains, demonstrating strong potential for scalable and automated industrial inspection.
💡 Research Summary
The paper tackles two intertwined challenges in industrial visual inspection: (1) the lack of a comprehensive, fine‑grained benchmark that reflects the diversity of real‑world defects, and (2) the difficulty of adapting large vision‑language models (VLMs) to the highly specialized domain of anomaly understanding. To this end, the authors introduce MAU‑Set, a new dataset, and MAU‑GPT, a domain‑adapted multimodal large model equipped with a novel Adaptive Mixture‑of‑Experts Low‑Rank Adaptation (AMoE‑LoRA) mechanism.
MAU‑Set
MAU‑Set aggregates images from seven sources (both real and synthetic) across six industrial domains: consumer products, electronic components, mechanical parts, construction materials, optical inspection, and others. It covers 35 product types and more than 100 defect classes, yielding 28,842 images and 224,341 question‑answer (QA) pairs. The dataset defines two QA styles: (i) Discriminative QA, a binary “normal vs. abnormal” decision, and (ii) Open‑Ended QA, which asks the model to generate free‑form, domain‑specific answers (e.g., defect type, location, root cause, impact). These styles are further split into five hierarchical tasks ranging from basic detection to deep reasoning, encouraging progressive learning. The QA prompts are curated by domain experts and then expanded using a large language model, ensuring both linguistic diversity and technical correctness. Compared with existing benchmarks such as MVTec‑AD, DeepPCB, and KolektorSDD2, MAU‑Set is larger in scale, richer in annotation depth (instance‑level QA), and more varied in domain coverage.
MAU‑GPT Architecture
MAU‑GPT builds on a frozen vision encoder, a trainable visual projection, and a large language model (LLM) backbone. The core innovation is the insertion of AMoE‑LoRA modules into multiple transformer layers. AMoE‑LoRA consists of two complementary expert streams:
- Generalist LoRA experts (G‑LoRA) – A set of \(N\) low‑rank LoRA adapters, each parameterized by matrices \(A_i\) and \(B_i\). An input‑dependent router computes softmax weights \(\omega_i\) from the token embedding, dynamically blending the outputs of all experts:

\[
h = W_0 x + \sum_{i=1}^{N} \omega_i \, B_i A_i x, \qquad \omega = \operatorname{softmax}(W_r x),
\]

where \(W_0\) is the frozen base weight and \(W_r\) is the router's projection.
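The router-blended mixture of LoRA experts can be sketched in PyTorch as follows. This is an illustrative implementation of the G‑LoRA blending described above, not the authors' code: the module names (`GLoRAMoE`, `router`), the expert count, and the rank are hypothetical choices, and the zero-initialization of the up-projections follows the standard LoRA convention so that the adapter starts as an identity perturbation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GLoRAMoE(nn.Module):
    """Sketch of a generalist mixture-of-LoRA-experts stream (assumed design).

    N rank-r LoRA adapters (A_i, B_i) whose outputs are blended by an
    input-dependent softmax router, as the G-LoRA description suggests.
    """

    def __init__(self, d_model: int, num_experts: int = 4, rank: int = 8):
        super().__init__()
        # Down-projections A_i: (N, d_model, rank), small random init.
        self.A = nn.Parameter(torch.randn(num_experts, d_model, rank) * 0.02)
        # Up-projections B_i: (N, rank, d_model), zero init => zero delta at start.
        self.B = nn.Parameter(torch.zeros(num_experts, rank, d_model))
        # Router producing per-token expert logits from the token embedding.
        self.router = nn.Linear(d_model, num_experts)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, d_model); omega: (batch, seq, N)
        omega = F.softmax(self.router(x), dim=-1)
        # Per-expert LoRA update x A_i B_i: (batch, seq, N, d_model)
        expert_out = torch.einsum("bsd,ndr,nre->bsne", x, self.A, self.B)
        # Blend experts with router weights; this delta is added to the
        # frozen layer's output W0 x by the host transformer layer.
        return torch.einsum("bsn,bsne->bse", omega, expert_out)
```

In practice such a delta would be added to the output of a frozen attention or MLP projection; only `A`, `B`, and the router are trained.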