Increasing AI Explainability by LLM Driven Standard Processes
Reading time: 1 minute
...
📝 Original Info
- Title: Increasing AI Explainability by LLM Driven Standard Processes
- ArXiv ID: 2511.07083
- Date: 2025-11-10
- Authors: Not specified in the provided metadata (actual author names need to be verified against the paper)
📝 Abstract
This paper introduces an approach to increasing the explainability of artificial intelligence (AI) systems by embedding Large Language Models (LLMs) within standardized analytical processes. While traditional explainable AI (XAI) methods focus on feature attribution or post-hoc interpretation, the proposed framework integrates LLMs into defined decision models such as Question-Option-Criteria (QOC), Sensitivity Analysis, Game Theory, and Risk Management. By situating LLM reasoning within these formal structures, the approach transforms opaque inference into transparent and auditable decision traces. A layered architecture is presented that separates the reasoning space of the LLM from the explainable process space above it. Empirical evaluations show that the system can reproduce human-level decision logic in decentralized governance, systems analysis, and strategic reasoning contexts. The results suggest that LLM-driven standard processes provide a foundation for reliable, interpretable, and verifiable AI-supported decision making.
💡 Deep Analysis
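To make the idea of a "standard process wrapping the LLM" concrete, the sketch below shows one way a Question-Option-Criteria (QOC) evaluation could be driven by an LLM while keeping every score and rationale in an auditable trace. This is a minimal illustration under assumed names (`QOCDecision`, `Criterion`, `TraceEntry`, `dummy_llm_score` are all hypothetical), not the paper's actual implementation; the stub scoring function stands in for a real LLM call so the example runs offline.

```python
from dataclasses import dataclass, field
from typing import Callable


@dataclass
class Criterion:
    """A single criterion with a weight used in the QOC aggregation."""
    name: str
    weight: float


@dataclass
class TraceEntry:
    """One (option, criterion) judgment kept for auditability."""
    option: str
    criterion: str
    score: float      # normalized 0..1, as returned by the scoring function
    rationale: str    # free-text justification recorded alongside the score


@dataclass
class QOCDecision:
    question: str
    options: list[str]
    criteria: list[Criterion]
    trace: list[TraceEntry] = field(default_factory=list)

    def evaluate(self, score_fn: Callable[[str, str, str], tuple[float, str]]) -> dict[str, float]:
        """Score every (option, criterion) pair and return weighted totals.

        score_fn(question, option, criterion) -> (score, rationale) is where an
        LLM call would go; any callable with that signature works here.
        """
        totals: dict[str, float] = {o: 0.0 for o in self.options}
        for option in self.options:
            for crit in self.criteria:
                score, rationale = score_fn(self.question, option, crit.name)
                self.trace.append(TraceEntry(option, crit.name, score, rationale))
                totals[option] += crit.weight * score
        return totals


def dummy_llm_score(question: str, option: str, criterion: str) -> tuple[float, str]:
    """Stand-in for an LLM call: fixed scores so the example runs without a model.

    In practice this would prompt an LLM and parse a structured
    (score, justification) response.
    """
    score = 0.8 if "B" in option else 0.5
    return score, f"Scored '{option}' on '{criterion}' for question '{question}'."


if __name__ == "__main__":
    decision = QOCDecision(
        question="Which deployment strategy should the project adopt?",
        options=["Option A: monolith", "Option B: microservices"],
        criteria=[Criterion("maintainability", 0.6), Criterion("cost", 0.4)],
    )
    totals = decision.evaluate(dummy_llm_score)
    print("Weighted totals:", totals)
    print("Recommended:", max(totals, key=totals.get))
    # Every score and rationale remains in decision.trace for later audit.
    for entry in decision.trace:
        print(entry)
```

The point of the structure, in the spirit of the abstract, is that the LLM only fills in the scores and rationales inside a fixed decision model; the question, options, criteria, weights, and aggregation rule live in the explainable process layer, so the final recommendation can be reconstructed and audited from the recorded trace.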
📄 Full Content
This content is AI-processed based on open access ArXiv data.