This paper introduces an approach to increasing the explainability of artificial intelligence (AI) systems by embedding Large Language Models (LLMs) within standardized analytical processes. Whereas traditional explainable AI (XAI) methods focus on feature attribution or post-hoc interpretation, the proposed framework integrates LLMs into formally defined decision models such as Question-Option-Criteria (QOC), Sensitivity Analysis, Game Theory, and Risk Management. By situating LLM reasoning within these formal structures, the approach transforms opaque inference into transparent and auditable decision traces. A layered architecture is presented that separates the reasoning space of the LLM from the explainable process space above it. Empirical evaluations show that the system can reproduce human-level decision logic in decentralized governance, systems analysis, and strategic reasoning contexts. The results suggest that LLM-driven standard processes provide a foundation for reliable, interpretable, and verifiable AI-supported decision making.
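To make the QOC integration concrete, the following is a minimal sketch of how an LLM-scored Question-Option-Criteria decision trace could be recorded and aggregated into an auditable ranking. The class names, criteria, weights, and scores are illustrative assumptions for exposition only, not the paper's implementation.

```python
from __future__ import annotations
from dataclasses import dataclass, field


@dataclass
class Criterion:
    """A decision criterion with a relative weight (assumed to sum to 1 across criteria)."""
    name: str
    weight: float


@dataclass
class Option:
    """A candidate answer, with LLM-produced scores per criterion in [0, 1]."""
    name: str
    scores: dict[str, float] = field(default_factory=dict)


@dataclass
class QOCDecision:
    """A QOC decision trace: the question, its criteria, and the scored options."""
    question: str
    criteria: list[Criterion]
    options: list[Option]

    def ranked_options(self) -> list[tuple[str, float]]:
        """Aggregate weighted scores so the ranking, not just the answer, is auditable."""
        totals = [
            (opt.name,
             sum(c.weight * opt.scores.get(c.name, 0.0) for c in self.criteria))
            for opt in self.options
        ]
        return sorted(totals, key=lambda t: t[1], reverse=True)


# Hypothetical usage: the LLM layer proposes the scores; the QOC process layer
# records them and produces a transparent, reproducible ranking.
decision = QOCDecision(
    question="Which consensus mechanism should the governance body adopt?",
    criteria=[Criterion("security", 0.5), Criterion("throughput", 0.3), Criterion("cost", 0.2)],
    options=[
        Option("proof-of-stake", {"security": 0.9, "throughput": 0.7, "cost": 0.6}),
        Option("proof-of-authority", {"security": 0.6, "throughput": 0.9, "cost": 0.8}),
    ],
)
print(decision.ranked_options())
```

In this sketch the separation mirrors the layered architecture described above: the LLM only supplies criterion-level judgments, while the surrounding QOC structure holds the question, weights, and aggregation rule, so every step of the resulting decision trace can be inspected and rechecked.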