Why should a clinician trust an Artificial Intelligence (AI) prediction? Despite the increasing accuracy of machine learning methods in medicine, the lack of transparency continues to hinder their adoption in clinical practice. In this work, we explore Kolmogorov-Arnold Networks (KANs) for clinical classification tasks on tabular data. In contrast to traditional neural networks, KANs are function-based architectures that offer intrinsic interpretability through transparent, symbolic representations. We introduce \emph{Logistic-KAN}, a flexible generalization of logistic regression, and the \emph{Kolmogorov-Arnold Additive Model (KAAM)}, a simplified additive variant that delivers transparent, symbolic formulas. Unlike ``black-box'' models that require post-hoc explainability tools, our models provide built-in patient-level insights, intuitive visualizations, and nearest-patient retrieval. Across multiple health datasets, our models match or outperform standard baselines while remaining fully interpretable. These results position KANs as a promising step toward trustworthy AI that clinicians can understand, audit, and act upon. We release our code for reproducibility at \codeurl.
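To make the model families concrete, here is a rough sketch in our own notation (the symbols $\sigma$, $w_j$, $\phi_j$, and $x_j$ are illustrative; the formal definitions appear in the paper body): logistic regression predicts $\hat{y} = \sigma\big(\sum_{j} w_j x_j + b\big)$ with fixed scalar weights, whereas the additive KAAM variant replaces each linear term with a learned univariate function,
\[
\hat{y} \;=\; \sigma\Big(\sum_{j=1}^{d} \phi_j(x_j)\Big),
\]
so each feature's contribution $\phi_j(x_j)$ can be plotted or written out as a symbolic formula, which is the source of the built-in, per-feature interpretability described above.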