Effective feature selection is critical for building robust and interpretable predictive models, particularly in medical applications where identifying risk factors in the most extreme patient strata is essential. Traditional methods often focus on average associations, potentially overlooking predictors whose importance is concentrated in the tails of the data distribution. In this study, we introduce a novel, computationally efficient supervised filter method that leverages the Gumbel copula's upper-tail dependence coefficient to rank features by their tendency to be simultaneously extreme with a positive outcome. We conducted a rigorous evaluation of this method against four standard baselines (Mutual Information, mRMR, ReliefF, and L1/Elastic-Net) using four distinct classifiers on two diabetes datasets: a large-scale public health survey (CDC, N=253,680) and a classic clinical benchmark (PIMA, N=768). Our analysis included comprehensive statistical tests, permutation importance, and robustness checks. On the CDC dataset, our method was the fastest selector and reduced the feature space by approximately 52% while maintaining predictive performance statistically indistinguishable from a model using all features. On the PIMA dataset, our method's feature ranking yielded the single best-performing model, with the highest ROC-AUC of all tested configurations. Across both datasets, the Gumbel upper-tail-dependence selector consistently identified clinically coherent and impactful predictors. We conclude that feature selection via upper-tail dependence is a powerful, efficient, and interpretable new tool for developing risk models in public health and clinical medicine.
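As a rough illustration of the selector's core quantity (not the authors' exact implementation), the sketch below estimates the Gumbel copula's upper-tail dependence coefficient, lambda_U = 2 - 2^(1/theta), for each feature against the outcome, recovering theta from Kendall's tau via the standard Gumbel relation tau = 1 - 1/theta. The function names and the tau-inversion estimator are assumptions for illustration only.

```python
import numpy as np
from scipy.stats import kendalltau


def gumbel_upper_tail_dependence(x, y):
    """Estimate the Gumbel upper-tail dependence coefficient between
    a feature x and an outcome y.

    Uses the moment-based route: Kendall's tau -> Gumbel parameter
    theta = 1 / (1 - tau) -> lambda_U = 2 - 2**(1/theta). The paper's
    exact fitting procedure may differ; this is a hypothetical sketch.
    """
    tau, _ = kendalltau(x, y)
    if np.isnan(tau):            # e.g. x is constant
        return 0.0
    tau = max(tau, 0.0)          # Gumbel models positive dependence only
    if tau >= 1.0:               # guard against degenerate perfect dependence
        return 1.0
    theta = 1.0 / (1.0 - tau)    # invert tau = 1 - 1/theta
    return 2.0 - 2.0 ** (1.0 / theta)


def rank_features_by_tail_dependence(X, y):
    """Return feature indices of X sorted by estimated upper-tail
    dependence with the positive outcome y, strongest first."""
    scores = np.array([gumbel_upper_tail_dependence(X[:, j], y)
                       for j in range(X.shape[1])])
    return np.argsort(scores)[::-1], scores
```

Under these assumptions, the selector is a one-pass filter: each feature receives a single tail-dependence score, and the top-ranked subset is passed to any downstream classifier, which is consistent with the low selection runtime reported above.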