We study the optimal sample complexity of variable selection in linear regression under general design covariance, and show that subset selection is statistically optimal, while, under standard complexity assumptions, efficient algorithms for this problem do not exist. Specifically, we analyze the variable selection problem and derive the optimal sample complexity, with exact dependence on the problem parameters, in both the known- and unknown-sparsity settings. Moreover, we establish a sample complexity lower bound for any efficient estimator, highlighting a gap between the statistical efficiency achievable by combinatorial algorithms (such as subset selection) and that achievable by efficient algorithms (such as those based on convex programming). The proofs rely on a finite-sample analysis of an information criterion estimator, which may be of independent interest. Our results emphasize the optimality of subset selection and the critical role played by restricted eigenvalues, and characterize the statistical-computational trade-off in high-dimensional variable selection.
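To give a concrete sense of the kind of procedure the abstract refers to, the sketch below shows a generic information-criterion-based subset-selection routine. This is an illustrative assumption, not the paper's estimator: the function name, the BIC-style penalty k·log(n), and all constants are hypothetical choices made for the example.

```python
import numpy as np
from itertools import combinations

def best_subset_ic(X, y, max_k):
    """Exhaustive subset selection scored by a BIC-style information
    criterion. Illustrative sketch only: the exact criterion analyzed
    in the paper may differ.
    """
    n, p = X.shape
    best_score, best_support = np.inf, ()
    for k in range(max_k + 1):
        for support in combinations(range(p), k):
            if k == 0:
                rss = float(y @ y)  # empty model: residual is y itself
            else:
                Xs = X[:, list(support)]
                beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
                resid = y - Xs @ beta
                rss = float(resid @ resid)
            # Fit term plus a complexity penalty (BIC-style, assumed here).
            score = n * np.log(rss / n + 1e-12) + k * np.log(n)
            if score < best_score:
                best_score, best_support = score, support
    return best_support
```

Note that the exhaustive search over supports is combinatorial in p, which mirrors the statistical-computational gap the abstract describes: the estimator achieving the optimal sample complexity is not computationally efficient.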