Covariance Neural Networks (VNNs) perform graph convolutions on the covariance matrix of input data to leverage correlation information as pairwise connections. They have achieved success in a multitude of applications such as neuroscience, financial forecasting, and sensor networks. However, the empirical covariance matrix on which VNNs operate typically contains spurious correlations, creating a mismatch with the true covariance matrix that degrades VNNs' performance and computational efficiency. To tackle this issue, we put forth Sparse coVariance Neural Networks (S-VNNs), a framework that applies sparsification techniques to the sample covariance matrix and incorporates the sparsified estimate into the VNN architecture. We investigate S-VNNs in two regimes: when the underlying data covariance matrix is sparse and when it is dense. When the true covariance matrix is sparse, we propose hard and soft thresholding to improve the covariance estimation and reduce the computational cost. When it is dense, we instead propose a stochastic sparsification in which data correlations are dropped in probability according to principled strategies. Beyond improving performance and computation, we show that S-VNNs are more stable to finite-sample covariance estimation errors than nominal VNNs and the analogous sparse principal component analysis (PCA). By analyzing the impact of sparsification on S-VNN behavior, we tie their stability to the data distribution and the sparsification approach. We support our theoretical findings with experiments on a variety of application scenarios, ranging from brain data to human action recognition, and show improved task performance, greater stability, and reduced computational time compared to alternatives.
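To make the three sparsification schemes named above concrete, the sketch below applies hard thresholding, soft thresholding, and stochastic dropping to a sample covariance matrix. This is a minimal NumPy illustration, not the paper's implementation: the threshold value, the keep-probability rule `keep_prob_fn`, and the 1/p rescaling of retained entries are illustrative assumptions rather than the principled strategies derived in the paper.

```python
import numpy as np

def sample_covariance(X):
    """Empirical covariance of a data matrix X with shape (n_samples, n_features)."""
    return np.cov(X, rowvar=False)

def hard_threshold(C, tau):
    """Zero out off-diagonal entries with magnitude below tau; keep the variances."""
    S = np.where(np.abs(C) >= tau, C, 0.0)
    np.fill_diagonal(S, np.diag(C))
    return S

def soft_threshold(C, tau):
    """Shrink off-diagonal entries toward zero by tau; keep the variances."""
    S = np.sign(C) * np.maximum(np.abs(C) - tau, 0.0)
    np.fill_diagonal(S, np.diag(C))
    return S

def stochastic_sparsify(C, keep_prob_fn, rng=None):
    """Drop each off-diagonal correlation independently: entry (i, j) is kept
    with probability keep_prob_fn(C)[i, j] and rescaled by that probability
    so the sparsified matrix is unbiased in expectation (an assumption here)."""
    rng = np.random.default_rng() if rng is None else rng
    p = keep_prob_fn(C)
    mask = rng.random(C.shape) < p
    mask = np.triu(mask, k=1)   # sample the upper triangle only ...
    mask = mask | mask.T        # ... and mirror it to preserve symmetry
    S = np.where(mask, C / np.clip(p, 1e-12, 1.0), 0.0)
    np.fill_diagonal(S, np.diag(C))
    return S

# Example: keep correlations with probability proportional to their magnitude.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 10))
    C = sample_covariance(X)
    C_hard = hard_threshold(C, tau=0.1)
    C_soft = soft_threshold(C, tau=0.1)
    C_stoch = stochastic_sparsify(
        C, keep_prob_fn=lambda c: np.minimum(np.abs(c) / np.abs(c).max(), 1.0), rng=rng
    )
```

Mirroring the upper-triangular mask keeps the sparsified matrix symmetric, which is needed for it to serve as the shift operator of the graph convolutions in the VNN.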