One of the most pressing challenges in artificial intelligence is to make models more transparent to their users. Recently, explainable artificial intelligence has produced numerous methods to tackle this challenge. A promising avenue is concept-based explanations, that is, explanations phrased in terms of high-level concepts instead of plain feature importance scores. Among this class of methods, Concept Activation Vectors (CAVs; Kim et al., 2018) stand out as one of the most prominent. An interesting aspect of CAVs is that their computation requires sampling random examples from the training set. Therefore, the actual vectors obtained may vary from user to user depending on the randomness of this sampling. In this paper, we propose a fine-grained theoretical analysis of CAV construction in order to quantify their variability. Our results, confirmed by experiments on several real-life datasets, point towards a universal result: the variance of CAVs decreases as $1/N$, where $N$ is the number of random examples. Based on this, we give practical recommendations for a resource-efficient application of the method.
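To make the setting concrete, the following is a minimal sketch of how a CAV can be obtained and how its variability with the number of random examples $N$ can be measured empirically. It is not the paper's experimental setup: the synthetic activations, the logistic-regression classifier, and the helper `compute_cav` are illustrative assumptions standing in for real layer activations and the concept/random split used in practice.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical stand-ins for layer activations: concept examples are
# shifted relative to a background distribution of random examples.
d = 50                                                # activation dimension
concept_acts = rng.normal(1.0, 1.0, size=(200, d))    # fixed concept set
background = rng.normal(0.0, 1.0, size=(20000, d))    # pool of random examples

def compute_cav(concept, negatives):
    """Fit a linear classifier (concept vs. random examples) and return
    the unit-norm normal of its decision boundary, i.e. the CAV."""
    X = np.vstack([concept, negatives])
    y = np.concatenate([np.ones(len(concept)), np.zeros(len(negatives))])
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    w = clf.coef_.ravel()
    return w / np.linalg.norm(w)

# Estimate the variability of the CAV as a function of N, the number of
# random examples, by repeating the sampling many times.
for N in [50, 100, 200, 400, 800]:
    cavs = []
    for _ in range(50):
        idx = rng.choice(len(background), size=N, replace=False)
        cavs.append(compute_cav(concept_acts, background[idx]))
    cavs = np.asarray(cavs)
    # total variance across resamplings, summed over coordinates;
    # under the paper's claim this should shrink roughly like 1/N
    total_var = cavs.var(axis=0).sum()
    print(f"N={N:4d}  total variance ~ {total_var:.4f}")
```

Running such a sketch on synthetic data illustrates the kind of $1/N$ decay the paper analyses theoretically; the actual constants and experimental details are those of the paper, not of this toy example.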