Comment updating is an emerging task in software evolution that aims to automatically revise source code comments in accordance with code changes. This task plays a vital role in maintaining code-comment consistency throughout software development. Recently, deep learning-based approaches have shown great potential in addressing comment updating by learning complex patterns between code edits and corresponding comment modifications. However, the effectiveness of these learning-based approaches heavily depends on the quality of training data. Existing datasets are typically constructed by mining version histories from open-source repositories such as GitHub, where there is often little quality control over comment edits. As a result, these datasets may contain noisy or inconsistent samples that hinder model learning and generalization. In this paper, we focus on cleaning existing comment updating datasets, considering both the characteristics of the data in the updating scenario and their implications for model training. We propose a hybrid statistical approach named CupCleaner (Comment UPdating's CLEANER) to achieve this purpose. Specifically, we combine static semantic information within data samples and dynamic loss information observed during training to clean the dataset. Experimental results demonstrate that, on the same test set, the static strategy and the dynamic strategy, each applied individually, filter out a substantial portion of the data and significantly improve model performance. Furthermore, a model ensemble that combines the characteristics of static and dynamic cleaning further improves model performance and the reliability of its outputs.
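To make the hybrid idea concrete, the following is a minimal sketch (not the authors' implementation) of filtering a dataset with a static consistency score and a dynamic per-sample loss signal. The similarity measure, the thresholds, and the field names (`old_comment`, `new_comment`, `loss_history`, etc.) are illustrative assumptions; the paper relies on richer semantic information and loss statistics collected during actual training.

```python
# Minimal sketch of hybrid (static + dynamic) data cleaning for comment updating.
# All names, metrics, and thresholds below are assumptions for illustration only.
from difflib import SequenceMatcher
from statistics import mean


def static_score(sample: dict) -> float:
    """Proxy for how consistent the comment edit is with the code edit.

    Plain string similarity is used here as a stand-in for the semantic
    information the actual approach would compute.
    """
    comment_sim = SequenceMatcher(
        None, sample["old_comment"], sample["new_comment"]
    ).ratio()
    code_sim = SequenceMatcher(
        None, sample["old_code"], sample["new_code"]
    ).ratio()
    # A sample whose comment barely changes while the code changes a lot
    # (or vice versa) is treated as more likely to be noisy.
    return 1.0 - abs(comment_sim - code_sim)


def dynamic_score(sample: dict) -> float:
    """Average per-sample training loss over epochs (lower suggests a cleaner sample)."""
    return mean(sample["loss_history"])


def clean_dataset(samples, static_threshold=0.5, loss_threshold=2.0):
    """Keep samples that look consistent statically and are learnable dynamically."""
    return [
        s
        for s in samples
        if static_score(s) >= static_threshold and dynamic_score(s) <= loss_threshold
    ]


if __name__ == "__main__":
    toy = [
        {
            "old_comment": "Returns the user id.",
            "new_comment": "Returns the user id as a string.",
            "old_code": "def get_id(self): return self.id",
            "new_code": "def get_id(self): return str(self.id)",
            "loss_history": [1.2, 0.8, 0.5],
        },
        {
            "old_comment": "Returns the user id.",
            "new_comment": "TODO fix later",  # inconsistent edit, likely noise
            "old_code": "def get_id(self): return self.id",
            "new_code": "def get_id(self): return str(self.id)",
            "loss_history": [3.9, 3.7, 3.8],
        },
    ]
    print(len(clean_dataset(toy)))  # expected: 1
```

In this toy setup, the noisy sample is rejected both by the static check (its comment edit does not track the code edit) and by the dynamic check (its loss stays high), which mirrors how the two signals can reinforce each other.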