Large Language Models (LLMs) show great promise in software engineering tasks such as Fault Localization (FL) and Automatic Program Repair (APR). This study investigates how input order and context size affect LLM performance in FL, a crucial step for many downstream software engineering tasks. We vary the order of methods using Kendall Tau distances, ranging from a "perfect" order (ground-truth faulty methods first) to the "worst" order (ground-truth methods last), on two benchmarks comprising Java and Python projects. Our results reveal a strong order bias: when the code order is reversed, Top-1 FL accuracy drops from 57% to 20% on Java projects and from 38% to approximately 3% on Python projects. Breaking the input into smaller contexts mitigates this bias, narrowing the FL performance gap from 22% to 6%, and then to just 1%, on both benchmarks. To test whether the order bias stems from data leakage, we rename the methods with more meaningful alternatives; the trend remains consistent, suggesting the bias is not due to data leakage. We also examine ordering methods based on traditional FL techniques and metrics. Ordering by DepGraph's ranking achieves 48% Top-1 accuracy, outperforming more straightforward ordering approaches such as CallGraphDFS. These findings underscore the importance of how we structure inputs, manage contexts, and choose ordering methods to improve LLM performance in FL and other software engineering tasks.
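To make the ordering metric concrete: the normalized Kendall Tau distance between two orderings is the fraction of item pairs whose relative order the two sequences disagree on, so the "perfect" order scores 0 and the fully reversed "worst" order scores 1. Below is a minimal, self-contained sketch; the function name and the example method names are hypothetical illustrations, not artifacts from the study.

```python
from itertools import combinations

def kendall_tau_distance(order_a, order_b):
    """Normalized Kendall Tau distance between two orderings of the
    same items: 0.0 means identical, 1.0 means fully reversed."""
    pos_a = {item: i for i, item in enumerate(order_a)}
    pos_b = {item: i for i, item in enumerate(order_b)}
    pairs = list(combinations(order_a, 2))
    # A pair is discordant when the two orderings disagree on
    # which of the two items comes first.
    discordant = sum(
        1 for x, y in pairs
        if (pos_a[x] - pos_a[y]) * (pos_b[x] - pos_b[y]) < 0
    )
    return discordant / len(pairs)

# Hypothetical method list: the "perfect" order places the
# ground-truth faulty method first; the "worst" order reverses it.
perfect = ["buggyMethod", "helperA", "helperB", "helperC"]
worst = list(reversed(perfect))

print(kendall_tau_distance(perfect, perfect))  # 0.0 ("perfect")
print(kendall_tau_distance(perfect, worst))    # 1.0 ("worst")
```

Intermediate distances between 0 and 1 correspond to partially shuffled orders, which is how the study positions orderings between the two extremes.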