The recent success of generative AI highlights the crucial role of high-quality human feedback in building trustworthy AI systems. However, the increasing use of large language models (LLMs) by crowdsourcing workers poses a significant challenge: datasets intended to reflect human input may be compromised by LLM-generated responses. Existing LLM detection approaches often rely on high-dimensional training data such as text, making them unsuitable for annotation tasks like multiple-choice labeling. In this work, we investigate the potential of peer prediction -- a mechanism that evaluates the information within workers' responses without using ground truth -- to mitigate LLM-assisted cheating in crowdsourcing with a focus on annotation tasks. Our approach quantifies the correlations between worker answers while conditioning on (a subset of) LLM-generated labels available to the requester. Building on prior research, we propose a training-free scoring mechanism with theoretical guarantees under a crowdsourcing model that accounts for LLM collusion. We establish conditions under which our method is effective and empirically demonstrate its robustness in detecting low-effort cheating on real-world crowdsourcing datasets.
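The core idea, scoring workers by the correlation of their answers conditioned on LLM-generated labels, can be illustrated with a toy sketch. The snippet below is not the paper's actual mechanism; it is a minimal illustration, assuming the conditioning signal is a single LLM label per task, that uses empirical conditional mutual information between two workers' multiple-choice answers given the LLM label. A worker who simply copies the LLM output contributes no information beyond the conditioning variable and scores near zero.

```python
from collections import Counter
import math


def conditional_mutual_information(a, b, llm):
    """Empirical I(A; B | L) in nats, where A and B are two workers'
    labels and L is the LLM-generated label for each task.

    Hypothetical illustration: conditioning on L removes the correlation
    that comes purely from both workers echoing the LLM, so LLM-copying
    yields a score of ~0 while genuinely informative workers score > 0.
    """
    n = len(a)
    joint = Counter(zip(a, b, llm))      # counts of (a_i, b_i, l_i)
    al = Counter(zip(a, llm))            # counts of (a_i, l_i)
    bl = Counter(zip(b, llm))            # counts of (b_i, l_i)
    pl = Counter(llm)                    # counts of l_i
    cmi = 0.0
    for (x, y, z), c in joint.items():
        # p(x,y,z) * log[ p(z) p(x,y,z) / (p(x,z) p(y,z)) ],
        # with all probabilities replaced by empirical counts over n
        cmi += (c / n) * math.log((c * pl[z]) / (al[(x, z)] * bl[(y, z)]))
    return cmi


# Two workers who both copy the LLM label: no information beyond L.
copying = conditional_mutual_information([0, 1, 0, 1], [0, 1, 0, 1], [0, 1, 0, 1])

# Two workers who agree with each other independently of the LLM label.
informative = conditional_mutual_information([0, 1, 0, 1], [0, 1, 0, 1], [0, 0, 1, 1])
```

In this sketch, `copying` is exactly 0 while `informative` equals log 2, matching the paper's motivation: conditioning on the requester's LLM labels discounts agreement that an LLM-colluding worker gets for free.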