Differentially private in-context learning (DP-ICL) has recently become an active research topic due to the inherent privacy risks of in-context learning. However, existing approaches overlook a critical component of modern large language model (LLM) pipelines: the similarity search used to retrieve relevant context data. In this work, we introduce a DP framework for in-context learning that integrates nearest neighbor search over relevant examples in a privacy-aware manner. Our method retrieves the nearest neighbors of each query from a database of context data and applies a privacy filter that tracks the cumulative privacy cost of the selected samples, ensuring adherence to a central differential privacy budget. Experiments on text classification and document question answering show that the proposed method achieves substantially more favorable privacy-utility trade-offs than existing baselines across all evaluated benchmarks.
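To make the retrieval-plus-filter idea concrete, the following is a minimal sketch, not the authors' implementation: names such as `PrivacyFilter`, `try_spend`, `eps_per_use`, and the flat per-use epsilon accounting are illustrative assumptions, and a real system would rely on a formal DP accountant (e.g., Rényi-DP composition) together with a DP-protected answer-generation step.

```python
# Illustrative sketch of privacy-filtered nearest neighbor retrieval for
# DP-ICL. All names and the simple additive epsilon accounting below are
# assumptions for exposition, not the paper's exact mechanism.
import numpy as np


class PrivacyFilter:
    """Tracks the cumulative privacy cost charged to each database record
    and rejects records whose next use would exceed the central budget."""

    def __init__(self, num_records: int, eps_budget: float):
        self.spent = np.zeros(num_records)  # per-record cumulative epsilon
        self.eps_budget = eps_budget        # central DP budget

    def try_spend(self, idx: int, eps_per_use: float) -> bool:
        # Admit the record only if its cumulative cost stays within budget.
        if self.spent[idx] + eps_per_use > self.eps_budget:
            return False
        self.spent[idx] += eps_per_use
        return True


def retrieve_private_context(query_emb, db_embs, filt, k=4, eps_per_use=0.5):
    """Return indices of up to k nearest neighbors admitted by the filter."""
    # Cosine similarity between the query and every database embedding.
    sims = db_embs @ query_emb / (
        np.linalg.norm(db_embs, axis=1) * np.linalg.norm(query_emb) + 1e-12
    )
    selected = []
    for idx in np.argsort(-sims):  # most similar first
        if len(selected) == k:
            break
        if filt.try_spend(int(idx), eps_per_use):
            selected.append(int(idx))
    return selected


# Toy usage: 100 random "context embeddings" and a single query.
rng = np.random.default_rng(0)
db = rng.normal(size=(100, 32))
query = rng.normal(size=32)
filt = PrivacyFilter(num_records=100, eps_budget=1.0)
print(retrieve_private_context(query, db, filt))  # indices of admitted neighbors
```

Under these assumptions, charging cost per record rather than per query lets frequently retrieved examples be retired once their individual budget is exhausted, which is one plausible way to keep the overall composition within the central DP budget.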