"Machine unlearning" is a popular proposed solution for mitigating the existence of content in an AI model that is problematic for legal or moral reasons, including privacy, copyright, safety, and more. For example, unlearning is often invoked as a solution for removing the effects of specific information from a generative-AI model's parameters, e.g., a particular individual's personal data or the inclusion of copyrighted content in the model's training data. Unlearning is also proposed as a way to prevent a model from generating targeted types of information in its outputs, e.g., generations that closely resemble a particular individual's data or reflect the concept of "Spiderman." Both of these goals--the targeted removal of information from a model and the targeted suppression of information from a model's outputs--present various technical and substantive challenges. We provide a framework for ML researchers and policymakers to think rigorously about these challenges, identifying several mismatches between the goals of unlearning and feasible implementations. These mismatches explain why unlearning is not a general-purpose solution for circumscribing generative-AI model behavior in service of broader positive impact.
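The distinction between the two goals above can be made concrete in code. The following is a minimal sketch, not the paper's method: it contrasts approximate parameter-level "removal" via gradient ascent on a forget set with output-level "suppression" via a post-hoc filter. All names here (ToyLM, forget_inputs, banned_phrases, generate_filtered) are hypothetical stand-ins; real unlearning methods are substantially more involved.

```python
# Illustrative sketch of the two unlearning goals named in the abstract.
# Hypothetical toy setup; not the framework or method of the paper.
import torch
import torch.nn as nn

class ToyLM(nn.Module):
    """Stand-in for a generative model: an embedding plus a linear head."""
    def __init__(self, vocab=100, dim=16):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.head = nn.Linear(dim, vocab)

    def forward(self, x):
        return self.head(self.emb(x))

model = ToyLM()
loss_fn = nn.CrossEntropyLoss()

# Goal 1: targeted *removal* -- approximate unlearning by gradient ascent
# on a forget set, nudging parameters away from fitting targeted examples.
forget_inputs = torch.randint(0, 100, (8,))   # hypothetical forget set
forget_targets = torch.randint(0, 100, (8,))
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
for _ in range(5):
    opt.zero_grad()
    loss = -loss_fn(model(forget_inputs), forget_targets)  # negated: ascend
    loss.backward()
    opt.step()

# Goal 2: targeted *suppression* -- parameters are untouched; a filter
# rejects generations that match the targeted content.
banned_phrases = {"Spiderman"}

def generate_filtered(prompt: str, decode) -> str:
    """decode: any prompt-to-text generation function."""
    text = decode(prompt)
    return "[suppressed]" if any(p in text for p in banned_phrases) else text
```

Even this toy contrast surfaces the mismatch the abstract points to: the gradient-ascent step alters parameters but offers no guarantee about what the model will output, while the filter guarantees suppression only for the listed phrases and leaves the underlying information in the weights.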