Despite interest in communicating ethical problems and social contexts within the undergraduate curriculum to advance Public Interest Technology (PIT) goals, interventions at the graduate level remain largely unexplored. This may be due to the conflicting ways in which distinct Artificial Intelligence (AI) research tracks conceive of their interface with social contexts. In this paper we trace the historical emergence of sociotechnical inquiry in three distinct subfields of AI research: AI Safety, Fair Machine Learning (Fair ML), and Human-in-the-Loop (HIL) Autonomy. We show that for each subfield, perceptions of PIT stem from the particular dangers faced in past integrations of technical systems within a normative social order. We further interrogate how these histories dictate each subfield's response to conceptual traps, as defined in the Science and Technology Studies literature. Finally, through a comparative analysis of these currently siloed subfields, we present a roadmap for a unified approach to sociotechnical graduate pedagogy in AI.