The emergence of transformative technologies often surfaces deep societal divisions, nowhere more evident than in contemporary debates about artificial intelligence (AI). A striking feature of these divisions is that they persist despite shared interests in ensuring that AI benefits humanity and avoiding catastrophic outcomes. This paper analyzes contemporary debates about AI risk, parsing the differences between the "doomer" and "boomer" perspectives into definitional, factual, causal, and moral premises to identify key points of contention. We find that differences in perspectives about existential risk ("X-risk") arise fundamentally from differing causal premises about design vs. emergence in complex systems, while differences in perspectives about employment risks ("E-risks") turn on differing causal premises about the applicability of past theories (evolution) vs. their inapplicability (revolution). Disagreements about these two forms of AI risk appear to share two properties: neither involves significant disagreement over moral values, and both can be described in terms of differing views on the extent to which human rationality is bounded. Our approach to analyzing reasoning chains at scale, using an ensemble of LLMs to parse textual data, can be applied to identify key points of contention in public debates about risk in any arena.
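To illustrate the kind of pipeline the abstract describes, the following is a minimal sketch of how premises extracted from a debate text might be labeled as definitional, factual, causal, or moral by an ensemble of LLMs with a simple majority vote. All names here (PremiseLabel, LLMClient, classify_premise, the prompt wording, and the stub models) are hypothetical illustrations under assumed interfaces, not the authors' actual implementation.

```python
# Sketch: majority-vote labeling of argument premises by an ensemble of LLMs.
# The LLM calls are stubbed out so the example is self-contained and runnable.

from collections import Counter
from enum import Enum
from typing import Callable, List


class PremiseLabel(str, Enum):
    DEFINITIONAL = "definitional"
    FACTUAL = "factual"
    CAUSAL = "causal"
    MORAL = "moral"


# An "LLM client" is modeled abstractly as a function from prompt to raw label
# string; in practice each client would wrap a call to a different model.
LLMClient = Callable[[str], str]

PROMPT_TEMPLATE = (
    "Classify the following premise from an argument about AI risk as one of: "
    "definitional, factual, causal, moral.\n\nPremise: {premise}\n\nLabel:"
)

VALID_LABELS = {member.value for member in PremiseLabel}


def classify_premise(premise: str, ensemble: List[LLMClient]) -> PremiseLabel:
    """Ask each model in the ensemble to label the premise, then majority-vote."""
    votes: List[PremiseLabel] = []
    for model in ensemble:
        raw = model(PROMPT_TEMPLATE.format(premise=premise)).strip().lower()
        if raw in VALID_LABELS:
            votes.append(PremiseLabel(raw))
    if not votes:
        raise ValueError("No model in the ensemble returned a valid label")
    label, _count = Counter(votes).most_common(1)[0]
    return label


if __name__ == "__main__":
    # Stub models standing in for real LLM calls.
    stub_a: LLMClient = lambda prompt: "causal"
    stub_b: LLMClient = lambda prompt: "causal"
    stub_c: LLMClient = lambda prompt: "factual"

    premise = "Capabilities emerge unpredictably as models are scaled up."
    print(classify_premise(premise, [stub_a, stub_b, stub_c]).value)  # -> "causal"
```

In such a setup, disagreement between ensemble members on a given premise could itself be recorded as a signal of ambiguity, and the resulting premise labels aggregated across documents to locate where "doomer" and "boomer" reasoning chains diverge.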