Recent advances in AI applications have raised growing concerns about the need for ethical guidelines and regulations to mitigate the risks posed by these technologies. In this paper, we present a mixed-methods survey study, combining statistical and qualitative analyses, to examine the ethical perceptions, practices, and knowledge of individuals involved in various AI development roles. Our survey comprises 414 participants from 43 countries, spanning roles such as AI managers, analysts, developers, quality assurance professionals, and information security and privacy experts. The results reveal varying degrees of familiarity and experience with AI ethics principles, government initiatives, and risk mitigation strategies across roles, regions, and other demographic factors. Our findings underscore the importance of a collaborative, role-sensitive approach that involves diverse stakeholders in ethical decision-making throughout the AI development lifecycle. We advocate for developing tailored, inclusive solutions to address ethical challenges in AI development, and we propose future research directions and educational strategies to promote ethics-aware AI practices.