Deepfake technologies are often associated with deception, misinformation, and identity fraud, raising legitimate societal concerns. Yet such narratives may obscure a key insight: deepfakes embody sophisticated capabilities for manipulating sensory content in ways that alter human perception, potentially enabling beneficial applications in domains such as healthcare and education. Realizing this potential, however, requires understanding how the technology is conceptualized across disciplines. This paper analyzes 826 peer-reviewed publications from 2017 to 2025 to examine how deepfakes are defined and understood in the literature. Using large language models for content analysis, we categorize deepfake conceptualizations along three dimensions: Identity Source (the relationship between original and generated content), Intent (deceptive versus non-deceptive purposes), and Manipulation Granularity (holistic versus targeted modifications). Results reveal substantial heterogeneity that challenges simplified public narratives. Notably, a subset of studies discusses non-deceptive applications, highlighting an underexplored potential for social good. Temporal analysis shows an evolution from predominantly threat-focused views (2017 to 2019) toward recognition of beneficial applications (2022 to 2025). This study provides an empirical foundation for developing nuanced governance and research frameworks that distinguish applications warranting prohibition from those deserving support, showing that, with appropriate safeguards, the realism of deepfakes can serve important social purposes beyond deception.
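
The abstract does not detail the coding pipeline; the sketch below illustrates one way an LLM-based content analysis along the three dimensions could be implemented. The `openai` client, the model name, the prompt wording, and the label values for Identity Source are assumptions for illustration, not details taken from the paper; the Intent and Manipulation Granularity labels mirror the dimension values named above.

```python
# Minimal sketch of LLM-based content analysis along the paper's three
# dimensions. Assumptions (not from the paper): the `openai` client, the
# model name, the prompt wording, and the Identity Source label values.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROMPT = """You are coding how a publication conceptualizes deepfakes.
Given the abstract below, return JSON with three fields:
- identity_source: "same_identity" | "different_identity" | "fully_synthetic"
- intent: "deceptive" | "non_deceptive" | "mixed"
- manipulation_granularity: "holistic" | "targeted"

Abstract:
{abstract}
"""

def categorize(abstract: str) -> dict:
    """Classify one publication abstract along the three dimensions."""
    response = client.chat.completions.create(
        model="gpt-4o",  # hypothetical choice; the paper does not name a model
        messages=[{"role": "user", "content": PROMPT.format(abstract=abstract)}],
        response_format={"type": "json_object"},  # request machine-readable output
        temperature=0,  # deterministic coding aids reproducibility
    )
    return json.loads(response.choices[0].message.content)

if __name__ == "__main__":
    example = "We study face-swap detection methods to counter identity fraud ..."
    print(categorize(example))
```

In practice such a pipeline would be run over all 826 abstracts and the resulting labels aggregated per year, which is what would support the temporal analysis described above; a fixed prompt and zero temperature keep the coding consistent across the corpus.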