Support for deepfake regulation: The role of third-person perception, trust, and risk

SCM Studies in Communication and Media

Volume 14 (2025), Issue 4

Open Access full text


Authors:
Publisher
Nomos, Baden-Baden
Copyright year
2026
ISSN online
2192-4007
ISSN print
2192-4007


Chapter preview:

Like other emerging technologies, deepfakes bring both risks and benefits to society. Owing to harmful applications such as disinformation and non-consensual pornography, calls for regulating deepfake technology have recently grown louder. However, little is known about the extent to which the public supports deepfake regulation and which factors shape that support. This study addresses this research gap with a preregistered online survey (n = 1,361) in Switzerland, a country where citizens can influence political regulation through direct-democratic instruments such as referendums. Our results confirm the third-person perception: people believe that deepfakes influence others more strongly than themselves (Cohen's d = 0.77). This presumed effect on others is a weak but significant predictor of support for regulation (β = 0.07). However, we find no evidence for the second-person effect, i.e., the assumption that people who perceive deepfakes as highly influential both on others and on themselves show stronger support for regulatory measures. An exploratory analysis does point to a potential second-person effect among women, who are particularly affected by deepfakes; this finding requires further investigation and replication. In addition, we find that higher risk perception and greater trust in institutions are positively associated with support for deepfake regulation.

References


  1. Ahmed, S. (2023). Examining public perception and cognitive biases in the presumed influence of deepfakes threat: Empirical evidence of third person perception from three studies. Asian Journal of Communication, 33(3), 308–331. https://doi.org/10.1080/01292986.2023.2194886
  2. Altay, S., & Acerbi, A. (2024). People believe misinformation is a threat because they assume others are gullible. New Media & Society, 26(11), 6440–6461. https://doi.org/10.1177/14614448231153379
  3. Baek, Y. M., Kang, H., & Kim, S. (2019). Fake news should be regulated because it influences both “others” and “me”: How and why the influence of presumed influence model should be extended. Mass Communication and Society, 22(3), 301–323. https://doi.org/10.1080/15205436.2018.1562076
  4. Bendahan Bitton, D. B., Hoffmann, C. P., & Godulla, A. (2024). Deepfakes in the context of AI inequalities: Analysing disparities in knowledge and attitudes. Information, Communication & Society, 295–315. https://doi.org/10.1080/1369118X.2024.2420037
  5. Birrer, A., & Just, N. (2024). What we know and don’t know about deepfakes: An investigation into the state of the research and regulatory landscape. New Media & Society. Advance online publication. https://doi.org/10.1177/14614448241253138
  6. Chen, M., Yu, W., & Liu, K. (2023). A meta-analysis of third-person perception related to distorted information: Synthesizing the effect, antecedents, and consequences. Information Processing & Management, 60(5). https://doi.org/10.1016/j.ipm.2023.103425
  7. Chung, M., & Wihbey, J. (2024). Social media regulation, third-person effect, and public views: A comparative study of the United States, the United Kingdom, South Korea, and Mexico. New Media & Society, 26(8), 4534–4553. https://doi.org/10.1177/14614448221122996
  8. Corbu, N., Oprea, D.-A., Negrea-Busuioc, E., & Radu, L. (2020). ‘They can’t fool me, but they can fool the others!’ Third person effect and fake news detection. European Journal of Communication, 35(2), 165–180. https://doi.org/10.1177/0267323120903686
  9. Davison, W. P. (1983). The third-person effect in communication. Public Opinion Quarterly, 47(1), 1–15. https://doi.org/10.1086/268763
  10. de Ruiter, A. (2021). The distinct wrong of deepfakes. Philosophy & Technology, 34(4), 1311–1332. https://doi.org/10.1007/s13347-021-00459-2
  11. Gardner, G. T., & Gould, L. C. (1989). Public perceptions of the risks and benefits of technology. Risk Analysis, 9(2), 225–242. https://doi.org/10.1111/j.1539-6924.1989.tb01243.x
  12. Godulla, A., Hoffmann, C. P., & Seibert, D. (2021). Dealing with deepfakes – An interdisciplinary examination of the state of research and implications for communication studies. SCM Studies in Communication and Media, 10(1), 72–96. https://doi.org/10.5771/2192-4007-2021-1-72
  13. Gosse, C., & Burkell, J. (2020). Politics and porn: How news media characterizes problems presented by deepfakes. Critical Studies in Media Communication, 37(5), 497–511. https://doi.org/10.1080/15295036.2020.1832697
  14. Gunther, A. C., & Storey, J. D. (2003). The influence of presumed influence. Journal of Communication, 53(2), 199–215. https://doi.org/10.1111/j.1460-2466.2003.tb02586.x
  15. Hameleers, M., Van Der Meer, T. G. L. A., & Dobber, T. (2022). You won’t believe what they just said! The effects of political deepfakes embedded as vox populi on social media. Social Media + Society, 8(3). https://doi.org/10.1177/20563051221116346
  16. Jang, S. M., & Kim, J. K. (2018). Third person effects of fake news: Fake news regulation and media literacy interventions. Computers in Human Behavior, 80, 295–302. https://doi.org/10.1016/j.chb.2017.11.034
  17. Jungherr, A., & Rauchfleisch, A. (2024). Negative downstream effects of alarmist disinformation discourse: Evidence from the United States. Political Behavior, 46(4), 2123–2143. https://doi.org/10.1007/s11109-024-09911-3
  18. Jungherr, A., & Rauchfleisch, A. (in press). Public opinion on the politics of AI alignment: Cross-national evidence on expectations for AI moderation from Germany and the United States. Social Media + Society.
  19. Kalogeropoulos, A., Toff, B., & Fletcher, R. (2022). The watchdog press in the doghouse: A comparative study of attitudes about accountability journalism, trust in news, and news avoidance. The International Journal of Press/Politics, 29(2), 485–506. https://doi.org/10.1177/19401612221112572
  20. Karaboga, M., Frei, N., Puppis, M., Vogler, D., Raemy, P., Ebbers, F., Runge, G., Rauchfleisch, A., de Seta, G., Gurr, G., Friedewald, M., & Rovelli, S. (2024). Deepfakes und manipulierte Realitäten: Technologiefolgenabschätzung und Handlungsempfehlungen für die Schweiz [Deepfakes and manipulated realities: Technology impact assessment and policy recommendations for Switzerland]. vdf Hochschulverlag AG.
  21. Kim, M. (2025). A direct and indirect effect of third-person perception of COVID-19 fake news on support for fake news regulations on social media: Investigating the role of negative emotions and political views. Mass Communication and Society, 28(2), 229–252. https://doi.org/10.1080/15205436.2023.2227601
  22. Lima, M. L., Barnett, J., & Vala, J. (2005). Risk perception and technological development at a societal level. Risk Analysis, 25(5), 1229–1239. https://doi.org/10.1111/j.1539-6924.2005.00664.x
  23. Liu, P. L., & Huang, L. V. (2020). Digital disinformation about COVID-19 and the third-person effect: Examining the channel differences and negative emotional outcomes. Cyberpsychology, Behavior, and Social Networking, 23(11), 789–793. https://doi.org/10.1089/cyber.2020.0363
  24. Marien, S., & Hooghe, M. (2011). Does political trust matter? An empirical investigation into the relation between political trust and support for law compliance. European Journal of Political Research, 50(2), 267–291. https://doi.org/10.1111/j.1475-6765.2010.01930.x
  25. Nguyen, D. (2023). How news media frame data risks in their coverage of big data and AI. Internet Policy Review, 12(2). https://policyreview.info/articles/analysis/how-news-media-frame-data-risks-big-data-and-ai
  26. Paradise, A., & Sullivan, M. (2012). (In)visible threats? The third-person effect in perceptions of the influence of Facebook. Cyberpsychology, Behavior, and Social Networking, 15(1), 55–60. https://doi.org/10.1089/cyber.2011.0054
  27. PytlikZillig, L. M., Kimbrough, C. D., Shockley, E., Neal, T. M. S., Herian, M. N., Hamm, J. A., Bornstein, B. H., & Tomkins, A. J. (2017). A longitudinal and experimental study of the impact of knowledge on the bases of institutional trust. PLOS ONE, 12(4). https://doi.org/10.1371/journal.pone.0175387
  28. Rauchfleisch, A., Vogler, D., & de Seta, G. (2025). Deepfakes or synthetic media? The effect of euphemisms for labeling technology on risk and benefit perceptions. Social Media + Society. https://doi.org/10.1177/20563051251350975
  29. Riedl, M. J., Whipple, K. N., & Wallace, R. (2022). Antecedents of support for social media content moderation and platform regulation: The role of presumed effects on self and others. Information, Communication & Society, 25(11), 1632–1649. https://doi.org/10.1080/1369118X.2021.1874040
  30. Six, F. (2013). Trust in regulatory relations: How new insights from trust research improve regulation theory. Public Management Review, 15(2), 163–185. https://doi.org/10.1080/14719037.2012.727461
  31. Slovic, P., Fischhoff, B., & Lichtenstein, S. (1982). Why study risk perception? Risk Analysis, 2(2), 83–93. https://doi.org/10.1111/j.1539-6924.1982.tb01369.x
  32. Swissinfo.ch (2025, May 9). Switzerland rejects deepfake regulation. Retrieved from https://www.swissinfo.ch/eng/ai-governance/switzerland-rejects-deepfake-regulation/89277391
  33. Thouvenin, F., Eisenegger, M., Volz, S., Vogler, D., & Jaffé, M. (2023). Governance von Desinformation in digitalisierten Öffentlichkeiten. Bericht für das Bundesamt für Kommunikation (BAKOM) [Governance of disinformation in digitalized publics. Report for the Federal Office of Communication]. Retrieved from https://www.bakom.admin.ch/bakom/de/home/elektronische-medien/studien/einzelstudien.html
  34. Vaccari, C., & Chadwick, A. (2020). Deepfakes and disinformation: Exploring the impact of synthetic political video on deception, uncertainty, and trust in news. Social Media + Society, 6(1). https://doi.org/10.1177/2056305120903408
  35. Verhoest, K., Redert, B., Maggetti, M., Levi-Faur, D., & Jordana, J. (2025). Trust and regulation. In F. Six, J. A. Hamm, D. Latusek, E. V. Zimmeren, & K. Verhoest (Eds.), Handbook on trust in public governance (pp. 360–380). Edward Elgar Publishing. https://doi.org/10.4337/9781802201406.00030
  36. Wang, S., & Kim, S. (2022). Users’ emotional and behavioral responses to deepfake videos of K-pop idols. Computers in Human Behavior, 134. https://doi.org/10.1016/j.chb.2022.107305
  37. Wolf, C. (2021). Public trust and biotech innovation: A theory of trustworthy regulation of (scary!) technology. Social Philosophy and Policy, 38(2), 29–49. https://doi.org/10.1017/S0265052522000036
  38. Yadlin-Segal, A., & Oppenheim, Y. (2021). Whose dystopia is it anyway? Deepfakes and social media regulation. Convergence: The International Journal of Research into New Media Technologies, 27(1), 36–51. https://doi.org/10.1177/1354856520923963
  39. Yu, E., Song, H., Jung, J., & Kim, Y. J. (2023). Perception and attitude toward the regulation of online video streaming (in South Korea). Online Media and Global Communication, 2(4), 651–679. https://doi.org/10.1515/omgc-2023-0059
