A new face of political advertising? Synthetic imagery in the 2025 German federal election campaigns on social media


Bibliographic information


Open Access

SCM Studies in Communication and Media

Volume 14 (2025), Issue 4


Publisher
Nomos, Baden-Baden
Copyright Year
2026
ISSN-Online
2192-4007
ISSN-Print
2192-4007


Preview:

The rise of AI-generated content represents a new frontier in political communication. As synthetic media become more sophisticated and accessible, their role in shaping voter perceptions and influencing public discourse warrants closer examination. This study examines the use of AI-generated images in the 2025 German federal election campaign, assessing their prevalence, strategic use, and transparency. We conducted a content analysis of Instagram posts from the major German political parties and their youth organizations in the six weeks leading up to the election, identifying AI-generated visuals, evaluating their labeling practices, and examining their communicative and ideological functions. We also compared adoption and usage patterns across parties to assess potential implications for democratic processes. Our findings indicate that the far-right Alternative for Germany (AfD) uses synthetic visuals significantly more than other parties. These AI-generated images are predominantly photorealistic and often lack clear labeling, raising concerns about transparency and potential voter deception. The AfD deploys such visuals primarily for emotional and ideological messaging, reinforcing its political narratives and mobilizing support. Our findings provide a structured assessment of AI-generated content in German political communication and highlight the potential risks associated with unregulated use of synthetic media in electoral campaigns. Our research also contributes to the broader discourse on the ethical implications of synthetic media in democratic societies.

Bibliography


  1. Bast, J. (2024). Managing the image. The visual communication strategy of European right-wing populist politicians on Instagram. Journal of Political Marketing, 23(1), 1–25. https://doi.org/10.1080/15377857.2021.1892901
  2. Bennett, W. L., & Livingston, S. (2018). The disinformation order: Disruptive communication and the decline of democratic institutions. European Journal of Communication, 33(2), 122–139. https://doi.org/10.1177/0267323118760317
  3. Bray, S. D., Johnson, S. D., & Kleinberg, B. (2023). Testing human ability to detect ‘deepfake’ images of human faces. Journal of Cybersecurity, 9(1), 1–18. https://doi.org/10.1093/cybsec/tyad011
  4. Brettschneider, F., Niedermayer, O., & Weßels, B. (2007). Die Bundestagswahl 2005: Analysen des Wahlkampfes und der Wahlergebnisse [The German federal election 2005: Analyses of the election campaign and results]. In F. Brettschneider, O. Niedermayer, & B. Weßels (Eds.), Die Bundestagswahl 2005 (pp. 9–18). VS. https://doi.org/10.1007/978-3-531-90536-5_1
  5. Burrus, O., Curtis, A., & Herman, L. (2024). Unmasking AI: Informing authenticity decisions by labeling AI-generated content. Interactions, 31(4), 38–42. https://doi.org/10.1145/3665321
  6. Campbell, A. (1960). Surge and decline: A study of electoral change. Public Opinion Quarterly, 24(3), 397–418. https://doi.org/10.1086/266960
  7. Corsi, G., Marino, B., & Wong, W. (2024). The spread of synthetic media on X. Harvard Kennedy School (HKS) Misinformation Review. https://doi.org/10.37016/mr-2020-140
  8. Dalton, R. J. (2018). Citizen politics: Public opinion and political parties in advanced industrial democracies. CQ Press.
  9. De Vreese, C. D., & Votta, F. (2023). AI and political communication. Political Communication Report. https://doi.org/10.17169/refubium-39047
  10. Dobber, T., Metoui, N., Trilling, D., Helberger, N., & De Vreese, C. (2021). Do (microtargeted) deepfakes have real effects on political attitudes? The International Journal of Press/Politics, 26(1), 69–91. https://doi.org/10.1177/1940161220944364
  11. Engesser, S., Ernst, N., Esser, F., & Büchel, F. (2017). Populism and social media: How politicians spread a fragmented ideology. Information, Communication & Society, 20(8), 1109–1126. https://doi.org/10.1080/1369118X.2016.1207697
  12. Ernst, N., Blassnig, S., Engesser, S., Büchel, F., & Esser, F. (2019). Populists prefer social media over talk shows: An analysis of populist messages and stylistic elements across six countries. Social Media + Society, 5(1). https://doi.org/10.1177/2056305118823358
  13. Epstein, Z., Arechar, A. A., & Rand, D. (2023). What label should be applied to content produced by generative AI? PsyArXiv preprint. https://doi.org/10.31234/osf.io/v4mfz
  14. Esser, F., & Strömbäck, J. (2014). Mediatization of politics: Understanding the transformation of Western democracies. Springer. https://doi.org/10.1057/9781137275844
  15. Farrell, D. M., & Schmitt-Beck, R. (Eds.). (2002). Do political campaigns matter? Campaign effects in elections and referendums (1st ed.). Routledge. https://doi.org/10.4324/9780203166956
  16. Geise, S., & Baden, C. (2015). Putting the image back into the frame: Modeling the linkage between visual communication and frame-processing theory. Communication Theory, 25(1), 46–69. https://doi.org/10.1111/comt.12048
  17. Geise, S., & Xu, Y. (2024). Effects of visual framing in multimodal media environments: A systematic review of studies between 1979 and 2023. Journalism & Mass Communication Quarterly, 102(3), 796–823. https://doi.org/10.1177/10776990241257586
  18. Gerbaudo, P. (2018). Social media and populism: An elective affinity? Media, Culture & Society, 40(5), 745–753. https://doi.org/10.1177/0163443718772192
  19. Gibson, R. K., & McAllister, I. (2015). Normalising or equalising party competition? Assessing the impact of the web on election campaigning. Political Studies, 63(3), 529–547. https://doi.org/10.1111/1467-9248.12107
  20. Godulla, A., Hoffmann, C. P., & Seibert, D. (2021). Dealing with deepfakes – An interdisciplinary examination of the state of research and implications for communication studies. SCM Studies in Communication and Media, 10(1), 72–96. https://doi.org/10.5771/2192-4007-2021-1-72
  21. Gosselin, R. D. (2025). AI detectors are poor western blot classifiers: A study of accuracy and predictive values. PeerJ, 13. https://doi.org/10.7717/peerj.18988
  22. Grabe, M. E., & Bucy, E. P. (2009). Image bite politics: News and the visual framing of elections. Oxford University Press. https://doi.org/10.1093/acprof:oso/9780195372076.001.0001
  23. Graf & Koch-Kramer. (2020). Instaloader. Retrieved January 10, 2025, from https://github.com/instaloader/instaloader
  24. Grittmann, E. (2007). Das politische Bild: Fotojournalismus und Pressefotografie in Theorie und Empirie [The political image: Photojournalism and press photography in theory and empirical research]. Herbert von Halem.
  25. Habermas, J. (1983). Moralbewußtsein und kommunikatives Handeln [Moral consciousness and communicative action]. Suhrkamp.
  26. Hameleers, M., & Marquart, F. (2023). It’s nothing but a deepfake! The effects of misinformation and deepfake labels delegitimizing an authentic political speech. International Journal of Communication, 17, 6291–6311.
  27. Hameleers, M., & Schmuck, D. (2017). It’s us against them: A comparative experiment on the effects of populist messages communicated via social media. Information, Communication & Society, 20(9), 1425–1444. https://doi.org/10.1080/1369118X.2017.1328523
  28. Hameleers, M., van der Meer, T. G., & Dobber, T. (2024). Distorting the truth versus blatant lies: The effects of different degrees of deception in domestic and foreign political deepfakes. Computers in Human Behavior, 152. https://doi.org/10.1016/j.chb.2023.108096
  29. Hausken, L. (2024). Photorealism versus photography. AI-generated depiction in the age of visual disinformation. Journal of Aesthetics & Culture, 16(1). https://doi.org/10.1080/20004214.2024.2340787
  30. Hooghe, M., Stolle, D., & Stouthuysen, P. (2004). Head start in politics: The recruitment function of youth organizations of political parties in Belgium (Flanders). Party Politics, 10(2), 193–212. https://doi.org/10.1177/1354068804040503
  31. Klinger, U., Koc-Michalska, K., & Russmann, U. (2023). Are campaigns getting uglier, and who is to blame? Negativity, dramatization and populism on Facebook in the 2014 and 2019 EP election campaigns. Political Communication, 40(3), 263–282. https://doi.org/10.1080/10584609.2022.2133198
  32. Laba, N. (2024). Engine for the imagination? Visual generative media and the issue of representation. Media, Culture & Society, 46(8), 1599–1620. https://doi.org/10.1177/01634437241259950
  33. Leidecker-Sandmann, M., & Geise, S. (2020). Tradition statt Innovation. Die deutsche Presseberichterstattung über die Wahlkampfstrategien der Parteien zur Bundestagswahl 2017 [Tradition instead of innovation. The German press coverage of political parties’ campaign strategies in the run-up to the 2017 parliamentary elections]. SCM Studies in Communication and Media, 9(2), 264–307. https://doi.org/10.5771/2192-4007-2020-2-264
  34. Leidecker-Sandmann, M., & Thomas, F. (2023). “Never was there more to do.” Use of vaguely formulated statements in the 2021 German national election campaign and their potential effects. In C. Holtz-Bacha (Ed.), Die (Massen-)Medien im Wahlkampf: Die Bundestagswahl 2021 (pp. 43–66). Springer Fachmedien Wiesbaden.
  35. Li, Y., Liu, Z., Zhao, J., Ren, L., Li, F., Luo, J., & Luo, B. (2024). The adversarial AI-art: Understanding, generation, detection, and benchmarking. In European Symposium on Research in Computer Security (pp. 311–331). Springer Nature Switzerland. https://doi.org/10.48550/arXiv.2404.14581
  36. Lu, Z., Huang, D., Bai, L., Qu, J., Wu, C., Liu, X., & Ouyang, W. (2023). Seeing is not always believing: Benchmarking human and model perception of AI-generated images. Advances in Neural Information Processing Systems, 36, 25435–25447. https://doi.org/10.48550/arXiv.2304.13023
  37. Mathys, M., Willi, M., & Meier, R. (2024). Synthetic photography detection: A visual guidance for identifying synthetic images created by AI. arXiv preprint arXiv:2408.06398. https://doi.org/10.48550/arXiv.2408.06398
  38. Magin, M., Podschuweit, N., Haßler, J., & Russmann, U. (2017). Campaigning in the fourth age of political communication. A multi-method study on the use of Facebook by German and Austrian parties in the 2013 national election campaigns. Information, Communication & Society, 20(11), 1698–1719. https://doi.org/10.1080/1369118X.2016.1254269
  39. Messaris, P., & Abraham, L. (2001). The role of images in framing news stories. In S. D. Reese, O. H. Gandy, Jr., & A. E. Grant (Eds.), Framing public life: Perspectives on media and our understanding of the social world (pp. 231–242). Routledge.
  40. Moernaut, R., Mast, J., & Pauwels, L. (2020). Visual and multimodal framing analysis. In L. Pauwels & D. Mannay (Eds.), The SAGE Handbook of Visual Research Methods (pp. 484–499). SAGE Publications. https://doi.org/10.4135/9781526417015.n30
  41. Momeni, M. (2025). Artificial intelligence and political deepfakes: Shaping citizen perceptions through misinformation. Journal of Creative Communications, 20(1), 41–56. https://doi.org/10.1177/09732586241277335
  42. Müller, M. G. (1997). Visuelle Wahlkampfkommunikation: Eine Typologie der Bildstrategien im amerikanischen Präsidentschaftswahlkampf [Visual campaign communication: A typology of image strategies in the American presidential election campaign]. Publizistik, 42(2), 205–228. https://doi.org/10.1007/BF03654575
  43. Perloff, R. M. (2021). The dynamics of political communication: Media and politics in a digital age. Routledge. https://doi.org/10.4324/9780429298851
  44. Powell, T. E., Boomgaarden, H. G., De Swert, K., & de Vreese, C. H. (2019). Framing fast and slow: A dual processing account of multimodal framing effects. Media Psychology, 22(4), 572–600. https://doi.org/10.1080/15213269.2018.1476891
  45. Peng, Q., Lu, Y., Peng, Y., Qian, S., Liu, X., & Shen, C. (2025, April). Crafting synthetic realities: Examining visual realism and misinformation potential of photorealistic AI-generated images. In Proceedings of the Extended Abstracts of the CHI Conference on Human Factors in Computing Systems (pp. 1–12). https://doi.org/10.1145/3706599.3719834
  46. Schmuck, D., & Matthes, J. (2017). Effects of economic and symbolic threat appeals in right-wing populist advertising on anti-immigrant attitudes: The impact of textual and visual appeals. Political Communication, 34(4), 607–626. https://doi.org/10.1080/10584609.2017.1316807
  47. Semetko, H. A., & Tworzecki, H. (2017). Campaign strategies, media, and voters: The fourth era of political communication. In J. Fisher, E. Fieldhouse, M. N. Franklin, R. Gibson, M. Cantijoch, & C. Wlezien (Eds.), The Routledge Handbook of Elections, Voting Behavior and Public Opinion (pp. 293–304). Routledge.
  48. Seo, K. (2020). Meta-analysis on visual persuasion: Does adding images to texts influence persuasion? Athens Journal of Mass Media and Communications, 6(3), 177–190. https://doi.org/10.30958/ajmmc.6-3-3
  49. Ternovski, J., Kalla, J., & Aronow, P. (2022). The negative consequences of informing voters about deepfakes: Evidence from two survey experiments. Journal of Online Trust and Safety, 1(2). https://doi.org/10.54501/jots.v1i2.28
  50. Vaccari, C., & Chadwick, A. (2020). Deepfakes and disinformation: Exploring the impact of synthetic political video on deception, uncertainty, and trust in news. Social Media + Society, 6(1), 2056305120903408. https://doi.org/10.1177/2056305120903408
  51. Ward, J. (2011). Reaching citizens online: How youth organizations are evolving their web presence. Information, Communication & Society, 14(6), 917–936. https://doi.org/10.1080/1369118X.2011.572982
  52. Weber, R. (2017). Political participation of young people in political parties. Zeitschrift für Politikwissenschaft, 27, 379–396. https://doi.org/10.1007/s41358-017-0106-z
  53. Wilke, J., & Leidecker, M. (2013). Regional – national – supranational. How the German press covers election campaigns on different levels of the political system. Central European Journal of Communication, 6(1(10)), 122–143.
  54. Wilke, J., & Reinemann, C. (2003). Die Bundestagswahl 2002: Ein Sonderfall? [The German federal election 2002: A special case?]. In C. Holtz-Bacha (Ed.), Die Massenmedien im Wahlkampf (pp. 29–46). VS. https://doi.org/10.1007/978-3-322-80461-7_3
  55. Wittenberg, C., Epstein, Z., Berinsky, A. J., & Rand, D. G. (2023). Labeling AI-generated content: Promises, perils, and future directions. Topical Policy Brief, MIT Schwarzman College of Computing. https://computing.mit.edu/wp-content/uploads/2023/11/AI-Policy_Labeling.pdf
  57. 9News staff. (2024, March 7). 9ExPress. 9News. https://www.9news.com.au/technology/9express/16480c33-636a-461f-9c4f-d0e2522c722a Open Google Scholar DOI: 10.5771/2192-4007-2025-4-517
  58. Ananny, M., & Karr, J. (2025). How media unions stabilize technological hype: Tracing organized journalism’s discursive constructions of generative artificial intelligence. Digital Journalism. https://doi.org/10.1080/21670811.2025.2454516 Open Google Scholar DOI: 10.5771/2192-4007-2025-4-517
  59. Attard, M., Davis, M., & Main, L. (2023). Gen AI and journalism. UTS Centre for Media Transition. https://doi.org/10.6084/m9.figshare.24751881.v3 Open Google Scholar DOI: 10.5771/2192-4007-2025-4-517
  60. Australian Communications and Media Authority. (n.d.). Local content on regional commercial radio. Retrieved April 24, 2025, from https://www.acma.gov.au/local-content-­regional-commercial-radio Open Google Scholar DOI: 10.5771/2192-4007-2025-4-517
  61. Australian Competition and Consumer Commission. (2019). Digital platforms inquiry final report. https://www.accc.gov.au/about-us/publications/digital-platforms-inquiry-final-report Open Google Scholar DOI: 10.5771/2192-4007-2025-4-517
  62. Australian Press Council. (2023, August). Submission to the Department of Industry, Transport, Regional Development and Communications on the Exposure Draft of the Communications Legislation Amendment (Combatting Misinformation and Disinformation) Bill 2023 (sub. E3250). https://www.infrastructure.gov.au/sites/default/files/documents/acma2023-e3250-australian-press-council.pdf Open Google Scholar DOI: 10.5771/2192-4007-2025-4-517
  63. Avishai, T. (Director). (2023, December 4). Synthetic media: AI and journalism (No. 6) [Broadcast]. In Knowing Machines. https://engelberg-center-live.simplecast.com/episodes/synthetic-media-ai-and-journalism Open Google Scholar DOI: 10.5771/2192-4007-2025-4-517
  64. Bäck, A., Diakopoulos, N., Granroth-Wilding, M., Haapanen, L., Leppänen, L. J., Melin, M., Moring, T. A., Munezero, M. D., Siren-Heikel, S. J., Södergård, C., & Toivonen, H. (2019). News automation: The rewards, risks and realities of “machine journalism.” World Association of Newspapers and News Publishers, WAN-IFRA. Open Google Scholar DOI: 10.5771/2192-4007-2025-4-517
  65. Bagozzi, R. (2007). The legacy of the technology acceptance model and a proposal for a paradigm shift. Journal of the Association for Information Systems, 8(4), 244–254. https://doi.org/10.17705/1jais.00122 Open Google Scholar DOI: 10.5771/2192-4007-2025-4-517
  66. Barnes, C., & Barraclough, T. (2020). Deepfakes and synthetic media. In R. Steff, J. Burton, & S. R. Soare (Eds.), Emerging Technologies and International Security: Machines, the State, and War (pp. 206–222). Routledge. https://doi.org/10.4324/9780367808846 Open Google Scholar DOI: 10.5771/2192-4007-2025-4-517
  67. Becker, K. B., Simon, F. M., & Crum, C. (2025). Policies in parallel? A comparative study of journalistic AI policies in 52 global news organisations. Digital Journalism, 1–21. https://doi.org/10.1080/21670811.2024.2431519 Open Google Scholar DOI: 10.5771/2192-4007-2025-4-517
  68. Beckett, C., & Yaseen, M. (2023). Generating change: The journalism AI report. Polis, London School of Economics and Political Science. https://www.journalismai.info/s/Generating-Change-_-The-Journalism-AI-report-_-English.pdf Open Google Scholar DOI: 10.5771/2192-4007-2025-4-517
  69. Borchardt, A., Simon, F., Zachrison, O., Bremme, K., Kurczabinska, J., Mulhall, E., & Johanny, Y. (2024). Trusted journalism in the age of generative AI. European Broadcasting Union. https://ora.ox.ac.uk/objects/uuid:8c874e2e-34de-4813-ba23-84e6300af110 Open Google Scholar DOI: 10.5771/2192-4007-2025-4-517
  70. Borden, S. L., & Tew, C. (2007). The role of journalist and the performance of journalism: Ethical lessons from “fake” news (seriously). Journal of Mass Media Ethics, 22(4), 300–314. https://doi.org/10.1080/08900520701583586 Open Google Scholar DOI: 10.5771/2192-4007-2025-4-517
  71. Broussard, M., Diakopoulos, N., Guzman, A. L., Abebe, R., Dupagne, M., & Chuan, C.-H. (2019). Artificial intelligence and journalism. Journalism & Mass Communication Quarterly, 96(3), 673–695. https://doi.org/10.1177/1077699019859901 Open Google Scholar DOI: 10.5771/2192-4007-2025-4-517
  72. Cazzamatta, R., & Sarısakaloğlu, A. (2025). Mapping global emerging scholarly research and practices of AI-supported fact-checking tools in journalism. Journalism Practice, 19(10), 2422–2444. https://doi.org/10.1080/17512786.2025.2463470 Open Google Scholar DOI: 10.5771/2192-4007-2025-4-517
  73. Center for News, Technology & Innovation. (2025). What it means to do journalism in the age of AI: Journalist views on safety, technology and government. https://innovating.news/2024-journalist-survey/ Open Google Scholar DOI: 10.5771/2192-4007-2025-4-517
  74. Chesney, R., & Citron, D. K. (2018). Deep fakes: A looming challenge for privacy, democracy, and national security. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.3213954 Open Google Scholar DOI: 10.5771/2192-4007-2025-4-517
  75. Cools, H., & Diakopoulos, N. (2024). Uses of generative AI in the newsroom: Mapping journalists’ perceptions of perils and possibilities. Journalism Practice, 1–19. https://doi.org/10.1080/17512786.2024.2394558 Open Google Scholar DOI: 10.5771/2192-4007-2025-4-517
  76. Cools, H., & Koliska, M. (2024). News automation and algorithmic transparency in the newsroom: The case of the Washington Post. Journalism Studies, 25(6), 662–680. https://doi.org/10.1080/1461670X.2024.2326636 Open Google Scholar DOI: 10.5771/2192-4007-2025-4-517
  77. de-Lima-Santos, M.-F., Yeung, W. N., & Dodds, T. (2024). Guiding the way: A comprehensive examination of AI guidelines in global media. AI & Society, 40, 2585–2603. https://doi.org/10.1007/s00146-024-01973-5 Open Google Scholar DOI: 10.5771/2192-4007-2025-4-517
  78. Diakopoulos, N., Cools, H., Li, C., Helberger, N., Kung, E., Rinehart, A., & Gibbs, L. (2024). Generative AI in journalism: The evolution of newswork and ethics in a generative information ecosystem. https://doi.org/10.13140/RG.2.2.31540.05765 Open Google Scholar DOI: 10.5771/2192-4007-2025-4-517
  79. Dodds, T., Zamith, R., & Lewis, S. C. (2025). The AI turn in journalism: Disruption, adaptation, and democratic futures. Journalism. Advance online publication https://doi.org/10.1177/14648849251343518 Open Google Scholar DOI: 10.5771/2192-4007-2025-4-517
  80. Dunstan, J., & Ortolan, M. (2024, January 31). An AI-generated image of a Victorian MP raises wider questions on digital ethics. ABC News. https://www.abc.net.au/news/2024-02-01/georgie-purcell-ai-image-nine-news-apology-digital-ethics/103408440 Open Google Scholar DOI: 10.5771/2192-4007-2025-4-517
  81. Eder, M., & Sjøvaag, H. (2025). Falling behind the adoption curve: Local journalism’s struggle for innovation in the AI transformation. Journal of Media Business Studies, 22(4), 325–343. https://doi.org/10.1080/16522354.2025.2473301 Open Google Scholar DOI: 10.5771/2192-4007-2025-4-517
  82. Elbeyi, E., Bruhn Jensen, K., Aronczyk, M., Asuka, J., Ceylan, G., Cook, J., Erdelyi, G., Ford, H., Milani, C., Mustafaraj, E., Ogenga, F., Yadin, S., Howard, P. N., Valenzuela, S., Brulle, R., Jacquet, J., Lewandowsky, S., & Roberts, T. (2025). Information integrity about climate science: A systematic review. International Panel on the Information Environment (IPIE). https://doi.org/10.61452/BTZP3426 Open Google Scholar DOI: 10.5771/2192-4007-2025-4-517
  83. Etikan, I. (2016). Comparison of Convenience Sampling and Purposive Sampling. American Journal of Theoretical and Applied Statistics, 5(1), 1–4. Open Google Scholar DOI: 10.5771/2192-4007-2025-4-517
  84. https://doi.org/10.11648/j.ajtas.20160501.11 Open Google Scholar DOI: 10.5771/2192-4007-2025-4-517
  85. European Broadcasting Union. (2025, May 5). Media outlets worldwide join call for AI companies to help protect news integrity. https://www.ebu.ch/news/2025/05/media-outlets-worldwide-join-call-for-ai-companies-to-help-protect-news-integrity Open Google Scholar DOI: 10.5771/2192-4007-2025-4-517
  86. Farhi, P. (2023, January 17). CNET used AI to write articles. It was a journalistic disaster. The Washington Post. https://www.washingtonpost.com/media/2023/01/17/cnet-ai-articles-journalism-corrections/ Open Google Scholar DOI: 10.5771/2192-4007-2025-4-517
  87. Feher, K. (2024). Exploring AI media. Definitions, conceptual model, research agenda. Journal of Media Business Studies, 21(4), 340–363. https://doi.org/10.1080/16522354.2024.2340419 Open Google Scholar DOI: 10.5771/2192-4007-2025-4-517
  88. Ferrucci, P., & Perreault, G. (2021). The liability of newness: Journalism, innovation and the issue of core competencies. Journalism Studies, 22(11), 1436–1449. https://doi.org/10.1080/1461670X.2021.1916777 Open Google Scholar DOI: 10.5771/2192-4007-2025-4-517
  89. Furtáková, L., & Janáčková, Ľ. (2023). AI in radio: The game changer you did not hear coming. In M. Prostináková Hossová, M. Martovič, & M. Solík (Eds.), Marketing identity: AI – The future of today. Proceedings from the International Scientific Conference. University of Ss. Cyril and Methodius. https://mmidentity.fmk.sk/wp-content/uploads/2024/10/MM_2023_eng.pdf Open Google Scholar DOI: 10.5771/2192-4007-2025-4-517
  90. Golding, P., & Murdock, G. (2022). The political economy of contemporary journalism and the crisis of public knowledge. In S. Allan (Ed.), The Routledge Companion to News and Journalism (2nd ed., pp. 36–45). Routledge. https://doi.org/10.4324/9781003174790-5 Open Google Scholar DOI: 10.5771/2192-4007-2025-4-517
  91. Gutierrez Lopez, M., Porlezza, C., Cooper, G., Makri, S., MacFarlane, A., & Missaoui, S. (2023). A question of design: Strategies for embedding AI-driven tools into journalistic work routines. Digital Journalism, 11(3), 484–503. https://doi.org/10.1080/21670811.2022.2043759 Open Google Scholar DOI: 10.5771/2192-4007-2025-4-517
  92. Gutiérrez-Caneda, B., Lindén, C.-G., & Vázquez-Herrero, J. (2024). Ethics and journalistic challenges in the age of artificial intelligence: Talking with professionals and experts. Frontiers in Communication, 9. https://doi.org/10.3389/fcomm.2024.1465178 Open Google Scholar DOI: 10.5771/2192-4007-2025-4-517
  93. Hall, C. J. (2025). Platform journalism on YouTube: A democratic functions approach to analysing journalism on digital platforms. Australian Journalism Review, 47(1), 97–115. ­https://doi.org/10.1386/ajr_00178_7 Open Google Scholar DOI: 10.5771/2192-4007-2025-4-517
  94. Harris, K. R. (2024). Synthetic media detection, the wheel, and the burden of proof. Philosophy & Technology, 37(131). https://doi.org/10.1007/s13347-024-00821-0 Open Google Scholar DOI: 10.5771/2192-4007-2025-4-517
  95. He, X., & Fang, L. (2024). Regulatory challenges in synthetic media governance: Policy frameworks for AI-generated content across image, video, and social platforms. Journal of Robotic Process Automation, AI Integration, and Workflow Optimization, 9(12), 36–54. https://helexscience.com/index.php/JRPAAIW/article/view/2024-12-13 Open Google Scholar DOI: 10.5771/2192-4007-2025-4-517
  96. Helberger, N., van Drunen, M., Moeller, J., Vrijenhoek, S., & Eskens, S. (2022). Towards a normative perspective on journalistic AI: Embracing the messy reality of normative ideals. Digital Journalism, 10(10), 1605–1626. https://doi.org/10.1080/21670811.2022.2152195 Open Google Scholar DOI: 10.5771/2192-4007-2025-4-517
97. Hermida, A. (2015). Nothing but the truth: Redrafting the journalistic boundary of verification. In M. Carlson & S. C. Lewis (Eds.), Boundaries of Journalism (pp. 37–50). Routledge.
98. Jones, B., Jones, R., & Luger, E. (2022). AI ‘everywhere and nowhere’: Addressing the AI intelligibility problem in public service journalism. Digital Journalism, 10(10), 1731–1755. https://doi.org/10.1080/21670811.2022.2145328
99. Jones, B., Jones, R., & Luger, E. (2023). Generative AI & journalism: A rapid risk-based review. University of Edinburgh. https://www.research.ed.ac.uk/en/publications/generative-ai-amp-journalism-a-rapid-risk-based-review
100. Kieran, M. (1998). Objectivity, impartiality and good journalism. In M. Kieran (Ed.), Media Ethics (1st ed., pp. 23–36). Routledge.
101. Lin, B., & Lewis, S. C. (2022). The one thing journalistic AI just might do for democracy. Digital Journalism, 10(10), 1627–1649. https://doi.org/10.1080/21670811.2022.2084131
102. Lindén, T. C.-G., & Dierickx, L. (2019). Robot journalism: The damage done by a metaphor. Unmediated, 2, 152–155.
103. Mahadevan, A. (2025, March 20). An Italian newspaper launched a generative AI experiment. It’s not going well. Poynter. https://www.poynter.org/tech-tools/2025/il-foglio-newspaper-generated-artificial-intelligence/
104. Martin, A., & Newell, B. (2024). Synthetic data, synthetic media, and surveillance. Surveillance & Society, 22(4), 448–452. https://doi.org/10.24908/ss.v22i4.18334
105. Matich, P., Thomson, T. J., & Thomas, R. J. (2025). Old threats, new name? Generative AI and visual journalism. Journalism Practice, 19(10), 2402–2421. https://doi.org/10.1080/17512786.2025.2451677
106. Medianet. (2025). 2025 Australian media landscape report. https://engage.medianet.com.au/2025-media-landscape-report
107. Meir, N. (2015, June 15). Automated earnings stories multiply. The Associated Press. https://www.ap.org/the-definitive-source/announcements/automated-earnings-stories-multiply/
108. Min, S. J., & Fink, K. (2021). Keeping up with the technologies: Distressed journalistic labor in the pursuit of “shiny” technologies. Journalism Studies, 22(14), 1987–2004. https://doi.org/10.1080/1461670X.2021.1979425
109. Møller, L. A., Cools, H., & Skovsgaard, M. (2025). One size fits some: How journalistic roles shape the adoption of generative AI. Journalism Practice, 1–22. https://doi.org/10.1080/17512786.2025.2484622
110. Montaña-Niño, S. (2024). Automated journalistic assemblages: A conceptual approach to the normative and ethical debates on AI implementation in newsrooms. Problemi dell’informazione, 1, 17–40. https://doi.org/10.1445/113227
111. Moran, C. (2023, April 6). ChatGPT is making up fake Guardian articles. Here’s how we’re responding. The Guardian. https://www.theguardian.com/commentisfree/2023/apr/06/ai-chatgpt-guardian-technology-risks-fake-article
112. Moran, R. E., & Shaikh, S. J. (2022). Robots in the news and newsrooms: Unpacking meta-journalistic discourse on the use of artificial intelligence in journalism. Digital Journalism, 10(10), 1756–1774. https://doi.org/10.1080/21670811.2022.2085129
113. Oliver, L. (2024, November 1). This chatbot helps tell the story of how women are affected by drug trafficking in Paraguay. Reuters Institute News. https://reutersinstitute.politics.ox.ac.uk/news/chatbot-helps-tell-story-how-women-are-affected-drug-trafficking-paraguay
114. Paris Charter on AI and Journalism. (2023, November 10). https://rsf.org/sites/default/files/medias/file/2023/11/Paris%20Charter%20on%20AI%20and%20Journalism.pdf
115. Partnership on AI. (2023, February 27). PAI’s responsible practices for synthetic media. https://partnershiponai.org/
116. Petković, B. (2014). Media integrity matters: Reclaiming public service values in media and journalism (1st ed.). Peace Institute, Institute for Contemporary Social and Political Studies.
117. Radcliffe, D. (2025). Journalism in the AI era (TRF Insights). Thomson Reuters Foundation. https://www.trust.org/wp-content/uploads/2025/01/TRF-Insights-Journalism-in-the-AI-Era.pdf
118. Riordan, K. (2014). Accuracy, independence, and impartiality: How legacy media and digital natives approach standards in the digital age. Reuters Institute for the Study of Journalism. https://reutersinstitute.politics.ox.ac.uk/our-research/accuracy-independence-and-impartiality-how-legacy-media-and-digital-natives-approach
120. Roper, D., Henriksson, T., Hälbich, K., & Martin, O. (2023). Gauging generative AI’s impact on newsrooms. World Association of News Publishers (WAN-IFRA). https://wan-ifra.org/insight/gauging-generative-ais-impact-in-newsrooms/
121. Salas, A., Rivero-Calle, I., & Martinón-Torres, F. (2023). Chatting with ChatGPT to learn about safety of COVID-19 vaccines – A perspective. Human Vaccines & Immunotherapeutics, 19(2). https://doi.org/10.1080/21645515.2023.2235200
122. Samosir, H. (2023, July 12). More countries across Asia are debuting digital artificial intelligence news readers. Could Australia follow suit? ABC News. https://www.abc.net.au/news/2023-07-13/artificial-intelligence-news-readers-becoming-common-in-asia/102591790
123. Schell, K. (2024). AI transparency in journalism: Labels for a hybrid era. Reuters Institute for the Study of Journalism. https://reutersinstitute.politics.ox.ac.uk/sites/default/files/2025-01/RISJ%20Fellows%20Paper_Katja%20Schell_MT24_Final.pdf
124. Simon, F. M. (2022). Uneasy bedfellows: AI in the news, platform companies and the issue of journalistic autonomy. Digital Journalism, 10(10), 1832–1854. https://doi.org/10.1080/21670811.2022.2063150
125. Simon, F. M. (2024). Artificial intelligence in the news: How AI retools, rationalizes, and reshapes journalism and the public arena. Tow Center for Digital Journalism. https://journalism.columbia.edu/news/tow-report-artificial-intelligence-news-and-how-ai-reshapes-journalism-and-public-arena
127. Simon, F. M., Altay, S., & Mercier, H. (2023). Misinformation reloaded? Fears about the impact of generative AI on misinformation are overblown. Harvard Kennedy School Misinformation Review. https://doi.org/10.37016/mr-2020-127
128. Simon, F. M., & Isaza-Ibarra, L. F. (2023). AI in the news: Reshaping the information ecosystem? Oxford Internet Institute. https://www.oii.ox.ac.uk/wp-content/uploads/2023/08/Minderoo_Report_Simon_Ibarra.pdf
129. Sjøvaag, H. (2024). The business of news in the AI economy. AI Magazine, 45(2), 246–255. https://doi.org/10.1002/aaai.12172
130. Society of Professional Journalists. (2014). SPJ code of ethics. https://www.spj.org/spj-code-of-ethics/
131. Squicciarini, M., Valdez Genao, J., & Sarmiento, C. (2024). Synthetic content and AI policy: A primer. UNESCO. https://policycommons.net/artifacts/17958669/synthetic-content-and-its-implications-for-ai-policy/18857919/
132. Ternovski, J., Kalla, J., & Aronow, P. M. (2022). The negative consequences of informing voters about deepfakes: Evidence from two survey experiments. Journal of Online Trust and Safety, 1(2). https://doi.org/10.54501/jots.v1i2.28
133. Thomson, T. J., Thomas, R. J., & Matich, P. (2024). Generative visual AI in news organizations: Challenges, opportunities, perceptions, and policies. Digital Journalism, 1–22. https://doi.org/10.1080/21670811.2024.2331769
134. Thomson, T. J., Thomas, R., Riedlinger, M., & Matich, P. (2025). Generative AI & journalism. RMIT University. https://doi.org/10.6084/m9.figshare.28068008
135. Toff, B., & Simon, F. M. (2024). “Or they could just not use it?”: The dilemma of AI disclosure for audience trust in news. The International Journal of Press/Politics, 30(4), 881–903. https://doi.org/10.1177/19401612241308697
136. Tran, M. (2006, August 18). Robots write the news. The Guardian. https://www.theguardian.com/news/blog/2006/aug/18/robotswriteth
137. WashPostPR. (2024, November 7). The Washington Post launches “Ask the Post AI,” a new search experience. The Washington Post. https://www.washingtonpost.com/pr/2024/11/07/washington-post-launches-ask-post-ai-new-search-experience/
138. Whittaker, L., Kietzmann, T. C., Kietzmann, J., & Dabirian, A. (2020). “All around me are synthetic faces”: The mad world of AI-generated media. IT Professional, 22(5), 90–99. https://doi.org/10.1109/MITP.2020.2985492
139. Wilding, D., Fray, P., Molitorisz, S., & McKewon, E. (2018). The impact of digital platforms on news and journalistic content. UTS Centre for Media Transition. http://hdl.handle.net/10453/159124
140. Wintterlin, F., Engelke, K. M., & Hase, V. (2020). Can transparency preserve journalism’s trustworthiness? Recipients’ views on transparency about source origin and verification regarding user-generated content in the news. SCM Studies in Communication and Media, 9(2), 218–240. https://doi.org/10.5771/2192-4007-2020-2-218
141. Zier, J., & Diakopoulos, N. (2024, October 26). Labeling AI-generated news content: Matching journalist intentions with audience expectations. Proceedings of the Computation and Journalism Symposium 2024. https://cplusj2024.github.io/
142. Abbas, F., & Taeihagh, A. (2024). Unmasking deepfakes: A systematic review of deepfake detection and generation techniques using artificial intelligence. Expert Systems with Applications, 252. https://doi.org/10.1016/j.eswa.2024.124260
143. Brams, S., Ziv, G., Levin, O., Spitz, J., Wagemans, J., Williams, A. M., & Helsen, W. F. (2019). The relationship between gaze behavior, expertise, and performance: A systematic review. Psychological Bulletin, 145(10), 980–1027. https://doi.org/10.1037/bul0000207
144. Bray, S. D., Johnson, S. D., & Kleinberg, B. (2023). Testing human ability to detect ‘deepfake’ images of human faces. Journal of Cybersecurity, 9(1). https://doi.org/10.1093/cybsec/tyad011
145. Burla, L., Knierim, B., Barth, J., Liewald, K., Duetz, M., & Abel, T. (2008). From text to codings: Intercoder reliability assessment in qualitative content analysis. Nursing Research, 57(2), 113–117. https://doi.org/10.1097/01.NNR.0000313482.33917.7d
146. Caporusso, N., Zhang, K., & Carlson, G. (2020). Using eye-tracking to study the authenticity of images produced by generative adversarial networks. 2020 International Conference on Electrical, Communication, and Computer Engineering (ICECCE), 1–6. https://doi.org/10.1109/ICECCE49384.2020.9179472
147. Cartella, G., Cuculo, V., Cornia, M., & Cucchiara, R. (2024). Unveiling the truth: Exploring human gaze patterns in fake images. IEEE Signal Processing Letters, 1–5. https://doi.org/10.1109/LSP.2024.3375288
148. Diel, A., Lalgi, T., Schröter, I. C., MacDorman, K. F., Teufel, M., & Bäuerle, A. (2024). Human performance in detecting deepfakes: A systematic review and meta-analysis of 56 papers. Computers in Human Behavior Reports, 16. https://doi.org/10.1016/j.chbr.2024.100538
149. Diel, A., Teufel, M., & Bäuerle, A. (2024). Inability to detect deepfakes: Deepfake detection training improves detection accuracy, but increases emotional distress and reduces self-efficacy. OSF. https://doi.org/10.31219/osf.io/muwnj
150. Dolhansky, B., Bitton, J., Pflaum, B., Lu, J., Howes, R., Wang, M., & Canton Ferrer, C. (2020). The DeepFake Detection Challenge (DFDC) dataset. arXiv. https://doi.org/10.48550/arXiv.2006.07397
151. Dornbusch, A., Tye, T., Somoray, K., & Miller, D. J. (2025). Third person effects and the base-rate fallacy: Cognitive biases in deepfake detection [Manuscript in preparation].
152. El Mokadem, S. S. (2023). The effect of media literacy on misinformation and deep fake video detection. Arab Media & Society, 35, 53–78. https://www.arabmediasociety.com/
153. Flynn, A., Powell, A., Scott, A. J., & Cama, E. (2022). Deepfakes and digitally altered imagery abuse: A cross-country exploration of an emerging form of image-based sexual abuse. The British Journal of Criminology, 62(6), 1341–1358. https://doi.org/10.1093/bjc/azab111
154. Godulla, A., Hoffmann, C. P., & Seibert, D. (2021). Dealing with deepfakes: An interdisciplinary examination of the state of research and implications for communication studies. Studies in Communication and Media, 10(1), 72–96. https://doi.org/10.5771/2192-4007-2021-1-72
155. Gupta, P., Chugh, K., Dhall, A., & Subramanian, R. (2020). The eyes know it: FakeET: An eye-tracking database to understand deepfake perception. Proceedings of the 2020 International Conference on Multimodal Interaction, 519–527. https://doi.org/10.1145/3382507.3418857
156. Köbis, N. C., Doležalová, B., & Soraperra, I. (2021). Fooled twice: People cannot detect deepfakes but think they can. iScience, 24(11). https://doi.org/10.2139/ssrn.3832978
157. Kramer, R. S., Mireku, M. O., Flack, T. R., & Ritchie, K. L. (2019). Face morphing attacks: Investigating detection with humans and computers. Cognitive Research: Principles and Implications, 4(1). https://doi.org/10.1186/s41235-019-0181-4
158. McMahon, L., Kleinman, Z., & Subramanian, C. (2025, January 8). Facebook and Instagram get rid of fact checkers. BBC News. https://www.bbc.com/news/articles/cly74mpy8klo
159. Miller, D. J., Somoray, K., & Stevens, H. (2025). A shallow history of deepfakes. SSRN. http://dx.doi.org/10.2139/ssrn.5130379
160. Ng, Y. L. (2023). An error management approach to perceived fakeness of deepfakes: The moderating role of perceived deepfake targeted politicians’ personality characteristics. Current Psychology, 42, 25658–25669. https://doi.org/10.1007/s12144-022-03621-x
161. Robertson, D. J., Mungall, A., Watson, D. G., Wade, K. A., Nightingale, S. J., & Butler, S. (2018). Detecting morphed passport photos: A training and individual differences approach. Cognitive Research: Principles and Implications, 3. https://doi.org/10.1186/s41235-018-0113-8
162. Senju, A., Vernetti, A., Kikuchi, Y., Akechi, H., Hasegawa, T., & Johnson, M. H. (2013). Cultural background modulates how we look at other persons’ gaze. International Journal of Behavioral Development, 37(2), 131–136. https://doi.org/10.1177/0165025412465360
163. Silva, S. H., Bethany, M., Votto, A. M., Scarff, I. H., Beebe, N., & Najafirad, P. (2022). Deepfake forensics analysis: An explainable hierarchical ensemble of weakly supervised models. Forensic Science International: Synergy, 4. https://doi.org/10.1016/j.fsisyn.2022.100217
164. Somoray, K., & Miller, D. J. (2023). Providing detection strategies to improve human detection of deepfakes: An experimental study. Computers in Human Behavior, 149. https://doi.org/10.1016/j.chb.2023.107917
165. Somoray, K., Miller, D. J., & Holmes, M. (2025). Human performance in deepfake detection: A systematic review. Human Behavior and Emerging Technologies, 2025. https://doi.org/10.1155/hbe2/1833228
166. Smith, H., & Mansted, K. (2020). Weaponised deep fakes: National security and democracy [Policy brief]. Australian Strategic Policy Institute. https://www.aspi.org.au/report/weaponised-deep-fakes
167. Sütterlin, S., Ask, T. F., Mägerle, S., Glöckler, S., Wolf, L., Schray, J., Chandi, A., Bursac, T., Khodabakhsh, A., Knox, B. J., Canham, M., & Lugo, R. G. (2023). Individual deep fake recognition skills are affected by viewer’s political orientation, agreement with content and device used. In D. D. Schmorrow & C. M. Fidopiastis (Eds.), Augmented Cognition: 17th International Conference, Held as Part of the 25th HCI International Conference, Copenhagen, Denmark, Proceedings: Vol. 14019 (pp. 269–284). Springer, Cham. https://doi.org/10.1007/978-3-031-35017-7_18
168. Tahir, R., Batool, B., Jamshed, H., Jameel, M., Anwar, M., Ahmed, F., Zaffar, M. A., & Zaffar, M. F. (2021). Seeing is believing: Exploring perceptual differences in deepfake videos. Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, 1–16. https://doi.org/10.1145/3411764.3445699
169. Thaw, N. N., July, T., Wai, A. N., Goh, D. H. L., & Chua, A. Y. (2020). Is it real? A study on detecting deepfake videos. Proceedings of the Association for Information Science and Technology, 57(1). https://doi.org/10.1002/pra2.366
170. Wöhler, L., Zembaty, M., Castillo, S., & Magnor, M. (2021). Towards understanding perceptual differences between genuine and face-swapped videos. Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, 1–13. https://doi.org/10.1145/3411764.3445627
171. World Economic Forum. (2024). The Global Risks Report 2024. https://www3.weforum.org/docs/WEF_The_Global_Risks_Report_2024.pdf
173. Ahmed, S. (2023). Examining public perception and cognitive biases in the presumed influence of deepfakes threat: Empirical evidence of third person perception from three studies. Asian Journal of Communication, 33(3), 308–331. https://doi.org/10.1080/01292986.2023.2194886
174. Altay, S., & Acerbi, A. (2024). People believe misinformation is a threat because they assume others are gullible. New Media & Society, 26(11), 6440–6461. https://doi.org/10.1177/14614448231153379
175. Baek, Y. M., Kang, H., & Kim, S. (2019). Fake news should be regulated because it influences both “others” and “me”: How and why the influence of presumed influence model should be extended. Mass Communication and Society, 22(3), 301–323. https://doi.org/10.1080/15205436.2018.1562076
176. Birrer, A., & Just, N. (2024). What we know and don’t know about deepfakes: An investigation into the state of the research and regulatory landscape. New Media & Society. Advance online publication. https://doi.org/10.1177/14614448241253138
177. Bendahan Bitton, D. B., Hoffmann, C. P., & Godulla, A. (2024). Deepfakes in the context of AI inequalities: Analysing disparities in knowledge and attitudes. Information, Communication & Society, 295–315. https://doi.org/10.1080/1369118X.2024.2420037
178. Chen, M., Yu, W., & Liu, K. (2023). A meta-analysis of third-person perception related to distorted information: Synthesizing the effect, antecedents, and consequences. Information Processing & Management, 60(5). https://doi.org/10.1016/j.ipm.2023.103425
179. Chung, M., & Wihbey, J. (2024). Social media regulation, third-person effect, and public views: A comparative study of the United States, the United Kingdom, South Korea, and Mexico. New Media & Society, 26(8), 4534–4553. https://doi.org/10.1177/14614448221122996
180. Corbu, N., Oprea, D.-A., Negrea-Busuioc, E., & Radu, L. (2020). ‘They can’t fool me, but they can fool the others!’ Third person effect and fake news detection. European Journal of Communication, 35(2), 165–180. https://doi.org/10.1177/0267323120903686
181. Davison, W. P. (1983). The third-person effect in communication. Public Opinion Quarterly, 47(1), 1–15. https://doi.org/10.1086/268763
182. de Ruiter, A. (2021). The distinct wrong of deepfakes. Philosophy & Technology, 34(4), 1311–1332. https://doi.org/10.1007/s13347-021-00459-2
183. Gardner, G. T., & Gould, L. C. (1989). Public perceptions of the risks and benefits of technology. Risk Analysis, 9(2), 225–242. https://doi.org/10.1111/j.1539-6924.1989.tb01243.x
184. Godulla, A., Hoffmann, C. P., & Seibert, D. (2021). Dealing with deepfakes – An interdisciplinary examination of the state of research and implications for communication studies. SCM Studies in Communication and Media, 10(1), 72–96. https://doi.org/10.5771/2192-4007-2021-1-72
185. Gosse, C., & Burkell, J. (2020). Politics and porn: How news media characterizes problems presented by deepfakes. Critical Studies in Media Communication, 37(5), 497–511. https://doi.org/10.1080/15295036.2020.1832697
186. Gunther, A. C., & Storey, J. D. (2003). The influence of presumed influence. Journal of Communication, 53(2), 199–215. https://doi.org/10.1111/j.1460-2466.2003.tb02586.x
187. Hameleers, M., Van Der Meer, T. G. L. A., & Dobber, T. (2022). You won’t believe what they just said! The effects of political deepfakes embedded as vox populi on social media. Social Media + Society, 8(3). https://doi.org/10.1177/20563051221116346
190. Jang, S. M., & Kim, J. K. (2018). Third person effects of fake news: Fake news regulation and media literacy interventions. Computers in Human Behavior, 80, 295–302. https://doi.org/10.1016/j.chb.2017.11.034
191. Jungherr, A., & Rauchfleisch, A. (2024). Negative downstream effects of alarmist disinformation discourse: Evidence from the United States. Political Behavior, 46(4), 2123–2143. https://doi.org/10.1007/s11109-024-09911-3
192. Jungherr, A., & Rauchfleisch, A. (in press). Public opinion on the politics of AI alignment: Cross-national evidence on expectations for AI moderation from Germany and the United States. Social Media + Society.
193. Kalogeropoulos, A., Toff, B., & Fletcher, R. (2022). The watchdog press in the doghouse: A comparative study of attitudes about accountability journalism, trust in news, and news avoidance. The International Journal of Press/Politics, 29(2), 485–506. https://doi.org/10.1177/19401612221112572
194. Karaboga, M., Frei, N., Puppis, M., Vogler, D., Raemy, P., Ebbers, F., Runge, G., Rauchfleisch, A., de Seta, G., Gurr, G., Friedewald, M., & Rovelli, S. (2024). Deepfakes und manipulierte Realitäten: Technologiefolgenabschätzung und Handlungsempfehlungen für die Schweiz [Deepfakes and manipulated realities: Technology impact assessment and policy recommendations for Switzerland]. vdf Hochschulverlag AG.
195. Kim, M. (2025). A direct and indirect effect of third-person perception of COVID-19 fake news on support for fake news regulations on social media: Investigating the role of negative emotions and political views. Mass Communication and Society, 28(2), 229–252. https://doi.org/10.1080/15205436.2023.2227601
196. Lima, M. L., Barnett, J., & Vala, J. (2005). Risk perception and technological development at a societal level. Risk Analysis, 25(5), 1229–1239. https://doi.org/10.1111/j.1539-6924.2005.00664.x
197. Liu, P. L., & Huang, L. V. (2020). Digital disinformation about COVID-19 and the third-person effect: Examining the channel differences and negative emotional outcomes. Cyberpsychology, Behavior, and Social Networking, 23(11), 789–793. https://doi.org/10.1089/cyber.2020.0363
198. Marien, S., & Hooghe, M. (2011). Does political trust matter? An empirical investigation into the relation between political trust and support for law compliance. European Journal of Political Research, 50(2), 267–291. https://doi.org/10.1111/j.1475-6765.2010.01930.x
199. Nguyen, D. (2023). How news media frame data risks in their coverage of big data and AI. Internet Policy Review, 12(2). https://policyreview.info/articles/analysis/how-news-media-frame-data-risks-big-data-and-ai
200. Paradise, A., & Sullivan, M. (2012). (In)visible threats? The third-person effect in perceptions of the influence of Facebook. Cyberpsychology, Behavior, and Social Networking, 15(1), 55–60. https://doi.org/10.1089/cyber.2011.0054
201. PytlikZillig, L. M., Kimbrough, C. D., Shockley, E., Neal, T. M. S., Herian, M. N., Hamm, J. A., Bornstein, B. H., & Tomkins, A. J. (2017). A longitudinal and experimental study of the impact of knowledge on the bases of institutional trust. PLOS ONE, 12(4). https://doi.org/10.1371/journal.pone.0175387
202. Rauchfleisch, A., Vogler, D., & de Seta, G. (2025). Deepfakes or synthetic media? The effect of euphemisms for labeling technology on risk and benefit perceptions. Social Media + Society. https://doi.org/10.1177/20563051251350975
203. Riedl, M. J., Whipple, K. N., & Wallace, R. (2022). Antecedents of support for social media content moderation and platform regulation: The role of presumed effects on self and others. Information, Communication & Society, 25(11), 1632–1649. https://doi.org/10.1080/1369118X.2021.1874040
204. Six, F. (2013). Trust in regulatory relations: How new insights from trust research improve regulation theory. Public Management Review, 15(2), 163–185. https://doi.org/10.1080/14719037.2012.727461
205. Slovic, P., Fischhoff, B., & Lichtenstein, S. (1982). Why study risk perception? Risk Analysis, 2(2), 83–93. https://doi.org/10.1111/j.1539-6924.1982.tb01369.x
206. Swissinfo.ch. (2025, May 9). Switzerland rejects deepfake regulation. https://www.swissinfo.ch/eng/ai-governance/switzerland-rejects-deepfake-regulation/89277391
207. Thouvenin, F., Eisenegger, M., Volz, S., Vogler, D., & Jaffé, M. (2023). Governance von Desinformation in digitalisierten Öffentlichkeiten. Bericht für das Bundesamt für Kommunikation (BAKOM) [Governance of disinformation in digitalized publics. Report for the Federal Office of Communication]. https://www.bakom.admin.ch/bakom/de/home/elektronische-medien/studien/einzelstudien.html
209. Vaccari, C., & Chadwick, A. (2020). Deepfakes and disinformation: Exploring the impact of synthetic political video on deception, uncertainty, and trust in news. Social Media + Society, 6(1). https://doi.org/10.1177/2056305120903408
210. Verhoest, K., Redert, B., Maggetti, M., Levi-Faur, D., & Jordana, J. (2025). Trust and regulation. In F. Six, J. A. Hamm, D. Latusek, E. V. Zimmeren, & K. Verhoest (Eds.), Handbook on Trust in Public Governance (pp. 360–380). Edward Elgar Publishing. https://doi.org/10.4337/9781802201406.00030
212. Wang, S., & Kim, S. (2022). Users’ emotional and behavioral responses to deepfake videos of K-pop idols. Computers in Human Behavior, 134. https://doi.org/10.1016/j.chb.2022.107305
213. Wolf, C. (2021). Public trust and biotech innovation: A theory of trustworthy regulation of (scary!) technology. Social Philosophy and Policy, 38(2), 29–49. https://doi.org/10.1017/S0265052522000036
214. Yadlin-Segal, A., & Oppenheim, Y. (2021). Whose dystopia is it anyway? Deepfakes and social media regulation. Convergence: The International Journal of Research into New Media Technologies, 27(1), 36–51. https://doi.org/10.1177/1354856520923963
215. Yu, E., Song, H., Jung, J., & Kim, Y. J. (2023). Perception and attitude toward the regulation of online video streaming (in South Korea). Online Media and Global Communication, 2(4), 651–679. https://doi.org/10.1515/omgc-2023-0059
  217. Aïmeur, E., Amri, S., & Brassard, G. (2023). Fake news, disinformation and misinformation in social media: A review. Social Network Analysis and Mining, 13(1). https://doi.org/10.1007/s13278-023-01028-5
  218. Akhtar, P., Ghouri, A. M., Khan, H. U. R., Amin ul Haq, M., Awan, U., Zahoor, N., Khan, Z., & Ashraf, A. (2023). Detecting fake news and disinformation using artificial intelligence and machine learning to avoid supply chain disruptions. Annals of Operations Research, 327(2), 633–657. https://doi.org/10.1007/s10479-022-05015-5
  219. Bennett, W. L., & Livingston, S. (2023). A brief history of the disinformation age: Information wars and the decline of institutional authority. In S. Salgado & S. Papathanassopoulos (Eds.), Streamlining Political Communication Concepts (pp. 43–73). Springer International Publishing. https://doi.org/10.1007/978-3-031-45335-9_4
  220. Bösch, M., & Divon, T. (2024). The sound of disinformation: TikTok, computational propaganda, and the invasion of Ukraine. New Media & Society, 26(9), 5081–5106. https://doi.org/10.1177/14614448241251804
  221. Bray, S. D., Johnson, S. D., & Kleinberg, B. (2023). Testing human ability to detect ‘deepfake’ images of human faces. Journal of Cybersecurity, 9(1). https://doi.org/10.1093/cybsec/tyad011
  222. Calvo, D., Cano-Orón, L., & Abengozar, A. E. (2020). Materials and assessment of literacy level for the recognition of social bots in political misinformation contexts. ICONO 14, Revista de Comunicación y Tecnologías Emergentes, 18(2), 111–136.
  223. CDEI. (2019, September 12). Snapshot paper – Deepfakes and audiovisual disinformation. Centre for Data Ethics and Innovation. https://www.gov.uk/government/publications/cdei-publishes-its-first-series-of-three-snapshot-papers-ethical-issues-in-ai/snapshot-paper-deepfakes-and-audiovisual-disinformation
  224. Cho, H., Cannon, J., Lopez, R., & Li, W. (2024). Social media literacy: A conceptual framework. New Media & Society, 26(2), 941–960. https://doi.org/10.1177/14614448211068530
  225. Cole, S. (2017, December 11). AI-assisted fake porn is here and we’re all fucked. VICE. https://www.vice.com/en/article/gal-gadot-fake-ai-porn/
  226. Dan, V., Paris, B., Donovan, J., Hameleers, M., & Roozenbeek, J. (2021). Visual mis- and disinformation, social media, and democracy. Journalism & Mass Communication Quarterly, 98(3), 641–664. https://doi.org/10.1177/10776990211035395
  227. Darius, P., & Stephany, F. (2022). How the far right polarises Twitter: ‘Hashjacking’ as a disinformation strategy in times of COVID-19. In R. M. Benito, C. Cherifi, H. Cherifi, E. Moro, L. M. Rocha, & M. Sales-Pardo (Eds.), Complex Networks & Their Applications X (Vol. 1073, pp. 100–111). Springer International Publishing. https://doi.org/10.1007/978-3-030-93413-2_9
  228. Dobber, T., Metoui, N., Trilling, D., Helberger, N., & de Vreese, C. (2021). Do (microtargeted) deepfakes have real effects on political attitudes? The International Journal of Press/Politics, 26(1), 69–91. https://doi.org/10.1177/1940161220944364
  229. Dresing, T., & Pehl, T. (2013). Praxisbuch Interview, Transkription & Analyse [Practical guide to interviews, transcription & analysis] (5th ed.).
  230. Filipovic, A., & Schülke, A. (2023). Desinformation und Desinformationsresilienz [Disinformation and disinformation resilience]. Ethik und Militär: Kontroversen in Militärethik & Sicherheitspolitik, 1, 34–41.
  231. Frischlich, L. (2019, May 2). Kritische Medienkompetenz als Säule demokratischer Resilienz in Zeiten von “Fake News” und Online-Desinformation [Critical media literacy as a pillar of democratic resilience in times of “fake news” and online disinformation]. Bundeszentrale für politische Bildung. https://www.bpb.de/themen/medien-journalismus/digitale-desinformation/290527/kritische-medienkompetenz-als-saeule-demokratischer-resilienz-in-zeiten-von-fake-news-und-online-desinformation/
  232. Gambín, Á. F., Yazidi, A., Vasilakos, A., Haugerud, H., & Djenouri, Y. (2024). Deepfakes: Current and future trends. Artificial Intelligence Review, 57(3). https://doi.org/10.1007/s10462-023-10679-x
  233. Godulla, A., Hoffmann, C. P., & Seibert, D. (2021). Dealing with deepfakes – An interdisciplinary examination of the state of research and implications for communication studies. SCM Studies in Communication and Media, 10(1), 72–96. https://doi.org/10.5771/2192-4007-2021-1-72
  234. Goldstein, J. A., Sastry, G., Musser, M., DiResta, R., Gentzel, M., & Sedova, K. (2023). Generative language models and automated influence operations: Emerging threats and potential mitigations (arXiv:2301.04246). arXiv. https://doi.org/10.48550/arXiv.2301.04246
  236. Graves, L., & Amazeen, M. (2019). Fact-checking as idea and practice in journalism. Oxford University Press. https://ora.ox.ac.uk/objects/uuid:a7450b2f-f5a7-4207-90e2-254ec5de14e2
  237. Groh, M., Epstein, Z., Firestone, C., & Picard, R. (2022). Deepfake detection by human crowds, machines, and machine-informed crowds. Proceedings of the National Academy of Sciences, 119(1). https://doi.org/10.1073/pnas.2110013119
  238. Guilbeault, D. (2018). Digital marketing in the disinformation age. Journal of International Affairs, 71(1.5), 33–42.
  239. Hameleers, M., Powell, T. E., Van Der Meer, T. G. L. A., & Bos, L. (2020). A picture paints a thousand lies? The effects and mechanisms of multimodal disinformation and rebuttals disseminated via social media. Political Communication, 37(2), 281–301. https://doi.org/10.1080/10584609.2019.1674979
  240. Higley, J. (2018). Continuities and discontinuities in elite theory. In H. Best & J. Higley (Eds.), The Palgrave Handbook of Political Elites (pp. 25–39). Palgrave Macmillan UK. https://doi.org/10.1057/978-1-137-51904-7_4
  241. Hoffmann-Lange, U. (2018). Methods of elite identification. In H. Best & J. Higley (Eds.), The Palgrave Handbook of Political Elites (pp. 79–92). Palgrave Macmillan UK. https://doi.org/10.1057/978-1-137-51904-7_8
  243. Hu, K. (2023, February 2). ChatGPT sets record for fastest-growing user base – Analyst note. Reuters. https://www.reuters.com/technology/chatgpt-sets-record-fastest-growing-user-base-analyst-note-2023-02-01/
  244. Hugger, K.-U. (2022). Medienkompetenz [Media competence]. In U. Sander, F. von Gross, & K.-U. Hugger (Eds.), Handbuch Medienpädagogik (pp. 67–80). Springer Fachmedien. https://doi.org/10.1007/978-3-658-23578-9_9
  245. Hwang, Y., Ryu, J. Y., & Jeong, S.-H. (2021). Effects of disinformation using deepfake: The protective effect of media literacy education. Cyberpsychology, Behavior, and Social Networking, 24(3), 188–193. https://doi.org/10.1089/cyber.2020.0174
  246. Kalsnes, B., Falasca, K., & Kammer, A. (2021). Scandinavian political journalism in a time of fake news and disinformation (pp. 283–304). Nordicom, University of Gothenburg. https://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-40895
  247. Kaplan, A., & Haenlein, M. (2019). Siri, Siri, in my hand: Who’s the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence. Business Horizons, 62(1), 15–25. https://doi.org/10.1016/j.bushor.2018.08.004
  248. Köbis, N. C., Doležalová, B., & Soraperra, I. (2021). Fooled twice: People cannot detect deepfakes but think they can. iScience, 24(11). https://doi.org/10.1016/j.isci.2021.103364
  249. Kong, S.-C., Man-Yin Cheung, W., & Zhang, G. (2021). Evaluation of an artificial intelligence literacy course for university students with diverse study backgrounds. Computers and Education: Artificial Intelligence, 2. https://doi.org/10.1016/j.caeai.2021.100026
  250. Leschzyk, D. K. (2021). Infodemic in Germany and Brazil: How the AfD and Jair Bolsonaro are sowing distrust during the Corona pandemic. Zeitschrift für Literaturwissenschaft und Linguistik, 51(3), 477–503. https://doi.org/10.1007/s41244-021-00210-6
  251. Lintner, T. (2024). A systematic review of AI literacy scales. npj Science of Learning, 9(1). https://doi.org/10.1038/s41539-024-00264-4
  252. Liu, Z., Qi, X., & Torr, P. H. S. (2020). Global texture enhancement for fake face detection in the wild. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 8060–8069. https://openaccess.thecvf.com/content_CVPR_2020/html/Liu_Global_Texture_Enhancement_for_Fake_Face_Detection_in_the_Wild_CVPR_2020_paper.html
  253. Maros, A., Almeida, J. M., & Vasconcelos, M. (2021). A study of misinformation in audio messages shared in WhatsApp groups. In J. Bright, A. Giachanou, V. Spaiser, F. Spezzano, A. George, & A. Pavliuc (Eds.), Disinformation in Open Online Media (pp. 85–100). Springer International Publishing. https://doi.org/10.1007/978-3-030-87031-7_6
  254. Martínez-Bravo, M. C., Sádaba Chalezquer, C., & Serrano-Puche, J. (2022). Dimensions of digital literacy in the 21st century competency frameworks. Sustainability, 14(3). https://doi.org/10.3390/su14031867
  256. Mayring, P. (2010). Qualitative Inhaltsanalyse. Grundlagen und Techniken [Qualitative content analysis. Basics and techniques]. Beltz.
  257. Millière, R. (2022). Deep learning and synthetic media. Synthese, 200(3). https://doi.org/10.1007/s11229-022-03739-2
  258. Ng, D. T. K., Leung, J. K. L., Chu, S. K. W., & Qiao, M. S. (2021). Conceptualizing AI literacy: An exploratory review. Computers and Education: Artificial Intelligence, 2. https://doi.org/10.1016/j.caeai.2021.100041
  260. Nightingale, S. J., & Farid, H. (2022). AI-synthesized faces are indistinguishable from real faces and more trustworthy. Proceedings of the National Academy of Sciences, 119(8). https://doi.org/10.1073/pnas.2120481119
  261. Pawelec, M., & Sievi, L. (2023). Falschinformationen in den sozialen Medien als Herausforderung für deutsche Sicherheitsbehörden und -organisationen [Disinformation on social media as a challenge for German security authorities and organizations]. Kriminologie – Das Online-Journal | Criminology – The Online Journal, 5(5). https://doi.org/10.18716/ojs/krimoj/2023.4.7
  262. Koistinen, P., Alaraatikka, M., Sederholm, T., Savolainen, D., Huhtinen, A.-M., & Kaarkoski, M. (2022). Public authorities as a target of disinformation. European Conference on Cyber Warfare and Security, 21(1), 123–129. https://doi.org/10.34190/eccws.21.1.371
  263. Petratos, P. N. (2021). Misinformation, disinformation, and fake news: Cyber risks to business. Business Horizons, 64(6), 763–774. https://doi.org/10.1016/j.bushor.2021.07.012
  264. Powell, T. E., Boomgaarden, H. G., De Swert, K., & de Vreese, C. H. (2015). A clearer picture: The contribution of visuals and text to framing effects. Journal of Communication, 65(6), 997–1017. https://doi.org/10/f3s2sj
  265. Rana, M. S., Nobi, M. N., Murali, B., & Sung, A. H. (2022). Deepfake detection: A systematic literature review. IEEE Access, 10, 25494–25513. https://doi.org/10.1109/ACCESS.2022.3154404
  266. Roe, J., Perkins, M., & Furze, L. (2024). Deepfakes and higher education: A research agenda and scoping review of synthetic media. Journal of University Teaching and Learning Practice, 21(10). https://doi.org/10.53761/2y2np178
  267. Rohs, M., & Seufert, S. (2020). Berufliche Medienkompetenz [Professional media literacy]. In R. Arnold, A. Lipsmeier, & M. Rohs (Eds.), Handbuch Berufsbildung (pp. 339–363). Springer Fachmedien. https://doi.org/10.1007/978-3-658-19312-6_29
  268. Shao, C., Ciampaglia, G. L., Varol, O., Yang, K.-C., Flammini, A., & Menczer, F. (2018). The spread of low-credibility content by social bots. Nature Communications, 9(1). https://doi.org/10.1038/s41467-018-06930-7
  269. Shen, B., RichardWebster, B., O’Toole, A., Bowyer, K., & Scheirer, W. J. (2021). A study of the human perception of synthetic faces. 2021 16th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2021), 1–8. https://doi.org/10.1109/FG52635.2021.9667066
  270. Simonite, T. (2019, October 7). Most deepfakes are porn, and they’re multiplying fast. Wired. https://www.wired.com/story/most-deepfakes-porn-multiplying-fast/
  271. Stöcker, C. (2020). How Facebook and Google accidentally created a perfect ecosystem for targeted disinformation. In C. Grimme, M. Preuss, F. W. Takes, & A. Waldherr (Eds.), Disinformation in Open Online Media (Vol. 12021, pp. 129–149). Springer International Publishing. https://doi.org/10.1007/978-3-030-39627-5_11
  272. Stroebel, L., Llewellyn, M., Hartley, T., Shan Ip, T., & Ahmed, M. (2023). A systematic literature review on the effectiveness of deepfake detection techniques. Journal of Cyber Security Technology, 7(2), 83–113. https://doi.org/10.1080/23742917.2023.2192888
  273. Tandoc, E. C., Ling, R., Westlund, O., Duffy, A., Goh, D., & Zheng Wei, L. (2018). Audiences’ acts of authentication in the age of fake news: A conceptual framework. New Media and Society, 20(8), 2745–2763. https://doi.org/10/gc2fmd
  274. Tiernan, P., Costello, E., Donlon, E., Parysz, M., & Scriney, M. (2023). Information and media literacy in the age of AI: Options for the future. Education Sciences, 13(9). https://doi.org/10.3390/educsci13090906
  276. Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185(4157), 1124–1131. https://doi.org/10.1126/science.185.4157.1124
  277. Twomey, J., Ching, D., Aylett, M. P., Quayle, M., Linehan, C., & Murphy, G. (2023). Do deepfake videos undermine our epistemic trust? A thematic analysis of tweets that discuss deepfakes in the Russian invasion of Ukraine. PLOS ONE, 18(10). https://doi.org/10.1371/journal.pone.0291668
  278. Vaccari, C., & Chadwick, A. (2020). Deepfakes and disinformation: Exploring the impact of synthetic political video on deception, uncertainty, and trust in news. Social Media + Society, 6(1). https://doi.org/10.1177/2056305120903408
  279. Wardle, C., & Derakhshan, H. (2017). Information disorder: Toward an interdisciplinary framework for research and policy making. Council of Europe report DGI (2017)09. https://rm.coe.int/information-disorder-toward-an-interdisciplinary-framework-for-researc/168076277c
  280. Wasner, B. (2013). Eliten in Europa: Einführung in Theorien, Konzepte und Befunde [Elites in Europe: Introduction to theories, concepts and findings]. Springer-Verlag.
  281. Williamson, S. M., & Prybutok, V. (2024). The era of artificial intelligence deception: Unraveling the complexities of false realities and emerging threats of misinformation. Information, 15(6). https://doi.org/10.3390/info15060299
  282. Wu, J., Gan, W., Chen, Z., Wan, S., & Lin, H. (2023). AI-generated content (AIGC): A survey (arXiv:2304.06632). arXiv. https://doi.org/10.48550/arXiv.2304.06632
