A new face of political advertising? Synthetic imagery in the 2025 German federal election campaigns on social media
Bibliographic information

SCM Studies in Communication and Media
Volume 14 (2025), Issue 4
- Publisher: Nomos, Baden-Baden
- Copyright year: 2026
- ISSN (online): 2192-4007
- ISSN (print): 2192-4007
Chapter information
Chapter preview:
The rise of AI-generated content poses a new challenge for political communication. As synthetic media continue to evolve and become ever more accessible, their role in voters' opinion formation and in public debate requires closer scrutiny. The present study examines the use of AI-generated images in the campaign for the 2025 German federal election, tracing their prevalence, strategic deployment, and transparency. Based on a content analysis of the Instagram posts of the major German parties and their youth organizations during the six weeks preceding the election, we identify AI-generated images, analyze labeling practices, and examine their communicative and ideological functions. We also compare differences in the parties' acceptance and use of such images in order to assess possible implications for democratic processes. Our findings show that the far-right Alternative für Deutschland (AfD) uses considerably more synthetic images than the other parties. These AI-generated images are predominantly photorealistic and often not clearly labeled, raising concerns about transparency and potential voter deception. The AfD employs such images primarily for emotional and ideological messaging, using AI-generated content to reinforce its political narratives and mobilize support. Our results provide a structured assessment of AI-generated content in German political communication and highlight the potential risks associated with its unchecked use. Our research also contributes to a broader discussion of the ethical implications of synthetic media in democratic societies.
References
- Bast, J. (2024). Managing the image. The visual communication strategy of European right-wing populist politicians on Instagram. Journal of Political Marketing, 23(1), 1–25. https://doi.org/10.1080/15377857.2021.1892901
- Bennett, W. L., & Livingston, S. (2018). The disinformation order: Disruptive communication and the decline of democratic institutions. European Journal of Communication, 33(2), 122–139. https://doi.org/10.1177/0267323118760317
- Bray, S. D., Johnson, S. D., & Kleinberg, B. (2023). Testing human ability to detect ‘deepfake’ images of human faces. Journal of Cybersecurity, 9(1), 1–18. https://doi.org/10.1093/cybsec/tyad011
- Brettschneider, F., Niedermayer, O., & Weßels, B. (2007). Die Bundestagswahl 2005: Analysen des Wahlkampfes und der Wahlergebnisse [The German federal election 2005: Analyses of the election campaign and results]. In F. Brettschneider, O. Niedermayer, & B. Weßels (Eds.), Die Bundestagswahl 2005 (pp. 9–18). VS. https://doi.org/10.1007/978-3-531-90536-5_1
- Burrus, O., Curtis, A., & Herman, L. (2024). Unmasking AI: Informing authenticity decisions by labeling AI-generated content. Interactions, 31(4), 38–42. https://doi.org/10.1145/3665321
- Campbell, A. (1960). Surge and decline: A study of electoral change. Public Opinion Quarterly, 24(3), 397–418. https://doi.org/10.1086/266960
- Corsi, G., Marino, B., & Wong, W. (2024). The spread of synthetic media on X. Harvard Kennedy School (HKS) Misinformation Review. https://doi.org/10.37016/mr-2020-140
- Dalton, R. J. (2018). Citizen politics: Public opinion and political parties in advanced industrial democracies. CQ Press.
- De Vreese, C. D., & Votta, F. (2023). AI and political communication. Political Communication Report, 2023. https://doi.org/10.17169/refubium-39047
- Dobber, T., Metoui, N., Trilling, D., Helberger, N., & De Vreese, C. (2021). Do (microtargeted) deepfakes have real effects on political attitudes? The International Journal of Press/Politics, 26(1), 69–91. https://doi.org/10.1177/1940161220944364
- Engesser, S., Ernst, N., Esser, F., & Büchel, F. (2017). Populism and social media: How politicians spread a fragmented ideology. Information, Communication & Society, 20(8), 1109–1126. https://doi.org/10.1080/1369118X.2016.1207697
- Epstein, Z., Arechar, A. A., & Rand, D. (2023). What label should be applied to content produced by generative AI? PsyArXiv preprint. https://doi.org/10.31234/osf.io/v4mfz
- Ernst, N., Blassnig, S., Engesser, S., Büchel, F., & Esser, F. (2019). Populists prefer social media over talk shows: An analysis of populist messages and stylistic elements across six countries. Social Media + Society, 5(1). https://doi.org/10.1177/2056305118823358
- Esser, F., & Strömbäck, J. (2014). Mediatization of politics: Understanding the transformation of Western democracies. Springer. https://doi.org/10.1057/9781137275844
- Farrell, D. M., & Schmitt-Beck, R. (Eds.). (2002). Do political campaigns matter? Campaign effects in elections and referendums (1st ed.). Routledge. https://doi.org/10.4324/9780203166956
- Geise, S., & Baden, C. (2015). Putting the image back into the frame: Modeling the linkage between visual communication and frame-processing theory. Communication Theory, 25(1), 46–69. https://doi.org/10.1111/comt.12048
- Geise, S., & Xu, Y. (2024). Effects of visual framing in multimodal media environments: A systematic review of studies between 1979 and 2023. Journalism & Mass Communication Quarterly, 102(3), 796–823. https://doi.org/10.1177/10776990241257586
- Gerbaudo, P. (2018). Social media and populism: An elective affinity? Media, Culture & Society, 40(5), 745–753. https://doi.org/10.1177/0163443718772192
- Gibson, R. K., & McAllister, I. (2015). Normalising or equalising party competition? Assessing the impact of the web on election campaigning. Political Studies, 63(3), 529–547. https://doi.org/10.1111/1467-9248.12107
- Godulla, A., Hoffmann, C. P., & Seibert, D. (2021). Dealing with deepfakes – An interdisciplinary examination of the state of research and implications for communication studies. SCM Studies in Communication and Media, 10(1), 72–96. https://doi.org/10.5771/2192-4007-2021-1-72
- Gosselin, R. D. (2025). AI detectors are poor western blot classifiers: A study of accuracy and predictive values. PeerJ, 13. https://doi.org/10.7717/peerj.18988
- Grabe, M. E., & Bucy, E. P. (2009). Image bite politics: News and the visual framing of elections. Oxford University Press. https://doi.org/10.1093/acprof:oso/9780195372076.001.0001
- Graf & Koch-Kramer. (2020). Instaloader. Retrieved January 10, 2025, from https://github.com/instaloader/instaloader
- Grittmann, E. (2007). Das politische Bild: Fotojournalismus und Pressefotografie in Theorie und Empirie [The political image: Photojournalism and press photography in theory and empirical research]. Herbert von Halem.
- Habermas, J. (1983). Moralbewußtsein und kommunikatives Handeln [Moral consciousness and communicative action]. Suhrkamp.
- Hameleers, M., & Marquart, F. (2023). It’s nothing but a deepfake! The effects of misinformation and deepfake labels delegitimizing an authentic political speech. International Journal of Communication, 17, 6291–6311.
- Hameleers, M., & Schmuck, D. (2017). It’s us against them: A comparative experiment on the effects of populist messages communicated via social media. Information, Communication & Society, 20(9), 1425–1444. https://doi.org/10.1080/1369118X.2017.1328523
- Hameleers, M., van der Meer, T. G., & Dobber, T. (2024). Distorting the truth versus blatant lies: The effects of different degrees of deception in domestic and foreign political deepfakes. Computers in Human Behavior, 152. https://doi.org/10.1016/j.chb.2023.108096
- Hausken, L. (2024). Photorealism versus photography. AI-generated depiction in the age of visual disinformation. Journal of Aesthetics & Culture, 16(1). https://doi.org/10.1080/20004214.2024.2340787
- Hooghe, M., Stolle, D., & Stouthuysen, P. (2004). Head start in politics: The recruitment function of youth organizations of political parties in Belgium (Flanders). Party Politics, 10(2), 193–212. https://doi.org/10.1177/1354068804040503
- Klinger, U., Koc-Michalska, K., & Russmann, U. (2023). Are campaigns getting uglier, and who is to blame? Negativity, dramatization and populism on Facebook in the 2014 and 2019 EP election campaigns. Political Communication, 40(3), 263–282. https://doi.org/10.1080/10584609.2022.2133198
- Laba, N. (2024). Engine for the imagination? Visual generative media and the issue of representation. Media, Culture & Society, 46(8), 1599–1620. https://doi.org/10.1177/01634437241259950
- Leidecker-Sandmann, M., & Geise, S. (2020). Tradition statt Innovation. Die deutsche Presseberichterstattung über die Wahlkampfstrategien der Parteien zur Bundestagswahl 2017 [Tradition instead of innovation. The German press coverage of political parties’ campaign strategies in the run-up to the 2017 parliamentary elections]. SCM Studies in Communication and Media, 9(2), 264–307. https://doi.org/10.5771/2192-4007-2020-2-264
- Leidecker-Sandmann, M., & Thomas, F. (2023). “Never was there more to do.” Use of vaguely formulated statements in the 2021 German national election campaign and their potential effects. In C. Holtz-Bacha (Ed.), Die (Massen-)Medien im Wahlkampf: Die Bundestagswahl 2021 (pp. 43–66). Springer Fachmedien Wiesbaden.
- Li, Y., Liu, Z., Zhao, J., Ren, L., Li, F., Luo, J., & Luo, B. (2024). The adversarial AI-art: Understanding, generation, detection, and benchmarking. In European Symposium on Research in Computer Security (pp. 311–331). Springer Nature Switzerland. https://doi.org/10.48550/arXiv.2404.14581
- Lu, Z., Huang, D., Bai, L., Qu, J., Wu, C., Liu, X., & Ouyang, W. (2023). Seeing is not always believing: Benchmarking human and model perception of AI-generated images. Advances in Neural Information Processing Systems, 36, 25435–25447. https://doi.org/10.48550/arXiv.2304.13023
- Magin, M., Podschuweit, N., Haßler, J., & Russmann, U. (2017). Campaigning in the fourth age of political communication. A multi-method study on the use of Facebook by German and Austrian parties in the 2013 national election campaigns. Information, Communication & Society, 20(11), 1698–1719. https://doi.org/10.1080/1369118X.2016.1254269
- Mathys, M., Willi, M., & Meier, R. (2024). Synthetic photography detection: A visual guidance for identifying synthetic images created by AI. arXiv preprint arXiv:2408.06398. https://doi.org/10.48550/arXiv.2408.06398
- Messaris, P., & Abraham, L. (2001). The role of images in framing news stories. In S. D. Reese, O. H. Gandy, Jr., & A. E. Grant (Eds.), Framing public life: Perspectives on media and our understanding of the social world (pp. 231–242). Routledge.
- Moernaut, R., Mast, J., & Pauwels, L. (2020). Visual and multimodal framing analysis. In L. Pauwels & D. Mannay (Eds.), The SAGE Handbook of Visual Research Methods (pp. 484–499). SAGE Publications. https://doi.org/10.4135/9781526417015.n30
- Momeni, M. (2025). Artificial intelligence and political deepfakes: Shaping citizen perceptions through misinformation. Journal of Creative Communications, 20(1), 41–56. https://doi.org/10.1177/09732586241277335
- Müller, M. G. (1997). Visuelle Wahlkampfkommunikation: Eine Typologie der Bildstrategien im amerikanischen Präsidentschaftswahlkampf [Visual campaign communication: A typology of image strategies in the American presidential election campaign]. Publizistik, 42(2), 205–228. https://doi.org/10.1007/BF03654575
- Peng, Q., Lu, Y., Peng, Y., Qian, S., Liu, X., & Shen, C. (2025, April). Crafting synthetic realities: Examining visual realism and misinformation potential of photorealistic AI-generated images. In Proceedings of the Extended Abstracts of the CHI Conference on Human Factors in Computing Systems (pp. 1–12). https://doi.org/10.1145/3706599.3719834
- Perloff, R. M. (2021). The dynamics of political communication: Media and politics in a digital age. Routledge. https://doi.org/10.4324/9780429298851
- Powell, T. E., Boomgaarden, H. G., De Swert, K., & de Vreese, C. H. (2019). Framing fast and slow: A dual processing account of multimodal framing effects. Media Psychology, 22(4), 572–600. https://doi.org/10.1080/15213269.2018.1476891
- Schmuck, D., & Matthes, J. (2017). Effects of economic and symbolic threat appeals in right-wing populist advertising on anti-immigrant attitudes: The impact of textual and visual appeals. Political Communication, 34(4), 607–626. https://doi.org/10.1080/10584609.2017.1316807
- Semetko, H. A., & Tworzecki, H. (2017). Campaign strategies, media, and voters: The fourth era of political communication. In J. Fisher, E. Fieldhouse, M. N. Franklin, R. Gibson, M. Cantijoch, & C. Wlezien (Eds.), The Routledge Handbook of Elections, Voting Behavior and Public Opinion (pp. 293–304). Routledge.
- Seo, K. (2020). Meta-analysis on visual persuasion – does adding images to texts influence persuasion? Athens Journal of Mass Media and Communications, 6(3), 177–190. https://doi.org/10.30958/ajmmc.6-3-3
- Ternovski, J., Kalla, J., & Aronow, P. (2022). The negative consequences of informing voters about deepfakes: Evidence from two survey experiments. Journal of Online Trust and Safety, 1(2). https://doi.org/10.54501/jots.v1i2.28
- Vaccari, C., & Chadwick, A. (2020). Deepfakes and disinformation: Exploring the impact of synthetic political video on deception, uncertainty, and trust in news. Social Media + Society, 6(1). https://doi.org/10.1177/2056305120903408
- Ward, J. (2011). Reaching citizens online: How youth organizations are evolving their web presence. Information, Communication & Society, 14(6), 917–936. https://doi.org/10.1080/1369118X.2011.572982
- Weber, R. (2017). Political participation of young people in political parties. Zeitschrift für Politikwissenschaft, 27, 379–396. https://doi.org/10.1007/s41358-017-0106-z
- Wilke, J., & Leidecker, M. (2013). Regional – national – supranational. How the German press covers election campaigns on different levels of the political system. Central European Journal of Communication, 6(1(10)), 122–143.
- Wilke, J., & Reinemann, C. (2003). Die Bundestagswahl 2002: Ein Sonderfall? [The German federal election 2002: A special case?]. In C. Holtz-Bacha (Ed.), Die Massenmedien im Wahlkampf (pp. 29–46). VS. https://doi.org/10.1007/978-3-322-80461-7_3
- Wittenberg, C., Epstein, Z., Berinsky, A. J., & Rand, D. G. (2023). Labeling AI-generated content: Promises, perils, and future directions. Topical Policy Brief, MIT Schwarzman College of Computing. https://computing.mit.edu/wp-content/uploads/2023/11/AI-Policy_Labeling.pdf
- Oliver, L. (2024, November 1). This chatbot helps tell the story of how women are affected by drug trafficking in Paraguay. Reuters Institute News. https://reutersinstitute.politics.ox.ac.uk/news/chatbot-helps-tell-story-how-women-are-affected-drug-trafficking-paraguay Google Scholar öffnen doi.org/10.5771/2192-4007-2025-4-517
- Paris Charter on AI and Journalism. (2023, November 10). https://rsf.org/sites/default/files/medias/file/2023/11/Paris%20Charter%20on%20AI%20and%20Journalism.pdf Google Scholar öffnen doi.org/10.5771/2192-4007-2025-4-517
- Partnership on AI. (2023, February 27). PAI’s responsible practices for synthetic media. https://partnershiponai.org/ Google Scholar öffnen doi.org/10.5771/2192-4007-2025-4-517
- Petković, B. (2014). Media integrity matters: Reclaiming public service values in media and journalism (1st ed). Peace Institute, Institute for Contemporary Social and Political Studies. Google Scholar öffnen doi.org/10.5771/2192-4007-2025-4-517
- Radcliffe, D. (2025). Journalism in the AI era (TRF Insights). Thomson Reuters Foundation. https://www.trust.org/wp-content/uploads/2025/01/TRF-Insights-Journalism-in-the-AI-Era.pdf Google Scholar öffnen doi.org/10.5771/2192-4007-2025-4-517
- Riordan, K. (2014). Accuracy, independence, and impartiality: How legacy media and digital natives approach standards in the digital age. Reuters Institute for the Study of Journalism. https://reutersinstitute.politics.ox.ac.uk/our-research/accuracy-independence-and-impartiality-how-legacy-media-and-digital-natives-approach
- Roper, D., Henriksson, T., Hälbich, K., & Martin, O. (2023). Gauging generative AI’s impact on newsrooms. World Association of News Publishers (WAN-IFRA). https://wan-ifra.org/insight/gauging-generative-ais-impact-in-newsrooms/
- Salas, A., Rivero-Calle, I., & Martinón-Torres, F. (2023). Chatting with ChatGPT to learn about safety of COVID-19 vaccines – A perspective. Human Vaccines & Immunotherapeutics, 19(2). https://doi.org/10.1080/21645515.2023.2235200
- Samosir, H. (2023, July 12). More countries across Asia are debuting digital artificial intelligence news readers. Could Australia follow suit? ABC News. https://www.abc.net.au/news/2023-07-13/artificial-intelligence-news-readers-becoming-common-in-asia/102591790
- Schell, K. (2024). AI transparency in journalism: Labels for a hybrid era. Reuters Institute for the Study of Journalism. https://reutersinstitute.politics.ox.ac.uk/sites/default/files/2025-01/RISJ%20Fellows%20Paper_Katja%20Schell_MT24_Final.pdf
- Simon, F. M. (2022). Uneasy bedfellows: AI in the news, platform companies and the issue of journalistic autonomy. Digital Journalism, 10(10), 1832–1854. https://doi.org/10.1080/21670811.2022.2063150
- Simon, F. M. (2024). Artificial intelligence in the news: How AI retools, rationalizes, and reshapes journalism and the public arena. Tow Center for Digital Journalism. https://journalism.columbia.edu/news/tow-report-artificial-intelligence-news-and-how-ai-reshapes-journalism-and-public-arena
- Simon, F. M., Altay, S., & Mercier, H. (2023). Misinformation reloaded? Fears about the impact of generative AI on misinformation are overblown. Harvard Kennedy School Misinformation Review. https://doi.org/10.37016/mr-2020-127
- Simon, F. M., & Isaza-Ibarra, L. F. (2023). AI in the news: Reshaping the information ecosystem? Oxford Internet Institute. https://www.oii.ox.ac.uk/wp-content/uploads/2023/08/Minderoo_Report_Simon_Ibarra.pdf
- Sjøvaag, H. (2024). The business of news in the AI economy. AI Magazine, 45(2), 246–255. https://doi.org/10.1002/aaai.12172
- Society of Professional Journalists. (2014). SPJ code of ethics. https://www.spj.org/spj-code-of-ethics/
- Squicciarini, M., Valdez Genao, J., & Sarmiento, C. (2024). Synthetic content and AI policy: A primer. UNESCO. https://policycommons.net/artifacts/17958669/synthetic-content-and-its-implications-for-ai-policy/18857919/
- Ternovski, J., Kalla, J., & Aronow, P. M. (2022). The negative consequences of informing voters about deepfakes: Evidence from two survey experiments. Journal of Online Trust and Safety, 1(2). https://doi.org/10.54501/jots.v1i2.28
- Thomson, T. J., Thomas, R. J., & Matich, P. (2024). Generative visual AI in news organizations: Challenges, opportunities, perceptions, and policies. Digital Journalism, 1–22. https://doi.org/10.1080/21670811.2024.2331769
- Thomson, T. J., Thomas, R., Riedlinger, M., & Matich, P. (2025). Generative AI & journalism. RMIT University. https://doi.org/10.6084/m9.figshare.28068008
- Toff, B., & Simon, F. M. (2024). “Or they could just not use it?”: The dilemma of AI disclosure for audience trust in news. The International Journal of Press/Politics, 30(4), 881–903. https://doi.org/10.1177/19401612241308697
- Tran, M. (2006, August 18). Robots write the news. The Guardian. https://www.theguardian.com/news/blog/2006/aug/18/robotswriteth
- WashPostPR. (2024, November 7). The Washington Post launches “Ask the Post AI,” a new search experience. The Washington Post. https://www.washingtonpost.com/pr/2024/11/07/washington-post-launches-ask-post-ai-new-search-experience/
- Whittaker, L., Kietzmann, T. C., Kietzmann, J., & Dabirian, A. (2020). “All around me are synthetic faces”: The mad world of AI-generated media. IT Professional, 22(5), 90–99. https://doi.org/10.1109/MITP.2020.2985492
- Wilding, D., Fray, P., Molitorisz, S., & McKewon, E. (2018). The impact of digital platforms on news and journalistic content. UTS Centre for Media Transition. http://hdl.handle.net/10453/159124
- Wintterlin, F., Engelke, K. M., & Hase, V. (2020). Can transparency preserve journalism’s trustworthiness? Recipients’ views on transparency about source origin and verification regarding user-generated content in the news. SCM Studies in Communication and Media, 9(2), 218–240. https://doi.org/10.5771/2192-4007-2020-2-218
- Zier, J., & Diakopoulos, N. (2024, October 26). Labeling AI-generated news content: Matching journalist intentions with audience expectations. Proceedings of the Computation and Journalism Symposium 2024. https://cplusj2024.github.io/
- Abbas, F., & Taeihagh, A. (2024). Unmasking deepfakes: A systematic review of deepfake detection and generation techniques using artificial intelligence. Expert Systems with Applications, 252. https://doi.org/10.1016/j.eswa.2024.124260
- Brams, S., Ziv, G., Levin, O., Spitz, J., Wagemans, J., Williams, A. M., & Helsen, W. F. (2019). The relationship between gaze behavior, expertise, and performance: A systematic review. Psychological Bulletin, 145(10), 980–1027. https://doi.org/10.1037/bul0000207
- Bray, S. D., Johnson, S. D., & Kleinberg, B. (2023). Testing human ability to detect ‘deepfake’ images of human faces. Journal of Cybersecurity, 9(1). https://doi.org/10.1093/cybsec/tyad011
- Burla, L., Knierim, B., Barth, J., Liewald, K., Duetz, M., & Abel, T. (2008). From text to codings: Intercoder reliability assessment in qualitative content analysis. Nursing Research, 57(2), 113–117. https://doi.org/10.1097/01.NNR.0000313482.33917.7d
- Caporusso, N., Zhang, K., & Carlson, G. (2020). Using eye-tracking to study the authenticity of images produced by generative adversarial networks. 2020 International Conference on Electrical, Communication, and Computer Engineering (ICECCE), 1–6. https://doi.org/10.1109/ICECCE49384.2020.9179472
- Cartella, G., Cuculo, V., Cornia, M., & Cucchiara, R. (2024). Unveiling the truth: Exploring human gaze patterns in fake images. IEEE Signal Processing Letters, 1–5. https://doi.org/10.1109/LSP.2024.3375288
- Diel, A., Lalgi, T., Schröter, I. C., MacDorman, K. F., Teufel, M., & Bäuerle, A. (2024). Human performance in detecting deepfakes: A systematic review and meta-analysis of 56 papers. Computers in Human Behavior Reports, 16. https://doi.org/10.1016/j.chbr.2024.100538
- Diel, A., Teufel, M., & Bäuerle, A. (2024). Inability to detect deepfakes: Deepfake detection training improves detection accuracy, but increases emotional distress and reduces self-efficacy. OSF. https://doi.org/10.31219/osf.io/muwnj
- Dolhansky, B., Bitton, J., Pflaum, B., Lu, J., Howes, R., Wang, M., & Ferrer, C. C. (2020). The DeepFake Detection Challenge (DFDC) dataset. arXiv. https://doi.org/10.48550/arxiv.2006.07397
- Dornbusch, A., Tye, T., Somoray, K., & Miller, D. J. (2025). Third person effects and the base-rate fallacy: Cognitive biases in deepfake detection [Manuscript in preparation].
- El Mokadem, S. S. (2023). The effect of media literacy on misinformation and deep fake video detection. Arab Media & Society, 35, 53–78. https://www.arabmediasociety.com/
- Flynn, A., Powell, A., Scott, A. J., & Cama, E. (2022). Deepfakes and digitally altered imagery abuse: A cross-country exploration of an emerging form of image-based sexual abuse. The British Journal of Criminology, 62(6), 1341–1358. https://doi.org/10.1093/bjc/azab111
- Godulla, A., Hoffmann, C. P., & Seibert, D. (2021). Dealing with deepfakes: An interdisciplinary examination of the state of research and implications for communication studies. Studies in Communication and Media, 10(1), 72–96. https://doi.org/10.5771/2192-4007-2021-1-72
- Gupta, P., Chugh, K., Dhall, A., & Subramanian, R. (2020). The eyes know it: FakeET – An eye-tracking database to understand deepfake perception. Proceedings of the 2020 International Conference on Multimodal Interaction, 519–527. https://doi.org/10.1145/3382507.3418857
- Köbis, N. C., Doležalová, B., & Soraperra, I. (2021). Fooled twice: People cannot detect deepfakes but think they can. iScience, 24(11). https://doi.org/10.2139/ssrn.3832978
- Kramer, R. S., Mireku, M. O., Flack, T. R., & Ritchie, K. L. (2019). Face morphing attacks: Investigating detection with humans and computers. Cognitive Research: Principles and Implications, 4(1). https://doi.org/10.1186/s41235-019-0181-4
- McMahon, L., Kleinman, Z., & Subramanian, C. (2025, January 8). Facebook and Instagram get rid of fact checkers. BBC News. https://www.bbc.com/news/articles/cly74mpy8klo
- Miller, D. J., Somoray, K., & Stevens, H. (2025). A shallow history of deepfakes. SSRN. http://dx.doi.org/10.2139/ssrn.5130379
- Ng, Y. L. (2023). An error management approach to perceived fakeness of deepfakes: The moderating role of perceived deepfake targeted politicians’ personality characteristics. Current Psychology, 42, 25658–25669. https://doi.org/10.1007/s12144-022-03621-x
- Robertson, D. J., Mungall, A., Watson, D. G., Wade, K. A., Nightingale, S. J., & Butler, S. (2018). Detecting morphed passport photos: A training and individual differences approach. Cognitive Research: Principles and Implications, 3. https://doi.org/10.1186/s41235-018-0113-8
- Senju, A., Vernetti, A., Kikuchi, Y., Akechi, H., Hasegawa, T., & Johnson, M. H. (2013). Cultural background modulates how we look at other persons’ gaze. International Journal of Behavioral Development, 37(2), 131–136. https://doi.org/10.1177/0165025412465360
- Silva, S. H., Bethany, M., Votto, A. M., Scarff, I. H., Beebe, N., & Najafirad, P. (2022). Deepfake forensics analysis: An explainable hierarchical ensemble of weakly supervised models. Forensic Science International: Synergy, 4. https://doi.org/10.1016/j.fsisyn.2022.100217
- Somoray, K., & Miller, D. J. (2023). Providing detection strategies to improve human detection of deepfakes: An experimental study. Computers in Human Behavior, 149. https://doi.org/10.1016/j.chb.2023.107917
- Somoray, K., Miller, D. J., & Holmes, M. (2025). Human performance in deepfake detection: A systematic review. Human Behavior and Emerging Technologies, 2025. https://doi.org/10.1155/hbe2/1833228
- Smith, H., & Mansted, K. (2020). Weaponised deep fakes: National security and democracy [Policy brief]. Australian Strategic Policy Institute. https://www.aspi.org.au/report/weaponised-deep-fakes
- Sütterlin, S., Ask, T. F., Mägerle, S., Glöckler, S., Wolf, L., Schray, J., Chandi, A., Bursac, T., Khodabakhsh, A., Knox, B. J., Canham, M., & Lugo, R. G. (2023). Individual deep fake recognition skills are affected by viewer’s political orientation, agreement with content and device used. In D. D. Schmorrow & C. M. Fidopiastis (Eds.), Augmented Cognition: 17th International Conference, Held as Part of the 25th HCI International Conference, Copenhagen, Denmark, Proceedings: Vol. 14019 (pp. 269–284). Springer, Cham. https://doi.org/10.1007/978-3-031-35017-7_18
- Tahir, R., Batool, B., Jamshed, H., Jameel, M., Anwar, M., Ahmed, F., Zaffar, M. A., & Zaffar, M. F. (2021). Seeing is believing: Exploring perceptual differences in deepfake videos. Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, 1–16. https://doi.org/10.1145/3411764.3445699
- Thaw, N. N., July, T., Wai, A. N., Goh, D. H. L., & Chua, A. Y. (2020). Is it real? A study on detecting deepfake videos. Proceedings of the Association for Information Science and Technology, 57(1). https://doi.org/10.1002/pra2.366
- Wöhler, L., Zembaty, M., Castillo, S., & Magnor, M. (2021). Towards understanding perceptual differences between genuine and face-swapped videos. Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, 1–13. https://doi.org/10.1145/3411764.3445627
- World Economic Forum. (2024). The Global Risks Report 2024. https://www3.weforum.org/docs/WEF_The_Global_Risks_Report_2024.pdf
- Ahmed, S. (2023). Examining public perception and cognitive biases in the presumed influence of deepfakes threat: Empirical evidence of third person perception from three studies. Asian Journal of Communication, 33(3), 308–331. https://doi.org/10.1080/01292986.2023.2194886
- Altay, S., & Acerbi, A. (2024). People believe misinformation is a threat because they assume others are gullible. New Media & Society, 26(11), 6440–6461. https://doi.org/10.1177/14614448231153379
- Baek, Y. M., Kang, H., & Kim, S. (2019). Fake news should be regulated because it influences both “others” and “me”: How and why the influence of presumed influence model should be extended. Mass Communication and Society, 22(3), 301–323. https://doi.org/10.1080/15205436.2018.1562076
- Birrer, A., & Just, N. (2024). What we know and don’t know about deepfakes: An investigation into the state of the research and regulatory landscape. New Media & Society. Advance online publication. https://doi.org/10.1177/14614448241253138
- Bendahan Bitton, D. B., Hoffmann, C. P., & Godulla, A. (2024). Deepfakes in the context of AI inequalities: Analysing disparities in knowledge and attitudes. Information, Communication & Society, 295–315. https://doi.org/10.1080/1369118X.2024.2420037
- Chen, M., Yu, W., & Liu, K. (2023). A meta-analysis of third-person perception related to distorted information: Synthesizing the effect, antecedents, and consequences. Information Processing & Management, 60(5). https://doi.org/10.1016/j.ipm.2023.103425
- Chung, M., & Wihbey, J. (2024). Social media regulation, third-person effect, and public views: A comparative study of the United States, the United Kingdom, South Korea, and Mexico. New Media & Society, 26(8), 4534–4553. https://doi.org/10.1177/14614448221122996
- Corbu, N., Oprea, D.-A., Negrea-Busuioc, E., & Radu, L. (2020). ‘They can’t fool me, but they can fool the others!’ Third person effect and fake news detection. European Journal of Communication, 35(2), 165–180. https://doi.org/10.1177/0267323120903686
- Davison, W. P. (1983). The third-person effect in communication. Public Opinion Quarterly, 47(1), 1–15. https://doi.org/10.1086/268763
- de Ruiter, A. (2021). The distinct wrong of deepfakes. Philosophy & Technology, 34(4), 1311–1332. https://doi.org/10.1007/s13347-021-00459-2
- Gardner, G. T., & Gould, L. C. (1989). Public perceptions of the risks and benefits of technology. Risk Analysis, 9(2), 225–242. https://doi.org/10.1111/j.1539-6924.1989.tb01243.x
- Godulla, A., Hoffmann, C. P., & Seibert, D. (2021). Dealing with deepfakes – An interdisciplinary examination of the state of research and implications for communication studies. SCM Studies in Communication and Media, 10(1), 72–96. https://doi.org/10.5771/2192-4007-2021-1-72
- Gosse, C., & Burkell, J. (2020). Politics and porn: How news media characterizes problems presented by deepfakes. Critical Studies in Media Communication, 37(5), 497–511. https://doi.org/10.1080/15295036.2020.1832697
- Gunther, A. C., & Storey, J. D. (2003). The influence of presumed influence. Journal of Communication, 53(2), 199–215. https://doi.org/10.1111/j.1460-2466.2003.tb02586.x
- Hameleers, M., Van Der Meer, T. G. L. A., & Dobber, T. (2022). You won’t believe what they just said! The effects of political deepfakes embedded as vox populi on social media. Social Media + Society, 8(3). https://doi.org/10.1177/20563051221116346
- Jang, S. M., & Kim, J. K. (2018). Third person effects of fake news: Fake news regulation and media literacy interventions. Computers in Human Behavior, 80, 295–302. https://doi.org/10.1016/j.chb.2017.11.034
- Jungherr, A., & Rauchfleisch, A. (2024). Negative downstream effects of alarmist disinformation discourse: Evidence from the United States. Political Behavior, 46(4), 2123–2143. https://doi.org/10.1007/s11109-024-09911-3
- Jungherr, A., & Rauchfleisch, A. (in press). Public opinion on the politics of AI alignment: Cross-national evidence on expectations for AI moderation from Germany and the United States. Social Media + Society.
- Kalogeropoulos, A., Toff, B., & Fletcher, R. (2022). The watchdog press in the doghouse: A comparative study of attitudes about accountability journalism, trust in news, and news avoidance. The International Journal of Press/Politics, 29(2), 485–506. https://doi.org/10.1177/19401612221112572
- Karaboga, M., Frei, N., Puppis, M., Vogler, D., Raemy, P., Ebbers, F., Runge, G., Rauchfleisch, A., de Seta, G., Gurr, G., Friedewald, M., & Rovelli, S. (2024). Deepfakes und manipulierte Realitäten: Technologiefolgenabschätzung und Handlungsempfehlungen für die Schweiz [Deepfakes and manipulated realities: Technology impact assessment and policy recommendations for Switzerland]. vdf Hochschulverlag AG.
- Kim, M. (2025). A direct and indirect effect of third-person perception of COVID-19 fake news on support for fake news regulations on social media: Investigating the role of negative emotions and political views. Mass Communication and Society, 28(2), 229–252. https://doi.org/10.1080/15205436.2023.2227601
- Lima, M. L., Barnett, J., & Vala, J. (2005). Risk perception and technological development at a societal level. Risk Analysis, 25(5), 1229–1239. https://doi.org/10.1111/j.1539-6924.2005.00664.x
- Liu, P. L., & Huang, L. V. (2020). Digital disinformation about COVID-19 and the third-person effect: Examining the channel differences and negative emotional outcomes. Cyberpsychology, Behavior, and Social Networking, 23(11), 789–793. https://doi.org/10.1089/cyber.2020.0363
- Marien, S., & Hooghe, M. (2011). Does political trust matter? An empirical investigation into the relation between political trust and support for law compliance. European Journal of Political Research, 50(2), 267–291. https://doi.org/10.1111/j.1475-6765.2010.01930.x
- Nguyen, D. (2023). How news media frame data risks in their coverage of big data and AI. Internet Policy Review, 12(2). https://policyreview.info/articles/analysis/how-news-media-frame-data-risks-big-data-and-ai
- Paradise, A., & Sullivan, M. (2012). (In)visible threats? The third-person effect in perceptions of the influence of Facebook. Cyberpsychology, Behavior, and Social Networking, 15(1), 55–60. https://doi.org/10.1089/cyber.2011.0054
- PytlikZillig, L. M., Kimbrough, C. D., Shockley, E., Neal, T. M. S., Herian, M. N., Hamm, J. A., Bornstein, B. H., & Tomkins, A. J. (2017). A longitudinal and experimental study of the impact of knowledge on the bases of institutional trust. PLOS ONE, 12(4). https://doi.org/10.1371/journal.pone.0175387
- Rauchfleisch, A., Vogler, D., & de Seta, G. (2025). Deepfakes or synthetic media? The effect of euphemisms for labeling technology on risk and benefit perceptions. Social Media + Society. https://doi.org/10.1177/20563051251350975
- Riedl, M. J., Whipple, K. N., & Wallace, R. (2022). Antecedents of support for social media content moderation and platform regulation: The role of presumed effects on self and others. Information, Communication & Society, 25(11), 1632–1649. https://doi.org/10.1080/1369118X.2021.1874040
- Six, F. (2013). Trust in regulatory relations: How new insights from trust research improve regulation theory. Public Management Review, 15(2), 163–185. https://doi.org/10.1080/14719037.2012.727461
- Slovic, P., Fischhoff, B., & Lichtenstein, S. (1982). Why study risk perception? Risk Analysis, 2(2), 83–93. https://doi.org/10.1111/j.1539-6924.1982.tb01369.x
- Swissinfo.ch. (2025, May 9). Switzerland rejects deepfake regulation. https://www.swissinfo.ch/eng/ai-governance/switzerland-rejects-deepfake-regulation/89277391
- Thouvenin, F., Eisenegger, M., Volz, S., Vogler, D., & Jaffé, M. (2023). Governance von Desinformation in digitalisierten Öffentlichkeiten: Bericht für das Bundesamt für Kommunikation (BAKOM) [Governance of disinformation in digitalized publics: Report for the Federal Office of Communication]. https://www.bakom.admin.ch/bakom/de/home/elektronische-medien/studien/einzelstudien.html
- Vaccari, C., & Chadwick, A. (2020). Deepfakes and disinformation: Exploring the impact of synthetic political video on deception, uncertainty, and trust in news. Social Media + Society, 6(1). https://doi.org/10.1177/2056305120903408
- Verhoest, K., Redert, B., Maggetti, M., Levi-Faur, D., & Jordana, J. (2025). Trust and regulation. In F. Six, J. A. Hamm, D. Latusek, E. V. Zimmeren, & K. Verhoest (Eds.), Handbook on Trust in Public Governance (pp. 360–380). Edward Elgar Publishing. https://doi.org/10.4337/9781802201406.00030
- Wang, S., & Kim, S. (2022). Users’ emotional and behavioral responses to deepfake videos of K-pop idols. Computers in Human Behavior, 134. https://doi.org/10.1016/j.chb.2022.107305
- Wolf, C. (2021). Public trust and biotech innovation: A theory of trustworthy regulation of (scary!) technology. Social Philosophy and Policy, 38(2), 29–49. https://doi.org/10.1017/S0265052522000036
- Yadlin-Segal, A., & Oppenheim, Y. (2021). Whose dystopia is it anyway? Deepfakes and social media regulation. Convergence: The International Journal of Research into New Media Technologies, 27(1), 36–51. https://doi.org/10.1177/1354856520923963
- Yu, E., Song, H., Jung, J., & Kim, Y. J. (2023). Perception and attitude toward the regulation of online video streaming (in South Korea). Online Media and Global Communication, 2(4), 651–679. https://doi.org/10.1515/omgc-2023-0059
- Aïmeur, E., Amri, S., & Brassard, G. (2023). Fake news, disinformation and misinformation in social media: A review. Social Network Analysis and Mining, 13(1). https://doi.org/10.1007/s13278-023-01028-5
- Akhtar, P., Ghouri, A. M., Khan, H. U. R., Amin ul Haq, M., Awan, U., Zahoor, N., Khan, Z., & Ashraf, A. (2023). Detecting fake news and disinformation using artificial intelligence and machine learning to avoid supply chain disruptions. Annals of Operations Research, 327(2), 633–657. https://doi.org/10.1007/s10479-022-05015-5
- Bennett, W. L., & Livingston, S. (2023). A brief history of the disinformation age: Information wars and the decline of institutional authority. In S. Salgado & S. Papathanassopoulos (Eds.), Streamlining Political Communication Concepts (pp. 43–73). Springer International Publishing. https://doi.org/10.1007/978-3-031-45335-9_4
- Bösch, M., & Divon, T. (2024). The sound of disinformation: TikTok, computational propaganda, and the invasion of Ukraine. New Media & Society, 26(9), 5081–5106. https://doi.org/10.1177/14614448241251804
- Bray, S. D., Johnson, S. D., & Kleinberg, B. (2023). Testing human ability to detect ‘deepfake’ images of human faces. Journal of Cybersecurity, 9(1). https://doi.org/10.1093/cybsec/tyad011
- Calvo, D., Cano-Orón, L., & Abengozar, A. E. (2020). Materials and assessment of literacy level for the recognition of social bots in political misinformation contexts. ICONO 14, Revista de Comunicación y Tecnologías Emergentes, 18(2), 111–136.
- CDEI. (2019, September 12). Snapshot paper – Deepfakes and audiovisual disinformation. Centre for Data Ethics and Innovation. https://www.gov.uk/government/publications/cdei-publishes-its-first-series-of-three-snapshot-papers-ethical-issues-in-ai/snapshot-paper-deepfakes-and-audiovisual-disinformation
- Cho, H., Cannon, J., Lopez, R., & Li, W. (2024). Social media literacy: A conceptual framework. New Media & Society, 26(2), 941–960. https://doi.org/10.1177/14614448211068530
- Cole, S. (2017, December 11). AI-assisted fake porn is here and we’re all fucked. VICE. https://www.vice.com/en/article/gal-gadot-fake-ai-porn/
- Dan, V., Paris, B., Donovan, J., Hameleers, M., & Roozenbeek, J. (2021). Visual mis- and disinformation, social media, and democracy. Journalism & Mass Communication Quarterly, 98(3), 641–664. https://doi.org/10.1177/10776990211035395
- Darius, P., & Stephany, F. (2022). How the Far-Right polarises Twitter: ‘Hashjacking’ as a disinformation strategy in times of COVID-19. In R. M. Benito, C. Cherifi, H. Cherifi, E. Moro, L. M. Rocha, & M. Sales-Pardo (Eds.), Complex Networks & Their Applications X (Vol. 1073, pp. 100–111). Springer International Publishing. https://doi.org/10.1007/978-3-030-93413-2_9
- Dobber, T., Metoui, N., Trilling, D., Helberger, N., & de Vreese, C. (2021). Do (microtargeted) deepfakes have real effects on political attitudes? The International Journal of Press/Politics, 26(1), 69–91. https://doi.org/10.1177/1940161220944364
- Dresing, T., & Pehl, T. (2013). Praxisbuch Interview, Transkription & Analyse [Practical guide interview, transcription & analysis] (5th ed.).
- Filipovic, A., & Schülke, A. (2023). Desinformation und Desinformationsresilienz [Disinformation and disinformation resilience]. Ethik und Militär: Kontroversen in Militärethik & Sicherheitspolitik, 1, 34–41.
- Frischlich, L. (2019, May 2). Kritische Medienkompetenz als Säule demokratischer Resilienz in Zeiten von “Fake News” und Online-Desinformation [Critical media literacy as a pillar of democratic resilience in times of “fake news” and online disinformation]. Bundeszentrale für politische Bildung. https://www.bpb.de/themen/medien-journalismus/digitale-desinformation/290527/kritische-medienkompetenz-als-saeule-demokratischer-resilienz-in-zeiten-von-fake-news-und-online-desinformation/
- Gambín, Á. F., Yazidi, A., Vasilakos, A., Haugerud, H., & Djenouri, Y. (2024). Deepfakes: Current and future trends. Artificial Intelligence Review, 57(3). https://doi.org/10.1007/s10462-023-10679-x
- Godulla, A., Hoffmann, C. P., & Seibert, D. (2021). Dealing with deepfakes – An interdisciplinary examination of the state of research and implications for communication studies. SCM Studies in Communication and Media, 10(1), 72–96. https://doi.org/10.5771/2192-4007-2021-1-72
- Goldstein, J. A., Sastry, G., Musser, M., DiResta, R., Gentzel, M., & Sedova, K. (2023). Generative language models and automated influence operations: Emerging threats and potential mitigations (arXiv:2301.04246). arXiv. https://doi.org/10.48550/arXiv.2301.04246
- Graves, L., & Amazeen, M. (2019). Fact-checking as idea and practice in journalism. Oxford University Press. https://ora.ox.ac.uk/objects/uuid:a7450b2f-f5a7-4207-90e2-254ec5de14e2
- Groh, M., Epstein, Z., Firestone, C., & Picard, R. (2022). Deepfake detection by human crowds, machines, and machine-informed crowds. Proceedings of the National Academy of Sciences, 119(1). https://doi.org/10.1073/pnas.2110013119
- Guilbeault, D. (2018). Digital marketing in the disinformation age. Journal of International Affairs, 71(1.5), 33–42.
- Hameleers, M., Powell, T. E., Van Der Meer, T. G. L. A., & Bos, L. (2020). A picture paints a thousand lies? The effects and mechanisms of multimodal disinformation and rebuttals disseminated via social media. Political Communication, 37(2), 281–301. https://doi.org/10.1080/10584609.2019.1674979
- Higley, J. (2018). Continuities and discontinuities in elite theory. In H. Best & J. Higley (Eds.), The Palgrave Handbook of Political Elites (pp. 25–39). Palgrave Macmillan UK. https://doi.org/10.1057/978-1-137-51904-7_4
- Hoffmann-Lange, U. (2018). Methods of elite identification. In H. Best & J. Higley (Eds.), The Palgrave Handbook of Political Elites (pp. 79–92). Palgrave Macmillan UK. https://doi.org/10.1057/978-1-137-51904-7_8
- Hu, K. (2023, February 2). ChatGPT sets record for fastest-growing user base – Analyst note. Reuters. https://www.reuters.com/technology/chatgpt-sets-record-fastest-growing-user-base-analyst-note-2023-02-01/
- Hugger, K.-U. (2022). Medienkompetenz [Media competence]. In U. Sander, F. von Gross, & K.-U. Hugger (Eds.), Handbuch Medienpädagogik (pp. 67–80). Springer Fachmedien. https://doi.org/10.1007/978-3-658-23578-9_9
- Hwang, Y., Ryu, J. Y., & Jeong, S.-H. (2021). Effects of disinformation using deepfake: The protective effect of media literacy education. Cyberpsychology, Behavior, and Social Networking, 24(3), 188–193. https://doi.org/10.1089/cyber.2020.0174
- Kalsnes, B., Falasca, K., & Kammer, A. (2021). Scandinavian political journalism in a time of fake news and disinformation (pp. 283–304). Nordicom, University of Gothenburg. https://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-40895
- Kaplan, A., & Haenlein, M. (2019). Siri, Siri, in my hand: Who’s the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence. Business Horizons, 62(1), 15–25. https://doi.org/10.1016/j.bushor.2018.08.004
- Köbis, N. C., Doležalová, B., & Soraperra, I. (2021). Fooled twice: People cannot detect deepfakes but think they can. iScience, 24(11). https://doi.org/10.1016/j.isci.2021.103364
- Kong, S.-C., Man-Yin Cheung, W., & Zhang, G. (2021). Evaluation of an artificial intelligence literacy course for university students with diverse study backgrounds. Computers and Education: Artificial Intelligence, 2. https://doi.org/10.1016/j.caeai.2021.100026
- Leschzyk, D. K. (2021). Infodemic in Germany and Brazil: How the AfD and Jair Bolsonaro are sowing distrust during the Corona pandemic. Zeitschrift für Literaturwissenschaft und Linguistik, 51(3), 477–503. https://doi.org/10.1007/s41244-021-00210-6
- Lintner, T. (2024). A systematic review of AI literacy scales. npj Science of Learning, 9(1). https://doi.org/10.1038/s41539-024-00264-4
- Liu, Z., Qi, X., & Torr, P. H. S. (2020). Global texture enhancement for fake face detection in the wild. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 8060–8069. https://openaccess.thecvf.com/content_CVPR_2020/html/Liu_Global_Texture_Enhancement_for_Fake_Face_Detection_in_the_Wild_CVPR_2020_paper.html
- Maros, A., Almeida, J. M., & Vasconcelos, M. (2021). A study of misinformation in audio messages shared in WhatsApp groups. In J. Bright, A. Giachanou, V. Spaiser, F. Spezzano, A. George, & A. Pavliuc (Eds.), Disinformation in Open Online Media (pp. 85–100). Springer International Publishing. https://doi.org/10.1007/978-3-030-87031-7_6
- Martínez-Bravo, M. C., Sádaba Chalezquer, C., & Serrano-Puche, J. (2022). Dimensions of digital literacy in the 21st century competency frameworks. Sustainability, 14(3). https://doi.org/10.3390/su14031867
- Mayring, P. (2010). Qualitative Inhaltsanalyse. Grundlagen und Techniken [Qualitative content analysis. Basics and techniques]. Beltz.
- Millière, R. (2022). Deep learning and synthetic media. Synthese, 200(3). https://doi.org/10.1007/s11229-022-03739-2
- Ng, D. T. K., Leung, J. K. L., Chu, S. K. W., & Qiao, M. S. (2021). Conceptualizing AI literacy: An exploratory review. Computers and Education: Artificial Intelligence, 2. https://doi.org/10.1016/j.caeai.2021.100041
- Nightingale, S. J., & Farid, H. (2022). AI-synthesized faces are indistinguishable from real faces and more trustworthy. Proceedings of the National Academy of Sciences, 119(8). https://doi.org/10.1073/pnas.2120481119
- Pawelec, M., & Sievi, L. (2023). Falschinformationen in den sozialen Medien als Herausforderung für deutsche Sicherheitsbehörden und -organisationen [Disinformation on social media as a challenge for German security authorities and organizations]. Kriminologie – Das Online-Journal | Criminology – The Online Journal, 5(5). https://doi.org/10.18716/ojs/krimoj/2023.4.7
- Koistinen, P., Alaraatikka, M., Sederholm, T., Savolainen, D., Huhtinen, A.-M., & Kaarkoski, M. (2022). Public authorities as a target of disinformation. European Conference on Cyber Warfare and Security, 21(1), 123–129. https://doi.org/10.34190/eccws.21.1.371
- Petratos, P. N. (2021). Misinformation, disinformation, and fake news: Cyber risks to business. Business Horizons, 64(6), 763–774. https://doi.org/10.1016/j.bushor.2021.07.012
- Powell, T. E., Boomgaarden, H. G., De Swert, K., & de Vreese, C. H. (2015). A clearer picture: The contribution of visuals and text to framing effects. Journal of Communication, 65(6), 997–1017. https://doi.org/10/f3s2sj
- Rana, M. S., Nobi, M. N., Murali, B., & Sung, A. H. (2022). Deepfake detection: A systematic literature review. IEEE Access, 10, 25494–25513. https://doi.org/10.1109/ACCESS.2022.3154404
- Roe, J., Perkins, M., & Furze, L. (2024). Deepfakes and higher education: A research agenda and scoping review of synthetic media. Journal of University Teaching and Learning Practice, 21(10). https://doi.org/10.53761/2y2np178
- Rohs, M., & Seufert, S. (2020). Berufliche Medienkompetenz [Professional media literacy]. In R. Arnold, A. Lipsmeier, & M. Rohs (Eds.), Handbuch Berufsbildung (pp. 339–363). Springer Fachmedien. https://doi.org/10.1007/978-3-658-19312-6_29
- Shao, C., Ciampaglia, G. L., Varol, O., Yang, K.-C., Flammini, A., & Menczer, F. (2018). The spread of low-credibility content by social bots. Nature Communications, 9(1). https://doi.org/10.1038/s41467-018-06930-7
- Shen, B., RichardWebster, B., O’Toole, A., Bowyer, K., & Scheirer, W. J. (2021). A study of the human perception of synthetic faces. 2021 16th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2021), 1–8. https://doi.org/10.1109/FG52635.2021.9667066
- Simonite, T. (2019, October 7). Most deepfakes are porn, and they’re multiplying fast. Wired. https://www.wired.com/story/most-deepfakes-porn-multiplying-fast/
- Stöcker, C. (2020). How Facebook and Google accidentally created a perfect ecosystem for targeted disinformation. In C. Grimme, M. Preuss, F. W. Takes, & A. Waldherr (Eds.), Disinformation in Open Online Media (Vol. 12021, pp. 129–149). Springer International Publishing. https://doi.org/10.1007/978-3-030-39627-5_11
- Stroebel, L., Llewellyn, M., Hartley, T., Shan Ip, T., & Ahmed, M. (2023). A systematic literature review on the effectiveness of deepfake detection techniques. Journal of Cyber Security Technology, 7(2), 83–113. https://doi.org/10.1080/23742917.2023.2192888
- Tandoc, E. C., Ling, R., Westlund, O., Duffy, A., Goh, D., & Zheng Wei, L. (2018). Audiences’ acts of authentication in the age of fake news: A conceptual framework. New Media and Society, 20(8), 2745–2763. https://doi.org/10/gc2fmd
- Tiernan, P., Costello, E., Donlon, E., Parysz, M., & Scriney, M. (2023). Information and media literacy in the age of AI: Options for the future. Education Sciences, 13(9). https://doi.org/10.3390/educsci13090906
- Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185(4157), 1124–1131. https://doi.org/10.1126/science.185.4157.1124
- Twomey, J., Ching, D., Aylett, M. P., Quayle, M., Linehan, C., & Murphy, G. (2023). Do deepfake videos undermine our epistemic trust? A thematic analysis of tweets that discuss deepfakes in the Russian invasion of Ukraine. PLOS ONE, 18(10). https://doi.org/10.1371/journal.pone.0291668
- Vaccari, C., & Chadwick, A. (2020). Deepfakes and disinformation: Exploring the impact of synthetic political video on deception, uncertainty, and trust in news. Social Media + Society, 6(1). https://doi.org/10.1177/2056305120903408
- Wardle, C., & Derakhshan, H. (2017). Information disorder: Toward an interdisciplinary framework for research and policy making. Council of Europe report DGI (2017)09. https://rm.coe.int/information-disorder-toward-an-interdisciplinary-framework-for-researc/168076277c
- Wasner, B. (2013). Eliten in Europa: Einführung in Theorien, Konzepte und Befunde [Elites in Europe: Introduction to theories, concepts and findings]. Springer-Verlag.
- Williamson, S. M., & Prybutok, V. (2024). The era of artificial intelligence deception: Unraveling the complexities of false realities and emerging threats of misinformation. Information, 15(6). https://doi.org/10.3390/info15060299
- Wu, J., Gan, W., Chen, Z., Wan, S., & Lin, H. (2023). AI-generated content (AIGC): A survey (arXiv:2304.06632). arXiv. https://doi.org/10.48550/arXiv.2304.06632