Synthetic disinformation detection among German information elites – Strategies in politics, administration, journalism, and business
Bibliographic information

SCM Studies in Communication and Media
Volume 14 (2025), Issue 4
- Publisher
- Nomos, Baden-Baden
- Copyright Year
- 2026
- ISSN-Online
- 2192-4007
- ISSN-Print
- 2192-4007
Chapter information
- Preview:
Since the technology for generating synthetic media content became available to a wider audience in 2022, the social and communication sciences have faced the urgent question of how these technologies can be used to spread disinformation and how well recipients are equipped to deal with this risk. Research so far has focused primarily on deepfakes, a term that mostly refers to visual media generated or modified by artificial intelligence. Most studies test how well recipients can detect such deepfakes, and they generally conclude that recipients are rather poor at doing so. In contrast, this analysis focuses on the broader concept of synthetic disinformation, which includes all forms of AI-generated content created for the purpose of deception. We investigate how actors with professional expertise in the field of disinformation try to detect AI-generated disinformation in text, visual, and audio content, and which strategies and resources they employ. To gauge an upper bound for societal preparedness, we conducted guided interviews with 41 actors in elite positions from four sectors of German society (politics, corporations, media, and administration) and asked them about their detection strategies for each of the three media formats. The respondents apply different detection strategies for the three formats, and the data show substantial differences between the four groups. Only the media professionals consistently describe analytical, rather than merely intuitive, methods of verification.
Bibliography
- Aïmeur, E., Amri, S., & Brassard, G. (2023). Fake news, disinformation and misinformation in social media: A review. Social Network Analysis and Mining, 13(1). https://doi.org/10.1007/s13278-023-01028-5
- Akhtar, P., Ghouri, A. M., Khan, H. U. R., Amin ul Haq, M., Awan, U., Zahoor, N., Khan, Z., & Ashraf, A. (2023). Detecting fake news and disinformation using artificial intelligence and machine learning to avoid supply chain disruptions. Annals of Operations Research, 327(2), 633–657. https://doi.org/10.1007/s10479-022-05015-5
- Bennett, W. L., & Livingston, S. (2023). A brief history of the disinformation age: Information wars and the decline of institutional authority. In S. Salgado & S. Papathanassopoulos (Eds.), Streamlining Political Communication Concepts (pp. 43–73). Springer International Publishing. https://doi.org/10.1007/978-3-031-45335-9_4
- Bösch, M., & Divon, T. (2024). The sound of disinformation: TikTok, computational propaganda, and the invasion of Ukraine. New Media & Society, 26(9), 5081–5106. https://doi.org/10.1177/14614448241251804
- Bray, S. D., Johnson, S. D., & Kleinberg, B. (2023). Testing human ability to detect ‘deepfake’ images of human faces. Journal of Cybersecurity, 9(1). https://doi.org/10.1093/cybsec/tyad011
- Calvo, D., Cano-Orón, L., & Abengozar, A. E. (2020). Materials and assessment of literacy level for the recognition of social bots in political misinformation contexts. ICONO 14, Revista de Comunicación y Tecnologías Emergentes, 18(2), 111–136.
- CDEI. (2019, September 12). Snapshot paper – Deepfakes and audiovisual disinformation. Centre for Data Ethics and Innovation. https://www.gov.uk/government/publications/cdei-publishes-its-first-series-of-three-snapshot-papers-ethical-issues-in-ai/snapshot-paper-deepfakes-and-audiovisual-disinformation
- Cho, H., Cannon, J., Lopez, R., & Li, W. (2024). Social media literacy: A conceptual framework. New Media & Society, 26(2), 941–960. https://doi.org/10.1177/14614448211068530
- Cole, S. (2017, December 11). AI-assisted fake porn is here and we’re all fucked. VICE. https://www.vice.com/en/article/gal-gadot-fake-ai-porn/
- Dan, V., Paris, B., Donovan, J., Hameleers, M., & Roozenbeek, J. (2021). Visual mis- and disinformation, social media, and democracy. Journalism & Mass Communication Quarterly, 98(3), 641–664. https://doi.org/10.1177/10776990211035395
- Darius, P., & Stephany, F. (2022). How the Far-Right polarises Twitter: ‘Hashjacking’ as a disinformation strategy in times of COVID-19. In R. M. Benito, C. Cherifi, H. Cherifi, E. Moro, L. M. Rocha, & M. Sales-Pardo (Eds.), Complex Networks & Their Applications X (Vol. 1073, pp. 100–111). Springer International Publishing. https://doi.org/10.1007/978-3-030-93413-2_9
- Dobber, T., Metoui, N., Trilling, D., Helberger, N., & de Vreese, C. (2021). Do (microtargeted) deepfakes have real effects on political attitudes? The International Journal of Press/Politics, 26(1), 69–91. https://doi.org/10.1177/1940161220944364
- Dresing, T., & Pehl, T. (2013). Praxisbuch Interview, Transkription & Analyse [Practical guide interview, transcription & analysis] (5th ed.).
- Filipovic, A., & Schülke, A. (2023). Desinformation und Desinformationsresilienz [Disinformation and disinformation resilience]. Ethik und Militär: Kontroversen in Militärethik & Sicherheitspolitik, 1, 34–41.
- Frischlich, L. (2019, May 2). Kritische Medienkompetenz als Säule demokratischer Resilienz in Zeiten von “Fake News” und Online-Desinformation [Critical media literacy as a pillar for democratic resilience in times of “fake news” and online disinformation]. Bundeszentrale für politische Bildung. https://www.bpb.de/themen/medien-journalismus/digitale-desinformation/290527/kritische-medienkompetenz-als-saeule-demokratischer-resilienz-in-zeiten-von-fake-news-und-online-desinformation/
- Gambín, Á. F., Yazidi, A., Vasilakos, A., Haugerud, H., & Djenouri, Y. (2024). Deepfakes: Current and future trends. Artificial Intelligence Review, 57(3). https://doi.org/10.1007/s10462-023-10679-x
- Godulla, A., Hoffmann, C. P., & Seibert, D. (2021). Dealing with deepfakes – An interdisciplinary examination of the state of research and implications for communication studies. SCM Studies in Communication and Media, 10(1), 72–96. https://doi.org/10.5771/2192-4007-2021-1-72
- Goldstein, J. A., Sastry, G., Musser, M., DiResta, R., Gentzel, M., & Sedova, K. (2023). Generative language models and automated influence operations: Emerging threats and potential mitigations (arXiv:2301.04246). arXiv. https://doi.org/10.48550/arXiv.2301.04246
- Graves, L., & Amazeen, M. (2019). Fact-checking as idea and practice in journalism. Oxford University Press. https://ora.ox.ac.uk/objects/uuid:a7450b2f-f5a7-4207-90e2-254ec5de14e2
- Groh, M., Epstein, Z., Firestone, C., & Picard, R. (2022). Deepfake detection by human crowds, machines, and machine-informed crowds. Proceedings of the National Academy of Sciences, 119(1). https://doi.org/10.1073/pnas.2110013119
- Guilbeault, D. (2018). Digital marketing in the disinformation age. Journal of International Affairs, 71(1.5), 33–42.
- Hameleers, M., Powell, T. E., Van Der Meer, T. G. L. A., & Bos, L. (2020). A picture paints a thousand lies? The effects and mechanisms of multimodal disinformation and rebuttals disseminated via social media. Political Communication, 37(2), 281–301. https://doi.org/10.1080/10584609.2019.1674979
- Higley, J. (2018). Continuities and discontinuities in elite theory. In H. Best & J. Higley (Eds.), The Palgrave Handbook of Political Elites (pp. 25–39). Palgrave Macmillan UK. https://doi.org/10.1057/978-1-137-51904-7_4
- Hoffmann-Lange, U. (2018). Methods of elite identification. In H. Best & J. Higley (Eds.), The Palgrave Handbook of Political Elites (pp. 79–92). Palgrave Macmillan UK. https://doi.org/10.1057/978-1-137-51904-7_8
- Hu, K. (2023, February 2). ChatGPT sets record for fastest-growing user base – Analyst note. Reuters. https://www.reuters.com/technology/chatgpt-sets-record-fastest-growing-user-base-analyst-note-2023-02-01/
- Hugger, K.-U. (2022). Medienkompetenz [Media competence]. In U. Sander, F. von Gross, & K.-U. Hugger (Eds.), Handbuch Medienpädagogik (pp. 67–80). Springer Fachmedien. https://doi.org/10.1007/978-3-658-23578-9_9
- Hwang, Y., Ryu, J. Y., & Jeong, S.-H. (2021). Effects of disinformation using deepfake: The protective effect of media literacy education. Cyberpsychology, Behavior, and Social Networking, 24(3), 188–193. https://doi.org/10.1089/cyber.2020.0174
- Kalsnes, B., Falasca, K., & Kammer, A. (2021). Scandinavian political journalism in a time of fake news and disinformation (pp. 283–304). Nordicom, University of Gothenburg. https://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-40895
- Kaplan, A., & Haenlein, M. (2019). Siri, Siri, in my hand: Who’s the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence. Business Horizons, 62(1), 15–25. https://doi.org/10.1016/j.bushor.2018.08.004
- Köbis, N. C., Doležalová, B., & Soraperra, I. (2021). Fooled twice: People cannot detect deepfakes but think they can. iScience, 24(11). https://doi.org/10.1016/j.isci.2021.103364
- Kong, S.-C., Man-Yin Cheung, W., & Zhang, G. (2021). Evaluation of an artificial intelligence literacy course for university students with diverse study backgrounds. Computers and Education: Artificial Intelligence, 2. https://doi.org/10.1016/j.caeai.2021.100026
- Leschzyk, D. K. (2021). Infodemic in Germany and Brazil: How the AfD and Jair Bolsonaro are sowing distrust during the Corona pandemic. Zeitschrift für Literaturwissenschaft und Linguistik, 51(3), 477–503. https://doi.org/10.1007/s41244-021-00210-6
- Lintner, T. (2024). A systematic review of AI literacy scales. Npj Science of Learning, 9(1). https://doi.org/10.1038/s41539-024-00264-4
- Liu, Z., Qi, X., & Torr, P. H. S. (2020). Global texture enhancement for fake face detection in the wild. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2020), 8060–8069. https://openaccess.thecvf.com/content_CVPR_2020/html/Liu_Global_Texture_Enhancement_for_Fake_Face_Detection_in_the_Wild_CVPR_2020_paper.html
- Maros, A., Almeida, J. M., & Vasconcelos, M. (2021). A study of misinformation in audio messages shared in WhatsApp groups. In J. Bright, A. Giachanou, V. Spaiser, F. Spezzano, A. George, & A. Pavliuc (Eds.), Disinformation in Open Online Media (pp. 85–100). Springer International Publishing. https://doi.org/10.1007/978-3-030-87031-7_6
- Martínez-Bravo, M. C., Sádaba Chalezquer, C., & Serrano-Puche, J. (2022). Dimensions of digital literacy in the 21st century competency frameworks. Sustainability, 14(3). https://doi.org/10.3390/su14031867
- Mayring, P. (2010). Qualitative Inhaltsanalyse. Grundlagen und Techniken [Qualitative content analysis. Basics and techniques]. Beltz.
- Millière, R. (2022). Deep learning and synthetic media. Synthese, 200(3). https://doi.org/10.1007/s11229-022-03739-2
- Ng, D. T. K., Leung, J. K. L., Chu, S. K. W., & Qiao, M. S. (2021). Conceptualizing AI literacy: An exploratory review. Computers and Education: Artificial Intelligence, 2. https://doi.org/10.1016/j.caeai.2021.100041
- Nightingale, S. J., & Farid, H. (2022). AI-synthesized faces are indistinguishable from real faces and more trustworthy. Proceedings of the National Academy of Sciences, 119(8). https://doi.org/10.1073/pnas.2120481119
- Pawelec, M., & Sievi, L. (2023). Falschinformationen in den sozialen Medien als Herausforderung für deutsche Sicherheitsbehörden und -organisationen [Disinformation on social media as a challenge for German security authorities and organizations]. Kriminologie – Das Online-Journal | Criminology – The Online Journal, 5(5). https://doi.org/10.18716/ojs/krimoj/2023.4.7
- Koistinen, P., Alaraatikka, M., Sederholm, T., Savolainen, D., Huhtinen, A.-M., & Kaarkoski, M. (2022). Public authorities as a target of disinformation. European Conference on Cyber Warfare and Security, 21(1), 123–129. https://doi.org/10.34190/eccws.21.1.371
- Petratos, P. N. (2021). Misinformation, disinformation, and fake news: Cyber risks to business. Business Horizons, 64(6), 763–774. https://doi.org/10.1016/j.bushor.2021.07.012
- Powell, T. E., Boomgaarden, H. G., De Swert, K., & de Vreese, C. H. (2015). A clearer picture: The contribution of visuals and text to framing effects. Journal of Communication, 65(6), 997–1017. https://doi.org/10/f3s2sj
- Rana, M. S., Nobi, M. N., Murali, B., & Sung, A. H. (2022). Deepfake detection: A systematic literature review. IEEE Access, 10, 25494–25513. https://doi.org/10.1109/ACCESS.2022.3154404
- Roe, J., Perkins, M., & Furze, L. (2024). Deepfakes and higher education: A research agenda and scoping review of synthetic media. Journal of University Teaching and Learning Practice, 21(10). https://doi.org/10.53761/2y2np178
- Rohs, M., & Seufert, S. (2020). Berufliche Medienkompetenz [Professional media literacy]. In R. Arnold, A. Lipsmeier, & M. Rohs (Eds.), Handbuch Berufsbildung (pp. 339–363). Springer Fachmedien. https://doi.org/10.1007/978-3-658-19312-6_29
- Shao, C., Ciampaglia, G. L., Varol, O., Yang, K.-C., Flammini, A., & Menczer, F. (2018). The spread of low-credibility content by social bots. Nature Communications, 9(1). https://doi.org/10.1038/s41467-018-06930-7
- Shen, B., RichardWebster, B., O’Toole, A., Bowyer, K., & Scheirer, W. J. (2021). A study of the human perception of synthetic faces. 2021 16th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2021), 1–8. https://doi.org/10.1109/FG52635.2021.9667066
- Simonite, T. (2019, October 7). Most deepfakes are porn, and they’re multiplying fast. Wired. https://www.wired.com/story/most-deepfakes-porn-multiplying-fast/
- Stöcker, C. (2020). How Facebook and Google accidentally created a perfect ecosystem for targeted disinformation. In C. Grimme, M. Preuss, F. W. Takes, & A. Waldherr (Eds.), Disinformation in Open Online Media (Vol. 12021, pp. 129–149). Springer International Publishing. https://doi.org/10.1007/978-3-030-39627-5_11
- Stroebel, L., Llewellyn, M., Hartley, T., Shan Ip, T., & Ahmed, M. (2023). A systematic literature review on the effectiveness of deepfake detection techniques. Journal of Cyber Security Technology, 7(2), 83–113. https://doi.org/10.1080/23742917.2023.2192888
- Tandoc, E. C., Ling, R., Westlund, O., Duffy, A., Goh, D., & Zheng Wei, L. (2018). Audiences’ acts of authentication in the age of fake news: A conceptual framework. New Media & Society, 20(8), 2745–2763. https://doi.org/10/gc2fmd
- Tiernan, P., Costello, E., Donlon, E., Parysz, M., & Scriney, M. (2023). Information and media literacy in the age of AI: Options for the future. Education Sciences, 13(9). https://doi.org/10.3390/educsci13090906
- Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185(4157), 1124–1131. https://doi.org/10.1126/science.185.4157.1124
- Twomey, J., Ching, D., Aylett, M. P., Quayle, M., Linehan, C., & Murphy, G. (2023). Do deepfake videos undermine our epistemic trust? A thematic analysis of tweets that discuss deepfakes in the Russian invasion of Ukraine. PLOS ONE, 18(10). https://doi.org/10.1371/journal.pone.0291668
- Vaccari, C., & Chadwick, A. (2020). Deepfakes and disinformation: Exploring the impact of synthetic political video on deception, uncertainty, and trust in news. Social Media + Society, 6(1). https://doi.org/10.1177/2056305120903408
- Wardle, C., & Derakhshan, H. (2017). Information disorder: Toward an interdisciplinary framework for research and policy making. Council of Europe report DGI (2017)09. https://rm.coe.int/information-disorder-toward-an-interdisciplinary-framework-for-researc/168076277c
- Wasner, B. (2013). Eliten in Europa: Einführung in Theorien, Konzepte und Befunde [Elites in Europe: Introduction to theories, concepts and findings]. Springer-Verlag.
- Williamson, S. M., & Prybutok, V. (2024). The era of artificial intelligence deception: Unraveling the complexities of false realities and emerging threats of misinformation. Information, 15(6). https://doi.org/10.3390/info15060299
- Wu, J., Gan, W., Chen, Z., Wan, S., & Lin, H. (2023). AI-generated content (AIGC): A survey (arXiv:2304.06632). arXiv. https://doi.org/10.48550/arXiv.2304.06632