Responsible Use of Artificial Intelligence as Continuous Proportionalization: Fashion Image Generation at OTTO

Bibliographic information


Open Access

Swiss Journal of Business

Volume 80 (2026), Issue 1


Authors:
Publisher
Nomos, Baden-Baden
Copyright year
2026
ISSN-Online
2944-3741
ISSN-Print
2944-3741


Preview:

The inherent ambivalence of machine learning-based artificial intelligence (AI) technologies makes ensuring their responsible use a pressing concern. While the literature converges on the importance of governance at the organizational level, uncertainty remains about what responsible (use of) AI actually is. We conceptualize responsible AI as both a process and an outcome of social evaluation. We propose a model (“continuous proportionalization”) that explains how organizations construct collective interpretations of responsible AI along the dimensions of legitimacy, suitability, necessity, and proportionality. We illustrate the model through a case study of AI-based fashion image generation at Germany’s largest e-commerce company, OTTO.

Bibliography


  1. Kranzberg, M. (1986). Technology and History: ‘Kranzberg’s Laws.’Technology and Culture 27(3): 544–560. https://doi.org/10.2307/3105385. Open Google Scholar doi.org/10.5771/2944-3741-2026-1-1
  2. Bach, T. A., Kaarstad, M., Solberg, E., & Babic, A. (2025). Insights into suggested Responsible AI (RAI) practices in real-world settings: a systematic literature review. AI and Ethics, 5(3), 3185–3232. https://doi.org/10.1007/s43681-024-00648-7 Open Google Scholar doi.org/10.5771/2944-3741-2026-1-7
  3. Bitektine, A., & Haack, P. (2015). The “macro” and the “micro” of legitimacy: Toward a multilevel theory of the legitimacy process. Academy of Management Review, 40(1), 49–75. https://doi.org/10.5465/amr.2013.0318 Open Google Scholar doi.org/10.5771/2944-3741-2026-1-7
  4. Bughin, J. (2025a). Doing versus saying: responsible AI among large firms. AI & Society, 40(4), 2751–2763. https://doi.org/10.1007/s00146-024-02014-x Open Google Scholar doi.org/10.5771/2944-3741-2026-1-7
  5. Bughin, J. (2025b). The role of AI assets and capabilities in shaping responsible AI deepening: a random forest machine learning view. AI and Ethics, 5(6), 6313–6327. https://doi.org/10.1007/s43681-025-00802-9 Open Google Scholar doi.org/10.5771/2944-3741-2026-1-7
  6. Christophersen, T., & Pärn, J. (2021). Data Science bei OTTO. In P. Buxmann & H. Schmidt (Eds.), Künstliche Intelligenz (pp. 101–115). Springer Berlin Heidelberg. https://doi.org/10.1007/978-3-662-61794-6_6 Open Google Scholar doi.org/10.5771/2944-3741-2026-1-7
  7. Cihon, P., Schuett, J., & Baum, S. D. (2021). Corporate governance of artificial intelligence in the public interest. Information (Basel), 12(7), 275. https://doi.org/10.3390/info12070275 Open Google Scholar doi.org/10.5771/2944-3741-2026-1-7
  8. Coeckelbergh, M. (2020). AI Ethics. MIT Press. Open Google Scholar doi.org/10.5771/2944-3741-2026-1-7
  9. Coeckelbergh, M. (2024). Artificial intelligence, the common good, and the democratic deficit in AI governance. AI and Ethics, 5(2), 1491–1497. https://doi.org/10.1007/s43681-024-00492-9 Open Google Scholar doi.org/10.5771/2944-3741-2026-1-7
  10. Cohen-Eliya, M., & Porat, I. (2010). American balancing and German proportionality: The historical origins. International Journal of Constitutional Law, 8(2), 263–286. https://doi.org/10.1093/icon/moq004 Open Google Scholar doi.org/10.5771/2944-3741-2026-1-7
  11. de-Lima-Santos, M.-F., Yeung, W. N., & Dodds, T. (2025). Guiding the way: a comprehensive examination of AI guidelines in global media. AI & Society, 40(4), 2585–2603. https://doi.org/10.1007/s00146-024-01973-5 Open Google Scholar doi.org/10.5771/2944-3741-2026-1-7
  12. Elia, M., Ziethmann, P., Krumme, J., Schlögl-Flierl, K., & Bauer, B. (2025). Responsible AI, ethics, and the AI lifecycle: how to consider the human influence? AI and Ethics, 5(4), 4011–4028. https://doi.org/10.1007/s43681-025-00666-z Open Google Scholar doi.org/10.5771/2944-3741-2026-1-7
  13. Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., Luetge, C., Madelin, R., Pagallo, U., Rossi, F., Schafer, B., Valcke, P., & Vayena, E. (2018). AI4People-An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations. Minds and Machines, 28(4), 689–707. https://doi.org/10.1007/s11023-018-9482-5 Open Google Scholar doi.org/10.5771/2944-3741-2026-1-7
  14. Garibay, O. O., Winslow, B., Andolina, S., Antona, M., Bodenschatz, A., Coursaris, C., Falco, G., Fiore, S. M., Garibay, I., Grieman, K., Havens, J. C., Jirotka, M., Kacorri, H., Karwowski, W., Kider, J., Konstan, J., Koon, S., Lopez-Gonzalez, M., Maifeld-Carucci, I., … Xu, W. (2023). Six Human-Centered Artificial Intelligence Grand Challenges. International Journal of Human–Computer Interaction, 39(3), 391–437. https://doi.org/10.1080/10447318.2022.2153320 Open Google Scholar doi.org/10.5771/2944-3741-2026-1-7
  15. Gerken, T. (2024, December 11). Chatbot “encouraged teen to kill parents over screen time limit.” BBC News. https://www.bbc.com/news/articles/cd605e48q1vo Open Google Scholar doi.org/10.5771/2944-3741-2026-1-7
  16. Gogoll, J., Zuber, N., Kacianka, S., Greger, T., Pretschner, A., & Nida-Rümelin, J. (2021). Ethics in the Software Development Process: from Codes of Conduct to Ethical Deliberation. Philosophy & Technology, 34(4), 1085–1108. https://doi.org/10.1007/s13347-021-00451-w Open Google Scholar doi.org/10.5771/2944-3741-2026-1-7
  17. Hagendorff, T. (2022). A virtue-based framework to support putting AI ethics into practice. Philosophy & Technology, 35(3), 1–24. https://doi.org/10.1007/s13347-022-00553-z Open Google Scholar doi.org/10.5771/2944-3741-2026-1-7
  18. Heger, A. K., Passi, S., Dhanorkar, S., Kahn, Z., Wang, R., & Vorvoreanu, M. (2025). Towards a Responsible AI Organizational Maturity model. Proceedings of the ACM on Human-Computer Interaction, 9(7), 1–33. https://doi.org/10.1145/3757514 Open Google Scholar doi.org/10.5771/2944-3741-2026-1-7
  19. Horneber, D. (2025). Understanding the implementation of responsible artificial intelligence in organizations: A Neo-institutional theory perspective. Communications of the Association for Information Systems, 57, 8. https://doi.org/10.17705/1cais.05708 Open Google Scholar doi.org/10.5771/2944-3741-2026-1-7
  20. Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389–399. https://doi.org/10.1038/s42256-019-0088-2 Open Google Scholar doi.org/10.5771/2944-3741-2026-1-7
  21. Kallina, E., & Singh, J. (2024). Stakeholder involvement for responsible AI development: A process framework. Proceedings of the 4th ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, 1–14. https://doi.org/10.1145/3689904.3694698 Open Google Scholar doi.org/10.5771/2944-3741-2026-1-7
  22. Karliuk, M. (2023). Proportionality principle for the ethics of artificial intelligence. AI and Ethics, 3(3), 985–990. https://doi.org/10.1007/s43681-022-00220-1 Open Google Scholar doi.org/10.5771/2944-3741-2026-1-7
  23. Katirai, A., Garcia, N., Ide, K., Nakashima, Y., & Kishimoto, A. (2024). Situating the social issues of image generation models in the model life cycle: a sociotechnical approach. AI and Ethics, 5(2), 1769–1786. https://doi.org/10.1007/s43681-024-00517-3 Open Google Scholar doi.org/10.5771/2944-3741-2026-1-7
  24. Krijger, J., Thuis, T., de Ruiter, M., Ligthart, E., & Broekman, I. (2023). The AI ethics maturity model: a holistic approach to advancing ethical data science in organizations. AI and Ethics, 3(2), 355–367. https://doi.org/10.1007/s43681-022-00228-7 Open Google Scholar doi.org/10.5771/2944-3741-2026-1-7
  25. Kuznia, R., Gordon, A., & Lavandera, E. (2025, November 6). “You”re not rushing. You’re just ready:’ Parents say ChatGPT encouraged son to kill himself. CNN. https://www.cnn.com/2025/11/06/us/openai-chatgpt-suicide-lawsuit-invs-vis Open Google Scholar doi.org/10.5771/2944-3741-2026-1-7
  26. Mikalef, P., Conboy, K., Lundström, J. E., & Popovič, A. (2022). Thinking responsibly about responsible AI and “the dark side” of AI. European Journal of Information Systems: An Official Journal of the Operational Research Society, 31(3), 257–268. https://doi.org/10.1080/0960085x.2022.2026621 Open Google Scholar doi.org/10.5771/2944-3741-2026-1-7
  27. Minkkinen, M., Zimmer, M. P., & Mäntymäki, M. (2023). Co-shaping an ecosystem for responsible AI: Five types of expectation work in response to a technological frame. Information Systems Frontiers: A Journal of Research and Innovation, 25(1), 103–121. https://doi.org/10.1007/s10796-022-10269-2 Open Google Scholar doi.org/10.5771/2944-3741-2026-1-7
  28. Mittelstadt, B. (2019). Principles alone cannot guarantee ethical AI. Nature Machine Intelligence, 1(11), 501–507. https://doi.org/10.1038/s42256-019-0114-4 Open Google Scholar doi.org/10.5771/2944-3741-2026-1-7
  29. Otto. (2021, September 16). Ethik vs. Wirtschaftlichkeit, können wir KI vertrauen? | MAIN Session – OTTO [Ethics vs. economics, can we trust AI?]. Youtube. https://www.youtube.com/watch?v=aiay2hfDiOg Open Google Scholar doi.org/10.5771/2944-3741-2026-1-7
  30. Polanyi, M. (1962). The Republic of science: Its political and economic theory. Minerva, 1(1), 54–73. https://doi.org/10.1007/bf01101453 Open Google Scholar doi.org/10.5771/2944-3741-2026-1-7
  31. Radford, A., Kim, J. W., Xu, T., Brockman, G., McLeavey, C., & Sutskever, I. (2022). Robust Speech Recognition via Large-Scale Weak Supervision. In arXiv [eess.AS]. arXiv. http://arxiv.org/abs/2212.04356 Open Google Scholar doi.org/10.5771/2944-3741-2026-1-7
  32. Rufo, Y. (2025, July 27). What Guess’s AI model in Vogue means for beauty standards. BBC News. https://www.bbc.com/news/articles/cgeqe084nn4o Open Google Scholar doi.org/10.5771/2944-3741-2026-1-7
  33. Ryan, M., & Stahl, B. C. (2021). Artificial intelligence ethics guidelines for developers and users: clarifying their content and normative implications. Journal of Information Communication and Ethics in Society, 19(1), 61–86. https://doi.org/10.1108/jices-12-2019-0138 Open Google Scholar doi.org/10.5771/2944-3741-2026-1-7
  34. Saharia, C., Chan, W., Saxena, S., Li, L., Whang, J., Denton, E. L., Ghasemipour, S. K. S., Ayan, B. K., Mahdavi, S. S., Lopes, R. G., Salimans, T., Ho, J., Fleet, D. J., & Norouzi, M. (2022). Photorealistic text-to-image diffusion models with deep language understanding. Neural Information Processing Systems, abs/2205.11487, 36479–36494. https://doi.org/10.48550/arXiv.2205.11487 Open Google Scholar doi.org/10.5771/2944-3741-2026-1-7
  35. Sobek, T., & Montag, J. (2018). Proportionality Test. In Encyclopedia of Law and Economics (pp. 1–5). Springer New York. https://doi.org/10.1007/978-1-4614-7883-6_721-1 Open Google Scholar doi.org/10.5771/2944-3741-2026-1-7
  36. Stahl, B. C. (2012). Responsible research and innovation in information systems. European Journal of Information Systems: An Official Journal of the Operational Research Society, 21(3), 207–211. https://doi.org/10.1057/ejis.2012.19 Open Google Scholar doi.org/10.5771/2944-3741-2026-1-7
  37. Suchman, M. C. (1995). Managing legitimacy: Strategic and institutional approaches. Academy of Management Review, 20(3), 571. https://doi.org/10.2307/258788 Open Google Scholar doi.org/10.5771/2944-3741-2026-1-7
  38. Suddaby, R., Bitektine, A., & Haack, P. (2017). Legitimacy. Academy of Management Annals, 11(1), 451–478. https://doi.org/10.5465/annals.2015.0101 Open Google Scholar doi.org/10.5771/2944-3741-2026-1-7
  39. Taddeo, M., & Floridi, L. (2018). How AI can be a force for good. Science, 361(6404), 751–752. https://doi.org/10.1126/science.aat5991 Open Google Scholar doi.org/10.5771/2944-3741-2026-1-7
  40. Watson, D. S., Mökander, J., & Floridi, L. (2025). Competing narratives in AI ethics: a defense of sociotechnical pragmatism. AI & Society, 40(5), 3163–3185. https://doi.org/10.1007/s00146-024-02128-2 Open Google Scholar doi.org/10.5771/2944-3741-2026-1-7
  41. Yilma, K. (2025). From principles to process: the principlist approach to AI ethics and lessons from Internet bills of rights. AI and Ethics, 5(4), 4279–4291. https://doi.org/10.1007/s43681-025-00719-3 Open Google Scholar doi.org/10.5771/2944-3741-2026-1-7
  42. Yin, R. K. (1981). The Case Study Crisis: Some Answers. Administrative Science Quarterly, 26(1), 58–65. https://doi.org/10.2307/2392599 Open Google Scholar doi.org/10.5771/2944-3741-2026-1-7
  43. Yin, R. K. (1994). Case Study Research: Design and Methods. SAGE Publications. Open Google Scholar doi.org/10.5771/2944-3741-2026-1-7
  44. Zhang, Z., & Wang, J. (2024). Can AI replace psychotherapists? Exploring the future of mental health care. Frontiers in Psychiatry, 15, 1444382. https://doi.org/10.3389/fpsyt.2024.1444382 Open Google Scholar doi.org/10.5771/2944-3741-2026-1-7
  45. Acemoglu, D., & Restrepo, P. (2020). Robots and jobs: Evidence from US labor markets. Journal of political economy, 128(6), 2188-2244. Open Google Scholar doi.org/10.5771/2944-3741-2026-1-30
  46. AI Champions. (2025). Stop the Clock: Open Letter Calling for an EU AI Act Pause. Available online at: https://aichampions.eu (Call 16.01.2026) Open Google Scholar doi.org/10.5771/2944-3741-2026-1-30
  47. Beckmann, M., & Pies, I. (2016). The constitution of responsibility: Toward an ordonomic framework for interpreting (corporate social) responsibility in different social settings. In Order ethics: An ethical framework for the social market economy (pp. 221-250). Cham: Springer International Publishing. Open Google Scholar doi.org/10.5771/2944-3741-2026-1-30
  48. Bijker, W. E. (1997). Of bicycles, bakelites, and bulbs: Toward a theory of sociotechnical change. MIT press. Open Google Scholar doi.org/10.5771/2944-3741-2026-1-30
  49. Binns, R. (2018). Fairness in Machine Learning: Lessons from Political Philosophy. Proceedings of the 2018 Conference on Fairness, Accountability, and Transparency (FAT), 149–159. Open Google Scholar doi.org/10.5771/2944-3741-2026-1-30
  50. Buchanan, J. M. (2000). Reason of Rules—Constitutional Political Economy. Liberty Fund Incorporated, us. Open Google Scholar doi.org/10.5771/2944-3741-2026-1-30
  51. Burrell, J. (2016). How the machine ‘thinks’: Understanding opacity in machine learning algorithms. Big data & society, 3(1), 2053951715622512. Open Google Scholar doi.org/10.5771/2944-3741-2026-1-30
  52. Eucken, W. (1952/1990). Grundsätze der Wirtschaftspolitik. Tübingen: J.C.B. Mohr (Paul Siebeck). Open Google Scholar doi.org/10.5771/2944-3741-2026-1-30
  53. Floridi, L. & Cowls, J. (2019). A Unified Framework of Five Principles for AI in Society. Philosophy & Technology, 32(4), 685–703. Open Google Scholar doi.org/10.5771/2944-3741-2026-1-30
  54. Friedman, B. & Hendry, D. (2019). Value Sensitive Design: Shaping Technology with Moral Imagination. MIT Press. Open Google Scholar doi.org/10.5771/2944-3741-2026-1-30
  55. Future of Life Institute. (2023). Pause Giant AI Experiments: An Open Letter. Available online at: (https://futureoflife.org/open-letter/pause-giant-ai-experiments/ ) (Call 16.01.2026) Open Google Scholar doi.org/10.5771/2944-3741-2026-1-30
  56. Gebru, T., Morgenstern, J., Vecchione, B., Vaughan, J. W., Wallach, H., Iii, H. D., & Crawford, K. (2021). Datasheets for datasets. Communications of the ACM, 64(12), 86-92. Open Google Scholar doi.org/10.5771/2944-3741-2026-1-30
  57. Hedfeld, P. (2025a). Implicit decision voting made by humans as normative and implementable rules with the help of language models. In R. Buchkremer, O. Koch & A. Lischka (Hrsg.), ifid Schriftenreihe: Beiträge zu IT-Management & Digitalisierung (Bd. 3). FOM-Hochschule für Oekonomie & Management. ISBN 978-3-89275-395-7. Open Google Scholar doi.org/10.5771/2944-3741-2026-1-30
  58. Hedfeld, P. (2025b). Essay: Mit der Langfristigkeit im Herzen–Nachhaltigkeit und Generationengerechtigkeit, eine interdisziplinäre Perspektive zwischen Sozialpädagogik und Wirtschaftsethik. Zeitschrift für Sozialpädagogik, (1). Open Google Scholar doi.org/10.5771/2944-3741-2026-1-30
  59. Homann, K. (2002). Vorteile und Anreize: Zur Grundlegung einer Ethik der Zukunft. Mohr Siebeck. Open Google Scholar doi.org/10.5771/2944-3741-2026-1-30
  60. Homann, K., & Pies, I. (2000). Wirtschaftsethik und Ordnungspolitik–Die Rolle wissenschaftlicher Aufklärung. Ordnungstheorie und Ordnungspolitik–Konzeptionen und Entwicklungsperspektiven, Stuttgart, 329-346. Open Google Scholar doi.org/10.5771/2944-3741-2026-1-30
  61. Jobin, A., Ienca, M. & Vayena, E. (2019). The Global Landscape of AI Ethics Guidelines. Nature Machine Intelligence, 1(9), 389–399. Open Google Scholar doi.org/10.5771/2944-3741-2026-1-30
  62. Minnameier, G. (2016). Rationalität und Moralität: Zum systematischen Ort der Moral im Kontext von Präferenzen und Restriktionen. Zeitschrift für Wirtschafts-und Unternehmensethik, 17(2), 259. Open Google Scholar doi.org/10.5771/2944-3741-2026-1-30
  63. Minnameier, G. (2025). Ordonomik und Bildung: Verantwortung für die moderne Gesellschaft (p. 372). wbv Media. Open Google Scholar doi.org/10.5771/2944-3741-2026-1-30
  64. Mitchell, M. (2019). Artificial Intelligence: A Guide for Thinking Humans. Farrar, Straus and Giroux. Open Google Scholar doi.org/10.5771/2944-3741-2026-1-30
  65. Mitchell, M., Wu, S., Zaldivar, A., et al. (2019). Model Cards for Model Reporting. Proceedings of FAT 2019, 220–229. Open Google Scholar doi.org/10.5771/2944-3741-2026-1-30
  66. OECD. (2019). OECD Principles on Artificial Intelligence. Paris: OECD Publishing. https://www.oecd.org/en/topics/ai-principles.html (Call 16.01.2026) Open Google Scholar doi.org/10.5771/2944-3741-2026-1-30
  67. Pies, I. (2000). Ordnungspolitik in der Demokratie: Ein ökonomischer Ansatz diskursiver Politikberatung. Tübingen: Mohr Siebeck. ISBN 3-16-147507-0. Open Google Scholar doi.org/10.5771/2944-3741-2026-1-30
  68. Pies, I. (2017a). Ordonomik als Methode zur Generierung von Überbietungsargumenten: Eine Illustration anhand der Flüchtlings (politik) debatte (No. 2017-03). Diskussionspapier. https://doi.org/10.5771/1439-880X-2017-2-171 Open Google Scholar doi.org/10.5771/2944-3741-2026-1-30
  69. Pies, I. (2017b). The ordonomic approach to business ethics. Available at SSRN 2973614. Open Google Scholar doi.org/10.5771/2944-3741-2026-1-30
  70. Pies, I. (2022). Kapitalismus und das Moralparadoxon der Moderne. Berlin: wvb Wissenschaftlicher Verlag Berlin. ISBN 978-3-96138-310-8 Open Google Scholar doi.org/10.5771/2944-3741-2026-1-30
  71. Pies, I. (2025). The interplay of incentives and ideas: An intellectual journey from order economics through order ethics to ordonomics (No. 2025-08). Diskussionspapier. https://www.econstor.eu/bitstream/10419/325828/1/1936155664.pdf (Call 16.01.2025) Open Google Scholar doi.org/10.5771/2944-3741-2026-1-30
  72. Raghavan, M., Barocas, S., Kleinberg, J. & Levy, K. (2020). Mitigating Bias in Algorithmic Hiring: Evaluating Claims and Practices. Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (FAccT), 469–481. Open Google Scholar doi.org/10.5771/2944-3741-2026-1-30
  73. Raji, I. D., Smart, A., White, R. N., Mitchell, M., Gebru, T., Hutchinson, B., ... & Barnes, P. (2020, January). Closing the AI accountability gap: Defining an end-to-end framework for internal algorithmic auditing. In Proceedings of the 2020 conference on fairness, accountability, and transparency (pp. 33-44). Open Google Scholar doi.org/10.5771/2944-3741-2026-1-30
  74. Selbst, A. D., Boyd, D., Friedler, S. A., Venkatasubramanian, S., & Vertesi, J. (2019, January). Fairness and abstraction in sociotechnical systems. In Proceedings of the conference on fairness, accountability, and transparency (pp. 59-68). Open Google Scholar doi.org/10.5771/2944-3741-2026-1-30
  75. Seufert, S., & Meier, C. (2023). Hybrid Intelligence: Collaboration with AI Systems for Knowledge Work. HMD Praxis der Wirtschaftsinformatik, 60(6), 1194-1209. Open Google Scholar doi.org/10.5771/2944-3741-2026-1-30
  76. Stilgoe, J., Owen, R. & Macnaghten, P. (2013). Developing a Framework for Responsible Innovation. Research Policy, 42(9), 1568–1580. Open Google Scholar doi.org/10.5771/2944-3741-2026-1-30
  77. Suchman, L. (2007). Human-Machine Reconfigurations: Plans and Situated Actions. Cambridge University Press. Open Google Scholar doi.org/10.5771/2944-3741-2026-1-30
  78. UNESCO. (2021). Recommendation on the Ethics of Artificial Intelligence. Paris: UNESCO. https://www.unesco.org/en/articles/recommendation-ethics-artificial-intelligence (Call 16.01.2026) Open Google Scholar doi.org/10.5771/2944-3741-2026-1-30
  79. Voss, J. P., Bauknecht, D., & Kemp, R. (Eds.). (2006). Reflexive governance for sustainable development. Edward Elgar Publishing. Open Google Scholar doi.org/10.5771/2944-3741-2026-1-30
  80. Williamson, B., & Piattoeva, N. (2022). Education governance and datafication. Education and Information Technologies, 27, 3515-3531. Open Google Scholar doi.org/10.5771/2944-3741-2026-1-30
  81. Agarwal, A., & Nene, M. J. (2025). A five-layer framework for AI governance: Integrating regulation, standards, and certification. Transforming Government: People, Process and Policy, 19(3), 535–555. https://doi.org/10.1108/TG-03-2025-0065 Open Google Scholar doi.org/10.5771/2944-3741-2026-1-50
  82. Alanoca, S., Gur-Arieh, S., Zick, T., & Klyman, K. (2025). Comparing Apples to Oranges: A Taxonomy for Navigating the Global Landscape of AI Regulation. Proceedings of the 2025 ACM Conference on Fairness, Accountability, and Transparency, 914–937. https://doi.org/10.1145/3715275.3732059 Open Google Scholar doi.org/10.5771/2944-3741-2026-1-50
  83. Attard-Frost, B., & Walters, D. R. (o. J.). The Ethics of AI Business Practices: A Review of 47 AI Ethics Guidelines. AI Ethics 3, 389–406 (2023). https://doi.org/10.1007/s43681-022-00156-6 Open Google Scholar doi.org/10.5771/2944-3741-2026-1-50
  84. Bankins, S., & Formosa, P. (2023). The Ethical Implications of Artificial Intelligence (AI) For Meaningful Work. Journal of Business Ethics, 185(4), 725–740. https://doi.org/10.1007/s10551-023-05339-7 Open Google Scholar doi.org/10.5771/2944-3741-2026-1-50
  85. Batool, A., Zowghi, D., & Bano, M. (2025). AI governance: A systematic literature review. AI and Ethics. https://doi.org/10.1007/s43681-024-00653-w Open Google Scholar doi.org/10.5771/2944-3741-2026-1-50
  86. Berretta, S., Tausch, A., Ontrup, G., Gilles, B., Peifer, C., & Kluge, A. (2023). Defining human-AI teaming the human-centered way: A scoping review and network analysis. Frontiers in Artificial Intelligence, 6, 1250725. https://doi.org/10.3389/frai.2023.1250725 Open Google Scholar doi.org/10.5771/2944-3741-2026-1-50
  87. Birkstedt, T., Minkkinen, M., Tandon, A., & Mäntymäki, M. (2023). AI governance: Themes, knowledge gaps and future agendas. Internet Research, 33(7), 133–167. https://doi.org/10.1108/INTR-01-2022-0042 Open Google Scholar doi.org/10.5771/2944-3741-2026-1-50
  88. Brynjolfsson, E., Li, D., & Raymond, L. (2025). Generative AI at Work. The Quarterly Journal of Economics, 140(2), 889–942. https://doi.org/10.1093/qje/qjae044 Open Google Scholar doi.org/10.5771/2944-3741-2026-1-50
  89. Cabiddu, F., Lauro, S. D., Samaan, D., & Tursunbayeva, A. (o. J.). Governing AI in the World of Work: An International Review of 245 Ethics Guidelines. Available at SSRN. http://dx.doi.org/10.2139/ssrn.5397353 Open Google Scholar doi.org/10.5771/2944-3741-2026-1-50
  90. Camilleri, M. A. (2024). Artificial intelligence governance: Ethical considerations and implications for social responsibility. Expert Systems, 41(7), Article 7. https://doi.org/10.1111/exsy.13406 Open Google Scholar doi.org/10.5771/2944-3741-2026-1-50
  91. Corrêa, N. K., Galvão, C., Santos, J. W., Del Pino, C., Pinto, E. P., Barbosa, C., Massmann, D., Mambrini, R., Galvão, L., Terem, E., & De Oliveira, N. (2023). Worldwide AI ethics: A review of 200 guidelines and recommendations for AI governance. Patterns, 4(10), 100857. https://doi.org/10.1016/j.patter.2023.100857 Open Google Scholar doi.org/10.5771/2944-3741-2026-1-50
  92. Di Vito, J., & Trottier, K. (2022). A Literature Review on Corporate Governance Mechanisms: Past, Present, and Future*. Accounting Perspectives, 21(2), 207–235. https://doi.org/10.1111/1911-3838.12279 Open Google Scholar doi.org/10.5771/2944-3741-2026-1-50
  93. European Partliament (2024). Artificial Intelligence Act (Regulation (EU), 2024/1689) Open Google Scholar doi.org/10.5771/2944-3741-2026-1-50
  94. Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., Luetge, C., Madelin, R., Pagallo, U., Rossi, F., Schafer, B., Valcke, P., & Vayena, E. (2018). AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations. Minds and Machines, 28(4), 689–707. https://doi.org/10.1007/s11023-018-9482-5 Open Google Scholar doi.org/10.5771/2944-3741-2026-1-50
  95. Hagendorff, T. (2020). The Ethics of AI Ethics: An Evaluation of Guidelines. Minds and Machines, 30(1), Article 1. https://doi.org/10.1007/s11023-020-09517-8 Open Google Scholar doi.org/10.5771/2944-3741-2026-1-50
  96. Herrmann, T., & Pfeiffer, S. (2023). Keeping the organization in the loop: A socio-technical extension of human-centered artificial intelligence. AI & SOCIETY, 38(4), 1523–1542. https://doi.org/10.1007/s00146-022-01391-5 Open Google Scholar doi.org/10.5771/2944-3741-2026-1-50
  97. Hickman, E., & Petrin, M. (2021). Trustworthy AI and Corporate Governance: The EU’s Ethics Guidelines for Trustworthy Artificial Intelligence from a Company Law Perspective. European Business Organization Law Review, 22(4), Article 4. https://doi.org/10.1007/s40804-021-00224-0 Open Google Scholar doi.org/10.5771/2944-3741-2026-1-50
  98. Hickok, M. (2021). Lessons learned from AI ethics principles for future actions. AI and Ethics, 1(1), Article 1. https://doi.org/10.1007/s43681-020-00008-1 Open Google Scholar doi.org/10.5771/2944-3741-2026-1-50
  99. Jarzabkowski, P., & Paul Spee, A. (2009). Strategy‐as‐practice: A review and future directions for the field. International Journal of Management Reviews, 11(1), 69–95. https://doi.org/10.1111/j.1468-2370.2008.00250.x Open Google Scholar doi.org/10.5771/2944-3741-2026-1-50
  100. Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), Article 9. https://doi.org/10.1038/s42256-019-0088-2 Open Google Scholar doi.org/10.5771/2944-3741-2026-1-50
  101. Leonardi, P. M., & Treem, J. W. (2020). Behavioral Visibility: A new paradigm for organization studies in the age of digitization, digitalization, and datafication. Organization Studies, 41(12), 1601–1625. https://doi.org/10.1177/0170840620970728 Open Google Scholar doi.org/10.5771/2944-3741-2026-1-50
  102. Lombard, M., Snyder-Duch, J., & Bracken, C. C. (2002). Content Analysis in Mass Communication: Assessment and Reporting of Intercoder Reliability. Human Communication Research, 28(4), 587–604. https://doi.org/10.1111/j.1468-2958.2002.tb00826.x Open Google Scholar doi.org/10.5771/2944-3741-2026-1-50
  103. Maman, L., & Feldman, Y. (2025). Compliance and Effectiveness of Industry Self-Regulation: A Systematic Literature Review. Available at SSRN. http://dx.doi.org/10.2139/ssrn.5233166 Open Google Scholar doi.org/10.5771/2944-3741-2026-1-50
  104. Mäntymäki, M., Minkkinen, M., Birkstedt, T., & Viljanen, M. (2022a). Putting AI Ethics into Practice: The Hourglass Model of Organizational AI Governance (No. arXiv:2206.00335; Nummer arXiv:2206.00335). arXiv. https://doi.org/10.48550/arXiv.2206.00335 Open Google Scholar doi.org/10.5771/2944-3741-2026-1-50
  105. Mäntymäki, M., Minkkinen, M., Birkstedt, T., & Viljanen, M. (2022b). Defining organizational AI governance. AI and Ethics, 2(4), Article 4. https://doi.org/10.1007/s43681-022-00143-x Open Google Scholar doi.org/10.5771/2944-3741-2026-1-50
  106. Mayring, P., & Fenzl, T. (2014). Qualitative Inhaltsanalyse. In N. Baur & J. Blasius (Eds.), Handbuch Methoden der empirischen Sozialforschung (pp. 543–556). Springer. https://doi.org/10.1007/978-3-531-18939-0_38 Open Google Scholar doi.org/10.5771/2944-3741-2026-1-50
  107. Mayring, P. (2019). Qualitative Inhaltsanalyse – Abgrenzungen, Spielarten, Weiterentwicklungen. Forum Qualitative Social Research, 20(3), 15. https://doi.org/10.17169/fqs-20.3.3343 Open Google Scholar doi.org/10.5771/2944-3741-2026-1-50
  108. Meske, C., Bunde, E., Schneider, J., & Gersch, M. (2022). Explainable Artificial Intelligence: Objectives, Stakeholders, and Future Research Opportunities. Information Systems Management, 39(1), 53–63. https://doi.org/10.1080/10580530.2020.1849465 Open Google Scholar doi.org/10.5771/2944-3741-2026-1-50
  109. Mueller, B. (2022). Corporate Digital Responsibility. Business & Information Systems Engineering, 64(5), 689–700. https://doi.org/10.1007/s12599-022-00760-0 Open Google Scholar doi.org/10.5771/2944-3741-2026-1-50
  110. Nitsch, V., Rick, V., Kluge, A., & Wilkens, U. (2024). Human-centered approaches to AI-assisted work: The future of work? Zeitschrift Für Arbeitswissenschaft, 78(3), 261–267. https://doi.org/10.1007/s41449-024-00437-2 Open Google Scholar doi.org/10.5771/2944-3741-2026-1-50
  111. Orlikowski, W. J., & Scott, S. V. (2008). 10 Sociomateriality: Challenging the Separation of Technology, Work and Organization. Academy of Management Annals, 2(1), 433–474. https://doi.org/10.5465/19416520802211644 Open Google Scholar doi.org/10.5771/2944-3741-2026-1-50
  112. Page, M. J., McKenzie, J. E., Bossuyt, P. M., Boutron, I., Hoffmann, T. C., Mulrow, C. D., Shamseer, L., Tetzlaff, J. M., Akl, E. A., Brennan, S. E., Chou, R., Glanville, J., Grimshaw, J. M., Hróbjartsson, A., Lalu, M. M., Li, T., Loder, E. W., Mayo-Wilson, E., McDonald, S., … Moher, D. (2021). The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. BMJ, n71. https://doi.org/10.1136/bmj.n71 Open Google Scholar doi.org/10.5771/2944-3741-2026-1-50
  113. Papagiannidis, E., Mikalef, P., & Conboy, K. (2025). Responsible artificial intelligence governance: A review and research framework. The Journal of Strategic Information Systems, 34(2), 101885. https://doi.org/10.1016/j.jsis.2024.101885 Open Google Scholar doi.org/10.5771/2944-3741-2026-1-50
  114. Parker, S. K., & Grote, G. (2022). Automation, Algorithms, and Beyond: Why Work Design Matters More Than Ever in a Digital World. Applied Psychology, 71(4), 1171–1204. https://doi.org/10.1111/apps.12241 Open Google Scholar doi.org/10.5771/2944-3741-2026-1-50
  115. Prem, E. (2023). From ethical AI frameworks to tools: A review of approaches. AI and Ethics, 3(3), 699–716. https://doi.org/10.1007/s43681-023-00258-9 Open Google Scholar doi.org/10.5771/2944-3741-2026-1-50
  116. Prem, E. (2024). Approaches to Ethical AI. In H. Werthner, C. Ghezzi, J. Kramer, J. Nida-Rümelin, B. Nuseibeh, E. Prem, & A. Stanger (Hrsg.), Introduction to Digital Humanism (S. 225–239). Springer Nature Switzerland. https://doi.org/10.1007/978-3-031-45304-5_15 Open Google Scholar doi.org/10.5771/2944-3741-2026-1-50
  117. Schneider, J., Abraham, R., Meske, C., & Vom Brocke, J. (2023). Artificial Intelligence Governance For Businesses. Information Systems Management, 40(3), Article 3. https://doi.org/10.1080/10580530.2022.2085825 Open Google Scholar doi.org/10.5771/2944-3741-2026-1-50
  118. Schneider, J., Abraham, R., & Meske, C. (2024). Governance of generative artificial intelligence for companies. arXiv preprint arXiv:2403.08802. https://doi.org/10.48550/arXiv.2403.08802
  119. Seidl, D., Ma, S., & Splitter, V. (2024). What makes activities strategic: Toward a new framework for strategy‐as‐practice research. Strategic Management Journal, 45(12), 2395–2419. https://doi.org/10.1002/smj.3668
  120. Snyder, H. (2019). Literature review as a research methodology: An overview and guidelines. Journal of Business Research, 104, 333–339. https://doi.org/10.1016/j.jbusres.2019.07.039
  121. Stahl, B. C., Antoniou, J., Ryan, M., Macnish, K., & Jiya, T. (2022). Organisational responses to the ethical issues of artificial intelligence. AI & SOCIETY, 37(1), Article 1. https://doi.org/10.1007/s00146-021-01148-6
  122. Viljanen, M., & Parviainen, H. (2022). AI Applications and Regulation: Mapping the Regulatory Strata. Frontiers in Computer Science, 3, 779957. https://doi.org/10.3389/fcomp.2021.779957
  123. Von Krogh, G. (2018). Artificial Intelligence in Organizations: New Opportunities for Phenomenon-Based Theorizing. Academy of Management Discoveries, 4(4), 404–409. https://doi.org/10.5465/amd.2018.0084
  124. Whittington, R. (2006). Completing the Practice Turn in Strategy Research. Organization Studies, 27(5), 613–634. https://doi.org/10.1177/0170840606064101
  125. Widder, D. G., & Nafus, D. (2023). Dislocated accountabilities in the “AI supply chain”: Modularity and developers’ notions of responsibility. Big Data & Society, 10(1), Article 1. https://doi.org/10.1177/20539517231177620
  126. Wilkens, U., Lupp, D., & Langholf, V. (2023). Configurations of human-centered AI at work: Seven actor-structure engagements in organizations. Frontiers in Artificial Intelligence, 6, 1272159. https://doi.org/10.3389/frai.2023.1272159
  127. Wirtz, B. W., Weyerer, J. C., & Kehl, I. (2022). Governance of artificial intelligence: A risk and guideline-based integrative framework. Government Information Quarterly, 39(4), 101685. https://doi.org/10.1016/j.giq.2022.101685
  128. European Union. (2024). Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act). Official Journal of the European Union.
  129. Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., Luetge, C., Madelin, R., Pagallo, U., Rossi, F., Schafer, B., Valcke, P., & Vayena, E. (2018). AI4People – An ethical framework for a good AI society: Opportunities and risks. Minds and Machines, 28(4), 689–707. https://doi.org/10.1007/s11023-018-9482-5
  130. Mayer, R. C., Davis, J. H., & Schoorman, F. D. (1995). An integrative model of organizational trust. Academy of Management Review, 20(3), 709–734. https://doi.org/10.5465/amr.1995.9508080335
  131. Organisation for Economic Co-operation and Development. (2019). OECD AI principles. Adopted May 2019; updated May 2024. OECD. https://oecd.ai/en/ai-principles
  132. UNESCO. (2021). Recommendation on the ethics of artificial intelligence. Adopted 23 November 2021. United Nations Educational, Scientific and Cultural Organization.
  133. VanderWeele, T. J. (2017). On the promotion of human flourishing. Proceedings of the National Academy of Sciences of the United States of America, 114(31), 8148–8156. https://doi.org/10.1073/pnas.1702996114
  134. Heikkilae, M. (2022, December 12). The viral AI avatar app Lensa undressed me – without my consent. MIT Technology Review. https://www.technologyreview.com/2022/12/12/1064751/the-viral-ai-avatar-app-lensa-undressed-me-without-my-consent
  135. IDA. (2025). International Data-Based Systems Agency IDA at the UN: Supporters of IDA. IDA. Retrieved October 9, 2025, from https://idaonline.ch/supporters-of-ida/
  136. Johnson, D. G. (2006). Computer systems: Moral entities but not moral agents. Ethics and Information Technology, 8(4), 195–204. https://doi.org/10.1007/s10676-006-9111-5
  137. Kirchschlaeger, P. G. (2021). Digital Transformation and Ethics: Ethical Considerations on the Robotization and Automation of Society and the Economy and the Use of Artificial Intelligence. Nomos.
  138. Kirchschlaeger, P. G. (2022). Ethische KI? Datenbasierte Systeme (DS) mit Ethik. HMD-Praxis der Wirtschaftsinformatik, 59(2), 482–494. https://doi.org/10.1365/s40702-022-00843-2
  139. Kirchschlaeger, P. G. (2023). Ethical Decision-Making. Nomos. https://doi.org/10.5771/9783748918684
  140. Kirchschlaeger, P. G. (2024a, April 11). In an era of digital disruptions, ethics can’t be an afterthought – Part 1. Business and Human Rights Journal Blog. https://www.cambridge.org/core/blog/2024/04/11/in-an-era-of-digital-disruptions-ethics-cant-be-an-afterthought/
  142. Kirchschlaeger, P. G. (2024b, April 12). In an era of digital disruptions, ethics can’t be an afterthought – Part 2. Business and Human Rights Journal Blog. https://www.cambridge.org/core/blog/2024/04/12/in-an-era-of-digital-disruptions-ethics-cant-be-an-afterthought-part-2/
  143. Kirchschlaeger, P. G. (2024c). Artificial intelligence and the complexity of ethics. Asian Horizons, 14(3), 375–389. https://dvkjournals.in/index.php/ah/article/view/4590/3752
  144. Kirchschlaeger, P. G. (2024d, December 3). Protecting children from Anti-Social media. Project Syndicate. https://www.project-syndicate.org/commentary/australia-ban-on-children-using-social-media-should-be-emulated-by-peter-g-kirchschlager-2024-12
  145. Kirchschlaeger, P. G. (2024e). The need for an International Data-Based Systems Agency (IDA) at the UN: governing “AI” globally by keeping the planet sustainably and protecting the weaker from the powerful. Journal of AI Humanities, 18, 213–248.
  146. Kirchschlaeger, P. G. (2024f). An International Data-Based Systems Agency IDA: striving for a peaceful, sustainable, and Human Rights-Based future. Philosophies, 9(3), 73. https://doi.org/10.3390/philosophies9030073
  147. Kirchschlaeger, P. G. (2024g). Securing a peaceful, sustainable, and humane future through an International Data-based Systems Agency (IDA) at the UN. Data & Policy, 6(78). https://doi.org/10.1017/dap.2024.38
  148. Kirchschlaeger, P. G. (2025). Artificial Intelligence – an Analysis from the Rights of the Child Perspective. Berkeley Journal of International Law. https://www.berkeleyjournalofinternationallaw.com/post/artificial-intelligence-an-analysis-from-the-rights-of-the-child-perspective
  149. Lensa. (2025). Lensa AI: Influencers’ best kept secret. Lensa App. Retrieved October 9, 2025, from https://lensa.app/
  150. Misselhorn, C. (2018). Grundfragen der Maschinenethik. Reclam.
  151. Snow, O. (2022, December 7). ‘Magic Avatar’ app Lensa generated nudes from my childhood photos. The dreamy picture-editing AI is a nightmare waiting to happen. Wired. https://www.wired.com/story/lensa-artificial-intelligence-csem/
  152. Yampolskiy, R. V. (2013). Artificial Intelligence Safety Engineering: Why Machine Ethics Is a Wrong Approach. In V. Müller (Ed.), Philosophy and Theory of Artificial Intelligence. Studies in Applied Philosophy, Epistemology and Rational Ethics (pp. 389–396). Springer. https://doi.org/10.1007/978-3-642-31674-6_2
  153. Alavi, M., & Leidner, D. E. (2001). Review: Knowledge Management and Knowledge Management Systems: Conceptual Foundations and Research Issues. MIS Quarterly, 25(1), 107–136. https://doi.org/10.2307/3250961
  154. Caddy, I. (2000). Intellectual capital: recognizing both assets and liabilities. Journal of Intellectual Capital, 1(2), 129–146.
  155. Carayannis, E. G., & Morawska-Jancelewicz, J. (2022). The Futures of Europe: Society 5.0 and Industry 5.0 as Driving Forces of Future Universities. Journal of the Knowledge Economy, 13(4), 3445–3471. https://doi.org/10.1007/s13132-021-00854-2
  156. Dalkir, K. (2025). Handbook of inclusive knowledge management. CRC Press, Abingdon, Oxon.
  157. Davenport, T. H. (1994). Saving IT's Soul: Human Centered Information Management. Harvard Business Review, 72(2), 119–131.
  158. Davenport, T., & Prusak, L. (1998). Working Knowledge: How Organizations Manage What They Know. Ubiquity 2000, August, Article 6 (August 1 – August 31, 2000). https://doi.org/10.1145/347634.348775
  159. de Holan, P. M., & Phillips, N. (2004). Organizational forgetting as strategy. Strategic Organization, 2(4), 423–433. https://doi.org/10.1177/1476127004047620
  160. Du Plessis, M. (2007). The role of knowledge management in innovation. Journal of Knowledge Management, 11(4), 20–29.
  161. Durst, S. (2024). A plea for responsible and inclusive knowledge management at the world level. VINE Journal of Information and Knowledge Management Systems, 54(1), 211–219. https://doi.org/10.1108/VJIKMS-09-2021-0204
  162. Durst, S., & Foli, S. (2024). Responsible and inclusive knowledge management made concrete. In Handbook of Inclusive Knowledge Management: Ensuring Inclusivity, Diversity, and Equity in Knowledge Processing Activities (pp. 1–12). CRC Press.
  163. Durst, S., & Khadir, Y. (2025). Towards Responsible Knowledge Management. In S. Durst & Y. Khadir (Eds.), Knowledge Management at the Crossroads: Navigating Risks and Benefits (pp. 79–88). Springer, Cham.
  164. Durst, S., & Zieba, M. (2018). Mapping knowledge risks: towards a better understanding of knowledge management. Knowledge Management Research & Practice, 17(1), 1–13. https://doi.org/10.1080/14778238.2018.1538603
  165. Dyllick, T., & Muff, K. (2016). Clarifying the Meaning of Sustainable Business: Introducing a Typology From Business-as-Usual to True Business Sustainability. Organization & Environment, 29(2), 156–174. https://doi.org/10.1177/1086026615575176
  166. Ferreira, J., Mueller, J., & Papa, A. (2020). Strategic knowledge management: theory, practice and future challenges. Journal of Knowledge Management, 24(2), 121–126. https://doi.org/10.1108/JKM-07-2018-0461
  167. McGill, M. E., & Slocum, J. W. (1993). Unlearning the organization. Organizational Dynamics, 22(2), 67–79. https://doi.org/10.1016/0090-2616(93)90054-5
  168. Merhi, M. I. (2023). An evaluation of the critical success factors impacting artificial intelligence implementation. International Journal of Information Management, 69, 102545.
  169. Mökander, J., Sheth, M., Gersbro-Sundler, M., Blomgren, P., & Floridi, L. (2022). Challenges and best practices in corporate AI governance: Lessons from the biopharmaceutical industry. Frontiers in Computer Science, 4, 1068361. https://doi.org/10.3389/fcomp.2022.1068361
  170. Ng, D. T. K., Leung, J. K. L., Chu, S. K. W., & Qiao, M. S. (2021). Conceptualizing AI literacy: An exploratory review. Computers and Education: Artificial Intelligence, 2, 100041.
  171. Schneider, J., Abraham, R., Meske, C., & Vom Brocke, J. (2023). Artificial Intelligence Governance For Businesses. Information Systems Management, 40(3), 229–249. https://doi.org/10.1080/10580530.2022.2085825
  172. Williams, C., & Durst, S. (2019). Exploring the transition phase in offshore outsourcing: Decision making amidst knowledge at risk. Journal of Business Research, 103, 460–471.
  173. Zack, M. H. (2002). Developing a knowledge strategy. California Management Review, 41(3), 125–223.
  174. Baudrillard, J. (1981). Simulacra and simulation. University of Michigan Press.
  175. California Legislature. (2025). Senate Bill No. 243: Companion chatbots. https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=202520260SB243
  176. Chandra Kruse, L., Bergener, K., Conboy, K., Lundström, J. E., Maedche, A., Sarker, S., Seeber, I., Stein, A., & Tømte, C. E. (2023). Understanding the Digital Companions of Our Future Generation. Communications of the Association for Information Systems, 52, 465–479.
  177. European Parliament. (2025, October 16). New EU measures needed to make online services safer for minors. https://www.europarl.europa.eu/news/en/press-room/20251013IPR30892/new-eu-measures-needed-to-make-online-services-safer-for-minors
  178. Huang, S., Lai, X., Ke, L., Li, Y., Wang, H., Zhao, X., ... & Wang, Y. (2024). AI technology panic—is AI dependence bad for mental health? A cross-lagged panel model and the mediating roles of motivations for AI use among adolescents. Psychology Research and Behavior Management, 1087–1102.
  179. Langhof, J. G., & Güldenberg, S. (2022). The rise of the robot servant-leaders? Next generation leadership. The International Journal of Servant-Leadership, 16(1), 381–424.
  180. McKinsey (2025). Superagency in the workplace: Empowering people to unlock AI’s full potential (AI in the workplace: A report for 2025). https://www.mckinsey.com/capabilities/tech-and-ai/our-insights/superagency-in-the-workplace-empowering-people-to-unlock-ais-full-potential-at-work
  181. Naddaf, M. (2025). AI chatbots are sycophants – and it's harming science. Nature, 647, 13–14.
  182. Papagiannidis, E., Mikalef, P., & Conboy, K. (2025). Responsible artificial intelligence governance: A review and research framework. The Journal of Strategic Information Systems, 34(2), 101885.
  183. Richet, J. L. (2025). AI companionship or digital entrapment? Investigating the impact of anthropomorphic AI-based chatbots. Journal of Innovation & Knowledge, 10(6), 100835.
  184. Zao-Sanders, M. (2025, April 9). How People Are Really Using Gen AI in 2025. Harvard Business Review. https://hbr.org/2025/04/how-people-are-really-using-gen-ai-in-2025
