The scientific journal Morals & Machines addresses, in a pluralistic manner, the question of how algorithms in general and artificial intelligence (AI) in particular are changing society, the economy and the working world, the media, the healthcare system, technology, language, gender relations, and art and culture. It investigates which ethical risks algorithms and AI give rise to, what potential they offer, and what challenges they pose to legal systems worldwide in relation to technological applications, robotics and the integration of AI. The journal examines these questions from an interdisciplinary, global and critical perspective at the interface between the humanities, the social sciences, law and computer science.
There is growing interest in explanations as an ethical and technical solution to the problem of 'opaque' AI systems. In this essay we point out that technical and ethical approaches to Explainable AI (XAI) have different assumptions and aims....
Advances in AI technology affect knowledge work in diverse fields, including healthcare, engineering, and management. Although automation and machine support can increase efficiency and lower costs, they can also, as an unintended consequence, deskill...
The use of digital technologies for workplace monitoring renders organizational responsibilities murky and opaque. However, clear responsibility for monitoring practices is key for both legal compliance and potential liability, as well as for...
In this paper I approach the problem of machine opacity in law by understanding it as a problem that revolves around the underlying philosophical tension between description and prescription in law and legal theory. I will use the...
Over recent years, the EU has increasingly looked at the regulation of various forms of automation and the use of algorithms. For recommender systems specifically, two recent legislative proposals by the European Commission, the Digital Services Act...
What does the gambling industry have in common with the digital economy? Silicon Valley has learned from Las Vegas to drive “user engagement” on platforms, such as Facebook and Twitter, and in gaming. These platforms rely on the same...
In this intervention, we discuss to what extent the term “decision” is an adequate description of what algorithms actually do. Although an algorithm's calculations may be perceived as a decision, or serve as an important basis for one, we argue,...
This article explores the use of participatory art and technology workshops as an approach to create more diverse and inclusive modes of engagement in the design of digital technologies. Taking the starting point in diverse works of science fiction,...
Casey, B., Farhangi, A. & Vogl, R. (2019). Rethinking Explainable Machines: The GDPR's 'Right to Explanation' Debate and the Rise of Algorithmic Audits in Enterprise. Berkeley Technology Law Journal, Vol. 34, available at SSRN: https://ssrn.com/abstract=3143325
European Commission (2021). Proposal for a Regulation of the European Parliament and of the Council laying down harmonized rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts. COM/2021/206 final.
Goanta, C., & Spanakis, G. (2020). Influencers and Social Media Recommender Systems: Unfair Commercial Practices in EU and US Law. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3592000
Rudin, C., & Radin, J. (2019). Why are we using black box models in AI when we don’t need to? A lesson from an explainable AI competition. Harvard Data Science Review, 1(2). doi: https://doi.org/10.1162/99608f92.5a8a3a3d
Wanner, J., Herm, L. V., & Janiesch, C. (2020). How much is the black box? The value of explainability in machine learning models. ECIS 2020 Research-in-Progress Papers. 85. https://aisel.aisnet.org/ecis2020_rip/85
Watson, D. S., Krutzinna, J., Bruce, I. N., Griffiths, C. E. M., McInnes, I. B., Barnes, M. R. & Floridi, L. (2019). Clinical applications of machine learning algorithms: beyond the black box. BMJ, 364, l886. doi:10.1136/bmj.l886
Webb, M. E., Fluck, A., Magenheim, J., Malyn-Smith, J., Waters, J., Deschênes, M., & Zagami, J. (2020). Machine learning for human learners: opportunities, issues, tensions and threats. Educational Technology Research and Development, 1-22.
Cecez-Kecmanovic, D., R. D. Galliers, O. Henfridsson, S. Newell and R. Vidgen (2014). "The Sociomateriality of Information Systems: Current Status, Future Directions", MIS Quarterly, 38, 809-830.
Dastin, J. (2018) 'Amazon scraps secret AI recruiting tool that showed bias against women', Reuters. Available at: https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G (Accessed 20 May 2019).
Doshi-Velez, F., M. Kortz, R. Budish, C. Bavitz, S. Gershman, D. O'Brien, S. Schieber, J. Waldo, D. Weinberger and A. Wood (2017). "Accountability of AI under the law: The role of explanation", arXiv preprint arXiv:1711.01134.
Gunning, D. (2017) Explainable Artificial Intelligence (XAI): Defense Advanced Research Projects Agency. Available at: https://www.darpa.mil/program/explainable-artificial-intelligence (Accessed: 18 May 2019).
Langley, A., C. Smallman, H. Tsoukas and A. H. Van de Ven (2013). "Process studies of change in organization and management: unveiling temporality, activity, and flow", Academy of Management Journal, 56, 1-13.
Ptaszynski, M., P. Dybala, W. Shi, R. Rzepka and K. Araki (2009). "Towards context aware emotional intelligence in machines: computing contextual appropriateness of affective states", In Proceedings of the Twenty-First International Joint Conference on Artificial Intelligence (IJCAI-09), pp. 1469-1474.
Nakatani, L. H. and J. A. Rohrlich (1983). "Soft machines: A philosophy of user-computer interface design", In Proceedings of the SIGCHI conference on Human Factors in Computing Systems, pp. 19-23.
Santiago, D. and T. Escrig (2017) Why explainable AI must be central to responsible AI: Accenture. Available at: https://www.accenture.com/us-en/blogs/blogs-why-explainable-ai-must-central-responsible-ai (Accessed: 1/6/2019).
Akata, Z., Balliet, D., de Rijke, M., Dignum, F., Dignum, V., Eiben, G., Fokkens, A., Grossi, D., Hindriks, K., Hoos, H., Hung, H., Jonker, C., Monz, C., Neerincx, M., Oliehoek, F., Prakken, H., Schlobach, S., van der Gaag, L., van Harmelen, F., … Welling, M. (2020). A Research Agenda for Hybrid Intelligence: Augmenting Human Intellect With Collaborative, Adaptive, Responsible, and Explainable Artificial Intelligence. Computer, 53(8), 18–28. https://doi.org/10.1109/MC.2020.2996587
Arnold, V., and Sutton, S.G. (1998). The theory of technology dominance: Understanding the impact of intelligent decision aids on decision makers’ judgments. Advances in Accounting Behavioral Research, 1(3), 175–194.
Basu, S., Garimella, A., Han, W., & Dennis, A. (2021, January). Human Decision Making in AI Augmented Systems: Evidence from the Initial Coin Offering Market. Proceedings of the 54th Hawaii International Conference on System Sciences, 176.
Beaudouin-Lafon, M. (2004) Designing interaction, not interfaces. Proceedings of the working conference on Advanced visual interfaces, 15-22. https://dl.acm.org/doi/pdf/10.1145/989863.989865
Bednar, P. M., & Welch, C. (2020). Socio-Technical Perspectives on Smart Working: Creating Meaningful and Sustainable Systems. Information Systems Frontiers, 22(2), 281–298. https://doi.org/10.1007/s10796-019-09921-1
Bell, S. E., Hullinger, A., & Brislen, L. (2015). Manipulated Masculinities: Agribusiness, Deskilling, and the Rise of the Businessman‐Farmer in the United States. Rural Sociology, 80(3), 285-313.
Bisen, V.S. (2020). What is Human in the Loop Machine Learning: Why & How Used in AI?. Vshingbisen. https://medium.com/vsinghbisen/what-is-human-in-the-loop-machine-learning-why-how-used-in-ai-60c7b44eb2c0
Bresnahan, T. F., Brynjolfsson, E., & Hitt, L. M. (2002). Information technology, workplace organization, and the demand for skilled labor: Firm-level evidence. The Quarterly Journal of Economics, 117(1), 339-376.
Brugger, F., & Gehrke, C. (2018). Skilling and deskilling: Technological change in classical economic theory and its empirical evidence. Theory and Society, 47(5), 663–689. https://doi.org/10.1007/s11186-018-9325-7
Buoy Health. (2018). Buoy Health Partners With Boston Children's Hospital To Improve The Way Parents Currently Assess Their Children's Symptoms Online. https://www.prnewswire.com/news-releases/buoy-health-partners-with-boston-childrens-hospital-to-improve-the-way-parents-currently-assess-their-childrens-symptoms-online-300693055.html
Collins, A., Brown, J. S., & Newman, S. E. (1989). Cognitive apprenticeship: Teaching the crafts of reading, writing, and mathematics. In L. B. Resnick (Ed.), Knowing, learning, and instruction: Essays in honor of Robert Glaser (pp. 453–494). Lawrence Erlbaum Associates, Inc.
Coroamă, V. C., & Pargman, D. (2020, June). Skill rebound: On an unintended effect of digitalization. Proceedings of the 7th International Conference on ICT for Sustainability, 213-219. https://doi.org/10.1145/3401335.3401362
Dellermann, D., Calma, A., Lipusch, N., Weber, T., Weigel, S., & Ebel, P. (2019). The future of human-AI collaboration: a taxonomy of design knowledge for hybrid intelligence systems. Proceedings of the 52nd Hawaii International Conference on System Sciences. arXiv preprint arXiv:2105.03354
Dellermann, D., Ebel, P., Söllner, M., & Leimeister, J. M. (2019). Hybrid Intelligence. Business & Information Systems Engineering, 61(5), 637–643. https://doi.org/10.1007/s12599-019-00595-2
Dellermann, D., Lipusch, N., Ebel, P., & Leimeister, J. M. (2019). Design principles for a hybrid intelligence decision support system for business model validation. Electronic Markets, 29(3), 423–441. https://doi.org/10.1007/s12525-018-0309-2
Dikmen, M., & Burns, C. (2017). Trust in autonomous vehicles: The case of Tesla Autopilot and Summon. 2017 IEEE International Conference on Systems, Man, and Cybernetics (SMC), 1093–1098. https://doi.org/10.1109/SMC.2017.8122757
Dilla, W. N., & Stone, D. N. (1997). Representations as decision aids: The asymmetric effects of words and numbers on auditors' inherent risk judgments. Decision Sciences, 28(3), 709-743.
Ferris, T., Sarter, N., & Wickens, C. D. (2010). Chapter 15 - Cockpit Automation: Still Struggling to Catch Up…. In E. Salas & D. Maurino (Eds.), Human Factors in Aviation (2nd ed, pp. 479–503). Academic Press. https://doi.org/10.1016/B978-0-12-374518-7.00015-8
Frey, C. B., & Osborne, M. A. (2017). The future of employment: How susceptible are jobs to computerisation? Technological Forecasting and Social Change, 114, 254–280. https://doi.org/10.1016/j.techfore.2016.08.019
Lyytinen, K., Nickerson, J. V., & King, J. L. (2020). Metahuman systems = humans + machines that learn. Journal of Information Technology, 026839622091591. https://doi.org/10.1177/0268396220915917
Mascha, M. F., & Smedley, G. (2007). Can computerized decision aids do “damage”? A case for tailoring feedback and task complexity based on task experience. International Journal of Accounting Information Systems, 8(2), 73–91. https://doi.org/10.1016/j.accinf.2007.03.001
Nahavandi, S. (2017). Trusted Autonomy Between Humans and Robots: Toward Human-on-the-Loop in Robotics and Autonomous Systems. IEEE Systems, Man, and Cybernetics Magazine, 3(1), 10–17. https://doi.org/10.1109/MSMC.2016.2623867
National Highway Traffic Safety Administration (NHTSA). (2013). Preliminary statement of policy concerning automated vehicles. Washington DC, 1, 14. http://www.nhtsa.gov/staticfiles/rulemaking/pdf/Automated_Vehicles_Policy.pdf
Nibert, D. (2011). Origins and Consequences of the Animal Industrial Complex. In S. Best, R. Kahn, A. J. Nocella, & P. McLaren (Eds.), The Global Industrial Complex: Systems of Domination (pp. 208). Rowman & Littlefield. ISBN 978-0739136980
Noga, T., & Arnold, V. (2002). Do tax decision support systems affect the accuracy of tax compliance decisions?. International Journal of Accounting Information Systems, 3(3), 125-144.
Paré, G., Sicotte, C., & Jacques, H. (2006). The effects of creating psychological ownership on physicians' acceptance of clinical information systems. Journal of the American Medical Informatics Association, 13(2), 197-205.
Pol, E. and Reveley, J., 2017. Robot induced technological unemployment: Towards a youth-focused coping strategy. Psychosociological Issues in Human Resource Management, 5(2), pp.169-186.
Quintana, C., Reiser, B. J., Davis, E. A., Krajcik, J., Fretz, E., Duncan, R. G., Kyza, E., Edelson, D., & Soloway, E. (2004). A Scaffolding Design Framework for Software to Support Science Inquiry. Journal of the Learning Sciences, 13(3), 337–386. https://doi.org/10.1207/s15327809jls1303_4
Prakash, N., & Mathewson, K. W. (2020). Conceptualization and Framework of Hybrid Intelligence Systems. NeurIPS 2020 Workshop on Human And Model in the Loop Evaluation and Training Strategies. https://openreview.net/pdf/b2ded20e201d9d15a39193b3154342de7b6ef81a.pdf
Rinard, R. G. (1996). Technology, deskilling, and nurses: The impact of the technologically changing environment. Advances in Nursing Science, 18(4), 60–69. https://doi.org/10.1097/00012272-199606000-00008
Rinta-Kahila, T., Penttinen, E., Salovaara, A., & Soliman, W. (2018). Consequences of Discontinuing Knowledge Work Automation – Surfacing of Deskilling Effects and Methods of Recovery. Proceedings of the 51st Hawaii International Conference on System Sciences, 5244 - 5253. URI: http://hdl.handle.net/10125/50543
Stone, G. D., Brush, S., Busch, L., Cleveland, D. A., Dove, M. R., Herring, R. J., ... & Stone, G. D. (2007). Agricultural deskilling and the spread of genetically modified cotton in Warangal. Current Anthropology, 48(1), 67-103.
Sutton, S. G., Arnold, V., & Holt, M. (2018). How Much Automation Is Too Much? Keeping the Human Relevant in Knowledge Work. Journal of Emerging Technologies in Accounting, 15(2), 15–25. https://doi.org/10.2308/jeta-52311
Travis, G. (2019). How the Boeing 737 Max disaster looks to a Software Developer. IEEE Spectrum. https://spectrum.ieee.org/how-the-boeing-737-max-disaster-looks-to-a-software-developer.
Trösterer, S., Gärtner, M., Mirnig, A., Meschtscherjakov, A., McCall, R., Louveton, N., ... & Engel, T. (2016, October). You never forget how to drive: driver skilling and deskilling in the advent of autonomous vehicles. Proceedings of the 8th international conference on automotive user interfaces and interactive vehicular applications, 209-216.
Trunk, A., Birkel, H., & Hartmann, E. (2020). On the current state of combining human and artificial intelligence for strategic organizational decision making. Business Research, 13(3), 875–919. https://doi.org/10.1007/s40685-020-00133-x
Wang, D., Maes, P., Ren, X., Shneiderman, B., Shi, Y., & Wang, Q. (2021). Designing AI to Work WITH or FOR People? Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems, 1–5. https://doi.org/10.1145/3411763.3450394
Wilmer, H. H., Sherman, L. E., & Chein, J. M. (2017). Smartphones and cognition: A review of research exploring the links between mobile technology habits and cognitive functioning. Frontiers in psychology, 8, 605.
Wortmann, C., Fischer, P. M., & Reinecke, S. (2015). Too much of a good thing? How Big Data changes managerial decision making. 36th Annual Conference of the Society for Judgment and Decision Making (SJDM). https://www.alexandria.unisg.ch/245736/
Zihsler, J., Hock, P., Walch, M., Dzuba, K., Schwager, D., Szauer, P., & Rukzio, E. (2016). Carvatar: Increasing Trust in Highly-Automated Driving Through Social Cues. Adjunct Proceedings of the 8th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, 9–14. https://doi.org/10.1145/3004323.3004354
Ananny, M., Crawford, K. (2016). Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability. New Media & Society, 20(3), 973–989.
Angrave, D., Charlwood, A., Kirkpatrick, I., Lawrence, M., Stuart, M. (2016). HR and analytics: why HR is set to fail the big data challenge. Human Resource Management Journal, 26(1), 1–11.
Ebert, I., Wildhaber, I., & Adams-Prassl, J. (2021). Big Data in the workplace: Privacy Due Diligence as a human rights-based approach to employee privacy protection. Big Data & Society, 1–14. https://doi.org/10.1177/20539517211013051.
Galasso, A., Luo, H. (2018). Punishing Robots: Issues in the Economics of Tort Liability and Innovation in Artificial Intelligence. In A. Agrawal, J. Gans, & A. Goldfarb (Eds.), The Economics of Artificial Intelligence: An Agenda (pp. 493–504). University of Chicago Press.
Gillespie, P. (Host) (2015, April 22). AI & Robotics Working Group, Santa Clara County Bar Association, Employment and Labor Law Issues Arising from the Development and Use of Robotics in the Workplace [Audio podcast episode]. https://app.box.com/s/idpfm3glxyqcqeraumas7tegn42zok5x.
Mittelstadt, B. D., Allo, P., Taddeo, M., Wachter, S., Floridi, L. (2016). The ethics of algorithms: Mapping the debate. Big Data & Society, 3(2), 1–21. https://doi.org/10.1177/2053951716679679.
Schafheitle, S. D., Weibel, A., Rickert, A. (2021). The Bermuda Triangle of Leadership in the AI Era? Emerging Trust Implications from “Two-Leader-Situations” in the Eyes of Employees. Proceedings of the 54th Hawaii International Conference on System Sciences (HICSS). 5473–5482. DOI:10.24251/HICSS.2021.665.
Thelisson E. (2017). Towards Trust, Transparency and Liability in AI/AS Systems. In C. Sierra (Ed.), Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence (IJCAI-17) (pp. 5215–5216). International Joint Conferences on Artificial Intelligence.
Trindel, K. for U.S. Equal Employment Opportunity Commission (2016). Big data in the workplace, Written Testimony. 13 October. https://www.eeoc.gov/meetings/meeting-october-13-2016-big-data-workplace-examining-implications-equal-employment/trindel%2C%20phd.
Wachter S., Mittelstadt, B., Floridi, L. (2017). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2), 76–99.
Wildhaber, I., Lohmann, M. and Kasper, G. (2019): Diskriminierung durch Algorithmen – Überlegungen zum schweizerischen Recht am Beispiel prädiktiver Analytik am Arbeitsplatz, ZSR 2019 I, 459, 479 f.
European Commission. (2019). Liability for Artificial Intelligence and other emerging digital technologies. Report from the Expert Group on Liability and New Technologies – New Technologies Formation. https://op.europa.eu/de/publication-detail/-/publication/1c5e30be-1197-11ea-8c1f-01aa75ed71a1.
European Commission. (2020a). White Paper on Artificial Intelligence - A European approach to excellence and trust. https://ec.europa.eu/info/sites/default/files/commission-white-paper-artificial-intelligence-feb2020_en.pdf, 16 f.
European Commission. (2020b). Report on the safety and liability implications of Artificial Intelligence, the Internet of Things and robotics. https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:52020DC0064&from=EN.
European Parliament. (2017). European Parliament resolution of 16 February 2017 with recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)). https://www.europarl.europa.eu/doceo/document/TA-8-2017-0051_EN.html.
European Parliament. (2020a). European Parliament resolution of 20 October 2020 with recommendations to the Commission on a civil liability regime for artificial intelligence (2020/2014(INL)). https://www.europarl.europa.eu/doceo/document/TA-9-2020-0276_EN.html.
European Parliament. (2020b). European Parliament resolution of 20 October 2020 with recommendations to the Commission on a civil liability regime for artificial intelligence (2020/2014(INL)). https://www.europarl.europa.eu/doceo/document/TA-9-2020-0276_EN.html.
United Kingdom House of Lords. (2018). Artificial Intelligence Committee, AI in the UK: ready, willing and able?. https://publications.parliament.uk/pa/ld201719/ldselect/ldai/100/100.pdf.
United Nations Human Rights Council. (2021). The right to privacy in the digital age: Report of the United Nations High Commissioner for Human Rights. Annual report of the United Nations High Commissioner for Human Rights and reports of the Office of the High Commissioner and the Secretary-General. https://www.ohchr.org/EN/NewsEvents/Pages/DisplayNews.aspx?NewsID=27469&LangID=E.
United Nations Human Rights. (2020a). Key Characteristics of Business Respect for Human Rights. https://www.ohchr.org/Documents/Issues/Business/B-Tech/key-characteristics-business-respect.pdf.
United Nations Human Rights. (2021). Designing and implementing effective company-based grievance mechanisms. https://www.ohchr.org/Documents/Issues/Business/B-Tech/access-to-remedy-company-based-grievance-mechanisms.pdf
United States Executive Office of the President. (2016). Artificial intelligence, automation and the economy. https://obamawhitehouse.archives.gov/sites/whitehouse.gov/files/documents/Artificial-Intelligence-Automation-Economy.PDF.
Attwooll, B. (1998). “Legal idealism”, Routledge Encyclopedia of Philosophy, Taylor and Francis, doi: 10.4324/9780415249126-T020-1, https://www.rep.routledge.com/articles/thematic/legal-idealism/v-1 (accessed 04/06/21).
Rutherford, M. 2010. “Spinoza’s Conception of Law: metaphysics and ethics”, in Melamed and Rosenthal (eds.) Spinoza’s Theological-Political Treatise, A Critical Guide, Cambridge: Cambridge University Press.
Spinoza, B. (1670 and 1677). Theological-Political Treatise and Ethics. In E. Curley (ed. and trans.) (1985 and 2016), The Collected Works of Spinoza, Vols. I and II. Princeton, NJ: Princeton University Press.
Alfano, M., Fard, A. E., Carter, J. A., Clutton, P., & Klein, C. (2020). Technologically scaffolded atypical cognition: The case of YouTube’s recommender system. Synthese. https://doi.org/10.1007/s11229-020-02724-x
Ananny, M., & Crawford, K. (2018). Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability. New Media & Society, 20(3), 973–989. https://doi.org/10.1177/1461444816676645
Appelman, N., Quintais, J., & Fahy, R. (2021, May 31). Article 12 DSA: Will platforms be required to apply EU fundamental rights in content moderation decisions? DSA Observatory. https://dsa-observatory.eu/2021/05/31/article-12-dsa-will-platforms-be-required-to-apply-eu-fundamental-rights-in-content-moderation-decisions/
Article 19. (2021a, May 14). EU: Regulation of recommender systems in the Digital Services Act. https://www.article19.org/resources/eu-regulation-of-recommender-systems-in-the-digital-services-act/
Article 19. (2021b, May 21). EU: Due diligence obligations in the proposed Digital Services Act. https://www.article19.org/resources/eu-due-diligence-obligations-in-the-proposed-digital-services-act/
Belkin, N. J., & Croft, W. B. (1992). Information filtering and information retrieval: Two sides of the same coin? Communications of the ACM, 35(12), 29–38. https://doi.org/10.1145/138859.138861
Bellogín, A., & Said, A. (2019). Information Retrieval and Recommender Systems. In A. Said & V. Torra (Eds.), Data Science in Practice (Vol. 46, pp. 79–96). Springer International Publishing. https://doi.org/10.1007/978-3-319-97556-6_5
Council of Europe. (2019). Declaration by the Committee of Ministers on the manipulative capabilities of Algorithmic processes (Adopted by the Committee of Ministers on 13 February 2019 at the 1337th meeting of the Ministers’ Deputies).
EU Disinfo Lab. (2021, April 1). How the Digital Services Act (DSA) Can Tackle Disinformation. https://www.disinfo.eu/advocacy/how-the-digital-services-act-(dsa)-can-tackle-disinformation/
European Data Protection Supervisor (EDPS). (2021). Opinion 1/2021 on the Proposal for a Digital Services Act. https://edps.europa.eu/system/files/2021-02/21-02-10-opinion_on_digital_services_act_en.pdf
Gomez-Uribe, C. A., & Hunt, N. (2016). The Netflix Recommender System: Algorithms, Business Value, and Innovation. ACM Transactions on Management Information Systems, 6(4), 1–19. https://doi.org/10.1145/2843948
Gorwa, R., Binns, R., & Katzenbach, C. (2020). Algorithmic content moderation: Technical and political challenges in the automation of platform governance. Big Data & Society, 7(1), 205395171989794. https://doi.org/10.1177/2053951719897945
Helberger, N., Karppinen, K., & D’Acunto, L. (2018). Exposure diversity as a design principle for recommender systems. Information, Communication & Society, 21(2), 191–207. https://doi.org/10.1080/1369118X.2016.1271900
Helberger, N., van Drunen, M., Vrijenhoek, S., & Möller, J. (2021). Regulation of news recommenders in the Digital Services Act: Empowering David against the Very Large Online Goliath. Internet Policy Review. https://policyreview.info/articles/news/regulation-news-recommenders-digital-services-act-empowering-david-against-very-large
Jeckmans, A. J. P., Beye, M., Erkin, Z., Hartel, P., Lagendijk, R. L., & Tang, Q. (2013). Privacy in Recommender Systems. In N. Ramzan, R. van Zwol, J.-S. Lee, K. Clüver, & X.-S. Hua (Eds.), Social Media Retrieval (pp. 263–281). Springer London. https://doi.org/10.1007/978-1-4471-4555-4_12
Krebs, L. M., Alvarado Rodriguez, O. L., Dewitte, P., Ausloos, J., Geerts, D., Naudts, L., & Verbert, K. (2019). Tell Me What You Know: GDPR Implications on Designing Transparency and Accountability for News Recommender Systems. Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems, 1–6. https://doi.org/10.1145/3290607.3312808
Leerssen, P. (2021, September 7). Platform research access in Article 31 of the Digital Services Act – Sword without a shield? Verfassungsblog. https://verfassungsblog.de/power-dsa-dma-14/
Panoptykon Foundation. (2021, August 2). Can the EU Digital Services Act contest the power of Big Tech’s algorithms? EDRi. https://edri.org/our-work/can-the-eu-digital-services-act-contest-the-power-of-big-techs-algorithms/
Quintais, J., & Schwemer, S. F. (2021). The Interplay between the Digital Services Act and Sector Regulation: How Special is Copyright? SSRN Electronic Journal. https://doi.org/10.2139/ssrn.3841606
Schwemer, S. F., Tomada, L., & Pasini, Tommaso. (2021). Legal AI Systems in the EU’s proposed Artificial Antelligence Act. Joint Proceedings of the Workshops on Automated Semantic Analysis of Information in Legal Text (ASAIL 2021) and AI and Intelligent Assistance for Legal Professionals in the Digital Workplace (LegalAIIA 2021), 2888, 51–58. http://ceur-ws.org/Vol-2888/ Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-60
Senftleben, M., Margoni, T., Antal, D., Bodó, B., Gompel, S. van, Handke, C., Kretschmer, M., Poort, J., Quintais, J., & Schwemer, S. F. (2021). Ensuring the Visibility and Accessibility of European Creative Content on the World Market: The Need for Copyright Data Improvement in the Light of New Technologies. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.3785272 Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-60
Spotify. (2020, November 2). Amplifying Artist Input in Your Personalized Recommendations. https://newsroom.spotify.com/2020-11-02/amplifying-artist-input-in-your-personalized-recommendations/ Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-60
Stasi, M. L. (2019). Social media platforms and content exposure: How to restore users’ control. Competition and Regulation in Network Industries, 20(1), 86–110. https://doi.org/10.1177/1783591719847545 Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-60
Ter Hoeve, M., Heruer, M., Odijk, D., Schuth, A., & de Rijke, M. (2017). Do news consumers want explanations for personalized news rankings. FATREC Workshop on Responsible Recommendation Proceedings. Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-60
van Drunen, M. Z., Helberger, N., & Bastian, M. (2019). Know your algorithm: What media organizations need to explain to their users about news personalization. International Data Privacy Law, 9(4), 220–235. https://doi.org/10.1093/idpl/ipz011 Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-60
Whittaker, J., Looney, S., Reed, A., & Votta, F. (2021). Recommender systems and the amplification of extremist content. Internet Policy Review, 10(2). https://doi.org/10.14763/2021.2.1565 Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-60
Bartsch, A. (2010). Zeitungs-Sucht, Lesewut und Fernsehfieber. In M. Buck, F. Hartling, & S. Pfau (Eds.), Randgänge der Mediengeschichte (pp. 109–122). VS Verlag für Sozialwissenschaften. https://doi.org/10.1007/978-3-531-91957-7_7
Baumgartner, S. E., Weeda, W. D., van der Heijden, L. L., & Huizinga, M. (2014). The Relationship Between Media Multitasking and Executive Function in Early Adolescents. The Journal of Early Adolescence, 34(8), 1120–1144. https://doi.org/10.1177/0272431614523133
Ho, R. C., Zhang, M. W., Tsang, T. Y., Toh, A. H., Pan, F., Lu, Y., Cheng, C., Yip, P. S., Lam, L. T., Lai, C.-M., Watanabe, H., & Mak, K.-K. (2014). The association between internet addiction and psychiatric co-morbidity: A meta-analysis. BMC Psychiatry, 14(1), 183. https://doi.org/10.1186/1471-244X-14-183
Karpf, D. (2019, December 10). On Digital Disinformation and Democratic Myths. MediaWell, Social Science Research Council. https://mediawell.ssrc.org/expert-reflections/on-digital-disinformation-and-democratic-myths/
Kuniecki, M., Pilarczyk, J., & Wichary, S. (2015). The color red attracts attention in an emotional context. An ERP study. Frontiers in Human Neuroscience, 9. https://doi.org/10.3389/fnhum.2015.00212
Meshi, D., Morawetz, C., & Heekeren, H. R. (2013). Nucleus accumbens response to gains in reputation for the self relative to gains for others predicts social media use. Frontiers in Human Neuroscience, 7. https://doi.org/10.3389/fnhum.2013.00439
Park, S., Jeon, H. J., Bae, J. N., Seong, S. J., & Hong, J. P. (2017). Prevalence and Psychiatric Comorbidities of Internet Addiction in a Nationwide Sample of Korean Adults. Psychiatry Investigation, 14(6), 879–882. https://doi.org/10.4306/pi.2017.14.6.879
Rus, H. M., & Tiemensma, J. (2017). Social Media under the Skin: Facebook Use after Acute Stress Impairs Cortisol Recovery. Frontiers in Psychology, 8. https://doi.org/10.3389/fpsyg.2017.01609
SINUS-Institut, & DIVSI. (2018). DIVSI U25-Studie – Euphorie war gestern. Eine Grundlagenstudie des SINUS-Instituts Heidelberg im Auftrag des Deutschen Instituts für Vertrauen und Sicherheit im Internet.
Sweney, M., & Davidson, H. (2021, August 3). China’s Tencent tightens games controls for children after state media attack. The Guardian. http://www.theguardian.com/business/2021/aug/03/chinas-tencent-tightens-controls-for-children-amid-games-addiction-fears
Turkle, S. (2008). Always-On/Always-On-You: The Tethered Self. The MIT Press. http://mitpress.universitypressscholarship.com/view/10.7551/mitpress/9780262113120.001.0001/upso-9780262113120-chapter-10
Westcott, B. (2020, July 8). Children in China locked up for as long as 10 days at internet addiction camp. CNN. https://www.cnn.com/2020/07/08/asia/china-court-abuse-internet-addiction-intl-hnk/index.html
Andersen, S. M., Moskowitz, G. B., Blair, I. V., & Nosek, B. A. (2007). Automatic thought. In E. T. Higgins & A. W. Kruglanski (Eds.), Social psychology: Handbook of basic principles (2nd ed., pp. 138–175). New York.
Barassi, V. (2020). The Human Error in AI and question about Children’s Rights. http://childdatacitizen.com/cdc/wp-content/uploads/2020/06/The-Human-Error-in-AI-and-Children-Rights_Prof.-Barassi_Response-to-AI-White-Paper-.pdf
Dastin, J. (2018). Amazon scraps secret AI recruiting tool that showed bias against women [Press release]. https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G
Enarsson, T., Enqvist, L., & Naarttijärvi, M. (2021). Approaching the human in the loop – legal perspectives on hybrid human/algorithmic decision-making in three contexts. Information & Communication Technology Law. https://doi.org/10.1080/13600834.2021.1958860
Martinez, E., & Kirchner, L. (2021, August 25). The Secret Bias Hidden in Mortgage-Approval Algorithms. The Markup. https://themarkup.org/denied/2021/08/25/the-secret-bias-hidden-in-mortgage-approval-algorithms
Catts, O., & Zurr, I. (2004–2005). Ingestion / Disembodied Cuisine: Towards victimless meat. Cabinet Magazine. Retrieved August 27, 2021, from https://www.cabinetmagazine.org/issues/16/catts_zurr.php
Hertz, G. D. (2015). Conversations in Critical Making. CTheory Books. Retrieved October 8, 2021, from https://www.researchgate.net/publication/320344201_Conversations_in_Critical_Making
Kohno, T., & Johnson, B. D. (2011). Science fiction prototyping and security education: Cultivating contextual and societal thinking in computer security education and beyond. SIGCSE ’11: Proceedings of the 42nd ACM Technical Symposium on Computer Science Education, 9–11. Retrieved August 30, 2021, from https://homes.cs.washington.edu/~yoshi/papers/SIGCSE/csefp118-kohno.pdf
Kong, B., Liang, R.-H., Liu, M., Chang, S. H., Tseng, H.-C., & Ju, C.-H. (2021). Neuromancer workshop: Towards designing experiential entanglement with science fiction. Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, 26, 1–17. https://doi.org/10.1145/3411764.3445273
Linehan, C., Kirman, B. J., Reeves, S., Blythe, M. A., Tanenbaum, T. J., Desjardins, A., & Wakkary, A. (2014). Alternate endings: Using fiction to explore design futures. CHI ’14 Extended Abstracts on Human Factors in Computing Systems, 45–48. CHI EA ’14. New York, NY, USA: Association for Computing Machinery. https://doi.org/10.1145/2559206.2560472
Okorafor, N. (2017, November). Sci-fi stories that imagine a future Africa [Video]. TED Conferences. Retrieved October 9, 2021, from https://www.ted.com/talks/nnedi_okorafor_sci_fi_stories_that_imagine_a_future_africa?language=en
Mubin, O., Obaid, M., Jordan, P., Alves-Oliveira, P., Eriksson, T., Barendregt, W., Sjolle, D., Fjeld, M., Simoff, S., & Billinghurst, M. (2016). Towards an agenda for sci-fi inspired HCI research. Proceedings of the 13th International Conference on Advances in Computer Entertainment, 10, 1–6.
Shklovski, I., & Grönvall, E. (2020). CreepyLeaks: Participatory speculation through demos. Proceedings of the 11th Nordic Conference on Human-Computer Interaction, 1–12. Tallinn, Estonia: ACM. https://doi.org/10.1145/3419249.3420168
Zaidi, L. (2019). Worldbuilding in science fiction, foresight and design. Journal of Futures Studies, 23(4), 15–25. Retrieved August 28, 2021, from https://jfsdigital.org/articles-and-essays/vol-23-no-4-june-2019/worldbuilding-in-science-fiction-foresight-and-design/