
Volume 1 (2021), Issue 2

Morals & Machines
Journal:
Morals & Machines
Publisher:
Nomos, Baden-Baden (2022)

About the journal

The scientific journal Morals & Machines addresses, in a pluralistic manner, the question of how algorithms in general, and artificial intelligence (AI) in particular, are changing society, the economy and the world of work, the media, healthcare, technology, language, gender relations, and art and culture. It investigates which ethical risks these technologies entail, what potential they offer, and what challenges they pose to legal systems worldwide with regard to technological applications, robotics and the integration of AI. The journal examines these questions from an interdisciplinary, global and critical perspective at the interface of the humanities, the social sciences, law and computer science.



Bibliographic information

ISSN (print)
2747-5174
ISSN (online)
2747-5182
Publisher
Nomos, Baden-Baden
Language
English
Product type
Issue

Articles

Article
Pages 1–2
Nomos Verlagsgesellschaft mbH & Co. KG, Baden-Baden 2021

Article
Pages 3–6

Article
Pages 7–8

Article
Page 9

Article
Pages 10–23
There is growing interest in explanations as an ethical and technical solution to the problem of 'opaque' AI systems. In this essay we point out that technical and ethical approaches to Explainable AI (XAI) have different assumptions and aims....

Article
Pages 24–39
Advances in AI technology affect knowledge work in diverse fields, including healthcare, engineering, and management. Although automation and machine support can increase efficiency and lower costs, it can also, as an unintended consequence, deskill...

Article
Pages 40–49
The use of digital technologies for workplace monitoring renders organizational responsibilities murky and opaque. However, clear responsibility for monitoring practices is key for both legal compliance and potential liability, as well as for...

Article
Pages 50–59
In this paper I will approach the problem of machine opacity in law, according to an understanding of it as a problem revolving around the underlying philosophical tension between description and prescription in law and legal theory. I will use the...

Article
Pages 60–69
Over recent years, the EU has increasingly looked at the regulation of various forms of automation and the use of algorithms. For recommender systems specifically, two recent legislative proposals by the European Commission, the Digital Services Act...

Article
Pages 70–77
What does the gambling industry have in common with the digital economy? Silicon Valley has learned from Las Vegas to drive “user engagement” on platforms, such as Facebook and Twitter, and in gaming. These platforms rely on the same...

Article
Pages 78–85
In this intervention, we discuss to what extent the term “decision” serves as an adequate terminology for what algorithms actually do. Although calculations of algorithms might be perceived as or be an important basis for a decision, we argue,...

Article
Pages 86–92
This article explores the use of participatory art and technology workshops as an approach to create more diverse and inclusive modes of engagement in the design of digital technologies. Taking the starting point in diverse works of science fiction,...

Literaturverzeichnis (328 Einträge)

  1. Adams, D. (1979). The Hitchhiker’s Guide to the Galaxy. New York: Harmony Books. Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-3
  2. Brynjolfsson, E. & McAfee, A (2017). The Business of Artificial Intelligence. Harvard Business Review. Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-3
  3. Bucher, T. (2018). If... then: Algorithmic Power and Politics. Oxford University Press. Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-3
  4. Casey, B., Farhangi, A. & Vogl, R. (2019). Rethinking Explainable Machines: The GDPR's 'Right to Explanation' Debate and the Rise of Algorithmic Audits in Enterprise. Berkeley Technology Law Journal, Vol. 34, available at SSRN: https://ssrn.com/abstract=3143325 Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-3
  5. Dressel, J., & Farid, H. (2018). The accuracy, fairness, and limits of predicting recidivism. Science Advances, 4(1), eaao5580. Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-3
  6. Eubanks, V. (2018). Automating inequality: How high-tech tools profile, police, and punish the poor. St. Martin's Press. Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-3
  7. European Commission (2021). Proposal for a Regulation of the European Parliament and of the Council laying down harmonized rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts. COM/2021/206 final. Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-3
  8. Goanta, C., & Spanakis, G. (2020). Influencers and Social Media Recommender Systems: Unfair Commercial Practices in EU and US Law. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3592000 Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-3
  9. Hall, P., & Gill, N. (2019). An introduction to machine learning interpretability. Second edition. Sebastopol, CA: O'Reilly Media. Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-3
  10. Holm, E. A. (2019). In defense of the black box. Science, 364(6435), 26-27. Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-3
  11. Jaiman, V., & Urovi, V. (2020). A consent model for blockchain-based health data sharing platforms. IEEE Access, 8, 143734-143745. Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-3
  12. Pasquale, F. (2015). The black box society. Harvard University Press. Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-3
  13. Ranchordás, S. (2020). Nudging citizens through technology in smart cities. International Review of Law, Computers & Technology, 34(3), 254-276. Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-3
  14. Rudin, C., & Radin, J. (2019). Why are we using black box models in AI when we don’t need to? A lesson from an explainable AI competition. Harvard Data Science Review, 1(2). doi: https://doi.org/10.1162/99608f92.5a8a3a3d Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-3
  15. Tsamados, A., Aggarwal, N., Cowls, J., Morley, J., Roberts, H., Taddeo, M., & Floridi, L. (2021). The ethics of algorithms: key problems and solutions. AI & SOCIETY, 1-16. Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-3
  16. van Dijck, J. (2020). Governing digital societies: Private platforms, public values. Computer Law & Security Review, 36, 105377. Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-3
  17. Wanner, J., Herm, L. V., & Janiesch, C. (2020). How much is the black box? The value of explainability in machine learning models. ECIS 2020 Research-in-Progress Papers. 85. https://aisel.aisnet.org/ecis2020_rip/85 Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-3
  18. Watson, David S., Krutzinna, J., Bruce, I. N., Griffiths, C. E. M., McInness, I. B., Barnes, M. R. & Floridi, L. (2019). Clinical applications of machine learning algorithms: beyond the black box. BMJ 2019; 364: l886 doi:10.1136/bmj.l886 Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-3
  19. Webb, M. E., Fluck, A., Magenheim, J., Malyn-Smith, J., Waters, J., Deschênes, M., & Zagami, J. (2020). Machine learning for human learners: opportunities, issues, tensions and threats. Educational Technology Research and Development, 1-22. Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-3
  20. Alvesson, M. and D. Kärreman (2007). "Constructing Mystery: Empirical Matters in Theory Development", The Academy of Management Review, 32, 1265-1281. Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-10
  21. Alvesson, M. and J. Sandberg (2013). Constructing research questions: Doing interesting research, SAGE, London. Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-10
  22. Brown, J. S. and P. Duguid (2017). The social life of information: Updated, with a new preface, Harvard Business Review Press. Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-10
  23. Burrell, J. (2016). "How the machine ‘thinks’: Understanding opacity in machine learning algorithms", Big Data & Society, 3, 2053951715622512. Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-10
  24. Cecez-Kecmanovic, D., R. D. Galliers, O. Henfridsson, S. Newell and R. Vidgen (2014). "The Sociomateriality of Information Systems: Current Status, Future Directions", MIS Quarterly, 38, 809-830. Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-10
  25. DARPA, Agency, D.A.R.P. (2016) Broad Agency Announcement: Explainable Artificial Intelligence (XAI). Arlington, VA. Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-10
  26. Dastin, J. (2018) 'Amazon scraps secret AI recruiting tool that showed bias against women', Available: Reuters. Available at: https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G (Accessed 20 May 2019). Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-10
  27. Dobbe, R., S. Dean, T. Gilbert and N. Kohli (2018). "A Broader View on Bias in Automated Decision-Making: Reflecting on Epistemology and Dynamics", arXiv preprint arXiv:1807.00553. Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-10
  28. Doran, D., S. Schulz and T. R. Besold (2017). "What does explainable AI really mean? A new conceptualization of perspectives", arXiv preprint arXiv:1710.00794. Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-10
  29. Doshi-Velez, F., M. Kortz, R. Budish, C. Bavitz, S. Gershman, D. O'Brien, S. Schieber, J. Waldo, D. Weinberger and A. Wood (2017). "Accountability of AI under the law: The role of explanation", arXiv preprint arXiv:1711.01134. Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-10
  30. Edwards, L. and M. Veale (2017). "Slave to the algorithm: Why a right to an explanation is probably not the remedy you are looking for", Duke L. & Tech. Rev., 16, 18. Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-10
  31. Faraj, S., S. Pachidi and K. Sayegh (2018). "Working and organizing in the age of the learning algorithm", Information and Organization, 28, 62-70. Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-10
  32. Gunning, D. (2017) Explainable Artificial Intelligence (XAI): Defense Advanced Research Projects Agency. Available at: https://www.darpa.mil/program/explainable-artificial-intelligence (Accessed: May 18 2019 2019). Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-10
  33. Harding, N., J. Ford and B. Gough (2010). "Accounting for ourselves: are academics exploited workers?", Critical Perspectives on Accounting, 21, 159-168. Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-10
  34. High Level Expert Group on AI Ethics Guidelines for Trustworthy AI (2019): The European Commission. Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-10
  35. Introna, L. D. (2016). "Algorithms, governance, and governmentality: On governing academic writing", Science, Technology, & Human Values, 41, 17-49. Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-10
  36. Langley, A., C. Smallman, H. Tsoukas and A. H. Van de Ven (2013). "Process studies of change in organization and management: unveiling temporality, activity, and flow", Academy of Management Journal, 56, 1-13. Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-10
  37. Lee, M. K. (2018). "Understanding perception of algorithmic decisions: Fairness, trust, and emotion in response to algorithmic management", Big Data & Society, 5, 2053951718756684. Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-10
  38. Michal, P., D. Pawel, S. Wenhan, R. Rafal and A. Kenji (2009). "Towards context aware emotional intelligence in machines: computing contextual appropriateness of affective states", In Proceedings of the Twenty-First International Joint Conference on Artificial Intelligence (IJCAI-09), pp. 1469-1474. Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-10
  39. Nakatani, L. H. and J. A. Rohrlich (1983). "Soft machines: A philosophy of user-computer interface design", In Proceedings of the SIGCHI conference on Human Factors in Computing Systems, pp. 19-23. Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-10
  40. O'Neil, C. (2017). Weapons of math destruction: How big data increases inequality and threatens democracy, Broadway Books. Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-10
  41. O’Neil Risk Consulting & Algorithmic Auditing (ORCAA). Available at: http://www.oneilrisk.com/. Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-10
  42. Ohsawa, Y. and S. Tsumoto (2006). Chance discoveries in real world decision making: data-based interaction of human intelligence and artificial intelligence, Springer. Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-10
  43. Orlikowski, W. J. (2016). "Digital work: a research agenda". Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-10
  44. Roberts, J. (2009). "No one is perfect: The limits of transparency and an ethic for ‘intelligent’accountability", Accounting, Organizations and Society, 34, 957-970. Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-10
  45. Rosenberg, L. (2016). "Artificial Swarm Intelligence, a Human-in-the-loop approach to AI", In AAAI, pp. 4381-4382. Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-10
  46. Rudin, C. (2018). "Please stop explaining black box models for high stakes decisions", arXiv preprint arXiv:1811.10154. Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-10
  47. Russell, S., S. Hauert, R. Altman and M. Veloso (2015). "Ethics of artificial intelligence", Nature, 521, 415-416. Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-10
  48. Santiago, D. and T. Escrig (2017) Why explainable AI must be central to responsible AI: Accenture. Available at: https://www.accenture.com/us-en/blogs/blogs-why-explainable-ai-must-central-responsible-ai (Accessed: 1/6/2019 2019). Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-10
  49. Satava, D., C. Caldwell and L. Richards (2006). "Ethics and the auditing culture: Rethinking the foundation of accounting and auditing", Journal of Business Ethics, 64, 271-284. Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-10
  50. Schulzke, M. (2013). "Autonomous weapons and distributed responsibility", Philosophy & Technology, 26, 203-219. Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-10
  51. Schwartz, D. G. (2014). "The disciplines of information: Lessons from the history of the discipline of medicine", Information Systems Research, 25, 205-221. Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-10
  52. Shapiro, V. (2018) 'Explaining System Intelligence', SAP User Experience Community. Available at: https://experience.sap.com/skillup/explaining-system-intelligence/. Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-10
  53. Suchman, L. A. (2007). Human-machine reconfigurations : plans and situated actions, Cambridge University Press, Cambridge ; New York. Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-10
  54. Susskind, R. E. and D. Susskind (2015). The future of the professions: How technology will transform the work of human experts, Oxford University Press, USA. Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-10
  55. Tsoukas, H. (1997). "The tyranny of light: The temptations and the paradoxes of the information society", Futures, 29, 827-843. Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-10
  56. Wagner, B. (2018). "Ethics as an Escape from Regulation: From ethics-washing to ethics-shopping?", Being Profiling. Cogitas Ergo Sum. Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-10
  57. Woolgar, S. (1990). "Configuring the user: the case of usability trials", The Sociological Review, 38, 58-99. Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-10
  58. Akata, Z., Balliet, D., de Rijke, M., Dignum, F., Dignum, V., Eiben, G., Fokkens, A., Grossi, D., Hindriks, K., Hoos, H., Hung, H., Jonker, C., Monz, C., Neerincx, M., Oliehoek, F., Prakken, H., Schlobach, S., van der Gaag, L., van Harmelen, F., … Welling, M. (2020). A Research Agenda for Hybrid Intelligence: Augmenting Human Intellect With Collaborative, Adaptive, Responsible, and Explainable Artificial Intelligence. Computer, 53(8), 18–28. https://doi.org/10.1109/MC.2020.2996587 Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-24
  59. Altman, R. B. (1999). AI in medicine: The spectrum of challenges from managed care to molecular medicine. AI magazine, 20(3), 67-67. Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-24
  60. Arnold, V., and Sutton, S.G. (1998). The theory of technology dominance: Understanding the impact of intelligent decision aids on decision makers’ judgments. Advances in Accounting Behavioral Research, 1(3), 175–194. Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-24
  61. Atewell, P. (1987). The Deskilling Controversy. Work and Occupations, 14(3), 323–346. https://doi.org/10.1177/0730888487014003001 Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-24
  62. Appelbaum, S. H. (1997). Socio‐technical systems theory: an intervention strategy for organizational development. Management decision. Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-24
  63. Barnard, P. J., & Harrison, M. D. (1992). Towards a framework for modelling human-computer interaction. Proceedings International Conference on HCI, 92, 189-196. Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-24
  64. Basu, S., Garimella, A., Han, W., & Dennis, A. (2021, January). Human Decision Making in AI Augmented Systems: Evidence from the Initial Coin Offering Market. Proceedings of the 54th Hawaii International Conference on System Sciences, 176. Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-24
  65. Beaudouin-Lafon, M. (2004) Designing interaction, not interfaces. Proceedings of the working conference on Advanced visual interfaces, 15-22. https://dl.acm.org/doi/pdf/10.1145/989863.989865 Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-24
  66. Bednar, P. M., & Welch, C. (2020). Socio-Technical Perspectives on Smart Working: Creating Meaningful and Sustainable Systems. Information Systems Frontiers, 22(2), 281–298. https://doi.org/10.1007/s10796-019-09921-1 Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-24
  67. Bell, S. E., Hullinger, A., & Brislen, L. (2015). Manipulated Masculinities: Agribusiness, Deskilling, and the Rise of the Businessman‐Farmer in the United States. Rural Sociology, 80(3), 285-313. Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-24
  68. Bhardwaj, S. (2013). Technology, and the up-skilling or deskilling conundrum. WMU Journal of Maritime Affairs, 12(2), 245–253. https://doi.org/10.1007/s13437-013-0045-6 Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-24
  69. Bisen, V.S. (2020). What is Human in the Loop Machine Learning: Why & How Used in AI?. Vshingbisen. https://medium.com/vsinghbisen/what-is-human-in-the-loop-machine-learning-why-how-used-in-ai-60c7b44eb2c0 Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-24
  70. Braverman, H. (1998). Labor and monopoly capital: The degradation of work in the twentieth century. Monthly Review Press. Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-24
  71. Bresnahan TF, Brynjolfsson E and Hitt LM (2002). Information technology, workplace organization, and the demand for skilled labor: Firm-level evidence. The quarterly journal of economics, 117(1), 339-376. Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-24
  72. Brugger, F., & Gehrke, C. (2018). Skilling and deskilling: Technological change in classical economic theory and its empirical evidence. Theory and Society, 47(5), 663–689. https://doi.org/10.1007/s11186-018-9325-7 Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-24
  73. Brynjolfsson, E., & McAfee, A. (2014). The second machine age: Work, progress, and prosperity in a time of brilliant technologies. WW Norton & Company. Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-24
  74. Buoy Health. (2018). Buoy Health Partners With Boston Children's Hospital To Improve The Way Parents Currently Assess Their Children's Symptoms Online. https://www.prnewswire.com/news-releases/buoy-health-partners-with-boston-childrens-hospital-to-improve-the-way-parents-currently-assess-their-childrens-symptoms-online-300693055.html Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-24
  75. Cabitza, F., Rasoini, R., & Gensini, G. F. (2017). Unintended Consequences of Machine Learning in Medicine. JAMA, 318(6), 517–518. https://doi.org/10.1001/jama.2017.7797 Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-24
  76. Chrisinger, D. (2019). The solution lies in education: artificial intelligence & the skills gap. On the Horizon, 27(1), 1-4. 10.1108/OTH-03-2019-096 Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-24
  77. Christiansen, D. (2019). Backscatter: The 737 Max: An accident waiting to happen? IEEE InSight USA. https://insight.ieeeusa.org/articles/the-737-max-an-accident-waiting-to-happen/ Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-24
  78. Collins, A., Brown, J. S., & Newman, S. E. (1989). Cognitive apprenticeship: Teaching the crafts of reading, writing, and mathematics. In L. B. Resnick (Ed.), Knowing, learning, and instruction: Essays in honor of Robert Glaser (pp. 453–494). Lawrence Erlbaum Associates, Inc. Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-24
  79. Coroamă, V. C., & Pargman, D. (2020, June). Skill rebound: On an unintended effect of digitalization. Proceedings of the 7th International Conference on ICT for Sustainability, 213-219. https://doi.org/10.1145/3401335.3401362 Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-24
  80. Council of Europe (2019). Council of Europe study DGI(2019)05 Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-24
  81. Davenport, T. H., & Kirby, J. (2016). Just how smart are smart machines?. MIT Sloan Management Review, 57(3), 21. Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-24
  82. Dellermann, D. (2020). Accelerating Entrepreneurial Decision-Making Through Hybrid Intelligence (Doctoral dissertation). Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-24
  83. Dellermann, D., Calma, A., Lipusch, N., Weber, T., Weigel, S., & Ebel, P. (2019). The future of human-AI collaboration: a taxonomy of design knowledge for hybrid intelligence systems. Proceedings of the 52nd Hawaii International Conference on System Sciences. arXiv preprint arXiv:2105.03354 Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-24
  84. Dellermann, D., Ebel, P., Söllner, M., & Leimeister, J. M. (2019). Hybrid Intelligence. Business & Information Systems Engineering, 61(5), 637–643. https://doi.org/10.1007/s12599-019-00595-2 Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-24
  85. Dellermann, D., Lipusch, N., Ebel, P., & Leimeister, J. M. (2019). Design principles for a hybrid intelligence decision support system for business model validation. Electronic Markets, 29(3), 423–441. https://doi.org/10.1007/s12525-018-0309-2 Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-24
  86. Dikmen, M., & Burns, C. (2017). Trust in autonomous vehicles: The case of Tesla Autopilot and Summon. 2017 IEEE International Conference on Systems, Man, and Cybernetics (SMC), 1093–1098. https://doi.org/10.1109/SMC.2017.8122757 Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-24
  87. Dilla, W. N., & Stone, D. N. (1997). Representations as decision aids: The asymmetric effects of words and numbers on auditors' inherent risk judgments. Decision Sciences, 28(3), 709-743. Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-24
  88. Elrod, P.D. and Tippett, D.D., 2002. The “death valley” of change. Journal of organizational change management, 15(3), 273-291. Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-24
  89. Endsley, M. R., & Kiris, E. O. (1995). The out-of-the-loop performance problem and level of control in automation. Human factors, 37(2), 381-394. Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-24
  90. Feldmann, H., 2013. Technological unemployment in industrial countries. Journal of Evolutionary Economics, 23(5), pp.1099-1126. Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-24
  91. Ferris, T., Sarter, N., & Wickens, C. D. (2010). Chapter 15 - Cockpit Automation: Still Struggling to Catch Up…. In E. Salas & D. Maurino (Eds.), Human Factors in Aviation (2nd ed, pp. 479–503). Academic Press. https://doi.org/10.1016/B978-0-12-374518-7.00015-8 Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-24
  92. Fitzgerald, D. )1993). Farmers Deskilled: Hybrid Corn and Farmers’ Work. Technology and Culture, 34(2), 324–43. Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-24
  93. Fonstad, N.O. and Robertson, D. (2006). Transforming a company, project by project: The IT engagement model. MIS Quarterly Executive, 5(1). Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-24
  94. Frey, C. B., & Osborne, M. A. (2013). The future of employment: How susceptible are jobs to computerisation? Technological Forecasting and Social Change, 114, 254–280. https://doi.org/10.1016/j.techfore.2016.08.019 Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-24
  95. Gasparetto, A., & Scalera, L. (2019). A brief history of industrial robotics in the 20th century. Advances in Historical Studies, 8(1), 24-35. Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-24
  96. Hawkins, J. (2021). A thousand brains: A new theory of intelligence (1st ed). Basic Books. Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-24
  97. Harari, Y. N. (2017). Dataism is our new god. New Perspectives Quarterly, 34(2), 36-43. Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-24
  98. Heaven, D. (2019). Why deep-learning AIs are so easy to fool. Nature, 574(7777), 163–166. https://doi.org/10.1038/d41586-019-03013-5 Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-24
  99. Hoff, T. (2011). Deskilling and adaptation among primary care physicians using two work innovations. Health Care Management Review, 36(4), 338-348. Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-24
  100. Hosny, A., Parmar, C., Quackenbush, J., Schwartz, L. H., & Aerts, H. J. (2018). Artificial intelligence in radiology. Nature Reviews Cancer, 18(8), 500-510. Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-24
  101. Jacobs, C., & van Ginneken, B. (2019). Google’s lung cancer AI: a promising tool that needs further validation. Nature Reviews Clinical Oncology, 16(9), 532-533. Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-24
  102. Jerald, C. D. (2009). Defining a 21st Century Education. Center for Public Education. Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-24
  103. http://www.centerforpubliceducation.org/Learn-About/21st-Century/Defining-a-21st-Century- Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-24
  104. Education-Full-Report-PDF.pdf Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-24
  105. Kamar E (2016). Hybrid workplaces of the future. XRDS: Crossroads, The ACM Magazine for Students, 23(2), 22-25. Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-24
  106. Kim, Y.J., Kim, K. and Lee, S., (2017). The rise of technological unemployment and its implications on the future macroeconomic landscape. Futures, 87, 1-9. Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-24
  107. Knight, W. (2017). The Dark Secret at the Heart of AI. MIT Technology Review. https://www.technologyreview.com/2017/04/11/5113/the-dark-secret-at-the-heart-of-ai/ Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-24
  108. Korinek, A. and Stiglitz, J.E., (2019). Artificial Intelligence and Its Implications for Income Distribution and Unemployment (pp. 349-390). University of Chicago Press. Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-24
  109. Lasecki, W. S. (2019). On Facilitating Human-Computer Interaction via Hybrid Intelligence Systems. Proceedings of the 7th annual ACM Conference on Collective Intelligence. Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-24
  110. LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521(7553), 436–444. https://doi.org/10.1038/nature14539 Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-24
  111. Leeuw, P., & Mtegha, H. (2018). The significance of mining backward and forward linkages in reskilling redundant mine workers in South Africa. Resources Policy, 56, 31-37. Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-24
  112. Levy, J., Jotkowitz, A., & Chowers, I. (2019). Deskilling in ophthalmology is the inevitable controllable?. Eye, 33(3), 347-348. Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-24
  113. Lyytinen, K., Nickerson, J. V., & King, J. L. (2020). Metahuman systems = humans + machines that learn. Journal of Information Technology, 026839622091591. https://doi.org/10.1177/0268396220915917 Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-24
  114. Mackay, W.E., 1999. Is paper safer? The role of paper flight strips in air traffic control. ACM Transactions on Computer-Human Interaction (TOCHI), 6(4), 311-340. Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-24
  115. Makridakis, S. (2018). High tech advances in artificial intelligence (AI) and intelligence augmentation (IA) and Cyprus. The Cyprus Review, 30(2), 159–167. Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-24
  116. Marcus, G. (2020). The next decade in AI: four steps towards robust artificial intelligence. ArXiv Preprint ArXiv:2002.06177. Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-24
  117. Mascha, M. F., & Smedley, G. (2007). Can computerized decision aids do “damage”? A case for tailoring feedback and task complexity based on task experience. International Journal of Accounting Information Systems, 8(2), 73–91. https://doi.org/10.1016/j.accinf.2007.03.001 Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-24
118. McAfee, A., & Brynjolfsson, E. (2017). Machine, platform, crowd: Harnessing our digital future. WW Norton & Company.
119. McDermott, R. (1999). Why Information Technology Inspired but Cannot Deliver Knowledge Management. California Management Review, 41(4), 103–117. https://doi.org/10.2307/41166012
120. Monarch, R. M. (2021). Human-in-the-Loop Machine Learning: Active learning and annotation for human-centered AI. Simon and Schuster.
121. Nahavandi, S. (2017). Trusted Autonomy Between Humans and Robots: Toward Human-on-the-Loop in Robotics and Autonomous Systems. IEEE Systems, Man, and Cybernetics Magazine, 3(1), 10–17. https://doi.org/10.1109/MSMC.2016.2623867
122. National Highway Traffic Safety Administration (NHTSA). (2013). Preliminary statement of policy concerning automated vehicles. Washington, DC, 1, 14. http://www.nhtsa.gov/staticfiles/rulemaking/pdf/Automated_Vehicles_Policy.pdf
123. Nibert, D. (2011). Origins and Consequences of the Animal Industrial Complex. In S. Best, R. Kahn, A. J. Nocella, & P. McLaren (Eds.), The Global Industrial Complex: Systems of Domination (p. 208). Rowman & Littlefield. ISBN 978-0739136980.
124. Nicholson, C. (2018). The Next AI Winter—Deep Learning and its Discontents. Pathmind. http://wiki.pathmind.com/ai-winter
125. Noga, T., & Arnold, V. (2002). Do tax decision support systems affect the accuracy of tax compliance decisions? International Journal of Accounting Information Systems, 3(3), 125–144.
126. OECD. (2018). The future of education and skills—Education 2030. OECD. https://www.oecd.org/education/2030/E2030%20Position%20Paper%20(05.04.2018).pdf
127. Pachidi, S., Berends, H., Faraj, S., & Huysman, M. (2021). Make way for the algorithms: Symbolic actions and change in a regime of knowing. Organization Science, 32(1), 18–41.
128. Paré, G., Sicotte, C., & Jacques, H. (2006). The effects of creating psychological ownership on physicians' acceptance of clinical information systems. Journal of the American Medical Informatics Association, 13(2), 197–205.
129. Parente, S. L., & Prescott, E. C. (1994). Barriers to technology adoption and development. Journal of Political Economy, 102(2), 298–321.
130. Peng, G., Wang, Y., & Han, G. (2018). Information technology and employment: The impact of job tasks and worker skills. Journal of Industrial Relations, 60(2), 201–223.
131. Piva, M., Santarelli, E., & Vivarelli, M. (2005). The skill bias effect of technological and organisational change: Evidence and policy implications. Research Policy, 34, 141–157.
132. Pol, E., & Reveley, J. (2017). Robot induced technological unemployment: Towards a youth-focused coping strategy. Psychosociological Issues in Human Resource Management, 5(2), 169–186.
133. Quintana, C., Reiser, B. J., Davis, E. A., Krajcik, J., Fretz, E., Duncan, R. G., Kyza, E., Edelson, D., & Soloway, E. (2004). A Scaffolding Design Framework for Software to Support Science Inquiry. Journal of the Learning Sciences, 13(3), 337–386. https://doi.org/10.1207/s15327809jls1303_4
134. Prakash, N., & Mathewson, K. W. (2020). Conceptualization and Framework of Hybrid Intelligence Systems. NeurIPS 2020 Workshop on Human And Model in the Loop Evaluation and Training Strategies. https://openreview.net/pdf/b2ded20e201d9d15a39193b3154342de7b6ef81a.pdf
135. Rinard, R. G. (1996). Technology, deskilling, and nurses: The impact of the technologically changing environment. Advances in Nursing Science, 18(4), 60–69. https://doi.org/10.1097/00012272-199606000-00008
136. Rinta-Kahila, T., Penttinen, E., Salovaara, A., & Soliman, W. (2018). Consequences of Discontinuing Knowledge Work Automation – Surfacing of Deskilling Effects and Methods of Recovery. Proceedings of the 51st Hawaii International Conference on System Sciences, 5244–5253. http://hdl.handle.net/10125/50543
137. Sarter, N. B., Woods, D. D., & Billings, C. E. (1997). Automation surprises. Handbook of Human Factors and Ergonomics Methods, 2, 1926–1943.
138. Shneiderman, B. (2020). Human-centered artificial intelligence: Three fresh ideas. AIS Transactions on Human-Computer Interaction, 12(3), 109–124.
139. Schwab, K., & Davis, N. (2018). Shaping the Fourth Industrial Revolution. Geneva: World Economic Forum.
140. Sinagra, E., Rossi, F., & Raimondo, D. (2021). Use of artificial intelligence in endoscopic training: Is deskilling a real fear? Gastroenterology, 160(6), 2212.
141. Soffel, J. (2016). What are the 21st-century skills every student needs? World Economic Forum. https://www.weforum.org/agenda/2016/03/21st-century-skills-future-jobs-students/
142. Sparrow, B., Liu, J., & Wegner, D. M. (2011). Google effects on memory: Cognitive consequences of having information at our fingertips. Science, 333(6043), 776–778.
143. Spenner, K. I. (1983). Deciphering Prometheus: Temporal change in the skill level of work. American Sociological Review, 824–837.
144. Steelberg, C. (2019, April 18). The path to an AI-connected government. Data Center Dynamics. https://www.datacenterdynamics.com/en/opinions/path-ai-connected-government/
145. Stone, G. D., Brush, S., Busch, L., Cleveland, D. A., Dove, M. R., Herring, R. J., ... & Stone, G. D. (2007). Agricultural deskilling and the spread of genetically modified cotton in Warangal. Current Anthropology, 48(1), 67–103.
146. Sutton, S. G. (1993). Toward an understanding of the factors affecting the quality of the audit process. Decision Sciences, 24(1), 88–105.
147. Sutton, S. G., Arnold, V., & Holt, M. (2018). How Much Automation Is Too Much? Keeping the Human Relevant in Knowledge Work. Journal of Emerging Technologies in Accounting, 15(2), 15–25. https://doi.org/10.2308/jeta-52311
148. Travis, G. (2019). How the Boeing 737 Max disaster looks to a Software Developer. IEEE Spectrum. https://spectrum.ieee.org/how-the-boeing-737-max-disaster-looks-to-a-software-developer
149. Trösterer, S., Gärtner, M., Mirnig, A., Meschtscherjakov, A., McCall, R., Louveton, N., ... & Engel, T. (2016, October). You never forget how to drive: Driver skilling and deskilling in the advent of autonomous vehicles. Proceedings of the 8th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, 209–216.
150. Trunk, A., Birkel, H., & Hartmann, E. (2020). On the current state of combining human and artificial intelligence for strategic organizational decision making. Business Research, 13(3), 875–919. https://doi.org/10.1007/s40685-020-00133-x
151. Tuomi, I. (2018). The Impact of Artificial Intelligence on Learning, Teaching, and Education: Policies for the Future. https://doi.org/10.2760/12297
152. Venkatesh, V., Morris, M. G., Davis, G. B., & Davis, F. D. (2003). User acceptance of information technology: Toward a unified view. MIS Quarterly, 425–478.
153. Wang, D., Maes, P., Ren, X., Shneiderman, B., Shi, Y., & Wang, Q. (2021). Designing AI to Work WITH or FOR People? Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems, 1–5. https://doi.org/10.1145/3411763.3450394
154. Weiss, D. J., & Shanteau, J. (2003). Empirical assessment of expertise. Human Factors, 45(1), 104–116.
155. Wilmer, H. H., Sherman, L. E., & Chein, J. M. (2017). Smartphones and cognition: A review of research exploring the links between mobile technology habits and cognitive functioning. Frontiers in Psychology, 8, 605.
156. Wortmann, C., Fischer, P. M., & Reinecke, S. (2015). Too much of a good thing? How Big Data changes managerial decision making. 36th Annual Conference of the Society for Judgment and Decision Making (SJDM). https://www.alexandria.unisg.ch/245736/
157. Zihsler, J., Hock, P., Walch, M., Dzuba, K., Schwager, D., Szauer, P., & Rukzio, E. (2016). Carvatar: Increasing Trust in Highly-Automated Driving Through Social Cues. Adjunct Proceedings of the 8th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, 9–14. https://doi.org/10.1145/3004323.3004354
158. Ajunwa, I. (2020). The “black box” at work. Big Data & Society. https://doi.org/10.1177/2053951720938093
159. Ajunwa, I., Crawford, K., & Schultz, J. (2017). Limitless worker surveillance. California Law Review, 105(3), 735–776.
160. Ananny, M., & Crawford, K. (2016). Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability. New Media & Society, 20(3), 973–989.
161. Angrave, D., Charlwood, A., Kirkpatrick, I., Lawrence, M., & Stuart, M. (2016). HR and analytics: Why HR is set to fail the big data challenge. Human Resource Management Journal, 26(1), 1–11.
162. Atkinson, J. (2018). Workplace monitoring and the right to private life at work. The Modern Law Review, 81(4), 688–700.
163. Ball, K. (2010). Workplace surveillance: An overview. Labor History, 51(1), 87–106.
164. Bathaee, Y. (2018). The Artificial Intelligence Black Box and the Failure of Intent and Causation. Harvard Journal of Law & Technology, 31(2), 890–938.
165. Buiten, M. C. (2019). Towards Intelligent Regulation of Artificial Intelligence. European Journal of Risk Regulation, 10(1), 41–59.
166. Buiten, M., de Streel, A., & Peitz, M. (2021). EU Liability Rules for The Age of Artificial Intelligence. Centre on Regulation in Europe. https://ssrn.com/abstract=3817520
167. Dzida, B. (2017). Big Data und Arbeitsrecht. Neue Zeitschrift für Arbeitsrecht, 9, 541–546.
168. Ebers, M. (2020). Regulating AI and Robotics. In M. Ebers & S. Navas (Eds.), Algorithms and Law (pp. 37–99). Cambridge University Press.
169. Ebert, I., Wildhaber, I., & Adams-Prassl, J. (2021). Big Data in the workplace: Privacy Due Diligence as a human rights-based approach to employee privacy protection. Big Data & Society, 1–14. https://doi.org/10.1177/20539517211013051
170. Edwards, L., & Veale, M. (2017). Slave to the Algorithm? Why a “Right to an Explanation” Is Probably Not the Remedy You Are Looking For. Duke Law & Technology Review, 16, 18–84.
171. European Court of Human Rights. (2017). Bărbulescu v. Romania, App. No. 61496/08.
172. European Court of Human Rights. (2018). López Ribalda and Others v. Spain, App. Nos. 1874/13 and 8567/13.
173. Galasso, A., & Luo, H. (2018). Punishing Robots: Issues in the Economics of Tort Liability and Innovation in Artificial Intelligence. In A. Agrawal, J. Gans, & A. Goldfarb (Eds.), The Economics of Artificial Intelligence: An Agenda (pp. 493–504). University of Chicago Press.
174. Gillespie, P. (Host). (2015, April 22). Employment and Labor Law Issues Arising from the Development and Use of Robotics in the Workplace [Audio podcast episode]. AI & Robotics Working Group, Santa Clara County Bar Association. https://app.box.com/s/idpfm3glxyqcqeraumas7tegn42zok5x
175. Citron, D. K. (2008). Technological Due Process. Washington University Law Review, 85(6), 1249–1313.
176. King, A. G., & Mrkonich, M. (2016). “Big Data” and the Risk of Employment Discrimination. Oklahoma Law Review, 68(3), 555–587.
177. Martini, M. (2019). Blackbox Algorithmus – Grundfragen einer Regulierung Künstlicher Intelligenz. Springer.
178. Mittelstadt, B. D., Allo, P., Taddeo, M., Wachter, S., & Floridi, L. (2016). The ethics of algorithms: Mapping the debate. Big Data & Society, 3(2), 1–21. https://doi.org/10.1177/2053951716679679
179. Pasquale, F. (2016). The Black Box Society: The Secret Algorithms That Control Money and Information. Harvard University Press.
180. Rachum-Twaig, O. (2020). Whose Robot Is It Anyway? Liability for Artificial-Intelligence-Based Robots. University of Illinois Law Review, 4, 1141–1175.
181. Rosenblat, A., Kneese, T., & Boyd, D. (2014). Workplace surveillance. Open Society Foundations’ Future of Work Commissioned Research Papers. http://dx.doi.org/10.2139/ssrn.2536605
182. Schafheitle, S. D., Weibel, A., & Rickert, A. (2021). The Bermuda Triangle of Leadership in the AI Era? Emerging Trust Implications from “Two-Leader-Situations” in the Eyes of Employees. Proceedings of the 54th Hawaii International Conference on System Sciences (HICSS), 5473–5482. https://doi.org/10.24251/HICSS.2021.665
183. Seseri, R. (2018, June 14). The Problem with “Explainable AI”. TechCrunch. https://techcrunch.com/2018/06/14/the-problem-with-explainable-ai/
184. Sprague, R. (2015). Welcome to the machine: Privacy and workplace implications of predictive analytics. Richmond Journal of Law and Technology, 21(4), 1–46.
185. Thelisson, E. (2017). Towards Trust, Transparency and Liability in AI/AS Systems. In C. Sierra (Ed.), Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence (IJCAI-17) (pp. 5215–5216). International Joint Conferences on Artificial Intelligence.
186. Trindel, K., for the U.S. Equal Employment Opportunity Commission. (2016, October 13). Big data in the workplace: Written testimony. https://www.eeoc.gov/meetings/meeting-october-13-2016-big-data-workplace-examining-implications-equal-employment/trindel%2C%20phd
187. Wachter, S., Mittelstadt, B., & Floridi, L. (2017). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2), 76–99.
188. Wachter, S., & Mittelstadt, B. (2019). A right to reasonable inferences: Re-thinking data protection law in the age of big data and AI. Columbia Business Law Review, 2, 494–620.
189. Wagner, B. (2019). Liable, but not in control? Ensuring meaningful human agency in automated decision‐making systems. Policy & Internet, 11(1), 104–122.
190. Wagner, G. (2019). Robot, Inc.: Personhood for autonomous systems? Fordham Law Review, 88(2), 591–612.
191. Wildhaber, I., Lohmann, M., & Kasper, G. (2019). Diskriminierung durch Algorithmen – Überlegungen zum schweizerischen Recht am Beispiel prädiktiver Analytik am Arbeitsplatz. ZSR 2019 I, 459, 479 f.
POLICY DOCUMENTS
193. European Commission. (2019). Liability for Artificial Intelligence and other emerging digital technologies. Report from the Expert Group on Liability and New Technologies – New Technologies Formation. https://op.europa.eu/de/publication-detail/-/publication/1c5e30be-1197-11ea-8c1f-01aa75ed71a1
194. European Commission. (2020a). Whitepaper on Artificial Intelligence - A European approach to excellence and trust. https://ec.europa.eu/info/sites/default/files/commission-white-paper-artificial-intelligence-feb2020_en.pdf, 16 f.
195. European Commission. (2020b). Report on the safety and liability implications of Artificial Intelligence, the Internet of Things and robotics. https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:52020DC0064&from=EN
196. European Parliament. (2017). European Parliament resolution of 16 February 2017 with recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)). https://www.europarl.europa.eu/doceo/document/TA-8-2017-0051_EN.html
197. European Parliament. (2020a). European Parliament resolution of 20 October 2020 with recommendations to the Commission on a civil liability regime for artificial intelligence (2020/2014(INL)). https://www.europarl.europa.eu/doceo/document/TA-9-2020-0276_EN.html
198. European Parliament. (2020b). European Parliament resolution of 20 October 2020 with recommendations to the Commission on a civil liability regime for artificial intelligence (2020/2014(INL)). https://www.europarl.europa.eu/doceo/document/TA-9-2020-0276_EN.html
199. United Kingdom House of Lords, Artificial Intelligence Committee. (2018). AI in the UK: Ready, willing and able? https://publications.parliament.uk/pa/ld201719/ldselect/ldai/100/100.pdf
200. United Nations Human Rights Council. (2021). The right to privacy in the digital age: Report of the United Nations High Commissioner for Human Rights. https://www.ohchr.org/EN/NewsEvents/Pages/DisplayNews.aspx?NewsID=27469&LangID=E
201. United Nations Human Rights. (2011). UN Guiding Principles on Business and Human Rights. https://www.ohchr.org/documents/publications/guidingprinciplesbusinesshr_en.pdf
202. United Nations Human Rights. (2020a). Key Characteristics of Business Respect for Human Rights. https://www.ohchr.org/Documents/Issues/Business/B-Tech/key-characteristics-business-respect.pdf
203. United Nations Human Rights. (2020b). The UNGPs in the Age of Technology. https://www.ohchr.org/Documents/Issues/Business/B-Tech/introduction-ungp-age-technology.pdf
204. United Nations Human Rights. (2021). Designing and implementing effective company-based grievance mechanisms. https://www.ohchr.org/Documents/Issues/Business/B-Tech/access-to-remedy-company-based-grievance-mechanisms.pdf
205. United States Executive Office of the President. (2016). Artificial intelligence, automation and the economy. https://obamawhitehouse.archives.gov/sites/whitehouse.gov/files/documents/Artificial-Intelligence-Automation-Economy.PDF
206. Addink, H. (2019). Good Governance: An Introduction. Oxford: Oxford University Press.
207. Attwooll, B. (1998). Legal idealism. In Routledge Encyclopedia of Philosophy. Taylor and Francis. https://doi.org/10.4324/9780415249126-T020-1; https://www.rep.routledge.com/articles/thematic/legal-idealism/v-1 (accessed 04/06/21).
208. Epstein, D. (2014). Rationality, Legitimacy, & The Law. Washington University Jurisprudence Review, 7(1). https://openscholarship.wustl.edu/law_jurisprudence/vol7/iss1/5 (accessed 09/05/21).
209. Gardner, J. A. (1959). The Supreme Court and Philosophy of Law. Villanova Law Review, 5(2), 181. https://digitalcommons.law.villanova.edu/vlr/vol5/iss2/2 (accessed 04/06/21).
210. Nadler, S. (1999). Spinoza: A Life. Cambridge: Cambridge University Press.
211. Price, W. N., II, & Rai, A. K. (2021). Clearing Opacity Through Machine Learning. Iowa Law Review, 106(2), 775.
212. Noll, G. (2014). Weaponising neurotechnology: International humanitarian law and loss of language. London Review of International Law, 2(2), 201–223.
213. Rawls, J. (1955). Two concepts of rules. The Philosophical Review, 64(1), 3–32.
214. Rawls, J. (1971). A Theory of Justice. Cambridge, MA: Belknap Press of Harvard University Press.
215. Rutherford, M. (2010). Spinoza’s Conception of Law: Metaphysics and ethics. In Melamed & Rosenthal (Eds.), Spinoza’s Theological-Political Treatise: A Critical Guide. Cambridge: Cambridge University Press.
216. Spinoza, B. (1670/1677). Theological-Political Treatise and Ethics. In E. Curley (Ed. & Trans.), The Collected Works of Spinoza (Vols. I–II, 1985/2016). Princeton, NJ: Princeton University Press.
217. Alfano, M., Fard, A. E., Carter, J. A., Clutton, P., & Klein, C. (2020). Technologically scaffolded atypical cognition: The case of YouTube’s recommender system. Synthese. https://doi.org/10.1007/s11229-020-02724-x
218. Ananny, M., & Crawford, K. (2018). Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability. New Media & Society, 20(3), 973–989. https://doi.org/10.1177/1461444816676645
219. Appelman, N., Quintais, J., & Fahy, R. (2021, May 31). Article 12 DSA: Will platforms be required to apply EU fundamental rights in content moderation decisions? DSA Observatory. https://dsa-observatory.eu/2021/05/31/article-12-dsa-will-platforms-be-required-to-apply-eu-fundamental-rights-in-content-moderation-decisions/
220. Article 19. (2021a, May 14). EU: Regulation of recommender systems in the Digital Services Act. https://www.article19.org/resources/eu-regulation-of-recommender-systems-in-the-digital-services-act/
221. Article 19. (2021b, May 21). EU: Due diligence obligations in the proposed Digital Services Act. https://www.article19.org/resources/eu-due-diligence-obligations-in-the-proposed-digital-services-act/
222. Belkin, N. J., & Croft, W. B. (1992). Information filtering and information retrieval: Two sides of the same coin? Communications of the ACM, 35(12), 29–38. https://doi.org/10.1145/138859.138861
223. Bellogín, A., & Said, A. (2019). Information Retrieval and Recommender Systems. In A. Said & V. Torra (Eds.), Data Science in Practice (Vol. 46, pp. 79–96). Springer International Publishing. https://doi.org/10.1007/978-3-319-97556-6_5
224. Council of Europe. (2019). Declaration by the Committee of Ministers on the manipulative capabilities of algorithmic processes (adopted by the Committee of Ministers on 13 February 2019 at the 1337th meeting of the Ministers’ Deputies).
225. EU Disinfo Lab. (2021, April 1). How the Digital Services Act (DSA) Can Tackle Disinformation. https://www.disinfo.eu/advocacy/how-the-digital-services-act-(dsa)-can-tackle-disinformation/
226. European Data Protection Supervisor (EDPS). (2021). Opinion 1/2021 on the Proposal for a Digital Services Act. https://edps.europa.eu/system/files/2021-02/21-02-10-opinion_on_digital_services_act_en.pdf
227. Goanta, C., & Spanakis, J. (2020). Influencers and Social Media Recommender Systems: Unfair Commercial Practices in EU and US Law. TTLF Working Papers No. 54.
228. Gomez-Uribe, C. A., & Hunt, N. (2016). The Netflix Recommender System: Algorithms, Business Value, and Innovation. ACM Transactions on Management Information Systems, 6(4), 1–19. https://doi.org/10.1145/2843948
229. Goodrow, C. (2021, September 15). On YouTube’s recommendation system. YouTube Official Blog. https://blog.youtube/inside-youtube/on-youtubes-recommendation-system/
230. Gorwa, R., Binns, R., & Katzenbach, C. (2020). Algorithmic content moderation: Technical and political challenges in the automation of platform governance. Big Data & Society, 7(1). https://doi.org/10.1177/2053951719897945
231. Grimmelmann, J. (2015). The virtues of moderation. Yale JL & Tech., 17, 42.
232. Helberger, N., Karppinen, K., & D’Acunto, L. (2018). Exposure diversity as a design principle for recommender systems. Information, Communication & Society, 21(2), 191–207. https://doi.org/10.1080/1369118X.2016.1271900
233. Helberger, N., van Drunen, M., Vrijenhoek, S., & Möller, J. (2021). Regulation of news recommenders in the Digital Services Act: Empowering David against the Very Large Online Goliath. Internet Policy Review. https://policyreview.info/articles/news/regulation-news-recommenders-digital-services-act-empowering-david-against-very-large
234. Jeckmans, A. J. P., Beye, M., Erkin, Z., Hartel, P., Lagendijk, R. L., & Tang, Q. (2013). Privacy in Recommender Systems. In N. Ramzan, R. van Zwol, J.-S. Lee, K. Clüver, & X.-S. Hua (Eds.), Social Media Retrieval (pp. 263–281). Springer London. https://doi.org/10.1007/978-1-4471-4555-4_12
235. Krebs, L. M., Alvarado Rodriguez, O. L., Dewitte, P., Ausloos, J., Geerts, D., Naudts, L., & Verbert, K. (2019). Tell Me What You Know: GDPR Implications on Designing Transparency and Accountability for News Recommender Systems. Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems, 1–6. https://doi.org/10.1145/3290607.3312808
236. Leerssen, P. (2021, September 7). Platform research access in Article 31 of the Digital Services Act – Sword without a shield? Verfassungsblog. https://verfassungsblog.de/power-dsa-dma-14/
237. Llansó, E., Van Hoboken, J., Leerssen, P., & Harambam, J. (2020). Artificial intelligence, content moderation, and freedom of expression.
238. Panoptykon Foundation. (2021, August 2). Can the EU Digital Services Act contest the power of Big Tech’s algorithms? EDRi. https://edri.org/our-work/can-the-eu-digital-services-act-contest-the-power-of-big-techs-algorithms/
239. Pariser, E. (2011). The filter bubble: What the Internet is hiding from you. Penguin Press.
240. Quintais, J., & Schwemer, S. F. (2021). The Interplay between the Digital Services Act and Sector Regulation: How Special is Copyright? SSRN Electronic Journal. https://doi.org/10.2139/ssrn.3841606
241. Resnick, P., & Varian, H. R. (1997). Recommender systems. Communications of the ACM, 40(3), 56–58. https://doi.org/10.1145/245108.245121
242. Riis, T., & Schwemer, S. F. (2019). Leaving the European Safe Harbor, Sailing Towards Algorithmic Content Regulation. Journal of Internet Law, 22(7), 1–21.
243. Schwemer, S. F., Tomada, L., & Pasini, T. (2021). Legal AI Systems in the EU’s proposed Artificial Intelligence Act. Joint Proceedings of the Workshops on Automated Semantic Analysis of Information in Legal Text (ASAIL 2021) and AI and Intelligent Assistance for Legal Professionals in the Digital Workplace (LegalAIIA 2021), 2888, 51–58. http://ceur-ws.org/Vol-2888/
244. Senftleben, M., Margoni, T., Antal, D., Bodó, B., van Gompel, S., Handke, C., Kretschmer, M., Poort, J., Quintais, J., & Schwemer, S. F. (2021). Ensuring the Visibility and Accessibility of European Creative Content on the World Market: The Need for Copyright Data Improvement in the Light of New Technologies. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.3785272
  245. Sethuraman, R. (2019, March 31). Why Am I Seeing This? We Have an Answer for You. Facebook. https://about.fb.com/news/2019/03/why-am-i-seeing-this/ Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-60
  246. Spotify. (2020, November 2). Amplifying Artist Input in Your Personalized Recommendations. https://newsroom.spotify.com/2020-11-02/amplifying-artist-input-in-your-personalized-recommendations/ Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-60
  247. Stasi, M. L. (2019). Social media platforms and content exposure: How to restore users’ control. Competition and Regulation in Network Industries, 20(1), 86–110. https://doi.org/10.1177/1783591719847545 Google Scholar öffnen doi.org/10.5771/2747-5174-2021-2-60
248. Ter Hoeve, M., Heruer, M., Odijk, D., Schuth, A., & de Rijke, M. (2017). Do news consumers want explanations for personalized news rankings? FATREC Workshop on Responsible Recommendation Proceedings.
249. Valcarce, D. (2015). Exploring statistical language models for recommender systems. Proceedings of the 9th ACM Conference on Recommender Systems, 375–378.
250. van Drunen, M. Z., Helberger, N., & Bastian, M. (2019). Know your algorithm: What media organizations need to explain to their users about news personalization. International Data Privacy Law, 9(4), 220–235. https://doi.org/10.1093/idpl/ipz011
251. Veale, M., & Borgesius, F. Z. (2021). Demystifying the Draft EU Artificial Intelligence Act. ArXiv:2107.03721 [Cs]. https://doi.org/10.9785/cri-2021-220402
252. Whittaker, J., Looney, S., Reed, A., & Votta, F. (2021). Recommender systems and the amplification of extremist content. Internet Policy Review, 10(2). https://doi.org/10.14763/2021.2.1565
253. Wu, T. (2016). The attention merchants: From the daily newspaper to social media, how our time and attention is harvested and sold. Atlantic Books.
254. Bartsch, A. (2010). Zeitungs-Sucht, Lesewut und Fernsehfieber. In M. Buck, F. Hartling, & S. Pfau (Eds.), Randgänge der Mediengeschichte (pp. 109–122). VS Verlag für Sozialwissenschaften. https://doi.org/10.1007/978-3-531-91957-7_7
255. Baumgartner, S. E., Weeda, W. D., van der Heijden, L. L., & Huizinga, M. (2014). The Relationship Between Media Multitasking and Executive Function in Early Adolescents. The Journal of Early Adolescence, 34(8), 1120–1144. https://doi.org/10.1177/0272431614523133
256. Bowles, N. (2018, January 12). Is the Answer to Phone Addiction a Worse Phone? The New York Times. https://www.nytimes.com/2018/01/12/technology/grayscale-phone.html
257. Cachelin, J. L. (2015). Offliner: Die Gegenkultur der Digitalisierung (2nd ed.). Stämpfli Verlag.
258. Case, A. (2015). Calm Technology: Principles and Patterns for Non-Intrusive Design. O’Reilly Media, Inc, USA.
259. Digital Detox. (2021). Digital Detox® Official—Experiences & Research. Disconnect to Reconnect. https://www.digitaldetox.com
260. Dockterman, E. (2013, February 12). Candy Crush’s Architects of Addiction. Time. http://content.time.com/time/magazine/article/0,9171,2158151,00.html
261. Eyal, N. (2014). Hooked: How to Build Habit-Forming Products. Portfolio Penguin.
262. Eyal, N. (2019). Indistractable: How to Control Your Attention and Choose Your Life (Illustrated Edition). Benbella Books.
263. Genner, S. (2017). On/Off. Risks and rewards of the anytime-anywhere internet. vdf Hochschulverlag AG an der ETH Zürich. https://vdf.ch/on-off-e-book.html
264. Harris, T. (2020, March 1). EU should regulate Facebook and Google as ‘attention utilities.’ Financial Times. https://www.ft.com/content/abd80d98-595e-11ea-abe5-8e03987b7b20
265. Ho, R. C., Zhang, M. W., Tsang, T. Y., Toh, A. H., Pan, F., Lu, Y., Cheng, C., Yip, P. S., Lam, L. T., Lai, C.-M., Watanabe, H., & Mak, K.-K. (2014). The association between internet addiction and psychiatric co-morbidity: A meta-analysis. BMC Psychiatry, 14(1), 183. https://doi.org/10.1186/1471-244X-14-183
266. Karpf, D. (2019, December 10). On Digital Disinformation and Democratic Myths. MediaWell, Social Science Research Council. https://mediawell.ssrc.org/expert-reflections/on-digital-disinformation-and-democratic-myths/
267. Kuniecki, M., Pilarczyk, J., & Wichary, S. (2015). The color red attracts attention in an emotional context. An ERP study. Frontiers in Human Neuroscience, 9. https://doi.org/10.3389/fnhum.2015.00212
268. Levy, D. M. (2017). Mindful Tech: How to Bring Balance to Our Digital Lives (Reprint Edition). Yale University Press.
269. Meshi, D., Morawetz, C., & Heekeren, H. R. (2013). Nucleus accumbens response to gains in reputation for the self relative to gains for others predicts social media use. Frontiers in Human Neuroscience, 7. https://doi.org/10.3389/fnhum.2013.00439
270. Orlowski, J. (2020). The Social Dilemma [Documentary]. https://www.imdb.com/title/tt11464826/
271. Park, S., Jeon, H. J., Bae, J. N., Seong, S. J., & Hong, J. P. (2017). Prevalence and Psychiatric Comorbidities of Internet Addiction in a Nationwide Sample of Korean Adults. Psychiatry Investigation, 14(6), 879–882. https://doi.org/10.4306/pi.2017.14.6.879
272. Rus, H. M., & Tiemensma, J. (2017). Social Media under the Skin: Facebook Use after Acute Stress Impairs Cortisol Recovery. Frontiers in Psychology, 8. https://doi.org/10.3389/fpsyg.2017.01609
273. Rushkoff, D. (2013). Present Shock: When Everything Happens Now. Current.
274. Satariano, A. (2021, June 4). Facebook Faces Two Antitrust Inquiries in Europe. The New York Times. https://www.nytimes.com/2021/06/04/business/facebook-eu-uk-antitrust.html
275. Schüll, N. D. (2014). Addiction by Design: Machine Gambling in Las Vegas (New in Paper). Princeton University Press.
276. SINUS-Institut, & DIVSI. (2018). DIVSI U25-Studie – Euphorie war gestern. Eine Grundlagenstudie des SINUS-Instituts Heidelberg im Auftrag des Deutschen Instituts für Vertrauen und Sicherheit im Internet. 116.
277. Sweney, M., & Davidson, H. (2021, August 3). China’s Tencent tightens games controls for children after state media attack. The Guardian. http://www.theguardian.com/business/2021/aug/03/chinas-tencent-tightens-controls-for-children-amid-games-addiction-fears
278. Tufekci, Z. (2017). We’re building a dystopia just to make people click on ads. https://www.ted.com/talks/zeynep_tufekci_we_re_building_a_dystopia_just_to_make_people_click_on_ads
279. Turkle, S. (2008). Always-On/Always-On-You: The Tethered Self. The MIT Press. http://mitpress.universitypressscholarship.com/view/10.7551/mitpress/9780262113120.001.0001/upso-9780262113120-chapter-10
280. Waller, G., & Süss, D. (2012). Handygebrauch bei Jugendlichen: Grenzen zwischen engagierter Nutzung und Verhaltenssucht. https://doi.org/10.21256/zhaw-4317
281. Weiser, M., & Brown, J. S. (1995). Designing Calm Technology. Xerox PARC, 5.
282. Westcott, B. (2020, July 8). Children in China locked up for as long as 10 days at internet addiction camp. CNN. https://www.cnn.com/2020/07/08/asia/china-court-abuse-internet-addiction-intl-hnk/index.html
283. Wu, T. (2016). The Attention Merchants: The Epic Scramble to Get Inside Our Heads. Knopf.
284. Andersen, S. M., Moskowitz, G. B., Blair, I. V., & Nosek, B. A. (2007): Automatic thought, in: Higgins, E. T. & Kruglanski, A. W. (Eds.): Social psychology: Handbook of basic principles, pp. 138–175, 2nd ed., New York.
285. Barassi, V. (2020): The Human Error in AI and question about Children’s Rights. Online: http://childdatacitizen.com/cdc/wp-content/uploads/2020/06/The-Human-Error-in-AI-and-Children-Rights_Prof.-Barassi_Response-to-AI-White-Paper-.pdf
286. Beckert, J. & Bronk, R. (Eds.) (2018): Uncertain Futures: Imaginaries, Narratives, and Calculation in the Economy, Oxford University Press.
287. Bonnefon, J.-F., Shariff, A. & Rahwan, I. (2016): The social dilemma of autonomous vehicles, in: Science, 352(6293), pp. 1573–1576.
288. Dastin, J. (2018): Amazon scraps secret AI recruiting tool that showed bias against women [Press release]. Online: https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G
289. Elish, M. C. (2019): Moral Crumple Zones: Cautionary Tales in Human-Robot Interaction, in: Engaging Science, Technology, and Society, 5, pp. 40–60.
290. Enarsson, T., Enqvist, L. & Naarttijärvi, M. (2021): Approaching the human in the loop – legal perspectives on hybrid human/algorithmic decision-making in three contexts, in: Information & Communications Technology Law. https://doi.org/10.1080/13600834.2021.1958860
291. Etzioni, A. & Etzioni, O. (2016): AI assisted ethics, in: Ethics Inf Technol, 18, pp. 149–156.
292. Kahneman, D. (2011): Thinking, Fast and Slow. London: Penguin.
293. Martin, D. (2017): Who Should Decide How Machines Make Morally Laden Decisions?, in: Sci Eng Ethics, 23, pp. 951–967.
294. Martinez, E. & Kirchner, L. (2021, August 25). The Secret Bias Hidden in Mortgage-Approval Algorithms. The Markup. Retrieved from https://themarkup.org/denied/2021/08/25/the-secret-bias-hidden-in-mortgage-approval-algorithms
295. Mau, S. (2017): Das metrische Wir. Über die Quantifizierung des Sozialen, Berlin: Suhrkamp.
296. Ortmann, G. (2009): Management in der Hypermoderne: Kontingenz und Entscheidung, Wiesbaden.
297. Rogers, R. (2016): Digital Methods, Cambridge.
298. Stalder, F. (2019): Kultur der Digitalität, 4th Edition, Berlin.
299. Skitka, L. J., Mosier, K. & Burdick, M. D. (2000): Accountability and automation bias, in: International Journal of Human-Computer Studies, 52(4), pp. 701–717.
300. Turing, A. M. (1950): Computing machinery and intelligence, in: Mind, 59(236), pp. 433–460.
301. Weizenbaum, J. (2001): Computermacht und Gesellschaft, Frankfurt am Main.
302. Wittgenstein, L. (1984d): Philosophische Untersuchungen, in: Werkausgabe, Band 1, Frankfurt am Main, pp. 225–580.
303. Wood, W., & Neal, D. T. (2007). A new look at habits and the habit–goal interface, in: Psychological Review, 114, pp. 843–863.
304. Art + Com Studios (n.d.). Terravision, 1994. Retrieved August 27, 2021 from https://artcom.de/en/?project=terravision
305. Burnam-Fink, M. (2015). Creating narrative scenarios: Science fiction prototyping at emerge. Futures, 70 (19 December 2014). https://doi.org/10.1016/j.futures.2014.12.005
306. Burton, E., Goldsmith, J., & Mattei, N. (2018). How to teach computer ethics through science fiction. Communications of the ACM, 61(8), 54–64.
307. Catts, O. & Zurr, I. (2004–2005). Ingestion / Disembodied Cuisine: Towards victimless meat. Cabinet Magazine. Retrieved August 27, 2021 from https://www.cabinetmagazine.org/issues/16/catts_zurr.php
308. Dunne, A. & Raby, F. (n.d.). Critical Design FAQ. Retrieved October 8, 2021 from http://dunneandraby.co.uk/content/bydandr/13/0
309. Escobar, A. (2018). Design for the Pluriverse: Radical Interdependence, Autonomy, and the Making of Worlds. London: Duke University Press.
310. Frauenberger, C. (2020). Entanglement HCI the next wave? ACM Transactions on Computer-Human Interaction, 27(1), 1–27.
311. Haraway, D. (1991). Simians, cyborgs and women: The reinvention of nature. New York: Routledge.
312. Hertz, G. D. (2015). Conversations in Critical Making. CTheory Books. Retrieved October 8, 2021 from https://www.researchgate.net/publication/320344201_Conversations_in_Critical_Making
313. Hertz, G. D. (n.d.). What is Critical Making? Current. Retrieved October 8, 2021 from https://current.ecuad.ca/what-is-critical-making
314. Kohno, T. & Johnson, B. D. (2011). Science fiction prototyping and security education: Cultivating contextual and societal thinking in computer security education and beyond. SIGCSE ’11. Retrieved August 30, 2021 from https://homes.cs.washington.edu/~yoshi/papers/SIGCSE/csefp118-kohno.pdf
315. Kong, B., Liang, R.-H., Liu, M., Chang, S. H., Tseng, H.-C. & Ju, C.-H. (2021). Neuromancer workshop: Towards designing experiential entanglement with science fiction. Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, 26, 1–17. https://doi.org/10.1145/3411764.3445273
316. Kvale, S. & Brinkmann, S. (2015). Interview: Det kvalitative forskningsinterview som håndværk. Hans Reitzels Forlag.
318. Le Guin, U. K. (2004). A Rant About "Technology". Retrieved August 25, 2021 from http://www.ursulakleguinarchive.com/Note-Technology.html
320. Le Guin, U. K. (1985). Sur. In S. M. Gilbert & S. Gubar (Eds.), The Norton Anthology of Literature by Women (p. 2008). New York: W.W. Norton and Company.
321. Linehan, C., Kirman, B. J., Reeves, S., Blythe, M. A., Tanenbaum, T. J., Desjardins, A. & Wakkary, A. (2014). Alternate endings: Using fiction to explore design futures. CHI ’14 Extended Abstracts on Human Factors in Computing Systems, 45–48. CHI EA ’14. New York, NY, USA: Association for Computing Machinery. https://doi.org/10.1145/2559206.2560472
322. Okorafor, N. (2017, November). Sci-fi stories that imagine a future Africa [Video]. TED Conferences. Retrieved October 9, 2021 from https://www.ted.com/talks/nnedi_okorafor_sci_fi_stories_that_imagine_a_future_africa?language=en
323. Mubin, O., Obaid, M., Jordan, P., Alves-Oliveira, P., Eriksson, T., Barendregt, W., Sjolle, D., Fjeld, M., Simoff, S. & Billinghurst, M. (2016). Towards an agenda for sci-fi inspired HCI research. Proceedings of the 13th International Conference on Advances in Computer Entertainment, 10, 1–6.
324. Ratto, M. (2011). Critical making: Conceptual and material studies in technology and social life. The Information Society, 27(4), 252–260.
325. Ratto, M., & Hockema, S. (2009). FLWR PWR: Tending the walled garden. In Walled garden (pp. 51–60). Retrieved August 30, 2021 from https://criticalmaking.com/wp-content/uploads/2009/10/2448_alledgarden_ch06_ratto_hockema.pdf
326. SDGC (2019). Matt Ratto: Critical Making as an Antidote to Design Thinking. Retrieved October 8, 2021 from https://www.youtube.com/watch?v=jeBWi_n1Ppg
327. Shklovski, I. & Grönvall, E. (2020). CreepyLeaks: Participatory speculation through demos. In Proceedings of the 11th Nordic Conference on Human-Computer Interaction, 1–12. Tallinn, Estonia: ACM. https://doi.org/10.1145/3419249.3420168
328. Zaidi, L. (2019). Worldbuilding in science fiction, foresight and design. Journal of Future Studies, 23(4), 15–25. Retrieved August 28, 2021 from https://jfsdigital.org/articles-and-essays/vol-23-no-4-june-2019/worldbuilding-in-science-fiction-foresight-and-design/
