
Criminal Liability in Offenses Involving Autonomous Systems Driven by Artificial Intelligence
- Series: Robotik, Künstliche Intelligenz und Recht, Volume 38
- Publisher: Nomos, Baden-Baden
- Publication date: 23.10.2025
Summary
The outputs of AI-driven robots are inherently unpredictable and, owing to the opacity of the underlying systems, difficult to explain. This complicates the attribution of criminal liability, which rests on individual culpability. In particular, the scope of negligence liability and the duty of care owed by developers, producers, users, and others remain contested issues in legal discourse. This study addresses these questions through the lens of German criminal law dogmatics and offers concrete proposals. It also examines individual responsibility in product liability cases involving multiple actors along the production chain. The author has long conducted research at the intersection of technology and criminal law and has published widely in the field. The book will also be available in open access upon publication.
Bibliographic data
- Copyright year: 2025
- Publication date: 23.10.2025
- ISBN-Print: 978-3-7560-3487-1
- ISBN-Online: 978-3-7489-6518-3
- Publisher: Nomos, Baden-Baden
- Series: Robotik, Künstliche Intelligenz und Recht
- Volume: 38
- Language: English
- Pages: 490
- Product type: Book Titles
Table of contents
- Preface
- List of Abbreviations
- Introduction, pp. 23-28
- A. Legal Challenges
- B. AI-Driven Autonomous Systems in Daily Life: A New Normal
- 1. Automation - Autonomy
- 2. The Turing Test
- 3. Bot - Robot
- 4. Artificial Intelligence
- 5. Machine Learning
- D. Addressing Liability: Key Actors and Entities
- a. Origins of the Term ‘Autonomy’
- b. The Intellectual Background to the Concept of ‘Autonomy’
- c. Automation vs. Autonomy
- d. Emergence Instead of Autonomy
- e. Autonomy and the Transformation of Human Control
- f. Lack of Predictability in AI-Driven Autonomous Systems
- 2. Ex Post: Opacity and Explainability in AI Systems
- A. Types of Criminal Offences Likely to Emerge
- 1. Various Classifications in Literature
- 2. Intentional Use of Autonomous Systems to Commit a Crime
- 3. Crimes Against Autonomous Systems
- 4. Crimes Caused by Autonomous Systems
- C. Prominent Cases Highlighting AI-Related Liability
- A. Bridging Contested Liability Gaps in Criminal Law
- 1. Fundamentals
- (1) The Origins
- (2) Anthropomorphising Robots
- (3) Pragmatical Necessities
- (4) Defining the Nature and Scope of Legal Personhood for Robots
- (5) The Impact of Robotic Liability on the Responsibility of the Person Behind the Machine
- b. Contra Arguments in Legal Literature Against AI-Personhood
- c. Synthesis and Evaluation
- a. General Insights
- b. Assessment Based on Theories of Action
- c. Re-interpretation of the Concept “Action”
- a. Fault-Based Torts Liability
- (1) Respondeat Superior
- (2) Exploring Existing Frameworks: Slavery, Animal Ownership, Employees and Associates
- (3) Applying Vicarious Liability in Criminal Law
- (1) Strict Liability Over Fault-Based Liability
- (2) Does Strict Liability Incentivise Harm Mitigation Initiatives?
- (3) Defining the Scope of the Strict Liability Regime
- (4) The EU AI Liability Directive (AILD) and Strict Liability Regime within the EU
- (5) Compatibility of Strict Liability with Criminal Law Principles
- (1) Introducing Product Liability for AI-Driven Autonomous Systems
- (2) Responsibility Shifting to Manufacturers
- (3) The Essence of Product Liability
- (4) Manufacturer’s Duties
- (5) Specific Challenges for AI-Driven Systems in Product Liability
- (a) The Rationale Behind Criminal Product Liability
- (b) General Duties of Manufacturers in the Context of Criminal Product Liability
- (c) Key Judicial Decisions Shaping Criminal Product Liability
- (d) Unique Challenges of AI Products and Criminal Product Liability
- a. Pro Arguments for Indirect Perpetration in AI-Driven Autonomous Systems
- b. Theoretical Basis of Indirect Perpetration
- c. Assessment
- 3. The Natural Probable Consequence Liability Model
- 1. General Challenges with the Causal Nexus for Autonomous Systems
- a. Assessment Based on Causality Theories
- b. Distinctive Challenges with Causality
- B. Intentional Liability
- 1. The Rationale Behind the Concept of Negligence in Criminal Liability
- 2. Advancing Technologies and Negligence
- a. Fundamentals
- b. The Legal Basis of Duty of Care
- c. Under Which Perspective Should the Standard of Care Be Established?
- d. Negligent Undertaking
- e. Insights from Turkish Law on Negligence and the Scope of the Duty of Care
- (1) Recognising the Unforeseeable
- (2) Learning from Mistakes and Hindsight Bias
- (3) Objective Foreseeability, Typical Risks and Laplace’s Demon
- (1) The Anatomy of Failures in AI-Driven Systems
- (2) Challenges in Defining Standards of Conduct for Emerging Technologies
- (a) Defining the General Duty of Care
- (b) The Duty of Care Stemming from Increasing Risks
- (c) Obligations Arising from System Failures
- (d) Duty to Ensure Robust System Design
- (e) The Protective Purpose of the Norm
- (4) The Evolution of Duty of Care Through New Techniques
- c. Human in the Loop
- d. Control Dilemma
- (1) The Concept of “Permissible Risk”
- (2) Debates on the Legal Nature of Permissible Risk
- (a) Underlying Premise: Risks are Inevitable
- (b) Mitigating Risks to Permissible Thresholds
- (c) The Impact of Permissible Risk on Negligent Liability
- (d) Does Permissible Risk Cover Atypical Risks of AI?
- i. The Concept of Risk
- ii. The Balance Between Risks and Societal Benefits
- iii. Calibrating the Duty of Care Through Risk Levels and Public Tolerance
- (b) The Relationship Between Social Adequacy and Permissible Risk
- (c) Society’s Willingness to Tolerate Risks
- (a) Balancing Risks and Benefits
- (b) Societal Gains of AI-Driven Autonomous Systems
- (c) Potential Threats Posed by AI-Driven Autonomous Systems
- (a) Substituting Existing Risks
- (b) Risk Enhancement through Task Delegation to AI-Driven Autonomous Systems: A Legal Analysis
- (c) Does the Non-Use of AI-Driven Autonomous Systems Breach the Duty of Care?
- (d) Delegating Tasks to AI-Driven Autonomous Systems: An Alternative Approach for Liability
- (1) Concretising Legal Expectations
- (2) Positive Law’s Reference to the State of the Science and Technology
- (3) The Effectiveness of Norms Established by Private Entities on the Duty of Care
- (4) Compliance with Norms: An Indicator of Fulfilling the Duty of Care
- (5) The EU AI Regulation (AI Act) and the Imposed Duty of Care
- 1. The Concept of “the Problem of Many Hands”
- a. The Concept
- (1) Liability Challenges in the Production Chain of AI-Driven Autonomous Systems
- (2) Other Instances of the “Problem of Many Hands” in Relation to AI-Driven Autonomous Systems
- (1) Should Humans Rely on Machines?
- (2) Should Autonomous Systems Rely on Humans?
- (3) Should AI-Driven Autonomous Systems Rely on Each Other?
- 1. Exploring the Origins of Moral Dilemmas
- a. How Does it Emerge?
- (1) Comparison of Values
- (2) Assessment of the Utilitarian Approach to Dilemmas
- (3) Proximity of Danger, Impact of Predictable Decisions and Random Generator
- (1) Necessity as Justification (StGB Section 34)
- (2) Necessity as Exculpation (StGB Section 35)
- (3) Supra-Legal Excusable Necessity
- (4) Conflict of Obligations
- b. Analysis under Turkish Law
- 4. Evaluation: An Alternative Approach
- A. Placing Dangerous Products on the Market as an Endangering Offence
- B. Certain Jurisdictions Concretising Criminal (Non-)Liability For AI-Driven Autonomous Systems
- Conclusion and Extended Summary, pp. 415-442
- Summary, pp. 443-446
- Zusammenfassung (Summary in German), pp. 447-452
- Bibliography, pp. 453-490