Can there be responsible AI without AI liability? Incentivizing generative AI safety through ex-post tort liability under the EU AI liability directive
Noto La Diega, Guido and Bezerra, Leonardo C.T. (2024) Can there be responsible AI without AI liability? Incentivizing generative AI safety through ex-post tort liability under the EU AI liability directive. International Journal of Law and Information Technology, 32 (1). eaae021. ISSN 0967-0769 (https://doi.org/10.1093/ijlit/eaae021)
Abstract
In Europe, the governance discourse surrounding artificial intelligence (AI) has been predominantly centred on the AI Act, with a proliferation of books, certification courses, and discussions emerging even before its adoption. This narrow focus has overshadowed other crucial regulatory interventions that promise to fundamentally shape AI. This article highlights the proposed EU AI liability directive (AILD), the first attempt to harmonize general tort law in response to AI-related threats, addressing critical issues such as evidence discovery and causal links. As AI risks proliferate, this article argues for the necessity of a responsive system to adequately address AI harms as they arise. AI safety and responsible AI, central themes in current regulatory discussions, must be prioritized, with ex-post liability in tort playing a crucial role in achieving these objectives. This is particularly pertinent as AI systems become more autonomous and unpredictable, rendering the ex-ante risk assessments mandated by the AI Act insufficient. The AILD's focus on fault and its limited scope are also inadequate. The proposed easing of the burden of proof for victims of AI, through enhanced discovery rules and presumptions of causal links, is insufficient in a context where large language models exhibit unpredictable behaviours and humans increasingly rely on autonomous agents for complex tasks. Moreover, the AILD's reliance on the concept of risk, inherited from the AI Act, is misplaced, as tort liability intervenes after the risk has materialized. However, the inherent risks in AI systems could justify EU harmonization of AI torts in the direction of strict liability. Bridging the liability gap will enhance AI safety and responsibility, better protect individuals from AI harms, and ensure that tort law remains a vital regulatory tool.
ORCID iDs
Noto La Diega, Guido ORCID: https://orcid.org/0000-0001-6918-5398 and Bezerra, Leonardo C.T.
Item type: Article
ID code: 90201
Dates: 14 September 2024 (Published); 14 September 2024 (Published Online); 11 August 2024 (Accepted)
Subjects: Science > Mathematics > Electronic computers. Computer science
Science > Mathematics > Electronic computers. Computer science > Other topics, A-Z > Human-computer interaction
Law > Law (General)
Department: Faculty of Humanities and Social Sciences (HaSS) > Strathclyde Law School > Law
Depositing user: Pure Administrator
Date deposited: 12 Aug 2024 11:37
Last modified: 04 Dec 2024 01:30
URI: https://strathprints.strath.ac.uk/id/eprint/90201