How the EU can achieve legally trustworthy AI: a response to the European Commission's proposal for an Artificial Intelligence Act

Smuha, Nathalie A. and Ahmed-Rengers, Emma and Harkens, Adam and Li, Wenlong and MacLaren, James and Piselli, Riccardo and Yeung, Karen (2021) How the EU can achieve legally trustworthy AI: a response to the European Commission's proposal for an Artificial Intelligence Act. SSRN, Rochester, NY.

Text: Smuha_etal_SSRN_2021_How_the_EU_can_achieve_legally_trustworthy_AI.pdf (Final Published Version)
License: Creative Commons Attribution-NonCommercial-NoDerivatives 4.0

Abstract

This document contains the response to the European Commission's Proposal for an Artificial Intelligence Act from members of the Legal, Ethical & Accountable Digital Society (LEADS) Lab at the University of Birmingham. The Proposal seeks to give expression to the concept of 'Lawful AI.' That concept was mentioned, but not developed, in the Ethics Guidelines for Trustworthy AI (2019) of the Commission's High-Level Expert Group on AI, which instead confined their discussion to the concepts of 'Ethical' and 'Robust' AI. After a brief introduction (Chapter 1), we set out the many aspects of the Proposal that we welcome, and stress our wholehearted support for its aim to protect fundamental rights (Chapter 2). We then develop the concept of 'Legally Trustworthy AI,' arguing that it should be grounded in respect for the three pillars on which contemporary liberal democratic societies are founded, namely: fundamental rights, the rule of law, and democracy (Chapter 3). Drawing on this conceptual framework, we first argue that the Proposal fails to treat fundamental rights as claims with enhanced moral and legal status, a status that subjects any interference with those rights to a demanding regime of scrutiny, including tests of necessity and proportionality. Moreover, the Proposal does not always accurately identify the wrongs and harms associated with different kinds of AI systems, nor does it appropriately allocate responsibility for them. Second, the Proposal does not provide an effective framework for the enforcement of legal rights and duties, and it fails to ensure the legal certainty and consistency essential to the rule of law. Third, the Proposal neglects to ensure meaningful transparency, accountability, and rights of public participation, thereby failing to provide adequate protection for democracy (Chapter 4). Based on these shortcomings in respecting and promoting the three pillars of Legally Trustworthy AI, we provide detailed recommendations for the Proposal's revision (Chapter 5).