Slave to the algorithm? Why a 'right to an explanation' is probably not the remedy you are looking for

Edwards, Lilian and Veale, Michael (2017) Slave to the algorithm? Why a 'right to an explanation' is probably not the remedy you are looking for. Duke Law & Technology Review, 16(1), pp. 1-65. ISSN 2328-9600 (https://dltr.law.duke.edu/2017/12/04/slave-to-the-...)

Final Published Version (PDF, 930kB). License: Creative Commons Attribution 4.0.

Abstract

Algorithms, particularly of the machine learning (ML) variety, are increasingly important to individuals' lives, but have caused a range of concerns revolving mainly around unfairness, discrimination and opacity. Transparency in the form of a "right to an explanation" has emerged as a compellingly attractive remedy since it intuitively presents as a means to "open the black box", allowing individual challenge and redress, as well as potentially instilling public accountability in ML systems. In the general furore over algorithmic bias and other issues laid out in section 2, any remedy in a storm has looked attractive. However, we argue that a right to an explanation in the GDPR is unlikely to be a complete remedy to algorithmic harms, particularly in some of the core "algorithmic war stories" that have shaped recent attitudes in this domain. We present several reasons for this conclusion. First (section 3), the law restricts when any explanation-related right can be triggered, and in many places is unclear or even seemingly paradoxical. Second (section 4), even were some of these restrictions to be navigated, the way that explanations are conceived of legally, as "meaningful information about the logic of processing", is unlikely to be provided by the kind of ML "explanations" computer scientists have been developing. ML explanations are constrained by the type of explanation sought, the multi-dimensionality of the domain, and the type of user seeking an explanation. However, "subject-centric" explanations (SCEs), which restrict explanations to particular regions of a model around a query, show promise for interactive exploration, as do pedagogical rather than decompositional explanations, which sidestep developers' worries about disclosing IP or trade secrets. As an interim conclusion, then, while convinced that recent research in ML explanations shows promise, we fear that the search for a "right to an explanation" in the GDPR may at best be distracting, and at worst nurture a new kind of "transparency fallacy". However, in our final sections, we argue that other parts of the GDPR, related (i) to other individual rights, including the right to erasure ("right to be forgotten") and the right to data portability, and (ii) to privacy by design, Data Protection Impact Assessments, and certification and privacy seals, may contain the seeds we can use to build a more responsible, explicable and user-friendly algorithmic society.
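
To make the "subject-centric" and "pedagogical" ideas concrete, here is a minimal sketch (our own illustration, not the paper's method): it probes a stand-in black-box classifier only in the neighbourhood of a single query point and fits a simple distance-weighted linear surrogate to the model's input-output behaviour there, so the resulting "explanation" is local to one data subject and never decomposes the model's internals. The model choice, perturbation scale and other parameters are assumptions made for demonstration.

```python
# A sketch of a subject-centric, pedagogical explanation: a local
# linear surrogate fitted around one query point. Illustrative only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Stand-in "black box": any trained classifier exposing predict_proba.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

def subject_centric_explanation(x_query, n_samples=2000, scale=0.3):
    """Fit a local linear surrogate around a single query point.

    Returns per-feature weights valid only near x_query; they are a
    pedagogical model-of-the-model, not the model's true global logic.
    """
    # Perturb the query point to sample the model's local region.
    neighbourhood = x_query + rng.normal(0.0, scale,
                                         size=(n_samples, x_query.size))
    # Query the black box for its outputs on the perturbed inputs.
    preds = black_box.predict_proba(neighbourhood)[:, 1]
    # Weight samples by proximity so the surrogate stays subject-centric.
    dists = np.linalg.norm(neighbourhood - x_query, axis=1)
    weights = np.exp(-dists ** 2 / scale)
    surrogate = Ridge(alpha=1.0).fit(neighbourhood, preds,
                                     sample_weight=weights)
    return surrogate.coef_  # local feature influences around x_query

print(subject_centric_explanation(X[0]))
```

Because the surrogate is learned purely from queries to the model (the pedagogical approach), it requires no access to the model's internal structure, which is one reason such explanations may avoid developers' IP and trade-secret objections.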