Right to explanation

In the regulation of algorithms, particularly artificial intelligence and its subfield of machine learning, a right to explanation (or right to an explanation) is a right to be given an explanation for an output of the algorithm. Such rights primarily refer to individual rights to be given an explanation for decisions that significantly affect an individual, particularly legally or financially. For example, a person who applies for a loan and is denied may ask for an explanation, which could be "Credit bureau X reports that you declared bankruptcy last year; this is the main factor in considering you too likely to default, and thus we will not give you the loan you applied for."

Some such legal rights already exist, while the scope of a general "right to explanation" is a matter of ongoing debate. It has been argued that a "social right to explanation" is a crucial foundation for an information society, particularly as its institutions will increasingly rely on digital technologies, artificial intelligence, and machine learning. [1] In other words, automated decision-making systems built with explainability would be more trustworthy and transparent. Without this right, which could be constituted both legally and through professional standards, the public would be left with little recourse to challenge the decisions of automated systems. One emerging problem is how to communicate an explanation to a user (through text, a high-level visual diagram, video, or some other medium) and how an explainable system can scope its explanation in a reasonable way.

Examples

Credit scoring in the United States

Under the Equal Credit Opportunity Act (Regulation B of the Code of Federal Regulations, Title 12, Chapter X, Part 1002, §1002.9), creditors are required to provide applicants who are denied credit with the specific reasons for the denial. As detailed in §1002.9(b)(2): [2]

(2) Statement of specific reasons. The statement of reasons for adverse action required by paragraph (a)(2)(i) of this section must be specific and indicate the principal reason(s) for the adverse action. Statements that the adverse action was based on the creditor's internal standards or policies or that the applicant, joint applicant, or similar party failed to achieve a qualifying score on the creditor's credit scoring system are insufficient.

The official interpretation of this section details what types of statements are acceptable. Creditors comply with this regulation by providing a list of reasons (generally at most four, per the official interpretation), each consisting of a numeric reason code (as an identifier) and an associated explanation identifying the main factors affecting the credit score. [3] An example might be: [4]

32: Balances on bankcard or revolving accounts too high compared to credit limits
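
For illustration only, the following sketch shows how such an adverse-action notice might be assembled from reason codes. The codes, wording, and per-factor score impacts below are hypothetical and do not reproduce any actual scoring model.

    # Illustrative sketch of adverse-action reason selection. The reason
    # codes, their wording, and the score impacts are hypothetical.
    REASON_CODES = {
        32: "Balances on bankcard or revolving accounts too high compared to credit limits",
        10: "Proportion of balances to credit limits is too high on revolving accounts",
        18: "Number of accounts with delinquency",
    }

    def adverse_action_reasons(factor_impacts, max_reasons=4):
        """Return up to max_reasons (code, text) pairs, ordered by how
        strongly each factor lowered the applicant's score."""
        ranked = sorted(factor_impacts.items(), key=lambda kv: kv[1])
        return [(code, REASON_CODES[code])
                for code, impact in ranked[:max_reasons] if impact < 0]

    # Hypothetical per-factor score impacts for one applicant.
    print(adverse_action_reasons({32: -45.0, 10: -20.0, 18: -5.0}))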

European Union

The European Union General Data Protection Regulation (enacted 2016, taking effect 2018) extends the automated decision-making rights in the 1995 Data Protection Directive to provide a legally disputed form of a right to an explanation, stated as such in Recital 71: "[the data subject should have] the right ... to obtain an explanation of the decision reached". In full:

The data subject should have the right not to be subject to a decision, which may include a measure, evaluating personal aspects relating to him or her which is based solely on automated processing and which produces legal effects concerning him or her or similarly significantly affects him or her, such as automatic refusal of an online credit application or e-recruiting practices without any human intervention.

...

In any case, such processing should be subject to suitable safeguards, which should include specific information to the data subject and the right to obtain human intervention, to express his or her point of view, to obtain an explanation of the decision reached after such assessment and to challenge the decision.

However, the extent to which the regulations themselves provide a "right to explanation" is heavily debated. [5] [6] [7] There are two main strands of criticism. There are significant legal issues with the right as found in Article 22 — as recitals are not binding, and the right to an explanation is not mentioned in the binding articles of the text, having been removed during the legislative process. [6] In addition, there are significant restrictions on the types of automated decisions that are covered — which must be both "solely" based on automated processing, and have legal or similarly significant effects — which significantly limits the range of automated systems and decisions to which the right would apply. [6] In particular, the right is unlikely to apply in many of the cases of algorithmic controversy that have been picked up in the media. [8]

A second potential source of such a right has been pointed to in Article 15, the "right of access by the data subject". This restates a similar provision from the 1995 Data Protection Directive, allowing the data subject access to "meaningful information about the logic involved" in the same significant, solely automated decision-making, found in Article 22. Yet this too suffers from alleged challenges that relate to the timing of when this right can be drawn upon, as well as practical challenges that mean it may not be binding in many cases of public concern. [6]

France

In France, the 2016 Loi pour une République numérique (Digital Republic Act or loi numérique) amends the country's administrative code to introduce a new provision for the explanation of decisions made by public sector bodies about individuals. [9] Where there is "a decision taken on the basis of an algorithmic treatment", the rules that define that treatment and its "principal characteristics" must be communicated to the citizen upon request, unless an exclusion applies (e.g. for national security or defence). These should include the following:

  1. the degree and the mode of contribution of the algorithmic processing to the decision-making;
  2. the data processed and its source;
  3. the treatment parameters, and where appropriate, their weighting, applied to the situation of the person concerned;
  4. the operations carried out by the treatment.
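
The four required disclosures map naturally onto a simple record. The sketch below is purely illustrative; the field names are hypothetical and are not prescribed by the law.

    from dataclasses import dataclass, field

    @dataclass
    class AlgorithmicDecisionDisclosure:
        """Illustrative record of the four disclosures required by the
        loi pour une République numérique (field names hypothetical)."""
        contribution: str                  # 1. degree and mode of the algorithm's contribution
        data_and_sources: dict             # 2. the data processed and its source
        parameters_and_weights: dict       # 3. parameters (and weights) applied to this person
        operations: list = field(default_factory=list)  # 4. operations carried out

    disclosure = AlgorithmicDecisionDisclosure(
        contribution="decision support; final decision taken by a case officer",
        data_and_sources={"declared income": "tax authority", "household size": "applicant"},
        parameters_and_weights={"income": 0.6, "household size": 0.4},
        operations=["normalise inputs", "weighted sum", "threshold comparison"],
    )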

Scholars have noted that this right, while limited to administrative decisions, goes beyond the GDPR right: it explicitly applies to decision support rather than only to decisions "solely" based on automated processing, and it provides a framework for explaining specific decisions. [9] Indeed, the GDPR automated decision-making rights in the European Union, one of the jurisdictions in which a "right to an explanation" has been sought, find their origins in French law of the late 1970s. [10]

South Korea

South Korea is known to have the strictest data privacy regulations among Asian countries. Penalties for data protection breaches include punitive damages, forfeiture of profits, and personal liability for a company's senior executives. The country also has strict laws on the cross-border sharing of data; a prohibited data transfer can lead to a fine of up to 3 percent of revenue. In 2016, Seoul rejected a request by Google to use mapping data, citing security concerns. [11]

China

In 2017, China partially implemented a new cybersecurity law that requires personal information and other important data to be stored locally within the country. The international business community is concerned about the vague definition of the types of data that must be kept on servers within China. Data privacy has also become a serious concern for China's increasingly digital-savvy consumers: in January 2018, Ant Financial, Alibaba's financial arm, came under fire after automatically enrolling users in a credit scoring affiliate. [11]

Criticism

Some argue that a "right to explanation" is at best unnecessary, at worst harmful, and threatens to stifle innovation. Specific criticisms include: favoring human decisions over machine decisions, being redundant with existing laws, and focusing on process over outcome. [12]

In "Slave to the Algorithm? Why a 'Right to an Explanation' Is Probably Not the Remedy You Are Looking For", Lilian Edwards and Michael Veale argue that a right to explanation is not the solution to the harms that algorithmic decisions cause to stakeholders. They also state that the right to explanation in the GDPR is narrowly defined and is not compatible with how modern machine learning technologies are being developed. Given these limitations, defining transparency within the context of algorithmic accountability remains a problem. For example, providing the source code of algorithms may not be sufficient, and may create other problems in terms of privacy disclosures and the gaming of technical systems. To mitigate this issue, Edwards and Veale argue that an auditing system could be more effective, allowing auditors to look at the inputs and outputs of a decision process from an external shell; in other words, "explaining black boxes without opening them." [8]
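
One way to picture "explaining black boxes without opening them" is as input/output auditing. The sketch below, using a hypothetical stand-in decision function and made-up attribute names, probes a closed model by varying a single input while holding the rest fixed; only inputs and outputs are ever observed.

    import random

    def audit_black_box(decide, applicant, attribute, values, trials=1000):
        """Probe a black-box decision function from the outside: hold all
        fields fixed except `attribute` and compare approval rates across
        its values, without inspecting the model's internals."""
        rates = {}
        for v in values:
            probe = dict(applicant, **{attribute: v})
            rates[v] = sum(decide(probe) for _ in range(trials)) / trials
        return rates

    # Hypothetical noisy scorer standing in for a deployed system.
    def decide(app):
        score = 0.5 * (app["income"] / 100_000) - 0.3 * app["postcode_risk"]
        return score + random.gauss(0, 0.05) > 0.2

    applicant = {"income": 60_000, "postcode_risk": 0.4}
    print(audit_black_box(decide, applicant, "postcode_risk", [0.1, 0.4, 0.8]))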

Similarly, Oxford scholars Bryce Goodman and Seth Flaxman assert that the GDPR creates a "right to explanation", but do not elaborate much beyond that point, noting the limitations of the current GDPR. Responding to this debate, scholars Andrew D. Selbst and Julia Powles argue that, rather than focusing on whether or not one uses the phrase "right to explanation", more attention must be paid to the GDPR's express requirements and how they relate to its background goals, and more thought must be given to determining what the legislative text actually means. [13]

More fundamentally, many algorithms used in machine learning are not easily explainable. For example, the output of a deep neural network depends on many layers of computations, connected in a complex way, and no one input or computation may be a dominant factor. The field of Explainable AI seeks to provide better explanations from existing algorithms, and algorithms that are more easily explainable, but it is a young and active field. [14] [15]
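
As a concrete illustration of why no single input may dominate, and of the kind of attribution explainable-AI methods aim to produce, the following sketch nudges each input of a small untrained network and records how the output moves. It is a minimal finite-difference sensitivity sketch, not any specific published method.

    import numpy as np

    rng = np.random.default_rng(0)

    # Tiny two-layer network standing in for an opaque model: every
    # output depends on all inputs through intermediate computations.
    W1, W2 = rng.normal(size=(4, 8)), rng.normal(size=(8, 1))
    def model(x):
        return (np.tanh(x @ W1) @ W2).item()

    def attribution(f, x, eps=1e-4):
        """Per-feature sensitivity: how much the output changes when each
        input is nudged by eps (a crude local explanation)."""
        base = f(x)
        return np.array([(f(x + eps * np.eye(len(x))[i]) - base) / eps
                         for i in range(len(x))])

    x = rng.normal(size=4)
    print(attribution(model, x))  # one sensitivity score per input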

Similarly, human decisions often cannot be easily explained: they may be based on intuition or a "gut feeling" that is hard to put into words. It can be argued that machines should not be required to meet a higher standard than humans.

Others argue that the difficulties with explainability are due to its overly narrow focus on technical solutions rather than connecting the issue to the wider questions raised by a "social right to explanation." [1]

Suggestions

Edwards and Veale see the right to explanation as providing some grounds for explanations about specific decisions. They discuss two types of algorithmic explanations, model-centric explanations and subject-centric explanations (SCEs), which broadly align with explanations about systems and explanations about decisions, respectively. [8]

SCEs are seen as the best way to provide some remedy, although with severe constraints if the data is too complex. Their proposal is to break down the full model and focus on particular issues through pedagogical explanations of a particular query, "which could be real or could be fictitious or exploratory". These explanations will necessarily involve trade-offs with accuracy to reduce complexity.
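
A minimal sketch of such a pedagogical, subject-centric explanation, assuming a hypothetical black-box model: fictitious queries are generated around the subject's own input, and a simple linear surrogate is fitted to the observed outputs, trading accuracy for reduced complexity as described above.

    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical black box standing in for a deployed decision system.
    W = rng.normal(size=5)
    def black_box(x):
        return 1.0 / (1.0 + np.exp(-(x @ W)))

    def pedagogical_explanation(f, x, n_queries=500, scale=0.5):
        """Fit a linear surrogate to the black box around the subject's
        input x, using fictitious (perturbed) queries; the coefficients
        give a simplified, locally faithful account of the decision."""
        X = x + scale * rng.normal(size=(n_queries, len(x)))  # exploratory queries
        y = np.array([f(row) for row in X])                   # outputs only
        coef, *_ = np.linalg.lstsq(np.c_[X, np.ones(n_queries)], y, rcond=None)
        return coef[:-1]  # per-feature local weights (intercept dropped)

    x = rng.normal(size=5)
    print(pedagogical_explanation(black_box, x))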

With growing interest in the explanation of technical decision-making systems in the field of human-computer interaction design, researchers and designers have put effort into opening the black box in terms of mathematically interpretable models, an approach removed from cognitive science and the actual needs of people. An alternative approach would be to allow users to explore the system's behavior freely through interactive explanations.

One of Edwards and Veale's proposals is to partially set aside transparency as a necessary step towards accountability and redress. They argue that people trying to tackle data protection issues want an action, not an explanation. The actual value of an explanation will not be to relieve or redress the emotional or economic damage suffered, but to make clear why something happened and to help ensure that a mistake does not happen again. [8]

On a broader scale, in the study "Explainable machine learning in deployment", the authors recommend building an explainability framework that clearly establishes the desiderata by identifying the stakeholders, engaging with those stakeholders, and understanding the purpose of the explanation. Alongside this, concerns around explainability, such as issues of causality, privacy, and performance improvement, must be considered in the design of the system. [16]

See also

Algorithmic bias
Algorithmic transparency
Automated decision-making
Bundesdatenschutzgesetz
Data portability
Data Protection Act 1998
Data Protection Directive
Data security
Explainable artificial intelligence
General Data Protection Regulation
Information privacy law
Interactive Advertising Bureau
Michael Veale
Personal Information Protection Law of the People's Republic of China
Privacy and Electronic Communications Directive 2002
Privacy policy
Regulation of algorithms
Right of access
Sandra Wachter
Toronto Declaration

References

  1. Berry, David M. (2021). "Explanatory Publics: Explainability and Democratic Thought". In Balaskas, Bill (ed.), Fabricating Publics: The Dissemination of Culture in the Post-truth Era. DATA browser. Open Humanities Press. pp. 211–233. ISBN 9781785421051.
  2. Consumer Financial Protection Bureau, §1002.9(b)(2).
  3. Fisher, Greg (March 31, 2010). US FICO credit risk score reason codes: fundamental document from FICO listing all of the FICO credit score reasons that a score is not higher.
  4. "ReasonCode.org | VantageScore Solutions". www.reasoncode.org.
  5. Goodman, Bryce; Flaxman, Seth (2017). "European Union Regulations on Algorithmic Decision-Making and a 'Right to Explanation'". AI Magazine. 38 (3): 50–57. arXiv:1606.08813. doi:10.1609/aimag.v38i3.2741. S2CID 7373959.
  6. Wachter, Sandra; Mittelstadt, Brent; Floridi, Luciano (December 28, 2016). "Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation". International Data Privacy Law. SSRN 2903469.
  7. Burt, Andrew (June 1, 2017). "Is there a 'right to explanation' for machine learning in the GDPR?".
  8. Edwards, Lilian; Veale, Michael (2017). "Slave to the algorithm? Why a 'right to an explanation' is probably not the remedy you are looking for". Duke Law and Technology Review. SSRN 2972855.
  9. Edwards, Lilian; Veale, Michael (2018). "Enslaving the Algorithm: From a 'Right to an Explanation' to a 'Right to Better Decisions'?" (PDF). IEEE Security & Privacy. 16 (3): 46–54. doi:10.1109/MSP.2018.2701152. S2CID 4049746. SSRN 3052831.
  10. Bygrave, L. A. (2001). "Minding the Machine: Article 15 of the EC Data Protection Directive and Automated Profiling" (PDF). Computer Law & Security Review. 17 (1). doi:10.1016/S0267-3649(01)00104-2.
  11. "Navigating GDPR and data regulation in Asia". Refinitiv Perspectives. November 28, 2018. Retrieved November 30, 2022.
  12. Wallace, Nick (January 25, 2017). "EU's Right to Explanation: A Harmful Restriction on Artificial Intelligence". Center for Data Innovation.
  13. Selbst, Andrew D.; Powles, Julia (November 1, 2017). "Meaningful information and the right to explanation". International Data Privacy Law. 7 (4): 233–242. doi:10.1093/idpl/ipx022. ISSN 2044-3994.
  14. Miller, Tim (June 22, 2017). "Explanation in Artificial Intelligence: Insights from the Social Sciences". arXiv:1706.07269 [cs.AI].
  15. Mittelstadt, Brent; Russell, Chris; Wachter, Sandra (2019). "Explaining Explanations in AI" (PDF). Proceedings of the Conference on Fairness, Accountability, and Transparency. New York: ACM Press. pp. 279–288. doi:10.1145/3287560.3287574. ISBN 978-1-4503-6125-5. S2CID 53214940.
  16. Bhatt, Umang; Xiang, Alice; Sharma, Shubham; Weller, Adrian; Taly, Ankur; Jia, Yunhan; Ghosh, Joydeep; Puri, Ruchir; Moura, José M. F.; Eckersley, Peter (January 27, 2020). "Explainable machine learning in deployment". Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (FAT* '20). New York: Association for Computing Machinery. pp. 648–657. doi:10.1145/3351095.3375624. ISBN 978-1-4503-6936-7. S2CID 202572724.