Algorithmic transparency

Algorithmic transparency is the principle that the factors that influence the decisions made by algorithms should be visible, or transparent, to the people who use, regulate, and are affected by systems that employ those algorithms. Although the phrase was coined in 2016 by Nicholas Diakopoulos and Michael Koliska in a discussion of the role of algorithms in deciding the content of digital journalism services, [1] the underlying principle dates back to the 1970s and the rise of automated systems for scoring consumer credit.

The phrases "algorithmic transparency" and "algorithmic accountability" [2] are sometimes used interchangeably – especially since they were coined by the same people – but they have subtly different meanings. "Algorithmic transparency" requires that the inputs to an algorithm and the algorithm's use itself be known, but not that either be fair. "Algorithmic accountability" implies that the organizations that use algorithms must be accountable for the decisions those algorithms make, even though the decisions are made by a machine rather than by a human being. [3]

Current research around algorithmic transparency is concerned both with the societal effects of accessing remote services that run algorithms [4] and with the mathematical and computer science approaches that can be used to achieve algorithmic transparency. [5] In the United States, the Federal Trade Commission's Bureau of Consumer Protection studies how algorithms are used by consumers, both by conducting its own research on algorithmic transparency and by funding external research. [6] In the European Union, the data protection laws that came into effect in May 2018 include a "right to explanation" of decisions made by algorithms, though it remains unclear what this entails in practice. [7] Furthermore, the European Union established the European Centre for Algorithmic Transparency (ECAT). [8]
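One computational approach to transparency can be illustrated with a toy audit: when a remote service's scoring logic is hidden, a common technique is to probe it with perturbed inputs and observe how the output moves. A minimal sketch, in which the stand-in scoring function, its coefficients, and the feature names are all invented for illustration:

```python
# Hypothetical sketch of a black-box sensitivity audit. In a real audit the
# "opaque" function would be a remote service; here it is a stand-in.

def opaque_score(income, debt, age):
    """Stand-in for a remote, undisclosed scoring algorithm."""
    return 0.5 * income - 0.8 * debt + 0.1 * age

def sensitivity(baseline, feature, delta):
    """Estimate how much the score moves when one feature is perturbed."""
    perturbed = dict(baseline)
    perturbed[feature] += delta
    return opaque_score(**perturbed) - opaque_score(**baseline)

applicant = {"income": 50.0, "debt": 20.0, "age": 40.0}
for feature in applicant:
    print(feature, sensitivity(applicant, feature, 1.0))
```

Probing of this kind reveals which inputs dominate a decision without requiring access to the algorithm's internals, which is why it appears in auditing work on deployed systems.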

Related Research Articles

A privacy policy is a statement or legal document that discloses some or all of the ways a party gathers, uses, discloses, and manages a customer or client's data. Personal information can be anything that can be used to identify an individual, not limited to the person's name, address, date of birth, marital status, contact information, ID issue and expiry dates, financial records, credit information, medical history, where one travels, and intentions to acquire goods and services. In the case of a business, it is often a statement that declares a party's policy on how it collects, stores, and releases personal information it collects. It informs the client what specific information is collected, and whether it is kept confidential, shared with partners, or sold to other firms or enterprises. Privacy policies typically represent a broader, more generalized treatment, as opposed to data use statements, which tend to be more detailed and specific.

Independent media refers to any media, such as television, newspapers, or Internet-based publications, that is free of influence by government or corporate interests. The term has varied applications.

Open government is the governing doctrine which maintains that citizens have the right to access the documents and proceedings of the government to allow for effective public oversight. In its broadest construction, it opposes reason of state and other considerations which have tended to legitimize extensive state secrecy. The origins of open-government arguments can be dated to the time of the European Age of Enlightenment, when philosophers debated the proper construction of a then nascent democratic society. It is also increasingly being associated with the concept of democratic reform. The United Nations Sustainable Development Goal 16 for example advocates for public access to information as a criterion for ensuring accountable and inclusive institutions.

Data portability is a concept to protect users from having their data stored in "silos" or "walled gardens" that are incompatible with one another, i.e. closed platforms, thus subjecting them to vendor lock-in and making the creation of data backups or moving accounts between services difficult.

The Interactive Advertising Bureau (IAB) is an American advertising business organization that develops industry standards, conducts research, and provides legal support for the online advertising industry. The organization represents many of the most prominent media outlets globally, but mostly in the United States, Canada and Europe.

In information science, profiling refers to the process of construction and application of user profiles generated by computerized data analysis.

The United States Federal Trade Commission's fair information practice principles (FIPPs) are guidelines that represent widely accepted concepts concerning fair information practice in an electronic marketplace.

Machine ethics is a part of the ethics of artificial intelligence concerned with adding or ensuring moral behaviors of man-made machines that use artificial intelligence, otherwise known as artificial intelligent agents. Machine ethics differs from other ethical fields related to engineering and technology. It should not be confused with computer ethics, which focuses on human use of computers. It should also be distinguished from the philosophy of technology, which concerns itself with technology's grander social effects.

A digital marketing system (DMS) is a method of centralized channel distribution used primarily by SaaS products. It combines a content management system (CMS) with data centralization and syndication across the web, mobile, scannable surface, and social channels.

Critical data studies is the exploration of and engagement with social, cultural, and ethical challenges that arise when working with big data. It is through various unique perspectives and taking a critical approach that this form of study can be practiced. As its name implies, critical data studies draws heavily on the influence of critical theory, which has a strong focus on addressing the organization of power structures. This idea is then applied to the study of data.

Automated journalism, also known as algorithmic journalism or robot journalism, is a term that attempts to describe modern technological processes that have infiltrated the journalistic profession, such as news articles generated by computer programs. There are four main fields of application for automated journalism, namely automated content production, data mining, news dissemination and content optimization. Through artificial intelligence (AI) software, stories are produced automatically by computers rather than human reporters. These programs interpret, organize, and present data in human-readable ways. Typically, the process involves an algorithm that scans large amounts of provided data, selects from an assortment of pre-programmed article structures, orders key points, and inserts details such as names, places, amounts, rankings, statistics, and other figures. The output can also be customized to fit a certain voice, tone, or style.

Explainable AI (XAI), often overlapping with Interpretable AI, or Explainable Machine Learning (XML), either refers to an artificial intelligence (AI) system over which it is possible for humans to retain intellectual oversight, or refers to the methods to achieve this. The main focus is usually on the reasoning behind the decisions or predictions made by the AI which are made more understandable and transparent. XAI counters the "black box" tendency of machine learning, where even the AI's designers cannot explain why it arrived at a specific decision.
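For an inherently interpretable model such as a linear scorer, this kind of oversight is straightforward: each feature's contribution to a prediction is its weight times its value, so an explanation can simply rank those contributions. A minimal sketch, with weights and feature names invented for illustration:

```python
# Hypothetical sketch: explaining a linear model's prediction by ranking
# per-feature contributions (weight * value). Weights are invented.

weights = {"income": 0.4, "debt": -0.7, "years_employed": 0.2}

def explain(features):
    """Return the prediction and features ranked by absolute influence."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    prediction = sum(contributions.values())
    ranked = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    return prediction, ranked

pred, ranked = explain({"income": 3.0, "debt": 4.0, "years_employed": 5.0})
print(pred)
print(ranked)
```

For opaque models such as deep networks, XAI methods approximate this kind of attribution post hoc rather than reading it directly from the model's parameters.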

In the regulation of algorithms, particularly artificial intelligence and its subfield of machine learning, a right to explanation is a right to be given an explanation for an output of the algorithm. Such rights primarily refer to individual rights to be given an explanation for decisions that significantly affect an individual, particularly legally or financially. For example, a person who applies for a loan and is denied may ask for an explanation, which could be "Credit bureau X reports that you declared bankruptcy last year; this is the main factor in considering you too likely to default, and thus we will not give you the loan you applied for."
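A decision rule of this kind can be sketched as code that returns a human-readable explanation alongside its verdict. The thresholds and reason texts below are invented for illustration and do not reflect any real lender's policy:

```python
# Hypothetical sketch: a decision function that emits its reasons, so a
# denied applicant can be told which factors drove the outcome.

def decide_loan(credit_score, declared_bankruptcy):
    """Return (approved, explanation) for a loan application."""
    reasons = []
    if declared_bankruptcy:
        reasons.append("a bankruptcy was declared within the last year")
    if credit_score < 600:
        reasons.append(f"credit score {credit_score} is below the 600 threshold")
    approved = not reasons
    explanation = ("Approved." if approved
                   else "Denied because " + " and ".join(reasons) + ".")
    return approved, explanation

print(decide_loan(640, declared_bankruptcy=True)[1])
```

Collecting the triggered rules at decision time, rather than reconstructing them afterwards, is what makes the explanation faithful to the actual decision path.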

Algorithmic bias describes systematic and repeatable errors in a computer system that create "unfair" outcomes, such as "privileging" one category over another in ways different from the intended function of the algorithm.

Sandra Wachter is a professor and senior researcher in data ethics, artificial intelligence, robotics, algorithms and regulation at the Oxford Internet Institute. She is a former Fellow of The Alan Turing Institute.

Regulation of algorithms, or algorithmic regulation, is the creation of laws, rules and public sector policies for promotion and regulation of algorithms, particularly in artificial intelligence and machine learning. For the subset of AI algorithms, the term regulation of artificial intelligence is used. The regulatory and policy landscape for artificial intelligence (AI) is an emerging issue in jurisdictions globally, including in the European Union. Regulation of AI is considered necessary both to encourage AI and to manage associated risks, but is challenging to achieve. Another emerging topic is the regulation of blockchain algorithms, which is often discussed alongside the regulation of AI algorithms. Many countries have enacted regulations of high-frequency trading, a domain that technological progress is shifting into the realm of AI algorithms.

Predatory advertising, or predatory marketing, can be largely understood as the practice of manipulating vulnerable persons or populations into unfavorable market transactions through the undisclosed exploitation of these vulnerabilities. The vulnerabilities of persons/populations can be hard to determine, especially as they are contextually dependent and may not exist across all circumstances. Commonly exploited vulnerabilities include physical, emotional, social, cognitive, and financial characteristics. Predatory marketing campaigns may also rely on false or misleading messaging to coerce individuals into asymmetrical transactions. The practice is as old as general advertising, but particularly egregious forms have accompanied the explosive rise of information technology. Massive data analytics industries have allowed marketers to access previously sparse and inaccessible personal information, leveraging and optimizing it through the use of savvy algorithms. Some common examples today include for-profit college industries, "fringe" financial institutions, political micro-targeting, and elder/child exploitation. Many legal actions have been taken at different levels of government to mitigate the practice, with various levels of success.

Automated decision-making (ADM) involves the use of data, machines and algorithms to make decisions in a range of contexts, including public administration, business, health, education, law, employment, transport, media and entertainment, with varying degrees of human oversight or intervention. ADM involves large-scale data from a range of sources, such as databases, text, social media, sensors, images or speech, that is processed using various technologies including computer software, algorithms, machine learning, natural language processing, artificial intelligence, augmented intelligence and robotics. The increasing use of automated decision-making systems (ADMS) across a range of contexts presents many benefits and challenges to human society requiring consideration of the technical, legal, ethical, societal, educational, economic and health consequences.

The Platform Work Directive is a proposed European Union Directive on the regulation of platform work in EU law.

Algorithmic accountability refers to the issue of where accountability should be apportioned for the consequences of real-world actions that were taken on account of algorithms used to reach a decision.

References

  1. Diakopoulos, Nicholas; Koliska, Michael (2016). "Algorithmic Transparency in the News Media". Digital Journalism. doi:10.1080/21670811.2016.1208053.
  2. Diakopoulos, Nicholas (2015). "Algorithmic Accountability: Journalistic Investigation of Computational Power Structures". Digital Journalism. 3 (3): 398–415. doi:10.1080/21670811.2014.976411. S2CID 42357142.
  3. Dickey, Megan Rose (30 April 2017). "Algorithmic Accountability". TechCrunch. Retrieved 4 September 2017.
  4. "Workshop on Data and Algorithmic Transparency". 2015. Retrieved 4 January 2017.
  5. "Fairness, Accountability, and Transparency in Machine Learning". 2015. Retrieved 29 May 2017.
  6. Noyes, Katherine (9 April 2015). "The FTC is worried about algorithmic transparency, and you should be too". PCWorld. Retrieved 4 September 2017.
  7. "False Testimony" (PDF). Nature. 557 (7707): 612. 31 May 2018.
  8. "About - European Commission".