Sandra Wachter

Sandra Wachter speaks at the Berkman Klein Center for Internet & Society in 2018
Born: Austria
Alma mater: University of Vienna; University of Oxford
Scientific career
Institutions: Oxford Internet Institute; Alan Turing Institute; Royal Academy of Engineering

Sandra Wachter is a professor and senior researcher in data ethics, artificial intelligence, robotics, algorithms and regulation at the Oxford Internet Institute.[1] She is a former Fellow of The Alan Turing Institute.[2]

Early life and education

Wachter grew up in Austria and studied law at the University of Vienna.[3][4] Wachter has said that she was inspired to work in technology because of her grandmother, who was one of the first women admitted to Vienna's Technical University.[3]

She completed her LL.M. in 2009 before joining the Austrian Federal Ministry of Health as a legal counsel. During this time she joined the faculty at the University of Vienna, pursuing a doctorate in technology, intellectual property and democracy. She completed her PhD in 2015, simultaneously earning a master's degree in social sciences at the University of Oxford. After earning her doctorate, Wachter joined the Royal Academy of Engineering, where she worked in public policy. She then returned to the University of Vienna, where she worked on various ethical aspects of innovation.[5]

Research

Her work covers the legal and ethical issues associated with big data, artificial intelligence, algorithms and data protection.[6][7] She believes that there needs to be a balance between technical innovation and personal control of information.[8] Wachter was made a research fellow at the Alan Turing Institute in 2016, and in this capacity she has evaluated the ethical and legal aspects of data science. She has argued that artificial intelligence should be more transparent and accountable, and that people have a "right to reasonable inferences".[9][10][11] She has highlighted cases where opaque algorithms have produced racist and sexist outcomes, such as discrimination against applicants to St George's Hospital and Medical School in the 1970s and the COMPAS program's overestimation of the likelihood that black defendants would reoffend.[9] Whilst Wachter appreciates that it is difficult to eliminate bias from training data sets, she believes that it is possible to develop tools to identify and eliminate such biases.[9][12] She has looked at ways to audit artificial intelligence to tackle discrimination and promote fairness.[4][13] In this capacity she has argued that Facebook should continue to use human moderators.[14]

She has argued that the General Data Protection Regulation (GDPR)[15] is in need of reform: while considerable attention is paid to the input stage, far less scrutiny is applied to how the data is subsequently assessed.[16][17] She believes that privacy must mean more than data protection, focussing on data evaluation and on ways for people to control how information about them is stored and shared.[16][18]

Working with Brent Mittelstadt and Chris Russell, Wachter proposed counterfactual explanations – statements of how the world would have to differ for an algorithm to produce a different outcome. When decisions are made by an algorithm it can be difficult for people to understand why, and a full explanation may reveal trade secrets about the algorithm. Counterfactual explanations permit the interrogation of algorithms without the need to reveal such secrets. The approach was adopted by Google in the What-If Tool, a feature of TensorBoard, Google's open-source web application for inspecting machine learning models.[3] "Counterfactual Explanations Without Opening the Black Box: Automated Decisions and the GDPR",[19] a paper written by Wachter, Mittelstadt and Russell, has been featured in the press[3] and is widely cited in scholarly literature.
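The core idea can be illustrated with a short sketch. The paper proposes finding a counterfactual by minimising a loss with two terms, a prediction term weighted by lambda plus a distance term keeping the counterfactual close to the original input; the toy "loan" model, feature values and parameters below are invented for illustration and are not from the paper.

# A minimal sketch of a counterfactual explanation in the style of
# Wachter, Mittelstadt and Russell (2017): search for a small change to
# an input x that flips a model's decision, by minimising
#     L(x') = lam * (f(x') - y_target)**2 + ||x' - x||_1
# The model, features and numbers are hypothetical.
import numpy as np

def f(x, w, b):
    """Toy logistic 'loan approval' model: probability of approval."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def counterfactual(x, w, b, y_target=0.8, lam=10.0, lr=0.05, steps=2000):
    """Gradient-descent search for a counterfactual x' close to x."""
    x_cf = x.copy()
    for _ in range(steps):
        p = f(x_cf, w, b)
        # Gradient of lam*(f - y_target)^2 via the logistic derivative...
        grad_pred = 2 * lam * (p - y_target) * p * (1 - p) * w
        # ...plus the subgradient of the L1 distance term to x.
        grad_dist = np.sign(x_cf - x)
        x_cf -= lr * (grad_pred + grad_dist)
    return x_cf

# Hypothetical applicant with features [income, debt], rejected by the model.
w, b = np.array([1.5, -2.0]), -0.5
x = np.array([1.0, 1.2])
x_cf = counterfactual(x, w, b)
print("original:", x, "score:", f(x, w, b))
print("counterfactual:", x_cf, "score:", f(x_cf, w, b))
# Reading off x_cf - x yields a statement such as "had your debt been
# lower by D, the loan would have been approved" - an explanation that
# does not require disclosing the model's internals.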

Academic service

She was made an associate professor at the University of Oxford in 2019 and a visiting professor at Harvard University from spring 2020.[4][20] Wachter is also a member of the World Economic Forum Council on Values, Ethics and Innovation, an affiliate at the Bonavero Institute of Human Rights and a member of the European Commission Expert Group on Autonomous Cars.[21][22] Additionally, she is a research fellow at the German Internet Institute.[23]

Awards and honours

Related Research Articles

Friendly artificial intelligence – AI to benefit humanity

Friendly artificial intelligence is hypothetical artificial general intelligence (AGI) that would have a positive (benign) effect on humanity or at least align with human interests or contribute to fostering the improvement of the human species. It is a part of the ethics of artificial intelligence and is closely related to machine ethics. While machine ethics is concerned with how an artificially intelligent agent should behave, friendly artificial intelligence research is focused on how to practically bring about this behavior and ensuring it is adequately constrained.

Oxford Internet Institute – Research institute at the University of Oxford

The Oxford Internet Institute (OII) is a multi-disciplinary department of social and computer science dedicated to the study of information, communication, and technology, and is a part of the Social Sciences Division of the University of Oxford, England.

Luciano Floridi – Italian philosopher (born 1964)

Luciano Floridi is an Italian and British philosopher. He holds a double appointment as professor of philosophy and ethics of information at the Oxford Internet Institute, University of Oxford, where he is also a Governing Body Fellow of Exeter College, Oxford, and as Professor of Sociology of Culture and Communication in the Department of Legal Studies, University of Bologna, where he is the director of the Centre for Digital Ethics. He is an adjunct professor in the Department of Economics, American University, Washington D.C. At the end of the 2022–2023 academic year, Floridi will move to Yale, becoming the Founding Director of the institution's Digital Ethics Center. He is married to the neuroscientist Anna Christina Nobre.

Ethics of artificial intelligence – Ethical issues specific to AI

The ethics of artificial intelligence is the branch of the ethics of technology specific to artificially intelligent systems. It is sometimes divided into a concern with the moral behavior of humans as they design, make, use and treat artificially intelligent systems, and a concern with the behavior of machines, in machine ethics. It also includes the issue of a possible singularity due to superintelligent AI.

Data portability is a concept to protect users from having their data stored in "silos" or "walled gardens" that are incompatible with one another, i.e. closed platforms, thus subjecting them to vendor lock-in and making the creation of data backups or moving accounts between services difficult.

Machine ethics is a part of the ethics of artificial intelligence concerned with adding or ensuring moral behaviors of man-made machines that use artificial intelligence, otherwise known as artificially intelligent agents. Machine ethics differs from other ethical fields related to engineering and technology. It should not be confused with computer ethics, which focuses on human use of computers, and should also be distinguished from the philosophy of technology, which concerns itself with the grander social effects of technology.

Alan Turing Institute – Research institute in Britain

The Alan Turing Institute is the United Kingdom's national institute for data science and artificial intelligence, founded in 2015 and largely funded by the UK government. It is named after Alan Turing, the British mathematician and computing pioneer.

Gina Neff – American sociologist

Gina Neff is the Executive Director of the Minderoo Centre for Technology and Democracy at the University of Cambridge. Neff was previously Professor of Technology & Society at the Oxford Internet Institute and the Department of Sociology at the University of Oxford. Neff is an organizational sociologist whose research explores the social and organizational impact of new communication technologies, with a focus on innovation, the digital transformation of industries, and how new technologies impact work.

Artificial intelligence in healthcare – Overview of the use of artificial intelligence in healthcare

Artificial intelligence in healthcare is an overarching term used to describe the use of machine-learning algorithms and software, or artificial intelligence (AI), to mimic human cognition in the analysis, presentation, and comprehension of complex medical and health care data, or to exceed human capabilities by providing new ways to diagnose, treat, or prevent disease. Specifically, AI is the ability of computer algorithms to approximate conclusions based solely on input data.

Explainable artificial intelligence – AI in which the results of the solution can be understood by humans

Explainable AI (XAI), also known as Interpretable AI or Explainable Machine Learning (XML), is artificial intelligence (AI) in which humans can understand the reasoning behind decisions or predictions made by the AI. It contrasts with the "black box" concept in machine learning, where even the AI's designers cannot explain why it arrived at a specific decision.

In the regulation of algorithms, particularly artificial intelligence and its subfield of machine learning, a right to explanation is a right to be given an explanation for an output of the algorithm. Such rights primarily refer to individual rights to be given an explanation for decisions that significantly affect an individual, particularly legally or financially. For example, a person who applies for a loan and is denied may ask for an explanation, which could be "Credit bureau X reports that you declared bankruptcy last year; this is the main factor in considering you too likely to default, and thus we will not give you the loan you applied for."

Algorithmic bias – Technological phenomenon with social implications

Algorithmic bias describes systematic and repeatable errors in a computer system that create "unfair" outcomes, such as "privileging" one category over another in ways different from the intended function of the algorithm.

Safiya Noble – American professor and author

Safiya Umoja Noble is a professor at UCLA, and is the co-founder and co-director of the UCLA Center for Critical Internet Inquiry. She is the author of Algorithms of Oppression, and co-editor of two edited volumes: The Intersectional Internet: Race, Sex, Class and Culture and Emotions, Technology & Design. She is a research associate at the Oxford Internet Institute at the University of Oxford. She was appointed a Commissioner to the University of Oxford Commission on AI and Good Governance in 2020. In 2020 she was nominated to the Global Future Council on Artificial Intelligence for Humanity at the World Economic Forum.

Marina Denise Anne Jirotka is professor of human-centered computing at the University of Oxford, director of the Responsible Technology Institute, governing body fellow at St Cross College, board member of the Society for Computers and Law and a research associate at the Oxford Internet Institute. She leads a team that works on responsible innovation, in a range of ICT fields including robotics, AI, machine learning, quantum computing, social media and the digital economy. She is known for her work on the 'Ethical Black Box', a proposal that robots using AI should be fitted with a type of inflight recorder, similar to those used by aircraft, to track the decisions and actions of the AI when operating in an uncontrolled environment and to aid in post-accident investigations.

Regulation of algorithms, or algorithmic regulation, is the creation of laws, rules and public sector policies for the promotion and regulation of algorithms, particularly in artificial intelligence and machine learning. For the subset of AI algorithms, the term regulation of artificial intelligence is used. The regulatory and policy landscape for artificial intelligence (AI) is an emerging issue in jurisdictions globally, including in the European Union. Regulation of AI is considered necessary both to encourage AI and to manage associated risks, but challenging. Another emerging topic is the regulation of blockchain algorithms, often discussed alongside the regulation of AI algorithms. Many countries have enacted regulation of high-frequency trading, which is shifting into the realm of AI algorithms as technology progresses.

The regulation of artificial intelligence is the development of public sector policies and laws for promoting and regulating artificial intelligence (AI); it is therefore related to the broader regulation of algorithms. The regulatory and policy landscape for AI is an emerging issue in jurisdictions globally, including in the European Union and in supra-national bodies like the IEEE, OECD and others. Since 2016, a wave of AI ethics guidelines has been published in order to maintain social control over the technology. Regulation is considered necessary to both encourage AI and manage associated risks. In addition to regulation, AI-deploying organizations need to play a central role in creating and deploying trustworthy AI in line with the principles of trustworthy AI, and take accountability to mitigate the risks. Regulation of AI through mechanisms such as review boards can also be seen as social means to approach the AI control problem.

Mariarosaria Taddeo is a senior research fellow at the Oxford Internet Institute, part of the University of Oxford, and deputy director of the Digital Ethics Lab. Taddeo is also an associate scholar at Saïd Business School, University of Oxford.

Michael Veale is a technology policy academic who focuses on information technology and the law. He is currently associate professor in the Faculty of Laws at University College London (UCL).

Himabindu "Hima" Lakkaraju is an Indian-American computer scientist who works on machine learning, artificial intelligence, algorithmic bias, and AI accountability. She is currently an Assistant Professor at the Harvard Business School and is also affiliated with the Department of Computer Science at Harvard University. Lakkaraju is known for her work on explainable machine learning. More broadly, her research focuses on developing machine learning models and algorithms that are interpretable, transparent, fair, and reliable. She also investigates the practical and ethical implications of deploying machine learning models in domains involving high-stakes decisions such as healthcare, criminal justice, business, and education. Lakkaraju was named as one of the world's top Innovators Under 35 by both Vanity Fair and the MIT Technology Review.

Automated decision-making (ADM) involves the use of data, machines and algorithms to make decisions in a range of contexts, including public administration, business, health, education, law, employment, transport, media and entertainment, with varying degrees of human oversight or intervention. ADM involves large-scale data from a range of sources, such as databases, text, social media, sensors, images or speech, that is processed using various technologies including computer software, algorithms, machine learning, natural language processing, artificial intelligence, augmented intelligence and robotics. The increasing use of automated decision-making systems (ADMS) across a range of contexts presents many benefits and challenges to human society requiring consideration of the technical, legal, ethical, societal, educational, economic and health consequences.

References

  1. "Sandra Wachter — Oxford Internet Institute". www.oii.ox.ac.uk. Retrieved 2021-03-10.
  2. "Sandra Wachter". The Alan Turing Institute. Retrieved 2019-03-10.
  3. Katwala, Amit (2018-12-11). "How to make algorithms fair when you don't know what they're doing". Wired UK. ISSN 1357-0978. Retrieved 2019-03-10.
  4. "Sandra Wachter | Harvard Law School". Retrieved 2019-10-30.
  5. "Robots: Faithful servants or existential threat?". Create the Future. 2016-06-06. Retrieved 2019-10-30.
  6. "Why it's totally unsurprising that Amazon's recruitment AI was biased against women". nordic.businessinsider.com. 2018-10-13. Retrieved 2019-03-10.
  7. Baraniuk, Chris. "Exclusive: UK police wants AI to stop violent crime before it happens". New Scientist. Retrieved 2019-03-10.
  8. CPDP 2019: Profiling, microtargeting and a right to reasonable algorithmic inferences, retrieved 2019-10-30.
  9. Hutson, Matthew (2017-05-31). "Q&A: Should artificial intelligence be legally required to explain itself?". AAAS. Retrieved 2019-10-30.
  10. "OII London Lecture: Show Me Your Data and I'll Tell You Who You Are — Oxford Internet Institute". www.oii.ox.ac.uk. Retrieved 2019-10-30.
  11. Privacy, identity, and autonomy in the age of big data and AI - Sandra Wachter, University of Oxford, retrieved 2019-10-30.
  12. "The ethical use of Artificial Intelligence". www.socsci.ox.ac.uk. Retrieved 2021-10-19.
  13. "What Does a Fair Algorithm Actually Look Like?". Wired. ISSN 1059-1028. Retrieved 2019-10-30.
  14. Vincent, James (2019-02-27). "AI won't relieve the misery of Facebook's human moderators". The Verge. Retrieved 2019-10-30.
  15. Artificial Intelligence: GDPR and beyond - Dr. Sandra Wachter, University of Oxford, retrieved 2019-10-30.
  16. Shah, Sooraj. "This Lawyer Believes GDPR Is Failing To Protect You - Here's What She Would Change". Forbes. Retrieved 2019-03-10.
  17. Wachter, Sandra (2018-04-30). "Will our online lives soon become 'private' again?". Retrieved 2019-10-30.
  18. "Privacy, Identity, & Autonomy in the age of Big Data and AI". TechNative. 2019-06-03. Retrieved 2019-10-30.
  19. Wachter, Sandra; Mittelstadt, Brent; Russell, Chris (2017). "Counterfactual Explanations Without Opening the Black Box: Automated Decisions and the GDPR". SSRN Working Paper Series. arXiv:1711.00399. Bibcode:2017arXiv171100399W. doi:10.2139/ssrn.3063289. ISSN 1556-5068. S2CID 3995299.
  20. "Professor Sandra Wachter, the OII". www.oii.ox.ac.uk. Retrieved 2019-10-30.
  21. "Sandra Wachter". World Economic Forum. Retrieved 2019-10-30.
  22. "Academic Affiliates of the Bonavero Institute of Human Rights". Oxford Law Faculty. 2018-01-25. Retrieved 2019-10-30.
  23. "Sandra Wachter". Weizenbaum Institute. https://www.weizenbaum-institut.de/en/spezialseiten/persons-details/p/sandra-wachter/
  24. "ESRC Excellence in Impact 2021 - awards ceremony". Social Sciences, Oxford. 2021-10-19.
  25. Greene, Tristan (2019-02-28). "Here's who has the most juice in Twitter's AI influencer community". The Next Web. Retrieved 2019-03-10.
  26. "PLSC Paper Awards". Berkeley Law. Retrieved 2019-10-30.
  27. Hamilton, Isobel Asher. "3 female AI trailblazers reveal how they beat the odds and overcame sexism to become leaders in their field". Business Insider. Retrieved 2019-10-30.
  28. Hanbury, Mary; Hamilton, Isobel Asher; Wood, Charlie. "UK Tech 100: The 30 most important, interesting, and impactful women shaping British technology in 2019". Business Insider. Retrieved 2019-10-30.
  29. "Turing partners with Cog X London 2017 to explore the impact of AI across sectors". The Alan Turing Institute. Retrieved 2019-10-30.