Robot ethics

Robot ethics, sometimes known as "roboethics", concerns ethical problems that occur with robots, such as whether robots pose a threat to humans in the long or short run, whether some uses of robots are problematic (such as in healthcare or as 'killer robots' in war), and how robots should be designed such that they act 'ethically' (this last concern is also called machine ethics). Alternatively, roboethics refers specifically to the ethics of human behavior towards robots, as robots become increasingly advanced. [1] Robot ethics is a sub-field of ethics of technology, specifically information technology, and it has close links to legal as well as socio-economic concerns. Researchers from diverse areas are beginning to tackle ethical questions about creating robotic technology and implementing it in societies, in a way that will still ensure the safety of the human race. [2]

While the issues are as old as the word robot, serious academic discussion started around the year 2000. Robot ethics requires the combined commitment of experts from several disciplines, who must adjust laws and regulations to the problems resulting from scientific and technological achievements in robotics and AI. The main fields involved in robot ethics are: robotics, computer science, artificial intelligence, philosophy, ethics, theology, biology, physiology, cognitive science, neuroscience, law, sociology, psychology, and industrial design. [3]

History and events

Some of the earliest ethical discussion of the treatment of non-human or non-biological things concerned their potential "spirituality"; with the development of machinery and, eventually, robots, this line of thought was extended to robotics as well. One of the first publications directly addressing and setting the foundation for robot ethics was "Runaround", a science fiction short story written by Isaac Asimov in 1942 which featured his well-known Three Laws of Robotics. These three laws were continuously altered by Asimov, and a fourth, or zeroth, law was eventually added to precede the first three, in the context of his science fiction works. The term "roboethics" itself was most likely coined by Gianmarco Veruggio. [4]
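Although the Three Laws are fiction rather than an implementable specification, their structure is essentially a strict priority ordering: the First Law always overrides the Second, which overrides the Third. Purely as an illustrative sketch of that ordering (not a real robot-ethics implementation), the following Python models a robot choosing, among available actions, the one whose violation profile is lexicographically smallest; the dictionary-based action encoding and all names are hypothetical.

    # Illustrative only: Asimov's Three Laws as a lexicographic priority.
    # Among available actions, prefer the one whose violation profile
    # (First Law, Second Law, Third Law) is lexicographically smallest,
    # so a lower-numbered law always dominates the ones after it.

    def violation_profile(action: dict) -> tuple:
        """Hypothetical scoring: 1 if the action violates a law, else 0."""
        return (
            int(action.get("harms_human", False)),     # First Law
            int(action.get("disobeys_order", False)),  # Second Law
            int(action.get("endangers_self", False)),  # Third Law
        )

    def choose_action(actions: list) -> dict:
        """Pick the action with the least (lexicographic) violation profile."""
        return min(actions, key=violation_profile)

    # Obeying an order at the cost of self-preservation beats refusing,
    # because the Second Law outranks the Third.
    obey = {"name": "obey", "endangers_self": True}
    refuse = {"name": "refuse", "disobeys_order": True}
    print(choose_action([obey, refuse])["name"])  # prints "obey"

Asimov's zeroth law fits the same scheme naturally: prepending a fourth element to the profile would let a concern for humanity as a whole dominate the original three.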

An important event that propelled the development of roboethics was the First International Symposium on Roboethics in 2004, organized through the collaborative effort of Scuola di Robotica, the Arts Lab of Scuola Superiore Sant'Anna, Pisa, and the Theological Institute of Pontificia Accademia della Santa Croce, Rome. [5] The symposium grew out of the activities of Scuola di Robotica, a non-profit organization whose mission is to promote knowledge of the science of robotics among students and the general public. In discussions with students and non-specialists, Gianmarco Veruggio and Fiorella Operto thought it necessary to spread accurate conceptions among the general public about the alleged dangers of robotics. They thought that a productive debate based on accurate insights and real knowledge could push people to take an active part in the education of public opinion, make them comprehend the positive uses of the new technology, and prevent its abuse. Anthropologist Daniela Cerqui identified three main ethical positions emerging from the two days of intense debate:

  1. Those who are not interested in ethics. They consider that their actions are strictly technical, and do not think they have a social or a moral responsibility in their work.
  2. Those who are interested in short-term ethical questions. According to this profile, questions are expressed in terms of "good" or "bad" and refer to certain cultural values. For instance, they feel that robots have to adhere to social conventions, which includes "respecting" and helping humans in diverse areas such as implementing laws or helping elderly people. (Such considerations are important, but we have to remember that the values used to define the "bad" and the "good" are relative. They are the contemporary values of the industrialized countries.)
  3. Those who think in terms of long-term ethical questions, about, for example, the "digital divide" between South and North, or young and elderly. They are aware of the gap between industrialized and poor countries, and wonder whether the former should not change their way of developing robotics in order to be more useful to the South. They do not explicitly formulate the question "what for?", but we can consider that it is implicit. [6]

A number of important events and projects have marked the development of robot ethics. Further events in the field are announced by the euRobotics ELS topics group and by RoboHub.

A hospital delivery robot in front of elevator doors stating "Robot Has Priority", a situation that may be regarded as reverse discrimination in relation to humans

Computer scientist Virginia Dignum noted in a March 2018 issue of Ethics and Information Technology that the general societal attitude toward artificial intelligence (AI) has, in the modern era, shifted away from viewing AI as a tool and toward viewing it as an intelligent "team-mate". In the same article, she assessed that, with respect to AI, ethical thinkers have three goals, each of which she argues can be achieved in the modern era with careful thought and implementation. [14] [15] [16] [17] [18] The three ethical goals are as follows:

  1. Ethics by design (the technical/algorithmic integration of ethical reasoning capabilities as part of the behavior of artificial autonomous systems);
  2. Ethics in design (the regulatory and engineering methods that support the analysis and evaluation of the ethical implications of AI systems as they integrate or replace traditional social structures); and
  3. Ethics for design (the codes of conduct, standards, and certification processes that ensure the integrity of developers and users as they research, design, construct, employ, and manage artificial intelligent systems). [19]

In popular culture

Roboethics as a science or philosophical topic has become a common theme in science fiction literature and films. One film ingrained in pop culture that depicts a dystopian future use of robotic AI is The Matrix, which portrays a future where humans and conscious, sentient AI struggle for control of planet Earth, resulting in the destruction of most of the human race. The Animatrix, an animated film based on The Matrix, focused heavily on the potential ethical issues and insecurities between humans and robots. The film is broken into short stories. The Animatrix's animated shorts are also named after Isaac Asimov's fictional stories.

Another facet of roboethics concerns the treatment of robots by humans, which has been explored in numerous films and television shows. One example is Star Trek: The Next Generation, which has a humanoid android, named Data, as one of its main characters. For the most part, he is trusted with mission-critical work, but his ability to fit in with the other living beings is often in question. [20] More recently, the movie Ex Machina and the TV show Westworld have taken on these ethical questions quite directly by depicting hyper-realistic robots that humans treat as inconsequential commodities. [21] [22] The questions surrounding the treatment of engineered beings have also been a key component of the Blade Runner franchise for over 50 years. [23] Films like Her have distilled the human relationship with robots even further by removing the physical aspect and focusing on emotions.

Although not a part of roboethics per se, the ethical behavior of robots themselves has also been a recurring issue in roboethics in popular culture. The Terminator series focuses on robots run by a conscious AI program with no restraint on the termination of its enemies. This series, like The Matrix, follows the archetype of robots taking control. Another famous pop-culture case of robots or AI without programmed ethics or morals is HAL 9000 in the Space Odyssey series, in which HAL (a computer with advanced AI capabilities that monitors and assists humans on a spacecraft) kills all the humans on board to ensure the success of the assigned mission after his own life is threatened. [24]

Killer robots

Lethal autonomous weapon systems (LAWS), often called "killer robots", are theoretically able to target and fire without human supervision or interference. In 2014, the Convention on Certain Conventional Weapons (CCW) held two meetings, the first of which was the Meeting of Experts on Lethal Autonomous Weapons Systems. This meeting addressed the special mandate on LAWS and prompted intense discussion. [25] National delegations and many non-governmental organizations (NGOs) expressed their opinions on the matter.

Numerous NGOs and certain states, such as Pakistan and Cuba, are calling for a preemptive prohibition of LAWS. They base their positions on both deontological and consequentialist reasoning. On the deontological side, certain philosophers such as Peter Asaro and Robert Sparrow, most NGOs, and the Vatican all argue that granting machines too much authority violates human dignity, and that people have the "right not to be killed by a machine." To support their standpoint, they repeatedly cite the Martens Clause.

The most important consequentialist objection raised at this meeting was that LAWS would never be able to respect international humanitarian law (IHL), as argued by NGOs, many researchers, and several states (Pakistan, Austria, Egypt, Mexico).

According to the International Committee of the Red Cross (ICRC), "there is no doubt that the development and use of autonomous weapon systems in armed conflict is governed by international humanitarian law." [26] States recognize this: those who participated in the first UN Expert Meeting in May 2014 acknowledged respect for IHL as an essential condition for the implementation of LAWS. Predictions diverge: certain states believe LAWS will be unable to meet this criterion, while others underline the difficulty of adjudicating at this stage without knowing the weapons' future capabilities (Japan, Australia). All insist equally on the ex-ante verification of the systems' conformity to IHL before they are put into service, in virtue of Article 36 of the first additional protocol to the Geneva Conventions.

Degree of human control

Three classifications of the degree of human control of autonomous weapon systems were laid out by Bonnie Docherty in a 2012 Human Rights Watch report: [27]

  1. Human-in-the-loop: the weapon can select targets and deliver force only with a human command.
  2. Human-on-the-loop: the weapon can select targets and deliver force under the oversight of a human operator who can override its actions.
  3. Human-out-of-the-loop: the weapon can select targets and deliver force without any human input or interaction.
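Purely as an illustrative reading of these categories (not drawn from the report), they can be seen as three different authorization gates in a single engagement decision. The short Python sketch below models them that way; the class, function, and parameter names are all hypothetical.

    from enum import Enum

    class HumanControl(Enum):
        """The three degrees of human control over target engagement."""
        IN_THE_LOOP = 1       # a human must command each engagement
        ON_THE_LOOP = 2       # the system acts, but a human can veto
        OUT_OF_THE_LOOP = 3   # no human input once activated

    def may_engage(mode: HumanControl, human_approved: bool, human_vetoed: bool) -> bool:
        """Hypothetical authorization gate for one engagement decision."""
        if mode is HumanControl.IN_THE_LOOP:
            return human_approved         # affirmative human command required
        if mode is HumanControl.ON_THE_LOOP:
            return not human_vetoed       # proceeds unless a human overrides
        return True                       # out of the loop: no human gate

    # An on-the-loop system proceeds when no veto arrives in time:
    print(may_engage(HumanControl.ON_THE_LOOP, human_approved=False, human_vetoed=False))  # True

The practical difference between the first two categories shows up in the defaults: human-in-the-loop requires an affirmative command, while human-on-the-loop proceeds unless a human intervenes, which is why oversight latency matters so much in the on-the-loop case.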

Sex robots

In 2015, the Campaign Against Sex Robots (CASR) was launched to draw attention to the sexual relationship of humans with machines. The campaign claims that sex robots are potentially harmful and will contribute to inequalities in society, and that an organized approach and ethical response to the development of sex robots are necessary. [28]

In the article "Should We Campaign Against Sex Robots?", published by the MIT Press, researchers pointed out some flaws in this campaign and did not fully support a ban on sex robots. First, they argued that the particular claims advanced by the CASR were "unpersuasive", partly because of a lack of clarity about the campaign's aims and partly because of substantive defects in the main ethical objections put forward by the campaign's founders. Second, they argued that it would be very difficult to endorse a general campaign against sex robots unless one embraced a highly conservative attitude towards the ethics of sex. Drawing upon the example of the campaign to stop killer robots, they argued that sex robots have no inherently bad properties that give rise to similarly serious levels of concern, the harm caused by sex robots being speculative and indirect. Nonetheless, the article concedes that there are legitimate concerns to be raised about the development of sex robots. [29]

Law

As new technological issues emerge, one topic that requires thorough consideration is robot ethics in relation to the law. Academics have been debating how a government could go about creating legislation informed by robot ethics. Two scholars who have been asking these questions are Neil M. Richards, Professor of Law at Washington University School of Law, and William D. Smart, Associate Professor of Computer Science at the McKelvey School of Engineering. In their paper "How Should the Law Think About Robots?" they make four main claims concerning robot ethics and law. [30] First, the groundwork of their argument rests on a definition of robots as "non-biological autonomous agents that we think captures the essence of the regulatory and technological challenges that robots present, and which could usefully be the basis of regulation." Second, the pair explores the advanced capacities robots may attain within roughly a decade. Their third claim draws a relation between the legal issues raised by robot ethics and the legal experience of cyber-law, meaning that robot ethics law can look towards cyber-law for guidance. The "lesson" learned from cyber-law is the importance of the metaphors through which we understand emerging issues in technology: if we get the metaphor wrong, the legislation surrounding the emerging technological issue is most likely wrong as well. Their fourth claim is an argument against a metaphor the pair calls "The Android Fallacy", which holds that humans and non-biological entities are "just like people."

Empirical research

There is mixed evidence as to whether people judge robot behavior similarly to human behavior. Some evidence indicates that people view bad behavior negatively and good behavior positively regardless of whether the agent of the behavior is a human or a robot; however, robots receive less credit for good behavior and more blame for bad behavior. [31] Other evidence suggests that malevolent behavior by robots is seen as more morally wrong than benevolent behavior is seen as morally right; malevolent robot behavior is also seen as more intentional than benevolent behavior. [32] In general, people's moral judgments of both robots and humans are based on the same justifications and concepts, but people have different moral expectations when judging humans and robots. [33] Research has also found that when people try to interpret and understand how robots decide to behave in a particular way, they may see robots as using rules of thumb (advance the self, do what is right, advance others, do what is logical, and do what is normal) that align with established ethical doctrines (egoism, deontology, altruism, utilitarianism, and normative ethics). [34]

Notes

  1. Veruggio, Gianmarco; Operto, Fiorella (2008), Siciliano, Bruno; Khatib, Oussama (eds.), "Roboethics: Social and Ethical Implications of Robotics", Springer Handbook of Robotics, Springer Berlin Heidelberg, pp. 1499–1524, doi:10.1007/978-3-540-30301-5_65, ISBN 9783540303015
  2. "Robot Ethics". IEEE Robotics and Automation Society. Retrieved 2017-06-26.
  3. "Robot Ethics". IEEE Robotics and Automation Society. www.ieee-ras.org. Retrieved 2022-04-10.
  4. Tzafestas, Spyros G. (2016). Roboethics: A Navigating Overview. Cham: Springer. p. 1. ISBN 978-3-319-21713-0.
  5. "ROBOETHICS Cover". www.roboethics.org. Retrieved 2020-09-29.
  6. Veruggio, Gianmarco. "The Birth of Roboethics" (PDF). www.roboethics.org.
  7. "World Robot Declaration". Kyodo News.
  8. "Saudi Arabia gives citizenship to a non-Muslim, English-Speaking robot". Newsweek. 26 October 2017.
  9. "Saudi Arabia bestows citizenship on a robot named Sophia". TechCrunch. October 26, 2017. Retrieved October 27, 2017.
  10. "Saudi Arabia takes terrifying step to the future by granting a robot citizenship". AV Club. October 26, 2017. Retrieved October 28, 2017.
  11. "Saudi Arabia criticized for giving female robot citizenship, while it restricts women's rights". ABC News. Retrieved 2017-10-28.
  12. Iphofen, Ron; Kritikos, Mihalis (2021-03-15). "Regulating artificial intelligence and robotics: ethics by design in a digital society". Contemporary Social Science. 16 (2): 170–184. doi:10.1080/21582041.2018.1563803. ISSN 2158-2041. S2CID 59298502.
  13. "Non-Human Party". 2021.
  14. Rahwan, Iyad (2018). "Society-In-the-Loop: Programming the Algorithmic Social Contract". Ethics and Information Technology. 20: 5–14. arXiv:1707.07232. doi:10.1007/s10676-017-9430-8. S2CID 3674879.
  15. Bryson, Joanna (2018). "Patiency Is Not a Virtue: the Design of Intelligent Systems and Systems of Ethics". Ethics and Information Technology. 20: 15–26. doi:10.1007/s10676-018-9448-6.
  16. Vamplew, Peter; Dazeley, Richard; Foale, Cameron; Firmin, Sally (2018). "Human-Aligned Artificial Intelligence Is a Multiobjective Problem". Ethics and Information Technology. 20: 27–40. doi:10.1007/s10676-017-9440-6. hdl:1959.17/164225. S2CID 3696067.
  17. Bonnemains, Vincent; Saurel, Claire; Tessier, Catherine (2018). "Embedded Ethics: Some Technical and Ethical Challenges" (PDF). Ethics and Information Technology. 20: 41–58. doi:10.1007/s10676-018-9444-x. S2CID 3697093.
  18. Arnold, Thomas; Scheutz, Matthias (2018). "The 'Big Red Button' Is Too Late: An Alternative Model for the Ethical Evaluation of AI Systems". Ethics and Information Technology. 20: 59–69. doi:10.1007/s10676-018-9447-7. S2CID 3582967.
  19. Dignum, Virginia (2018). "Ethics in Artificial Intelligence: Introduction to the Special Issue". Ethics and Information Technology. 20: 1–3. doi:10.1007/s10676-018-9450-z.
  20. Short, Sue (2003-01-01). "The Measure of a Man?: Asimov's Bicentennial Man, Star Trek's Data, and Being Human". Extrapolation. 44 (2): 209–223. doi:10.3828/extr.2003.44.2.6. ISSN 0014-5483.
  21. Staff, Pacific Standard. "Can 'Westworld' Give Us New Ways of Talking About Slavery?". Pacific Standard. Retrieved 2019-09-16.
  22. Parker, Laura (2015-04-15). "How 'Ex Machina' Stands Out for Not Fearing Artificial Intelligence". The Atlantic. Retrieved 2019-09-16.
  23. Kilkenny, Katie. "The Meaning of Life in 'Blade Runner 2049'". Pacific Standard. Retrieved 2019-09-16.
  24. Krishnan, Armin (2016). Killer Robots: Legality and Ethicality of Autonomous Weapons. Routledge. doi:10.4324/9781315591070. ISBN 9781315591070. Retrieved 2019-09-16.
  25. "2014". reachingcriticalwill.org. Retrieved 2022-04-03.
  26. "International Committee of the Red Cross (ICRC) position on autonomous weapon systems: ICRC position and background paper". International Review of the Red Cross. 102 (915): 1335–1349. December 2020. doi:10.1017/s1816383121000564. ISSN 1816-3831. S2CID 244396800.
  27. Amitai Etzioni; Oren Etzioni (June 2017). "Pros and Cons of Autonomous Weapons Systems". army.mil.
  28. Temperton, James (2015-09-15). "Campaign calls for ban on sex robots". Wired UK. ISSN 1357-0978. Retrieved 2022-08-07.
  29. Danaher, John; Earp, Brian D.; Sandberg, Anders (2017), Danaher, John; McArthur, Neil (eds.), "Should We Campaign Against Sex Robots?", Robot Sex: Social and Ethical Implications, Cambridge, MA: MIT Press, retrieved 2022-04-16.
  30. Richards, Neil M.; Smart, William D. (2013). "How Should the Law Think About Robots?". SSRN 2263363.
  31. Banks, Jaime (2020-09-10). "Good Robots, Bad Robots: Morally Valenced Behavior Effects on Perceived Mind, Morality, and Trust". International Journal of Social Robotics. 13 (8): 2021–2038. doi:10.1007/s12369-020-00692-3. hdl:2346/89911.
  32. Swiderska, Aleksandra; Küster, Dennis (2020). "Robots as Malevolent Moral Agents: Harmful Behavior Results in Dehumanization, Not Anthropomorphism". Cognitive Science. 44 (7): e12872. doi:10.1111/cogs.12872. PMID 33020966. S2CID 220429245.
  33. Voiklis, John; Kim, Boyoung; Cusimano, Corey; Malle, Bertram F. (August 2016). "Moral judgments of human vs. robot agents". 2016 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN). pp. 775–780. doi:10.1109/ROMAN.2016.7745207. ISBN 978-1-5090-3929-6. S2CID 25295130.
  34. Banks, Jaime; Koban, Kevin (2021). "Framing Effects on Judgments of Social Robots' (Im)Moral Behaviors". Frontiers in Robotics and AI. 8: 627233. doi:10.3389/frobt.2021.627233. PMC 8141842. PMID 34041272.
