Blay Whitby

Dr Blay Whitby is a philosopher and technology ethicist, specialising in computer science, artificial intelligence and robotics. He is based at the University of Sussex, England.[1]

Blay Whitby graduated with first-class honours from New College, Oxford University in 1974 and completed his PhD on "The Social Implications of Artificial Intelligence" at Middlesex University in 2003. His publications are predominantly in the area of the philosophy and ethical implications of artificial intelligence. His views place particular stress on the moral responsibilities of scientific and technical professionals,[2][3] and have some features in common with techno-progressivism.[4] Widening engagement in science and raising the level of debate on ethical issues are also important concerns of his.

Whitby is a member of the Ethics Strategic Panel of BCS, the Chartered Institute for IT. He also participates in art/science collaborations.

Related Research Articles

Artificial intelligence

Artificial intelligence (AI) is intelligence demonstrated by machines, as opposed to the natural intelligence displayed by humans or animals. Leading AI textbooks define the field as the study of "intelligent agents": any system that perceives its environment and takes actions that maximize its chance of achieving its goals. Some popular accounts use the term "artificial intelligence" to describe machines that mimic "cognitive" functions that humans associate with the human mind, such as "learning" and "problem solving".
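As a rough illustration of the "intelligent agent" definition above, the sketch below models an agent as something that maps a percept of its environment to a goal-directed action. It is a minimal, hypothetical example in Python; the class and names are illustrative, not drawn from any particular AI library.

    from abc import ABC, abstractmethod

    class Agent(ABC):
        """An agent perceives its environment and acts to further its goals."""

        @abstractmethod
        def act(self, percept):
            """Choose the action expected to best advance the agent's goals."""

    class ThermostatAgent(Agent):
        """A deliberately trivial agent: keep the sensed temperature near a target."""

        def __init__(self, target_temp=20.0):
            self.target_temp = target_temp

        def act(self, percept):
            # percept is the currently sensed temperature in degrees Celsius
            if percept < self.target_temp:
                return "heat"
            if percept > self.target_temp:
                return "cool"
            return "idle"

    agent = ThermostatAgent(target_temp=20.0)
    print(agent.act(17.5))  # -> heat

Even this toy example has the textbook structure: the percept (a temperature reading) comes from the environment, and the chosen action is the one that maximises the agent's chance of achieving its goal (a stable target temperature).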

Friendly artificial intelligence refers to hypothetical artificial general intelligence (AGI) that would have a positive (benign) effect on humanity, or at least align with human interests or contribute to fostering the improvement of the human species. It is a part of the ethics of artificial intelligence and is closely related to machine ethics. While machine ethics is concerned with how an artificially intelligent agent should behave, friendly artificial intelligence research is focused on how to practically bring about this behaviour and ensure it is adequately constrained.

A superintelligence is a hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds. "Superintelligence" may also refer to a property of problem-solving systems whether or not these high-level intellectual competencies are embodied in agents that act in the world. A superintelligence may or may not be created by an intelligence explosion and be associated with a technological singularity.

AI takeover

An AI takeover is a hypothetical scenario in which some form of artificial intelligence (AI) becomes the dominant form of intelligence on Earth, with computer programs or robots effectively taking control of the planet away from the human species. Possible scenarios include replacement of the entire human workforce, takeover by a superintelligent AI, and the popular notion of a robot uprising. Some public figures, such as Stephen Hawking and Elon Musk, have advocated research into precautionary measures to ensure future superintelligent machines remain under human control.

Computer ethics is a part of practical philosophy concerned with how computing professionals should make decisions regarding professional and social conduct. Margaret Anne Pierce, a professor in the Department of Mathematics and Computers at Georgia Southern University, has categorized the ethical decisions related to computer technology and usage into three primary influences:

  1. The individual's own personal code.
  2. Any informal code of ethical conduct that exists in the work place.
  3. Exposure to formal codes of ethics.

The philosophy of artificial intelligence is a branch of the philosophy of technology that explores artificial intelligence and its implications for knowledge and understanding of intelligence, ethics, consciousness, epistemology, and free will. Furthermore, the technology is concerned with the creation of artificial animals or artificial people, so the discipline is of considerable interest to philosophers. These factors contributed to the emergence of the philosophy of artificial intelligence. Some scholars argue that the AI community's dismissal of philosophy is detrimental.

Aaron Sloman is a philosopher and researcher on artificial intelligence and cognitive science. He held the Chair in Artificial Intelligence and Cognitive Science at the School of Computer Science at the University of Birmingham, and before that a chair with the same title at the University of Sussex. Since retiring he has been Honorary Professor of Artificial Intelligence and Cognitive Science at Birmingham. He has published widely on philosophy of mathematics, epistemology, cognitive science, and artificial intelligence; he has also collaborated widely, for example with biologist Jackie Chappell on the evolution of intelligence.

Robot ethics, sometimes known as "roboethics", concerns ethical problems that occur with robots, such as whether robots pose a threat to humans in the long or short run, whether some uses of robots are problematic, and how robots should be designed such that they act 'ethically'. Alternatively, roboethics refers specifically to the ethics of human behavior towards robots, as robots become increasingly advanced. Robot ethics is a sub-field of ethics of technology, specifically information technology, and it has close links to legal as well as socio-economic concerns. Researchers from diverse areas are beginning to tackle ethical questions about creating robotic technology and implementing it in societies, in a way that will still ensure the safety of the human race.

The ethics of artificial intelligence is the branch of the ethics of technology specific to artificially intelligent systems. It is sometimes divided into a concern with the moral behavior of humans as they design, make, use and treat artificially intelligent systems, and a concern with the behavior of machines, in machine ethics. It also includes the issue of a possible singularity due to superintelligent AI.

Turing test

The Turing test, originally called the imitation game by Alan Turing in 1950, is a test of a machine's ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human. Turing proposed that a human evaluator would judge natural language conversations between a human and a machine designed to generate human-like responses. The evaluator would be aware that one of the two partners in conversation is a machine, and all participants would be separated from one another. The conversation would be limited to a text-only channel such as a computer keyboard and screen so the result would not depend on the machine's ability to render words as speech. If the evaluator cannot reliably tell the machine from the human, the machine is said to have passed the test. The test results do not depend on the machine's ability to give correct answers to questions, only how closely its answers resemble those a human would give.
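The protocol just described can be made concrete with a small simulation. The sketch below is a minimal hypothetical example in Python (all function names are illustrative): it hides a human responder and a machine responder behind anonymous labels and asks an evaluator to identify the machine from a text-only transcript.

    import random

    def human_reply(prompt):
        return "I'd say it depends on the context, really."

    def machine_reply(prompt):
        # A machine whose answers happen to be indistinguishable from the human's
        return "I'd say it depends on the context, really."

    def imitation_game(evaluator, n_questions=5):
        """Run one text-only session; return True if the evaluator
        correctly identifies which hidden partner is the machine."""
        partners = [("human", human_reply), ("machine", machine_reply)]
        random.shuffle(partners)  # the evaluator only ever sees labels A and B
        transcript = []
        for i in range(n_questions):
            question = f"Question {i + 1}?"
            answers = {label: fn(question)
                       for label, (kind, fn) in zip("AB", partners)}
            transcript.append((question, answers))
        guess = evaluator(transcript)  # the evaluator names "A" or "B" as the machine
        actual = "A" if partners[0][0] == "machine" else "B"
        return guess == actual

    # An evaluator who cannot tell the replies apart can only guess, so over
    # many sessions it identifies the machine about half the time: the sense
    # in which the machine "passes" the test.
    guessing_evaluator = lambda transcript: random.choice("AB")
    trials = [imitation_game(guessing_evaluator) for _ in range(1000)]
    print(sum(trials) / len(trials))  # close to 0.5

Note how the simulation reflects Turing's emphasis above: nothing in it scores the answers for correctness, only whether the evaluator can reliably distinguish the two respondents.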

James H. Moor is the Daniel P. Stone Professor of Intellectual and Moral Philosophy at Dartmouth College. He earned his Ph.D. in 1972 from Indiana University. Moor's 1985 paper entitled "What is Computer Ethics?" established him as one of the pioneering theoreticians in the field of computer ethics. He has also written extensively on the Turing Test. His research includes study in philosophy of artificial intelligence, philosophy of mind, philosophy of science, and logic.

Machine ethics is a part of the ethics of artificial intelligence concerned with adding or ensuring moral behaviors of man-made machines that use artificial intelligence, otherwise known as artificially intelligent agents. Machine ethics differs from other ethical fields related to engineering and technology. It should not be confused with computer ethics, which focuses on human use of computers, and it should also be distinguished from the philosophy of technology, which concerns itself with the grander social effects of technology.

Mark Coeckelbergh

Mark Coeckelbergh is a Belgian philosopher of technology. He is Professor of Philosophy of Media and Technology at the Department of Philosophy of the University of Vienna and former President of the Society for Philosophy and Technology. He was previously Professor of Technology and Social Responsibility at De Montfort University in Leicester, UK, Managing Director of the 3TU Centre for Ethics and Technology, and a member of the Philosophy Department of the University of Twente. Before moving to Austria, he lived and worked in Belgium, the UK, and the Netherlands. He is the author of several books, including Growing Moral Relations (2012), Human Being @ Risk (2013), Environmental Skill (2015), Money Machines (2015), New Romantic Cyborgs (2017), Moved by Machines (2019), the textbook Introduction to Philosophy of Technology (2019), and AI Ethics (2020). He has written many articles and is an expert in the ethics of artificial intelligence. He is best known for his work in philosophy of technology and the ethics of robotics and artificial intelligence (AI); he has also published in the areas of moral philosophy and environmental philosophy.

The Machine Question

The Machine Question: Critical Perspectives on AI, Robots, and Ethics is a 2012 nonfiction book by David J. Gunkel that discusses the evolution of the theory of human ethical responsibilities toward non-human things, asking to what extent intelligent, autonomous machines can be considered to have legitimate moral responsibilities and what legitimate claims to moral consideration they can hold. The book was named the 2012 Best Single Authored Book by the Communication Ethics Division of the National Communication Association.

Existential risk from artificial general intelligence is the hypothesis that substantial progress in artificial general intelligence (AGI) could someday result in human extinction or some other unrecoverable global catastrophe. It is argued that the human species currently dominates other species because the human brain has some distinctive capabilities that other animals lack. If AI surpasses humanity in general intelligence and becomes "superintelligent", then it could become difficult or impossible for humans to control. Just as the fate of the mountain gorilla depends on human goodwill, so might the fate of humanity depend on the actions of a future machine superintelligence.

Shannon Vallor

Shannon Vallor is a philosopher of technology. She is the Baillie Gifford Chair in the Ethics of Data and Artificial Intelligence at the Edinburgh Futures Institute. She was previously the Regis and Dianne McKenna Professor of Philosophy at Santa Clara University in Santa Clara, California.

S. Matthew Liao

S. Matthew Liao is an American philosopher specializing in bioethics and normative ethics. He is internationally known for his work on topics including children’s rights and human rights, novel reproductive technologies, neuroethics, and the ethics of artificial intelligence. Liao currently holds the Arthur Zitrin Chair of Bioethics, and is the Director of the Center for Bioethics and Affiliated Professor in the Department of Philosophy at New York University. He has previously held appointments at Oxford University, Johns Hopkins, Georgetown, and Princeton University.

Joanna Bryson

Joanna Bryson is Professor of Ethics and Technology at the Hertie School in Berlin. She works on artificial intelligence, ethics, and collaborative cognition. She has been a British citizen since 2007.

Julie Carpenter

Julie Carpenter, born Julie Gwyn Wajdyk, is an American researcher whose work focuses on human behavior with emerging technologies, especially within vulnerable and marginalized populations. She is best known for her work in human attachment to robots and other forms of artificial intelligence.

Wendell Wallach is a bioethicist and author focused on the ethics and governance of emerging technologies, in particular artificial intelligence and neuroscience. He is a scholar at Yale University’s Interdisciplinary Center for Bioethics, a senior advisor to The Hastings Center, a Carnegie/Uehiro Senior Fellow at the Carnegie Council for Ethics in International Affairs, and a fellow at the Center for Law and Innovation at the Sandra Day O’Connor School of Law at Arizona State University. He has written two books on the ethics of emerging technologies.

References

  1. Blay Whitby, University of Sussex, UK.
  2. Whitby, B. (2007). "Computing Machinery and Morality". AI & Society, Vol. 22, No. 4 (April 2008), pp. 551–563.
  3. Whitby, B. "More or Less Human-Like? Ethical Issues in Human-Robot Interaction". In Ethics of Human Interaction with Robotic, Bionic, and AI Systems: Concepts and Policies. Naples, Italy: ETHICBOTS European Project.
  4. Whitby, B. "Sometimes It's Hard to Be a Robot: A Call for Action on the Ethics of Abusing Artificial Agents". Interacting with Computers, Vol. 20, pp. 326–333. Elsevier.