Blay Whitby

Dr Blay "Horatio" Whitby is a philosopher and technology ethicist, specialising in computer science, artificial intelligence and robotics. He is based at the University of Sussex, England. [1]

Blay Whitby graduated with first-class honours from New College, Oxford in 1974 and completed his PhD, "The Social Implications of Artificial Intelligence", at Middlesex University in 2003. His publications are predominantly in the area of the philosophy and ethical implications of artificial intelligence. His views place particular stress on the moral responsibilities of scientific and technical professionals, [2] [3] and have some features in common with techno-progressivism. [4] Widening engagement in science and raising the level of debate on ethical issues are also important concerns of his.

Whitby is a member of the Ethics Strategic Panel of BCS, the Chartered Institute for IT. He also participates in art/science collaborations.

Related Research Articles

Friendly artificial intelligence is hypothetical artificial general intelligence (AGI) that would have a positive (benign) effect on humanity, or at least align with human interests or foster the improvement of the human species. It is a part of the ethics of artificial intelligence and is closely related to machine ethics. While machine ethics is concerned with how an artificially intelligent agent should behave, friendly artificial intelligence research focuses on how to bring about this behavior in practice and ensure it is adequately constrained.

Singularitarianism is a movement defined by the belief that a technological singularity—the creation of superintelligence—will likely happen in the medium future, and that deliberate action ought to be taken to ensure that the singularity benefits humans.

The ethics of technology is a sub-field of ethics addressing ethical questions specific to the technology age, the transitional shift in society wherein personal computers and subsequent devices provide for the quick and easy transfer of information. Technology ethics is the application of ethical thinking to growing concerns as new technologies continue to rise in prominence.

Computer ethics is a part of practical philosophy concerned with how computing professionals should make decisions regarding professional and social conduct.

The philosophy of artificial intelligence is a branch of the philosophy of mind and the philosophy of computer science that explores artificial intelligence and its implications for knowledge and understanding of intelligence, ethics, consciousness, epistemology, and free will. Because the technology is also concerned with creating artificial animals or artificial people, the discipline holds considerable interest for philosophers; these factors contributed to its emergence as a distinct field.

Aaron Sloman is a philosopher and researcher on artificial intelligence and cognitive science. He held the Chair in Artificial Intelligence and Cognitive Science at the School of Computer Science at the University of Birmingham, and before that a chair with the same title at the University of Sussex. Since retiring he has been Honorary Professor of Artificial Intelligence and Cognitive Science at Birmingham. He has published widely on the philosophy of mathematics, epistemology, cognitive science, and artificial intelligence; he has also collaborated widely, for example with the biologist Jackie Chappell on the evolution of intelligence.

Robot ethics, sometimes known as "roboethics", concerns ethical problems that occur with robots, such as whether robots pose a threat to humans in the long or short run, whether some uses of robots are problematic, and how robots should be designed such that they act 'ethically'. Alternatively, roboethics refers specifically to the ethics of human behavior towards robots, as robots become increasingly advanced. Robot ethics is a sub-field of ethics of technology, specifically information technology, and it has close links to legal as well as socio-economic concerns. Researchers from diverse areas are beginning to tackle ethical questions about creating robotic technology and implementing it in societies, in a way that will still ensure the safety of the human race.

The ethics of artificial intelligence covers a broad range of topics within the field that are considered to have particular ethical stakes. This includes algorithmic biases, fairness, automated decision-making, accountability, privacy, and regulation. It also covers various emerging or potential future challenges such as machine ethics, lethal autonomous weapon systems, arms race dynamics, AI safety and alignment, technological unemployment, AI-enabled misinformation, how to treat certain AI systems if they have a moral status, artificial superintelligence and existential risks.

James H. Moor was the Daniel P. Stone Professor of Intellectual and Moral Philosophy at Dartmouth College. He earned his Ph.D. in 1972 from Indiana University. Moor's 1985 paper entitled "What is Computer Ethics?" established him as one of the pioneering theoreticians in the field of computer ethics. He has also written extensively on the Turing Test. His research includes study in philosophy of artificial intelligence, philosophy of mind, philosophy of science, and logic.

Machine ethics is a part of the ethics of artificial intelligence concerned with adding or ensuring moral behaviors of man-made machines that use artificial intelligence, otherwise known as artificial intelligent agents. Machine ethics differs from other ethical fields related to engineering and technology. It should not be confused with computer ethics, which focuses on human use of computers. It should also be distinguished from the philosophy of technology, which concerns itself with technology's grander social effects.

Mark Coeckelbergh is a Belgian philosopher of technology. He is Professor of Philosophy of Media and Technology at the Department of Philosophy of the University of Vienna and a former President of the Society for Philosophy and Technology. He was previously Professor of Technology and Social Responsibility at De Montfort University in Leicester, UK, Managing Director of the 3TU Centre for Ethics and Technology, and a member of the Philosophy Department of the University of Twente. Before moving to Austria, he lived and worked in Belgium, the UK, and the Netherlands. He is the author of several books, including Growing Moral Relations (2012), Human Being @ Risk (2013), Environmental Skill (2015), Money Machines (2015), New Romantic Cyborgs (2017), Moved by Machines (2019), the textbook Introduction to Philosophy of Technology (2019), and AI Ethics (2020). He has written many articles and is an expert in the ethics of artificial intelligence. He is best known for his work in the philosophy of technology and the ethics of robotics and artificial intelligence (AI); he has also published in moral philosophy and environmental philosophy.

The Machine Question: Critical Perspectives on AI, Robots, and Ethics is a 2012 nonfiction book by David J. Gunkel. It traces the evolution of thinking about human ethical responsibilities toward non-human things, and asks to what extent intelligent, autonomous machines can be considered to have legitimate moral responsibilities and what legitimate claims to moral consideration they can hold. The book was named the 2012 Best Single Authored Book by the Communication Ethics Division of the National Communication Association.

Shannon Vallor is an American philosopher of technology. She is the Baillie Gifford Chair in the Ethics of Data and Artificial Intelligence at the Edinburgh Futures Institute. She previously taught at Santa Clara University (SCU) in Santa Clara, California, where she was the Regis and Dianne McKenna Professor of Philosophy and William J. Rewak, S.J. Professor.

Mary-Anne Williams is an Australian roboticist who is the Michael J Crouch Chair for Innovation at the University of New South Wales (UNSW) in Sydney, Australia, based in the UNSW Business School.

Joanna Joy Bryson is a professor at the Hertie School in Berlin. She works on artificial intelligence, ethics, and collaborative cognition. She has been a British citizen since 2007.

Artificial wisdom (AW) is an artificial intelligence system able to display the human traits of wisdom and morality while being able to contemplate its own "endpoint". Artificial wisdom can be described as artificial intelligence reaching the top level of decision-making when confronted with the most complex and challenging situations. The term is used when the "intelligence" rests not on collecting and interpreting data by chance, but is by design enriched with the smart and conscientious strategies that wise people would use.

Julie Carpenter, born Julie Gwyn Wajdyk, is an American researcher whose work focuses on human behavior with emerging technologies, especially within vulnerable and marginalized populations. She is best known for her work in human attachment to robots and other forms of artificial intelligence.

Mariarosaria Taddeo is an Italian philosopher working on the ethics of digital technologies. She is Professor of Digital Ethics and Defence Technologies at the Oxford Internet Institute, University of Oxford, and Dstl Ethics Fellow at the Alan Turing Institute, London.

Wendell Wallach is a bioethicist and author focused on the ethics and governance of emerging technologies, in particular artificial intelligence and neuroscience. He is a scholar at Yale University's Interdisciplinary Center for Bioethics, a senior advisor to The Hastings Center, and a Carnegie/Uehiro Senior Fellow at the Carnegie Council for Ethics in International Affairs, where he co-directs the "Artificial Intelligence Equality Initiative" with Anja Kaspersen. Wallach is also a fellow at the Center for Law and Innovation at the Sandra Day O'Connor College of Law at Arizona State University. He has written two books on the ethics of emerging technologies: Moral Machines: Teaching Robots Right from Wrong (2010) and A Dangerous Master: How to Keep Technology from Slipping Beyond Our Control (2015). In a podcast published by the Carnegie Council for Ethics in International Affairs (CCEIA), Wallach discusses his professional, personal, and spiritual journey, as well as some of the biggest conundrums facing humanity in the wake of the bio/digital revolution.

AI literacy, or artificial intelligence literacy, is the ability to understand, use, monitor, and critically reflect on AI applications. The term usually refers to teaching skills and knowledge to the general public, particularly those who are not adept in AI.

References

  1. Blay Whitby, University of Sussex, UK.
  2. Whitby, B. R. "Computing Machinery and Morality", AI & Society, Vol. 22, No. 4 (April 2008), pp. 551–563.
  3. Whitby, B. "More or less human-like? Ethical issues in human-robot interaction", in Ethics of Human Interaction with Robotic, Bionic, and AI Systems — Concepts and Policies, Naples, Italy: ETHICBOTS European Project.
  4. Whitby, B. "Sometimes it's hard to be a robot: A call for action on the ethics of abusing artificial agents", Interacting with Computers, Vol. 20, pp. 326–333. Elsevier.