James H. Moor


James H. Moor is the Daniel P. Stone Professor of Intellectual and Moral Philosophy at Dartmouth College. He earned his Ph.D. in 1972 from Indiana University. [1] Moor's 1985 paper "What Is Computer Ethics?" established him as one of the pioneering theoreticians in the field of computer ethics. [2] He has also written extensively on the Turing Test, and his research spans the philosophy of artificial intelligence, philosophy of mind, philosophy of science, and logic.


Moor was the editor-in-chief of Minds and Machines (2001–2010), a peer-reviewed academic journal covering artificial intelligence, philosophy, and cognitive science. [3]

Work

Moor distinguishes four kinds of robots in relation to ethics: ethical-impact agents, implicit ethical agents, explicit ethical agents, and full ethical agents. A machine can be more than one type of agent. [4]
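As an illustration only (Moor gives no code), the non-exclusive nature of these categories can be sketched with Python's enum.Flag, where one machine may combine several classifications; the class and member names here are hypothetical:

```python
# A minimal sketch, assuming hypothetical names; enum.Flag lets one
# machine carry several of Moor's classifications at once.
from enum import Flag, auto

class EthicalAgentKind(Flag):
    ETHICAL_IMPACT = auto()  # its actions have ethical consequences
    IMPLICIT = auto()        # designed so its behavior stays within ethical bounds
    EXPLICIT = auto()        # represents and reasons with ethical principles
    FULL = auto()            # a moral agent in the full human sense

# An autopilot could be classified as both an ethical-impact agent
# and an implicit ethical agent:
autopilot = EthicalAgentKind.ETHICAL_IMPACT | EthicalAgentKind.IMPLICIT
print(EthicalAgentKind.IMPLICIT in autopilot)  # True
```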

He has criticised Asimov's Three Laws of Robotics, arguing that, if applied thoroughly, they would produce unexpected results. He gives the example of a robot roaming the world trying to prevent harm from befalling all humans.
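A toy sketch (not from Moor's own writing) of why literal application misfires: if preventing harm to any human strictly outranks every other goal, a robot never reaches its owner's tasks while anyone anywhere is at risk. All names below are hypothetical:

```python
# A toy priority scheme echoing the Three Laws; all names hypothetical.
def choose_action(humans_at_risk, owner_tasks):
    if humans_at_risk:        # "First Law" strictly dominates
        return f"travel to protect {humans_at_risk[0]}"
    if owner_tasks:           # "Second Law": obey orders
        return f"perform {owner_tasks[0]}"
    return "preserve self"    # "Third Law"

# Someone, somewhere, is always at risk, so the robot roams forever
# and the dishes never get washed:
print(choose_action(["a stranger overseas"], ["wash the dishes"]))
```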

Awards

Moor received the ACM SIGCAS Making a Difference Award in 2003. [5]

Selected publications

Source: [6]

  - Moor, James H. (1985). "What Is Computer Ethics?". Metaphilosophy. 16 (4): 266–275.
  - Warwick, Kevin; Shah, Huma; Moor, James (2013). "Some Implications of a Sample of Practical Turing Tests". Minds and Machines. 23 (2): 163–177. [7]

Related Research Articles

Artificial intelligence (AI) is the intelligence of machines or software, as opposed to the intelligence of humans or other animals. It is a field of study in computer science that develops and studies intelligent machines. Such machines may be called AIs.

The Chinese room argument holds that a digital computer executing a program cannot have a "mind", "understanding", or "consciousness", regardless of how intelligently or human-like the program may make the computer behave. The argument was presented by philosopher John Searle in his paper "Minds, Brains, and Programs", published in Behavioral and Brain Sciences in 1980. Similar arguments were presented by Gottfried Leibniz (1714), Anatoly Dneprov (1961), Lawrence Davis (1974) and Ned Block (1978). Searle's version has been widely discussed in the years since. The centerpiece of Searle's argument is a thought experiment known as the Chinese room.


Kevin Warwick is an English engineer and Deputy Vice-Chancellor (Research) at Coventry University. He is known for his studies on direct interfaces between computer systems and the human nervous system, and has also done research concerning robotics.

Artificial consciousness (AC), also known as machine consciousness (MC), synthetic consciousness or digital consciousness, is the consciousness hypothesized to be possible in artificial intelligence. It is also the corresponding field of study, which draws insights from philosophy of mind, philosophy of artificial intelligence, cognitive science and neuroscience. The same terminology can be used with the term "sentience" instead of "consciousness" when specifically designating phenomenal consciousness.

The Loebner Prize was an annual competition in artificial intelligence that awarded prizes to the computer programs considered by the judges to be the most human-like. The prize has been reported as defunct since 2020. The format of the competition was that of a standard Turing test. In each round, a human judge simultaneously held textual conversations with a computer program and a human being via computer. Based upon the responses, the judge would attempt to determine which was which.

The ethics of technology is a sub-field of ethics addressing the ethical questions specific to the Technology Age, the transitional shift in society wherein personal computers and subsequent devices provide for the quick and easy transfer of information. Technology ethics is the application of ethical thinking to the growing concerns of technology as new technologies continue to rise in prominence.

A superintelligence is a hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds. "Superintelligence" may also refer to a property of problem-solving systems whether or not these high-level intellectual competencies are embodied in agents that act in the world. A superintelligence may or may not be created by an intelligence explosion and associated with a technological singularity.

Computer ethics is a part of practical philosophy concerned with how computing professionals should make decisions regarding professional and social conduct.

Weak artificial intelligence is artificial intelligence that implements a limited part of the mind or, as narrow AI, is focused on one narrow task. In John Searle's terms it “would be useful for testing hypotheses about minds, but would not actually be minds”. Weak AI focuses on mimicking how humans perform basic actions such as remembering things, perceiving things, and solving simple problems; it cannot have a mind of its own and can only imitate behaviors it observes. It is contrasted with strong AI, which is variously defined as AI that can think and learn on its own, use methods such as algorithms and prior knowledge to develop its own ways of thinking as humans do, and run independently of the programmers who built it.

The philosophy of artificial intelligence is a branch of the philosophy of mind and the philosophy of computer science that explores artificial intelligence and its implications for knowledge and understanding of intelligence, ethics, consciousness, epistemology, and free will. Furthermore, because the technology is concerned with the creation of artificial animals or artificial people, the discipline is of considerable interest to philosophers. These factors contributed to the emergence of the philosophy of artificial intelligence.

The philosophy of information (PI) is a branch of philosophy that studies topics relevant to information processing, representational systems and consciousness, cognitive science, computer science, information science and information technology.


Aaron Sloman is a philosopher and researcher on artificial intelligence and cognitive science. He held the Chair in Artificial Intelligence and Cognitive Science at the School of Computer Science at the University of Birmingham, and before that a chair with the same title at the University of Sussex. Since retiring he has been Honorary Professor of Artificial Intelligence and Cognitive Science at Birmingham. He has published widely on philosophy of mathematics, epistemology, cognitive science, and artificial intelligence; he has also collaborated widely, e.g. with biologist Jackie Chappell on the evolution of intelligence.

Robot ethics, sometimes known as "roboethics", concerns ethical problems that occur with robots, such as whether robots pose a threat to humans in the long or short run, whether some uses of robots are problematic, and how robots should be designed such that they act 'ethically'. Alternatively, roboethics refers specifically to the ethics of human behavior towards robots, as robots become increasingly advanced. Robot ethics is a sub-field of ethics of technology, specifically information technology, and it has close links to legal as well as socio-economic concerns. Researchers from diverse areas are beginning to tackle ethical questions about creating robotic technology and implementing it in societies, in a way that will still ensure the safety of the human race.

The ethics of artificial intelligence is the branch of the ethics of technology specific to artificially intelligent systems. It is sometimes divided into a concern with the moral behavior of humans as they design, make, use and treat artificially intelligent systems, and a concern with the behavior of machines, in machine ethics.


The Turing test, originally called the imitation game by Alan Turing in 1950, is a test of a machine's ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human. Turing proposed that a human evaluator would judge natural language conversations between a human and a machine designed to generate human-like responses. The evaluator would be aware that one of the two partners in conversation was a machine, and all participants would be separated from one another. The conversation would be limited to a text-only channel, such as a computer keyboard and screen, so the result would not depend on the machine's ability to render words as speech. If the evaluator could not reliably tell the machine from the human, the machine would be said to have passed the test. The test results would not depend on the machine's ability to give correct answers to questions, only on how closely its answers resembled those a human would give. Since the Turing test is a test of indistinguishability in performance capacity, the verbal version generalizes naturally to all of human performance capacity, verbal as well as nonverbal (robotic).
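As a rough illustration of the protocol (not a standard library or Turing's own formulation), the text-only setup can be sketched as follows; the judge object and the responder callables are hypothetical stand-ins:

```python
# A minimal sketch of the imitation game, assuming a hypothetical
# judge object with ask()/guess() and two responder callables.
import random

def run_imitation_game(judge, respond_human, respond_machine, rounds=5):
    channels = {"A": respond_human, "B": respond_machine}
    if random.random() < 0.5:          # hide which label is the machine
        channels = {"A": respond_machine, "B": respond_human}
    transcript = []
    for _ in range(rounds):
        question = judge.ask()
        # Both partners answer over anonymized text-only channels.
        transcript.append({label: respond(question)
                           for label, respond in channels.items()})
    verdict = judge.guess(transcript)  # label the judge believes is the machine
    machine_label = next(lbl for lbl, fn in channels.items()
                         if fn is respond_machine)
    return verdict != machine_label    # True: the machine went undetected
```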

Dr Blay Whitby is a philosopher and technology ethicist, specialising in computer science, artificial intelligence and robotics. He is based at the University of Sussex, England.

AI@50, formally known as the "Dartmouth Artificial Intelligence Conference: The Next Fifty Years", was a conference organized by James Moor commemorating the 50th anniversary of the Dartmouth workshop, which effectively inaugurated the field of artificial intelligence. Five of the original ten attendees were present: Marvin Minsky, Ray Solomonoff, Oliver Selfridge, Trenchard More, and John McCarthy.

Machine ethics is a part of the ethics of artificial intelligence concerned with adding or ensuring moral behaviors of man-made machines that use artificial intelligence, otherwise known as artificially intelligent agents. Machine ethics differs from other ethical fields related to engineering and technology. Machine ethics should not be confused with computer ethics, which focuses on human use of computers. It should also be distinguished from the philosophy of technology, which concerns itself with the grander social effects of technology.


Susan Lynn Schneider is an American philosopher and artificial intelligence expert. She is the founding director of the Center for the Future Mind at Florida Atlantic University where she also holds the William F. Dietrich Distinguished Professorship. Schneider has also held the Baruch S. Blumberg NASA/Library of Congress Chair in Astrobiology, Exploration, and Scientific Innovation at NASA and the Distinguished Scholar Chair at the Library of Congress.

Mariarosaria Taddeo is a senior research fellow at the Oxford Internet Institute, part of the University of Oxford, and deputy director of the Digital Ethics Lab. Taddeo is also an associate scholar at Saïd Business School, University of Oxford.

References

  1. "Jmoor". Archived from the original on 2018-07-12. Retrieved 2016-06-11.
  2. "SIGCAS Making a Difference Award 2003 — SIGCAS - Computers & Society". Archived from the original on 2016-08-08. Retrieved 2010-11-18.
  3. "Minds and Machines".
  4. "Four Kinds of Ethical Robots".
  5. "SIGCAS Making a Difference Award 2003 — SIGCAS - Computers & Society". Archived from the original on 2016-08-08. Retrieved 2010-11-18.
  6. "Jmoor". Archived from the original on 2018-07-12. Retrieved 2016-06-11.
  7. Warwick, Kevin; Shah, Huma; Moor, James (2013). "Some Implications of a Sample of Practical Turing Tests". Minds and Machines. 23 (2): 163–177. doi:10.1007/s11023-013-9301-y. S2CID 13933358.