| Author | Joseph Weizenbaum |
|---|---|
| Language | English |
| Genre | Nonfiction |
| Publisher | W. H. Freeman and Company |
| Publication date | 1976 |
| Publication place | United States |
| Media type | |
| Pages | 300 |
| ISBN | 978-0716704645 |
Computer Power and Human Reason: From Judgment to Calculation is a 1976 nonfiction book by German-American computer scientist Joseph Weizenbaum in which he contends that while artificial intelligence may be possible, we should never allow computers to make important decisions, as they will always lack human qualities such as compassion and wisdom. [1]
Before writing Computer Power and Human Reason, Weizenbaum had garnered significant attention for creating the ELIZA program, an early milestone in conversational computing. His firsthand observation of people attributing human-like qualities to a simple program prompted him to reflect more deeply on society's readiness to entrust moral and ethical considerations to machines. [2]
Computer Power and Human Reason sparked scholarly debate on the acceptable scope of AI applications, particularly in fields where human welfare and ethical considerations are paramount. Early academic reviews highlighted that Weizenbaum's stance pushed readers to recognize that even as computers grow more capable, they lack the intrinsic moral compass and empathy required for certain kinds of judgment. [3] [4] [5]
The book caused disagreement with, and separation from, other members of the artificial intelligence research community, a status the author later said he had come to take pride in. [6]
Artificial intelligence (AI), in its broadest sense, is intelligence exhibited by machines, particularly computer systems. It is a field of research in computer science that develops and studies methods and software that enable machines to perceive their environment and use learning and intelligence to take actions that maximize their chances of achieving defined goals. Such machines may be called AIs.
ELIZA is an early natural language processing computer program developed from 1964 to 1967 at MIT by Joseph Weizenbaum. Created to explore communication between humans and machines, ELIZA simulated conversation using a pattern matching and substitution methodology that gave users an illusion of understanding on the part of the program, though it had no representation of meaning and could not genuinely understand what was being said by either party. While the ELIZA program itself was originally written in MAD-SLIP, the pattern matching directives that contained most of its language capability were provided in separate "scripts", expressed in a Lisp-like representation. The most famous script, DOCTOR, simulated a psychotherapist of the Rogerian school and used rules, dictated in the script, to respond to user inputs with non-directive questions. As such, ELIZA was one of the first chatterbots and one of the first programs capable of attempting the Turing test.
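The pattern-matching-and-substitution mechanism described above can be sketched in a few lines of Python. The rules and reflection table here are illustrative inventions in the spirit of the DOCTOR script, not Weizenbaum's original code: each rule pairs a pattern with a non-directive response template, and the substitution step swaps first- and second-person words in the matched fragment before echoing it back.

```python
import re

# Pronoun reflections applied to the captured fragment; a minimal,
# hypothetical rule set for illustration only.
REFLECTIONS = {"i": "you", "me": "you", "my": "your",
               "am": "are", "you": "I", "your": "my"}

# Script-like rules: a regex pattern and a non-directive response template.
RULES = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"(.*) mother(.*)", re.I), "Tell me more about your family."),
]

def reflect(fragment):
    """Swap first- and second-person words in the matched fragment."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(sentence):
    """Return the response of the first rule whose pattern matches, else a fallback."""
    for pattern, template in RULES:
        match = pattern.match(sentence)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    return "Please go on."
```

For example, `respond("I am angry at my brother")` yields "How long have you been angry at your brother?", which is the illusion-producing move: the program reflects the user's own words back as a question without any model of what they mean.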
In computer science, the ELIZA effect is a tendency to project human traits, such as experience, semantic comprehension, or empathy, onto rudimentary computer programs having a textual interface. ELIZA was a symbolic AI chatbot developed in 1966 by Joseph Weizenbaum that imitated a psychotherapist. Many early users were convinced of ELIZA's intelligence and understanding, despite its simple text-processing approach and explicit explanations of its limitations.
Joseph Weizenbaum was a German American computer scientist and a professor at MIT. The Weizenbaum Award and the Weizenbaum Institute are named after him.
Natural language understanding (NLU) or natural language interpretation (NLI) is a subset of natural language processing in artificial intelligence that deals with machine reading comprehension. NLU has been considered an AI-hard problem.
Friendly artificial intelligence is hypothetical artificial general intelligence (AGI) that would have a positive (benign) effect on humanity, or at least align with human interests or contribute to fostering the improvement of the human species. It is a part of the ethics of artificial intelligence and is closely related to machine ethics. While machine ethics is concerned with how an artificially intelligent agent should behave, friendly artificial intelligence research is focused on how to practically bring about this behavior and ensure it is adequately constrained.
The ethics of technology is a sub-field of ethics addressing ethical questions specific to the technology age, the transitional shift in society wherein personal computers and subsequent devices provide for the quick and easy transfer of information. Technology ethics is the application of ethical thinking to growing concerns as new technologies continue to rise in prominence.
A superintelligence is a hypothetical agent that possesses intelligence surpassing that of the brightest and most gifted human minds. "Superintelligence" may also refer to a property of problem-solving systems whether or not these high-level intellectual competencies are embodied in agents that act in the world. A superintelligence may or may not be created by an intelligence explosion and associated with a technological singularity.
Computer ethics is a part of practical philosophy concerned with how computing professionals should make decisions regarding professional and social conduct.
SLIP is a list processing computer programming language, invented by Joseph Weizenbaum in the 1960s. The name SLIP stands for Symmetric LIst Processor. It was first implemented as an extension to the Fortran programming language, and later embedded into MAD and ALGOL. The best known program written in the language is ELIZA, an early natural language processing computer program created by Weizenbaum at the MIT Artificial Intelligence Laboratory.
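The "symmetric" in SLIP refers to list cells linked in both directions, so a list can be traversed forward or backward from either end. A minimal sketch of that idea follows, written in Python rather than SLIP's original Fortran embedding; the class and method names are invented for illustration and do not reflect SLIP's actual API:

```python
class Cell:
    """A list cell linked in both directions: the 'symmetric' idea behind SLIP."""
    def __init__(self, value):
        self.value = value
        self.prev = None
        self.next = None

class SymmetricList:
    """A doubly linked list traversable forward from the head or backward from the tail."""
    def __init__(self):
        self.head = None
        self.tail = None

    def append(self, value):
        """Attach a new cell at the tail, wiring links in both directions."""
        cell = Cell(value)
        if self.tail is None:
            self.head = self.tail = cell
        else:
            cell.prev = self.tail
            self.tail.next = cell
            self.tail = cell
        return cell

    def forward(self):
        """Yield values head-to-tail."""
        cell = self.head
        while cell:
            yield cell.value
            cell = cell.next

    def backward(self):
        """Yield values tail-to-head."""
        cell = self.tail
        while cell:
            yield cell.value
            cell = cell.prev
```

Symmetric links make operations like deleting a cell or splicing a sublist possible without scanning from the front, which mattered for the list-heavy text manipulation that programs such as ELIZA performed.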
Robot ethics, sometimes known as "roboethics", concerns ethical problems that occur with robots, such as whether robots pose a threat to humans in the long or short run, whether some uses of robots are problematic, and how robots should be designed such that they act 'ethically'. Alternatively, roboethics refers specifically to the ethics of human behavior towards robots, as robots become increasingly advanced. Robot ethics is a sub-field of ethics of technology, specifically information technology, and it has close links to legal as well as socio-economic concerns. Researchers from diverse areas are beginning to tackle ethical questions about creating robotic technology and implementing it in societies, in a way that will still ensure the safety of the human race.
The ethics of artificial intelligence covers a broad range of topics within the field that are considered to have particular ethical stakes. This includes algorithmic biases, fairness, automated decision-making, accountability, privacy, and regulation. It also covers various emerging or potential future challenges such as machine ethics, lethal autonomous weapon systems, arms race dynamics, AI safety and alignment, technological unemployment, AI-enabled misinformation, how to treat certain AI systems if they have a moral status, artificial superintelligence and existential risks.
A virtual assistant (VA) is a software agent that can perform a range of tasks or services for a user based on user input such as commands or questions, including verbal ones. Such technologies often incorporate chatbot capabilities to simulate human conversation, such as via online chat, to facilitate interaction with their users. The interaction may be via text, graphical interface, or voice, as some virtual assistants are able to interpret human speech and respond via synthesized voices.
Plug & Pray is a 2010 documentary film about the promise, problems and ethics of artificial intelligence and robotics. The main protagonists are the former MIT professor Joseph Weizenbaum and the futurist Raymond Kurzweil. The title is a pun on the computer hardware phrase "Plug and Play".
Machine ethics is a part of the ethics of artificial intelligence concerned with adding or ensuring moral behaviors of man-made machines that use artificial intelligence, otherwise known as artificial intelligent agents. Machine ethics differs from other ethical fields related to engineering and technology. It should not be confused with computer ethics, which focuses on human use of computers. It should also be distinguished from the philosophy of technology, which concerns itself with technology's grander social effects.
S. Matthew Liao is a Taiwanese-American philosopher specializing in bioethics and normative ethics. Liao currently holds the Arthur Zitrin Chair of Bioethics, and is the Director of the Center for Bioethics and Affiliated Professor in the Department of Philosophy at New York University. He has previously held appointments at Oxford, Johns Hopkins, Georgetown, and Princeton.
Artificial wisdom, or AW, is an artificial intelligence system able to display the human traits of wisdom and morality while being able to contemplate its own "endpoint". Artificial wisdom can be described as artificial intelligence reaching the top level of decision-making when confronted with the most complex and challenging situations. The term is used when the system's "intelligence" rests not merely on collecting and interpreting data by chance, but is by design enriched with the deliberate, conscientious strategies that wise people would use.
AI literacy, or artificial intelligence literacy, is the ability to understand, use, monitor, and critically reflect on AI applications. The term usually refers to teaching skills and knowledge to the general public, particularly those who are not adept in AI.
Artificial intelligence rhetoric is a term primarily applied to persuasive text and speech generated by chatbots using generative artificial intelligence, although the term can also apply to the language that humans type or speak when communicating with a chatbot. This emerging field of rhetoric scholarship is related to the fields of digital rhetoric and human-computer interaction.