Susan Lynn Schneider [1] | |
---|---|
Nationality | American |
Occupation | Philosopher |
Academic background | |
Education | University of California, Berkeley; Rutgers University |
Doctoral advisor | Jerry Fodor |
Academic work | |
Institutions | Moravian College; University of Pennsylvania; Library of Congress; University of Connecticut; Florida Atlantic University |
Website | schneiderwebsite |
External videos | |
---|---|
"Artificial You: AI and the Future of Your Mind", Susan Schneider, Talks at Google, November 7, 2019 | |
"Can a Robot Feel?", Susan Schneider, TEDxCambridge, June 22, 2016 | |
"Transcending the Brain? AI, Radical Brain Enhancement and the Nature of Consciousness", Susan Schneider, Harvard, January 3, 2019 | |
"AI and Artificially Enhanced Brains", Susan Schneider, Royal Institution, June 18, 2020 |
Susan Lynn Schneider is an American philosopher and artificial intelligence expert. She is the founding director of the Center for the Future Mind at Florida Atlantic University, where she also holds the William F. Dietrich Distinguished Professorship. [1] Schneider has also held the Baruch S. Blumberg NASA/Library of Congress Chair in Astrobiology, Exploration, and Scientific Innovation at NASA and the Distinguished Scholar Chair at the Library of Congress.
Schneider graduated from the University of California, Berkeley in 1993 with a B.A. (honors) in Economics. She then went to Rutgers University, [2] [3] where she worked with Jerry Fodor, [4] graduating with a Ph.D. in Philosophy in 2003. [2] [3]
Schneider was an assistant professor of philosophy at the University of Pennsylvania and an associate professor of philosophy and cognitive science at the University of Connecticut. [3] [5] She was the founding director of the Group for AI, Mind and Society ("AIMS"). [6] In addition, she has done research at the Australian National University, [7] [3] the Institute for Advanced Study in Princeton, New Jersey, [8] and the Yale Interdisciplinary Center for Bioethics at Yale University. [7] [3]
At the Library of Congress in Washington, D.C., she has held the Distinguished Scholar Chair and the Baruch S. Blumberg NASA Chair in Astrobiology, Exploration, and Scientific Innovation. [9] [10] In 2020, Schneider accepted the position of William F. Dietrich Professor of Philosophy at Florida Atlantic University (FAU), jointly appointed to the FAU Brain Institute. [11] [12]
Schneider writes about the philosophical nature of the mind and self, drawing on and addressing issues from philosophy of mind, cognitive science, artificial intelligence, ethics, metaphysics, and astrobiology. [9] Topics include the nature of life, the nature of persons, what minds have in common with programs, radical brain enhancement, superintelligence, panpsychism, and emergent spacetime. [13] [9] [14]
In her book Artificial You: AI and the Future of Your Mind, Schneider discusses different theories of artificial intelligence (AI) and consciousness, and speculates about the ethical, philosophical, and scientific implications of AI for humanity. [15] She argues that AI will inevitably change our understanding of intelligence, and may also change us in ways that we do not anticipate, intend, or desire. She advocates a cautious and thoughtful approach to transhumanism, emphasizing that people must make careful choices to ensure that sentient beings, whether human or android, flourish. [16] [17] Using AI technology to reshape the human brain or to build machine minds means experimenting with "tools" that we do not understand how to use: the mind, the self, and consciousness. Schneider argues that failing to understand fundamental philosophical issues will jeopardize the beneficial use of AI and brain enhancement technology, and may lead to the suffering or death of conscious beings. To flourish, humans must address the philosophical issues underlying the AI algorithms. [18] [19] [20] [16]
In her work on the mind-body problem, she argues against physicalism, maintaining a monistic position and offering, in a series of papers, several novel anti-physicalist arguments. [21] [22] [23]
In the domain of astrobiology, Schneider contends that the most intelligent alien beings we encounter will be "postbiological in nature", that is, forms of artificial intelligence; that they would be superintelligent; and that we can predict what some of these superintelligences would be like. [13] [24] Her reason for the claim that the most intelligent aliens will be "postbiological" is called the "short window observation", which holds that by the time any society learns to transmit radio signals, it is likely only a few hundred years away from upgrading its own biology. [13]
In an earlier technical book on the computational nature of the brain, The Language of Thought: A New Philosophical Direction (MIT Press, 2011), Schneider examines the viability of different computational theories of thinking. Expanding on the work of Jerry Fodor, with whom she had studied, she suggests revisions to the symbol processing approach known as the "language of thought hypothesis" (LOTH) or "language of thought" (LOT). [25] Drawing on both computational neuroscience and cognitive psychology, Schneider argues that the brain may be a hybrid computational system. [9] She defends a view in which mental symbols are the basic vocabulary items composing the language of thought. She then uses this conception of symbols, together with certain work on the nature of meaning, to construct a theory of the nature of concepts. [26] [27] [4] The basic theory of concepts is intended to be ecumenical, with a version that applies to connectionism as well as versions that apply to both the prototype theory and the definitions view of concepts. [4]
Testing AI:
Schneider argues that determining whether something is conscious requires dedicated tests. The Turing test was developed to address the puzzle of testing thinking, not consciousness; it was devised long before the first artificial intelligence systems were built and does not answer the question of consciousness. The idea behind the Turing test was that if a machine could hold a conversation, then it could think. This standard is too restrictive: much as a "seagull test" for flight would wrongly conclude that anything that does not look like a seagull cannot fly, a conversational test mistakes resemblance to humans for the underlying capacity. Because of these faults, Schneider holds that machine consciousness should be evaluated with a variety of tests. The two main tests she discusses are the ACT test and her chip test, both of which aim to overcome the faults that plague the Turing test.
Chip Test:
The chip test, unlike the Turing test, focuses on the parts inside the machine rather than its behavior alone. Schneider holds that if a machine has parts that could support a human consciousness, we should consider that it might also be conscious. One issue with this test is that if someone's brain were replaced with a silicon chip and they were then asked whether they are conscious, nothing would stop the chip from emitting a sound saying "yes, I am conscious". The person could act exactly as they did before, simply without consciousness.
ACT test:
The ACT test determines consciousness based on verbal behavior, specifically verbal behavior concerning the metaphysics of consciousness. The machine is tested on its ability to entertain philosophical ideas and thoughts: whether it thinks about the afterlife, whether we have a soul, and so on. If the machine could have these profound thoughts without being taught to do so, then it would be conscious; a machine that is not truly conscious would perform very poorly on the test. This test has its faults as well. It relies on the idea that the machine has not been fed any information. The issue lies not in the computer but in humans, since we have been fed information from the moment we were born: our philosophy is constructed by the things and ideas around us. So while we may think philosophically about what happens when we die, the computer may be thinking of something completely different. If a machine were fed no information at all, or information different from ours, we could not compare it to us.
Schneider is active as a public philosopher, [12] [28] who believes that individuals, not companies, need to be considering and deciding the philosophical issues that will affect them personally, socially, and culturally as a result of artificial intelligence. [9] She writes opinion pieces for venues such as The New York Times, [29] [30] the Financial Times, [31] and Scientific American. [32] [33] [28]
Her work has been mentioned by numerous publications including The New York Times, Wired, Smithsonian, Discover, Science, Slate, Motherboard, Big Think, Inverse, and Nautilus. [28] [34] [35] [36] [37] [38] [39]
Schneider has been featured on television shows on BBC World News, [16] The History Channel, Fox News, PBS, and the National Geographic Channel, [28] and appears in the feature film Supersapiens: The Rise of the Mind by Markus Mooslechner. [40] [41]
Consciousness, at its simplest, is awareness of internal and external existence. However, its nature has led to millennia of analyses, explanations, and debate by philosophers, scientists, and theologians. Opinions differ about what exactly needs to be studied or even considered consciousness. In some explanations, it is synonymous with the mind, and at other times, an aspect of it. In the past, it was one's "inner life", the world of introspection, of private thought, imagination, and volition. Today, it often includes any kind of cognition, experience, feeling, or perception. It may be awareness, awareness of awareness, metacognition, or self-awareness, either continuously changing or not. The disparate range of research, notions and speculations raises a curiosity about whether the right questions are being asked.
The Chinese room argument holds that a computer executing a program cannot have a mind, understanding, or consciousness, regardless of how intelligently or human-like the program may make the computer behave. The argument was presented in a 1980 paper by the philosopher John Searle entitled "Minds, Brains, and Programs" and published in the journal Behavioral and Brain Sciences. Before Searle, similar arguments had been presented by figures including Gottfried Wilhelm Leibniz (1714), Anatoly Dneprov (1961), Lawrence Davis (1974) and Ned Block (1978). Searle's version has been widely discussed in the years since. The centerpiece of Searle's argument is a thought experiment known as the Chinese room.
Mind uploading is a speculative process of whole brain emulation in which a brain scan is used to completely emulate the mental state of the individual in a digital computer. The computer would then run a simulation of the brain's information processing, such that it would respond in essentially the same way as the original brain and experience having a sentient conscious mind.
Sentience is the ability to experience feelings and sensations. It may not necessarily imply higher cognitive functions such as awareness, reasoning, or complex thought processes. Sentience is an important concept in ethics, as the ability to experience happiness or suffering often forms a basis for determining which entities deserve moral consideration, particularly in utilitarianism.
Artificial consciousness, also known as machine consciousness, synthetic consciousness, or digital consciousness, is the consciousness hypothesized to be possible in artificial intelligence. It is also the corresponding field of study, which draws insights from philosophy of mind, philosophy of artificial intelligence, cognitive science and neuroscience.
"Computing Machinery and Intelligence" is a seminal paper written by Alan Turing on the topic of artificial intelligence. The paper, published in 1950 in Mind, was the first to introduce his concept of what is now known as the Turing test to the general public.
Artificial general intelligence (AGI) is a type of artificial intelligence (AI) that matches or surpasses human cognitive capabilities across a wide range of cognitive tasks. This contrasts with narrow AI, which is limited to specific tasks. Artificial superintelligence (ASI), on the other hand, refers to AGI that greatly exceeds human cognitive capabilities. AGI is considered one of the definitions of strong AI.
A philosophical zombie is a being in a thought experiment in the philosophy of mind that is physically identical to a normal human being but does not have conscious experience.
Ned Joel Block is an American philosopher working in philosophy of mind who has made important contributions to the understanding of consciousness and the philosophy of cognitive science. He has been professor of philosophy and psychology at New York University since 1996.
Weak artificial intelligence is artificial intelligence that implements a limited part of the mind, or, as narrow AI, is focused on one narrow task.
An artificial brain is software and hardware with cognitive abilities similar to those of the animal or human brain.
A physical symbol system takes physical patterns (symbols), combining them into structures (expressions) and manipulating them to produce new expressions.
The philosophy of artificial intelligence is a branch of the philosophy of mind and the philosophy of computer science that explores artificial intelligence and its implications for knowledge and understanding of intelligence, ethics, consciousness, epistemology, and free will. Furthermore, the technology is concerned with the creation of artificial animals and artificial people, so the discipline is of considerable interest to philosophers. These factors contributed to the emergence of the philosophy of artificial intelligence.
The symbol grounding problem is a concept in the fields of artificial intelligence, cognitive science, philosophy of mind, and semantics. It addresses the challenge of connecting symbols, such as words or abstract representations, to the real-world objects or concepts they refer to. In essence, it is about how symbols acquire meaning in a way that is tied to the physical world. It is concerned with how it is that words get their meanings, and hence is closely related to the problem of what meaning itself really is. The problem of meaning is in turn related to the problem of how it is that mental states are meaningful, and hence to the problem of consciousness: what is the connection between certain physical systems and the contents of subjective experiences.
In philosophy of mind, the computational theory of mind (CTM), also known as computationalism, is a family of views that hold that the human mind is an information processing system and that cognition and consciousness together are a form of computation. It is closely related to functionalism, a broader theory that defines mental states by what they do rather than what they're made of.
The philosophy of mind is a branch of philosophy that deals with the nature of the mind and its relation to the body and the external world.
Aaron Sloman is a philosopher and researcher on artificial intelligence and cognitive science. He held the Chair in Artificial Intelligence and Cognitive Science at the School of Computer Science at the University of Birmingham, and before that a chair with the same title at the University of Sussex. Since retiring he is Honorary Professor of Artificial Intelligence and Cognitive Science at Birmingham. He has published widely on philosophy of mathematics, epistemology, cognitive science, and artificial intelligence; he also collaborated widely, e.g. with biologist Jackie Chappell on the evolution of intelligence.
The Turing test, originally called the imitation game by Alan Turing in 1950, is a test of a machine's ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human. Turing proposed that a human evaluator would judge natural language conversations between a human and a machine designed to generate human-like responses. The evaluator would be aware that one of the two partners in conversation was a machine, and all participants would be separated from one another. The conversation would be limited to a text-only channel, such as a computer keyboard and screen, so the result would not depend on the machine's ability to render words as speech. If the evaluator could not reliably tell the machine from the human, the machine would be said to have passed the test. The test results would not depend on the machine's ability to give correct answers to questions, only on how closely its answers resembled those a human would give. Since the Turing test is a test of indistinguishability in performance capacity, the verbal version generalizes naturally to all of human performance capacity, verbal as well as nonverbal (robotic).
AI@50, formally known as the "Dartmouth Artificial Intelligence Conference: The Next Fifty Years", was a conference organized by James Moor, commemorating the 50th anniversary of the Dartmouth workshop which effectively inaugurated the history of artificial intelligence. Five of the original ten attendees were present: Marvin Minsky, Ray Solomonoff, Oliver Selfridge, Trenchard More, and John McCarthy.
Joscha Bach is a German cognitive scientist, AI researcher, and philosopher known for his work on cognitive architectures, artificial intelligence, mental representation, emotion, social modeling, multi-agent systems, and the philosophy of mind. His research aims to bridge cognitive science and AI by studying how human intelligence and consciousness can be modeled computationally.