Sequential decision making

In artificial intelligence, sequential decision making refers to algorithms that take the dynamics of the world into consideration, [1] and can therefore defer parts of the problem until they actually need to be solved. It can be described as a procedural approach to decision-making, or as a step-by-step decision theory. Sequential decision making gives rise to the intertemporal choice problem, in which earlier decisions influence the choices available later. [2]
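Sequential decision problems are commonly formalized as Markov decision processes, in which the value of a choice depends on the choices it makes available later. A minimal value-iteration sketch (the states, actions, transition probabilities, and rewards below are invented for illustration, not taken from the cited text):

```python
# Minimal value iteration on a toy Markov decision process (MDP).
# transitions[state][action] = list of (probability, next_state, reward)
transitions = {
    "s0": {"wait": [(1.0, "s0", 0.0)], "go": [(0.8, "s1", 1.0), (0.2, "s0", 0.0)]},
    "s1": {"wait": [(1.0, "s1", 0.0)], "go": [(1.0, "s0", 2.0)]},
}
gamma = 0.9  # discount factor: how strongly later rewards count now

V = {s: 0.0 for s in transitions}
for _ in range(100):  # iterate the Bellman update toward a fixed point
    V = {
        s: max(
            sum(p * (r + gamma * V[s2]) for p, s2, r in outcomes)
            for outcomes in actions.values()
        )
        for s, actions in transitions.items()
    }

# Greedy policy: pick the action with the best expected downstream value
policy = {
    s: max(actions, key=lambda a: sum(p * (r + gamma * V[s2]) for p, s2, r in actions[a]))
    for s, actions in transitions.items()
}
print(policy)
```

Here an earlier decision (whether to "go" from s0) changes which rewards are reachable later, which is the intertemporal coupling described above.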

Related Research Articles

Bayesian probability is an interpretation of the concept of probability, in which, instead of frequency or propensity of some phenomenon, probability is interpreted as reasonable expectation representing a state of knowledge or as quantification of a personal belief.
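On the Bayesian reading, a probability is a degree of belief that is updated by Bayes' rule as evidence arrives. A small worked example (the prevalence and test accuracies are made-up numbers for illustration):

```python
# Bayes' rule: posterior = likelihood * prior / evidence.
# Illustrative numbers: a test for a condition with 1% prevalence.
prior = 0.01            # P(condition)
sensitivity = 0.95      # P(positive | condition)
false_positive = 0.05   # P(positive | no condition)

# P(positive) by the law of total probability
evidence = sensitivity * prior + false_positive * (1 - prior)

# Updated degree of belief after observing a positive result
posterior = sensitivity * prior / evidence
print(round(posterior, 3))
```

The output (about 0.161) illustrates the interpretation: the number quantifies a state of knowledge after the evidence, not a long-run frequency of anything.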

Kenneth Arrow: American economist (1921–2017)

Kenneth Joseph Arrow was an American economist, mathematician, writer, and political theorist. He was the joint winner of the Nobel Memorial Prize in Economic Sciences with John Hicks in 1972.

A heuristic, or heuristic technique, is any approach to problem solving or self-discovery that employs a practical method that is not guaranteed to be optimal, perfect, or rational, but is nevertheless sufficient for reaching an immediate, short-term goal or approximation in a search space. Where finding an optimal solution is impossible or impractical, heuristic methods can be used to speed up the process of finding a satisfactory solution. Heuristics can be mental shortcuts that ease the cognitive load of making a decision.
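A classic illustration of a heuristic is the nearest-neighbour rule for touring problems: always visit the closest unvisited city next. It is fast and usually satisfactory, but not guaranteed optimal. A sketch with invented city coordinates:

```python
# Nearest-neighbour heuristic: greedily visit the closest unvisited city.
# City names and coordinates are illustrative.
import math

cities = {"A": (0, 0), "B": (1, 5), "C": (4, 1), "D": (6, 6)}

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def nearest_neighbour_tour(start):
    tour, unvisited = [start], set(cities) - {start}
    while unvisited:
        # practical shortcut: best local step, with no look-ahead
        nxt = min(unvisited, key=lambda c: dist(cities[tour[-1]], cities[c]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

tour = nearest_neighbour_tour("A")
print(tour)
```

The greedy step trades global optimality for a large reduction in search effort, which is exactly the bargain the paragraph above describes.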

Eliezer Yudkowsky: American AI researcher and writer (born 1979)

Eliezer S. Yudkowsky is an American artificial intelligence researcher and writer on decision theory and ethics, best known for popularizing ideas related to friendly artificial intelligence, including the idea of a "fire alarm" for AI. He is a co-founder and research fellow at the Machine Intelligence Research Institute (MIRI), a private research nonprofit based in Berkeley, California. His work on the prospect of a runaway intelligence explosion influenced philosopher Nick Bostrom's 2014 book Superintelligence: Paths, Dangers, Strategies.

Bounded rationality is the idea that rationality is limited when individuals make decisions, and under these limitations, rational individuals will select a decision that is satisfactory rather than optimal.
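The selection rule associated with bounded rationality is satisficing: take the first option that clears an aspiration level rather than searching for the best. A sketch (the options and threshold are invented for illustration):

```python
# Satisficing: accept the first option whose utility meets an aspiration
# level, instead of evaluating every option to find the optimum.
def satisfice(options, utility, aspiration):
    for option in options:
        if utility(option) >= aspiration:
            return option  # good enough; stop searching
    return None  # no option met the aspiration level

# Illustrative: choosing an apartment by score, scanned in listing order
apartments = [("downtown", 6), ("suburb", 8), ("rural", 9)]
choice = satisfice(apartments, utility=lambda a: a[1], aspiration=7)
print(choice)  # the suburb clears the bar first, though rural scores higher
```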

Creativity: Forming something new and valuable


Creativity is a characteristic of someone or some process that forms something new and valuable. The created item may be intangible or a physical object.

Intuition: Ability to acquire knowledge without conscious reasoning

Intuition is the ability to acquire knowledge without recourse to conscious reasoning or needing an explanation. Different fields use the word "intuition" in very different ways, including but not limited to: direct access to unconscious knowledge; unconscious cognition; gut feelings; inner sensing; inner insight into unconscious pattern recognition; and the ability to understand something instinctively, without any need for conscious reasoning. Intuitive knowledge tends to be approximate.

Case-based reasoning: Process of solving new problems based on the solutions of similar past problems

In artificial intelligence and philosophy, case-based reasoning (CBR), broadly construed, is the process of solving new problems based on the solutions of similar past problems.
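The core of the classic CBR cycle is retrieval: find the stored case most similar to the new problem and reuse its solution. A minimal retrieval step over an invented case base (the features, similarity measure, and cases are illustrative):

```python
# Case-based reasoning, retrieval step: find the stored case most similar
# to the new problem and reuse its solution. Cases and features invented.
def similarity(a, b):
    # simple overlap similarity on feature dictionaries
    return sum(1 for k in a if k in b and a[k] == b[k])

case_base = [
    ({"symptom": "fever", "season": "winter"}, "flu"),
    ({"symptom": "rash", "season": "summer"}, "allergy"),
    ({"symptom": "fever", "season": "summer"}, "heatstroke"),
]

def solve(problem):
    best_features, solution = max(case_base, key=lambda c: similarity(problem, c[0]))
    return solution  # reuse; a fuller system would also adapt and retain

print(solve({"symptom": "fever", "season": "winter"}))
```

A complete CBR system would follow retrieval with adaptation of the reused solution and retention of the solved problem as a new case.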

Decision-making: Cognitive process to choose a course of action or belief

In psychology, decision-making is regarded as the cognitive process resulting in the selection of a belief or a course of action among several possible alternative options. It could be either rational or irrational. The decision-making process is a reasoning process based on assumptions of values, preferences and beliefs of the decision-maker. Every decision-making process produces a final choice, which may or may not prompt action.

Intelligence has been defined in many ways: the capacity for abstraction, logic, understanding, self-awareness, learning, emotional knowledge, reasoning, planning, creativity, critical thinking, and problem-solving. It can be described as the ability to perceive or infer information, and to retain it as knowledge to be applied towards adaptive behaviors within an environment or context.

Problem solving

Problem solving is the process of achieving a goal by overcoming obstacles, a frequent part of most activities. Problems in need of solutions range from simple personal tasks to complex issues in business and technical fields. The former is an example of simple problem solving (SPS) addressing one issue, whereas the latter is complex problem solving (CPS) with multiple interrelated obstacles. Another classification of problem-solving tasks is into well-defined problems with specific obstacles and goals, and ill-defined problems in which the current situation is troublesome but it is not clear what kind of resolution to aim for. Similarly, one may distinguish formal or fact-based problems requiring psychometric intelligence, versus socio-emotional problems which depend on the changeable emotions of individuals or groups, such as tactful behavior, fashion, or gift choices.

Weak artificial intelligence

Weak artificial intelligence is artificial intelligence that implements a limited part of the mind or, as narrow AI, is focused on one narrow task. In John Searle's terms it "would be useful for testing hypotheses about minds, but would not actually be minds". Weak AI focuses on mimicking how humans perform basic actions such as remembering things, perceiving things, and solving simple problems. Strong AI, by contrast, aims to think and learn on its own: such systems use methods like algorithms and prior knowledge to develop their own ways of thinking, as human beings do, and learn to run independently of the programmers who built them. Weak AI cannot have a mind of its own and can only imitate physical behaviors that it can observe.

Computational cognition is the study of the computational basis of learning and inference by mathematical modeling, computer simulation, and behavioral experiments. In psychology, it is an approach that develops computational models based on experimental results. It seeks to understand the basis of the human method of processing information. Early on, computational cognitive scientists sought to bring back and create a scientific form of Brentano's psychology.

DPLL algorithm

In logic and computer science, the Davis–Putnam–Logemann–Loveland (DPLL) algorithm is a complete, backtracking-based search algorithm for deciding the satisfiability of propositional logic formulae in conjunctive normal form, i.e. for solving the CNF-SAT problem.
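The splitting-and-propagation structure of DPLL can be sketched compactly. This is a didactic sketch under simple conventions (a formula is a list of clauses, a clause is a set of integer literals, a negative integer is a negated variable), not an efficient solver:

```python
# A compact DPLL sketch for CNF-SAT. Formula: list of clauses; clause:
# set of literals; literal: int (negative = negated variable).
def dpll(clauses, assignment=frozenset()):
    # Unit propagation: a unit clause forces its literal to be true.
    while any(len(c) == 1 for c in clauses):
        unit = next(iter(next(c for c in clauses if len(c) == 1)))
        clauses, assignment = simplify(clauses, unit), assignment | {unit}
    if not clauses:
        return assignment          # all clauses satisfied: a model
    if any(not c for c in clauses):
        return None                # empty clause: conflict, backtrack
    literal = next(iter(clauses[0]))   # branch on some remaining literal
    for choice in (literal, -literal):
        result = dpll(simplify(clauses, choice), assignment | {choice})
        if result is not None:
            return result
    return None

def simplify(clauses, literal):
    # Drop satisfied clauses; delete the falsified literal elsewhere.
    return [c - {-literal} for c in clauses if literal not in c]

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
model = dpll([{1, 2}, {-1, 3}, {-2, -3}])
print(model)
```

The two defining refinements over plain backtracking are visible here: unit propagation before branching, and simplification of the clause set after each decision.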

Marcus Hutter: Computer scientist

Marcus Hutter is a professor and artificial intelligence researcher. A Senior Scientist at DeepMind, he is researching the mathematical foundations of artificial general intelligence. He is on leave from his professorship at the ANU College of Engineering and Computer Science of the Australian National University in Canberra, Australia. Hutter studied physics and computer science at the Technical University of Munich. In 2000 he joined Jürgen Schmidhuber's group at the Istituto Dalle Molle di Studi sull'Intelligenza Artificiale in Manno, Switzerland. He developed a mathematical theory of artificial general intelligence. His book Universal Artificial Intelligence: Sequential Decisions Based on Algorithmic Probability was published by Springer in 2005.

Attribute substitution is a psychological process thought to underlie a number of cognitive biases and perceptual illusions. It occurs when an individual has to make a judgment that is computationally complex, and instead substitutes a more easily calculated heuristic attribute. This substitution is thought of as taking place in the automatic intuitive judgment system, rather than the more self-aware reflective system. Hence, when someone tries to answer a difficult question, they may actually answer a related but different question, without realizing that a substitution has taken place. This explains why individuals can be unaware of their own biases, and why biases persist even when the subject is made aware of them. It also explains why human judgments often fail to show regression toward the mean.

Cognitive biology is an emerging science that regards natural cognition as a biological function. It is based on the theoretical assumption that every organism—whether a single cell or multicellular—is continually engaged in systematic acts of cognition coupled with intentional behaviors, i.e., a sensory-motor coupling. That is to say, if an organism can sense stimuli in its environment and respond accordingly, it is cognitive. Any explanation of how natural cognition may manifest in an organism is constrained by the biological conditions in which its genes survive from one generation to the next. And since by Darwinian theory the species of every organism is evolving from a common root, three further elements of cognitive biology are required: (i) the study of cognition in one species of organism is useful, through contrast and comparison, to the study of another species’ cognitive abilities; (ii) it is useful to proceed from organisms with simpler to those with more complex cognitive systems, and (iii) the greater the number and variety of species studied in this regard, the more we understand the nature of cognition.

In the philosophy of artificial intelligence, GOFAI is classical symbolic AI, as opposed to other approaches, such as neural networks, situated robotics, narrow symbolic AI or neuro-symbolic AI. The term was coined by philosopher John Haugeland in his 1985 book Artificial Intelligence: The Very Idea.

Keith Frankish is a British philosopher specializing in philosophy of mind, philosophy of psychology, and philosophy of cognitive science. He is an Honorary Reader at the University of Sheffield, UK, Visiting Research Fellow with The Open University, and adjunct Professor with the Brain and Mind Programme at the University of Crete. He is known for his "illusionist" stance in the theory of consciousness. He holds that the conscious mind is a virtual system, a trick of the biological mind. In other words, phenomenality is an introspective illusion. This position is in opposition to dualist theories, reductive realist theories, and panpsychism.

Ethics of uncertain sentience: Applied ethics issue

The ethics of uncertain sentience refers to questions surrounding the treatment of and moral obligations towards individuals whose sentience—the capacity to subjectively sense and feel—and resulting ability to experience pain is uncertain; the topic has been particularly discussed within the field of animal ethics, with the precautionary principle frequently invoked in response.

References

  1. Frankish, Keith; Ramsey, William M., eds. (2014). The Cambridge Handbook of Artificial Intelligence. Cambridge, UK: Cambridge University Press. p. 337. ISBN 9780521871426. OCLC 865297798.
  2. Amir, Eyal (2014). "Reasoning and decision making". In Frankish, Keith; Ramsey, William M. (eds.). The Cambridge Handbook of Artificial Intelligence. Cambridge, UK: Cambridge University Press. pp. 191–212. ISBN 9780521871426. OCLC 865297798.