Utility system

In video game AI, a utility system, or utility AI, is a simple but effective way to model behaviors for non-player characters. Numbers, formulas, and scores are used to rate the relative benefit of possible actions, assigning a utility to each one. A behavior can then be selected either by taking the action with the highest utility or by using the scores to seed a probability distribution for a weighted random selection. The result is that the character selects the "best" behavior for the situation at that moment, as defined by the mathematical model of each behavior.
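
The exact math varies from game to game. As a minimal, hypothetical sketch (the score(context) method and the action objects are assumptions for illustration, not from any specific engine), the selection step could look like this:

```python
# Minimal, hypothetical sketch of the selection step. Each action is assumed
# to expose a score(context) method returning a non-negative utility.
import random

def choose_action(actions, context, weighted=False):
    scored = [(action.score(context), action) for action in actions]
    if not weighted:
        # Pick the single highest-utility action.
        return max(scored, key=lambda pair: pair[0])[1]
    # Weighted random selection: higher scores are proportionally more likely.
    total = sum(score for score, _ in scored)
    if total <= 0:
        return random.choice(actions)
    pick = random.uniform(0, total)
    for score, action in scored:
        pick -= score
        if pick <= 0:
            return action
    return scored[-1][1]
```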

Key concepts

The concept of utility has been around for centuries, primarily in mathematically oriented fields such as economics, but it has also been applied in psychology, sociology, and even biology. Because of this background, and because computer programs must ultimately express everything numerically, utility came naturally as a way of designing and expressing behaviors for game characters.

Naturally, different AI architectures have their pros and cons. One of the benefits of utility AI is that it is less "hand-authored" than many other types of game AI architectures.[1] While behaviors in a utility system are often created individually (and by hand), the interactions and priorities between them are not inherently specified. For example, behavior trees (BTs) require the designer to arrange checks in priority order to determine whether something should be done; only if a behavior (or tree branch) does not execute will the behavior tree fall through to check the next one.

By comparison, behaviors in many utility systems sort themselves out by priority based on the scores generated by the mathematical models that define each behavior. Because of this, the developer isn't required to determine exactly where a new behavior "fits" in the overall scheme of what could be thousands of behavior "nodes" in a BT. Instead, the focus is simply on defining the specific reasons why the single behavior in question would be beneficial (i.e. its "utility"). The decision system then scores each behavior according to what is happening in the world at that moment and selects the best one. While some care must be taken to ensure that all behaviors are scored on the same or similar premises, the "heavy lifting" of determining how to prioritize tens, or even hundreds, of different behaviors is offloaded from the designer and put into the execution of the system itself.
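
As an illustration of that point, the hedged sketch below defines each behavior only by its own scoring function over invented context values; no behavior specifies where it ranks relative to the others:

```python
# Hypothetical sketch: each behavior carries only the reasons it would be
# useful; nothing specifies where it ranks relative to the others.
behaviors = {
    "reload": lambda ctx: 1.0 - ctx["ammo_fraction"],              # low ammo -> high utility
    "take_cover": lambda ctx: ctx["incoming_fire"] * (1.0 - ctx["health"]),
    "attack": lambda ctx: ctx["target_visible"] * ctx["ammo_fraction"],
}

def best_behavior(ctx):
    # The system, not the designer, decides the ordering at runtime.
    return max(behaviors, key=lambda name: behaviors[name](ctx))

print(best_behavior({"ammo_fraction": 0.1, "incoming_fire": 0.0,
                     "health": 0.9, "target_visible": 1.0}))       # -> "reload"
```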

Background

Early use

Numbers, formulas, and scores have been used for decades in games to define behavior. Even something as simple as defining a set percentage chance for something to happen (e.g. a 12% chance to perform Action X) was an early step toward utility AI. Only in the early 21st century, however, did the method start to take on the more formalized approach now commonly referred to as "utility AI".

Mathematical modeling of behavior

In The Sims (2000), an NPC's current "need" for something (e.g. rest, food, social activity) was combined with a score from an object or activity that could satisfy that need. The combination of these values gave the action a score that told the Sim what it should do. This was one of the first visible uses of utility AI in a game. While players didn't see the calculations themselves, they were made aware of the Sim's relative needs and the varying degrees of satisfaction that objects in the game would provide. It was, in fact, the core gameplay mechanic.
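
A loose sketch of that idea follows (not The Sims' actual code; the need names and advertisement values are invented for illustration):

```python
# Loose sketch, not The Sims' actual code: an object "advertises" how much it
# satisfies each need, weighted by how badly the character currently needs it.
needs = {"hunger": 0.8, "energy": 0.3, "social": 0.1}   # 1.0 = desperate

objects = {
    "fridge": {"hunger": 0.9},
    "bed": {"energy": 1.0},
    "phone": {"social": 0.6, "energy": -0.1},
}

def object_score(advertisements):
    return sum(needs.get(need, 0.0) * value
               for need, value in advertisements.items())

best = max(objects, key=lambda name: object_score(objects[name]))
print(best)   # "fridge": 0.8 * 0.9 = 0.72 beats "bed" (0.30) and "phone" (0.03)
```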

In The Sims 3 (2009), Richard Evans used a modified version of the Boltzmann distribution to choose an action for a Sim, with a temperature that is low when the Sim is happy and high when the Sim is doing badly, making it more likely that a low-utility action is chosen.[2] He also incorporated "personalities" into the Sims. This created a sort of three-axis model, extending the numeric "needs" and "satisfaction values" to include preferences so that different NPCs might react differently from others in the same circumstances based on their internal wants and drives.
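
A hedged sketch of Boltzmann (softmax) selection with a mood-driven temperature is shown below; the utilities and the happiness-to-temperature mapping are invented for illustration and are not Evans' actual formulas:

```python
# Hedged sketch of Boltzmann (softmax) selection. A low temperature makes the
# highest-utility action dominate; a high temperature lets lower-utility
# actions through. Values and the mood-to-temperature mapping are invented.
import math
import random

def boltzmann_choice(utilities, temperature):
    # Subtract the max utility for numerical stability before exponentiating.
    peak = max(utilities.values())
    weights = {a: math.exp((u - peak) / temperature) for a, u in utilities.items()}
    total = sum(weights.values())
    pick = random.uniform(0, total)
    for action, weight in weights.items():
        pick -= weight
        if pick <= 0:
            return action
    return action  # fallback for floating-point rounding

happiness = 0.2                         # 0 = miserable, 1 = very happy
temperature = 0.1 + (1.0 - happiness)   # an unhappy Sim makes noisier choices
print(boltzmann_choice({"eat": 0.7, "sleep": 0.5, "sulk": 0.2}, temperature))
```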

In his book Behavioral Mathematics for Game AI,[3] Dave Mark detailed how to think of behavior in terms of math, including such concepts as response curves (converting changing input variables into output variables). He and Kevin Dill went on to give many of the early lectures on utility theory at the AI Summit of the annual Game Developers Conference (GDC) in San Francisco, including "Improving AI Decision Modeling Through Utility Theory" in 2010[4] and "Embracing the Dark Art of Mathematical Modeling in AI" in 2012.[5] These lectures helped establish utility AI as a commonly referenced architecture alongside finite state machines (FSMs), behavior trees, and planners.
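
For example, a response curve simply reshapes a normalized input into a utility value; the curve shapes and parameters below are illustrative choices rather than examples taken from the book:

```python
# Illustrative response curves: a raw input (here, distance to a threat) is
# normalized to 0..1 and reshaped into a utility. Curve shapes and parameters
# are arbitrary examples, not taken from the book.
import math

def clamp01(x):
    return max(0.0, min(1.0, x))

def linear(x):                   # straight falloff
    return clamp01(1.0 - x)

def quadratic(x):                # stays high, then drops off sharply
    return clamp01(1.0 - x * x)

def logistic(x, steepness=10.0, midpoint=0.5):   # smooth S-shaped falloff
    return 1.0 / (1.0 + math.exp(steepness * (x - midpoint)))

distance, max_distance = 12.0, 40.0
x = clamp01(distance / max_distance)   # normalize the input
flee_utility = logistic(x)             # a close threat yields utility near 1
```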

A "Utility System"

While the work of Richard Evans and subsequent AI programmers on the Sims franchise, such as David "Rez" Graham,[6] was heavily based on utility AI, Dave Mark and his co-worker from ArenaNet, Mike Lewis, lectured at the AI Summit during the 2015 GDC about a full stand-alone architecture Mark had developed, the Infinite Axis Utility System (IAUS).[7] The IAUS was designed to be a data-driven, self-contained architecture that, once hooked up to the inputs and outputs of the game system, did not require much programming support. In a way, this made it similar to behavior trees and planners in that the reasoner (what makes the decisions) was fully established, and it was left to the development team to add behaviors into the mix as they saw fit.
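
The full details are in the lecture; one element commonly associated with the IAUS is scoring each action as the product of any number of normalized "consideration" scores, which the following hedged sketch approximates with invented considerations:

```python
# Hedged approximation of infinite-axis-style scoring: an action's score is
# the product of any number of normalized "consideration" scores, so a single
# zero-valued consideration vetoes the action. Considerations are invented.
def score_action(considerations, context):
    score = 1.0
    for consideration in considerations:
        score *= consideration(context)   # each returns a value in [0, 1]
        if score == 0.0:
            break                         # early out: the action is ruled out
    return score

reload_considerations = [
    lambda ctx: 1.0 - ctx["ammo_fraction"],     # more useful when ammo is low
    lambda ctx: 1.0 if ctx["has_spare_clip"] else 0.0,
    lambda ctx: 1.0 - ctx["enemy_proximity"],   # riskier when enemies are close
]

ctx = {"ammo_fraction": 0.2, "has_spare_clip": True, "enemy_proximity": 0.3}
print(score_action(reload_considerations, ctx))  # 0.8 * 1.0 * 0.7 -> about 0.56
```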

Utility with other architectures

Rather than building a stand-alone architecture, others have presented methods of incorporating utility calculations into existing architectures. Bill Merrill wrote a chapter in the book Game AI Pro[8] entitled "Building Utility Decisions into Your Existing Behavior Tree",[9] with examples of how to re-purpose selectors in BTs to use utility-based math. This made for a powerful hybrid that kept much of the popular formal structure of behavior trees while allowing some of the less brittle advantages that utility offers.
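
A minimal sketch of that general idea (not Merrill's actual implementation) is a selector that ticks its children in order of utility score rather than in a fixed order:

```python
# Minimal sketch (not Merrill's actual code) of a behavior-tree selector that
# ticks its children in order of utility score instead of fixed order.
class UtilitySelector:
    def __init__(self, children):
        # children: list of (score_fn, node) pairs; node.tick(ctx) returns
        # "success" or "failure".
        self.children = children

    def tick(self, ctx):
        ranked = sorted(self.children, key=lambda pair: pair[0](ctx), reverse=True)
        for score_fn, node in ranked:
            if score_fn(ctx) <= 0.0:
                continue                  # a zero score means "not applicable"
            if node.tick(ctx) == "success":
                return "success"
        return "failure"
```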

Utility-based decision-making is relatively fast in terms of real-time performance compared to more computationally expensive planning approaches such as Monte Carlo tree search (MCTS). This mainly stems from the fact that a utility system is reactive; it chooses a decision based on the present state. Planning approaches involve some kind of search that considers various future scenarios at the expense of heavier computation. However, the two architectures can be combined. In a conference paper about the AI in the game Tactical Troops: Anthracite Shift,[10] a utility system is responsible for high-level strategic decision-making, whereas Monte Carlo tree search handles deep tactical situations that require more exact planning.
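
A hedged sketch of that layering follows (not the paper's implementation; the strategy names and scoring are invented, and the MCTS call is a placeholder):

```python
# Hedged sketch of the layering only (not the paper's implementation): a cheap
# utility scorer picks the high-level strategy, and the option that needs
# lookahead delegates to a more expensive search such as MCTS (placeholder).
def run_mcts_engagement(ctx):
    raise NotImplementedError("stand-in for an expensive tactical tree search")

strategies = {
    "retreat": lambda ctx: 1.0 - ctx["squad_strength"],
    "hold": lambda ctx: 0.5,
    "engage": lambda ctx: ctx["squad_strength"] * ctx["enemy_visible"],
}

def decide(ctx):
    choice = max(strategies, key=lambda name: strategies[name](ctx))
    if choice == "engage":
        return run_mcts_engagement(ctx)   # detailed tactics handled by search
    return choice
```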

See also

Game theory
Bounded rationality
Computational biology
Decision theory
Interactive storytelling
Soar (cognitive architecture)
Blackboard system
Artificial intelligence in video games
Cognitive architecture
Neural network
Actor model
Intelligent agent
Action selection
Reactive planning
Game Description Language
Situated approach (artificial intelligence)
Entity component system
Behavior tree (artificial intelligence, robotics and control)
Behavior selection algorithm
Glossary of artificial intelligence

References

  1. Mark, Dave (August 2012). "AI Architectures: A Culinary Guide".
  2. Evans, Richard. "Modeling Individual Personalities in The Sims 3". GDC Vault. pp. 36–38. Retrieved 21 September 2015.
  3. Mark, Dave (March 2009). Behavioral Mathematics for Game AI. ISBN 978-1584506843.
  4. Mark, Dave; Dill, Kevin (2010). "Improving AI Decision Modeling Through Utility Theory". GDC Vault.
  5. Mark, Dave; Dill, Kevin (2012). "Embracing the Dark Art of Mathematical Modeling in AI". GDC Vault.
  6. Graham, David "Rez" (September 2013). "An Introduction to Utility Theory" (PDF). GameAIPro.
  7. Mark, Dave; Lewis, Mike (2015). "Building a Better Centaur: AI at Massive Scale". GDC Vault.
  8. Rabin, Steve, ed. (September 2013). Game AI Pro. CRC Press.
  9. Merrill, Bill (September 2013). "Building Utility Decisions into Your Existing Behavior Tree" (PDF). GameAIPro.
  10. Świechowski, Maciej; Lewiński, Daniel; Tyl, Rafał (5 December 2021). Combining Utility AI and MCTS Towards Creating Intelligent Agents in Video Games, with the Use Case of Tactical Troops: Anthracite Shift. IEEE Symposium Series on Computational Intelligence (SSCI). Orlando, Florida, USA: IEEE. pp. 1–8. doi:10.1109/SSCI50451.2021.9660170.