Dialog manager

A dialog manager (DM) is a component of a dialog system (DS), responsible for the state and flow of the conversation. Usually, the input to the DM is the human utterance, converted by the NLU component into some system-specific semantic representation; the DM maintains state variables, such as the dialog history or the latest unanswered question; and the output of the DM is a set of instructions to other DS components, usually in a semantic representation that the NLG component converts back into human language.

There are many different DMs that fulfill very different roles. There can even be several DM components in a single DS.

The only thing common to all DMs is that they are stateful, in contrast to other parts of the DS (such as the NLU and NLG components), which are just stateless functions. The DM roles can roughly be divided into these groups:

  1. Input-control, which enables context-dependent processing of the human utterances.
  2. Output-control, which enables state-dependent generation of text.
  3. Strategic flow-control, which decides what action the dialog agent should take at each point of the dialog.
  4. Tactical flow-control, which makes local tactical conversational decisions (error handling, initiative control, etc.).

Input-control DM

The human input has different meanings depending on the context. For example, in a travel-planning DS, the computer may ask "Where do you want to depart from?" and the human may answer "Tel Aviv"; the computer may then ask "Where do you want to arrive at?" and the human may answer "Gaza".

The meaning of each city name depends on the previously asked question. A DM may keep that question in a state variable, and use it to convert "Tel Aviv" to "I want to depart from Tel Aviv" and "Gaza" to "I want to arrive at Gaza".
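
A minimal sketch of this idea in Python, assuming a hypothetical DM class, string-valued slot names and a simple dictionary as the interpreted result (none of this is taken from the cited systems):

# Sketch of an input-control DM: the meaning of a bare city name depends on
# the question that was asked last. All names here are hypothetical.

class InputControlDM:
    def __init__(self):
        self.last_question = None  # state variable: the slot we just asked about

    def ask(self, slot):
        self.last_question = slot
        return f"Where do you want to {slot.replace('_', ' ')}?"

    def interpret(self, utterance):
        # A bare city name is resolved against the previously asked question.
        if self.last_question == "depart_from":
            return {"intent": "set_origin", "city": utterance}
        if self.last_question == "arrive_at":
            return {"intent": "set_destination", "city": utterance}
        return {"intent": "unknown", "text": utterance}

dm = InputControlDM()
print(dm.ask("depart_from"))      # "Where do you want to depart from?"
print(dm.interpret("Tel Aviv"))   # {'intent': 'set_origin', 'city': 'Tel Aviv'}
print(dm.ask("arrive_at"))
print(dm.interpret("Gaza"))       # {'intent': 'set_destination', 'city': 'Gaza'}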

This function is on the border between NLU and DM: in some systems it's included in the NLU, such as the context-dependent rules of Milward (2000); while in other systems it is included in the DM, such as the NP resolution module of Mirkovic and Cavedon (2005).

Another function on the border between the NLU and the DM is determining whether several consecutive inputs are part of a single utterance. For example, in a job-negotiation dialog, an offer may be split over three consecutive utterances: the speaker states one term of the offer, adds a second term that starts with "and", and then adds a third term with no connective at all.

All three utterances together form a single offer. For the second utterance, the word "and" is a clue, but for the third utterance the only possible clue is that it was said immediately after the second one. To detect this, the DM should probably keep a timestamp of each utterance.
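
A small sketch of how a DM might use timestamps to merge consecutive fragments into one utterance. The 2-second threshold, the Fragment structure and the sample offer terms are illustrative assumptions, not taken from any particular system:

# Sketch: merge input fragments into a single utterance when they arrive
# close together in time or start with a connective such as "and".

from dataclasses import dataclass

@dataclass
class Fragment:
    text: str
    timestamp: float  # seconds since the dialog started

def merge_fragments(fragments, max_gap=2.0):
    merged = []
    for frag in fragments:
        follows_quickly = merged and frag.timestamp - merged[-1][-1].timestamp <= max_gap
        starts_with_connective = frag.text.lower().startswith(("and", "also"))
        if merged and (follows_quickly or starts_with_connective):
            merged[-1].append(frag)          # continuation of the previous utterance
        else:
            merged.append([frag])            # start of a new utterance
    return [" ".join(f.text for f in group) for group in merged]

offer = [
    Fragment("I can offer you a salary of 20,000", 10.0),
    Fragment("and a company car", 11.2),       # "and" is the clue
    Fragment("plus ten vacation days", 12.5),  # only clue: it follows immediately
]
print(merge_fragments(offer))  # the three fragments come out as one merged offer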

Output-control DM

The computer output may be made more natural by remembering the dialog history. For example, NPCEditor (a framework for authoring characters that answer human questions) allows the author to define question-answer pairs, such that for each question there are several possible answers. The DM selects the best answer for the question, unless it was already used, in which case it selects the second-best answer, and so on.

A similar feature exists in ChatScript (a framework for authoring chatter-bots): Each time the DS uses a certain rule, the DM marks this rule as "used", so that it won't be used again.
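
The following sketch illustrates the shared idea behind both features: select the best response that has not been used yet. The ranking, data layout and fallback text are assumptions for illustration, not NPCEditor's or ChatScript's actual APIs:

# Sketch of output control: each question maps to a ranked list of answers,
# and the DM skips answers it has already used.

class OutputControlDM:
    def __init__(self, answers_by_question):
        self.answers_by_question = answers_by_question  # question -> answers, best first
        self.used = set()

    def respond(self, question):
        for answer in self.answers_by_question.get(question, []):
            if answer not in self.used:
                self.used.add(answer)   # mark as used so it is not repeated
                return answer
        return "I already told you everything I know about that."

dm = OutputControlDM({"who are you?": ["I am the museum guide.", "Still the museum guide."]})
print(dm.respond("who are you?"))  # best answer
print(dm.respond("who are you?"))  # second-best answer
print(dm.respond("who are you?"))  # fallback when everything was used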

A recent DS for technical assistance [ citation needed ] uses advanced machine-learned rules to select the best terms for describing items. For example, if the DM notices that it's speaking with an adult, it will use terms such as "the left hand"; if it notices that it's speaking with a child, it will use less technical terms such as "the hand where you wear your watch".

This function is on the border between DM and NLG.

Strategic flow-control DM

The main role of a DM is to decide what action the dialog agent should take at each point of the dialog.

A simple way to do this is to let the author completely specify the dialog structure, for example as a script of tutorial prompts together with the expected responses to each of them.

The DM keeps a pointer to the current position in the script. The position is updated according to the human input.
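
A minimal sketch of such a scripted DM in Python. The script format (a list of prompts with an expected answer and a hint) and its content are invented for illustration; this is not the format of any particular framework:

# Sketch of a fully scripted DM: the dialog structure is a fixed list of steps,
# and the DM keeps a pointer to the current position, advancing it on each input.

script = [
    {"prompt": "What is 2 + 2?",     "expected": "4", "hint": "Try counting on your fingers."},
    {"prompt": "And what is 4 * 2?", "expected": "8", "hint": "Double the previous answer."},
]

class ScriptedDM:
    def __init__(self, script):
        self.script = script
        self.position = 0                      # pointer to the current step

    def next_prompt(self):
        if self.position >= len(self.script):
            return "That is the end of the lesson."
        return self.script[self.position]["prompt"]

    def handle_input(self, text):
        if self.position >= len(self.script):
            return "That is the end of the lesson."
        step = self.script[self.position]
        if text.strip() == step["expected"]:
            self.position += 1                 # advance the pointer
            return "Right! " + self.next_prompt()
        return step["hint"]                    # stay on the same step

dm = ScriptedDM(script)
print(dm.next_prompt())
print(dm.handle_input("5"))   # wrong: hint, pointer stays
print(dm.handle_input("4"))   # correct: pointer advances to the next prompt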

There are many languages and frameworks that allow authors to specify dialog structures, such as: VoiceXML (optimized for speech dialogs), AIML, Facade and ChatScript (optimized for chat-bots), CDM (Java-based, optimized for device-control dialogs), and TuTalk (optimized for tutorial dialogs).

Additionally, the dialog structure can be described as a state-chart, using a standard language such as SCXML. This is done in DomainEditor (a framework for tactical questioning characters).

It is quite tedious for authors to write a full dialog structure. There are many improvements that allow authors to describe a dialog at a higher level of abstraction, while putting more of the burden on the DM.

Hierarchical structure

Ravenclaw (a DM for goal-oriented dialogs, based on the CMU Communicator) lets the author describe the dialog as a multi-level, hierarchical structure of dialog modules, for example a login module containing sub-modules for asking the user's name, followed by modules for handling the user's query.

The Ravenclaw DM keeps a stack of dialog modules, and uses it to process the human input.
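
A minimal sketch of the stack idea in Python, with made-up module names and behaviour. It is not Ravenclaw's actual architecture or API, only an illustration of how a stack of dialog modules can drive the conversation:

# Sketch of a hierarchical DM: dialog modules can push sub-modules onto a stack;
# the module on top of the stack handles the next human input.

class DialogModule:
    def handle(self, dm, user_input):
        raise NotImplementedError

class AskName(DialogModule):
    def handle(self, dm, user_input):
        dm.state["name"] = user_input
        dm.stack.pop()                       # sub-task finished
        return f"Hello {user_input}! What can I do for you?"

class Login(DialogModule):
    def handle(self, dm, user_input):
        dm.stack.pop()                       # replace ourselves with a sub-module
        dm.stack.append(AskName())
        return "Welcome. What is your name?"

class HierarchicalDM:
    def __init__(self, root_modules):
        self.stack = list(root_modules)      # top of the stack = last element
        self.state = {}

    def handle_input(self, user_input):
        if not self.stack:
            return "Goodbye."
        return self.stack[-1].handle(self, user_input)

dm = HierarchicalDM([Login()])
print(dm.handle_input("hi"))        # Login pushes AskName
print(dm.handle_input("Dana"))      # AskName stores the name and pops itself

Because the Login module is self-contained, the same class could be pushed onto the stack of an entirely different dialog, which is the reuse point made above.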

This structure encourages code reuse; for example, the login module can be used in other dialogs.

They also claim to allow dynamic dialog-task construction, where the structure is not fixed in advance but constructed on the fly, based on information selected from a backend. For instance, in a system that helps aircraft maintenance personnel throughout the execution of maintenance tasks, the structure of the dialog depends on the structure of the maintenance task and is constructed dynamically.

Topic tracking

Frameworks for chatter-bots, such as ChatScript, allow the author to control the conversation structure with topics. The author groups rules under a topic and attaches a list of keywords to it, for example a CHILDHOOD topic with keywords such as "child", "childhood" and "kid".

If the human says one of the topic's keywords, the DM remembers that the topic is "CHILDHOOD". The chat-bot then starts telling the story stored under the "CHILDHOOD" topic, as long as the bot is in control of the conversation (the user passively responds by saying things like "OK" or "right"). If the user asks questions instead, the system can either respond to them directly, or use up a line of the story it was going to say anyway.

This, too, allows authors to reuse topics, and combine several independent topics to create a smarter chatter-bot.
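
A sketch of keyword-based topic tracking in Python, loosely modeled on the behaviour described above. The topic names, keywords and story lines are invented, and the code does not use ChatScript's actual syntax:

# Sketch of topic tracking: keywords activate a topic, and while the bot keeps
# the initiative it serves the next unused line of that topic's story.

TOPICS = {
    "CHILDHOOD": {
        "keywords": {"child", "childhood", "kid", "young"},
        "story": ["I grew up in a small village.", "My best friend was a dog named Rex."],
    },
}

class TopicDM:
    def __init__(self, topics):
        self.topics = topics
        self.current_topic = None
        self.lines_used = {name: 0 for name in topics}

    def handle_input(self, text):
        words = set(text.lower().split())
        for name, topic in self.topics.items():
            if words & topic["keywords"]:
                self.current_topic = name        # remember the active topic
        return self.next_story_line()

    def next_story_line(self):
        if self.current_topic is None:
            return "What would you like to talk about?"
        topic = self.topics[self.current_topic]
        used = self.lines_used[self.current_topic]
        if used >= len(topic["story"]):
            return "That is all I remember about that."
        self.lines_used[self.current_topic] += 1  # use up a line of the story
        return topic["story"][used]

dm = TopicDM(TOPICS)
print(dm.handle_input("Tell me about your childhood"))  # activates CHILDHOOD
print(dm.handle_input("OK"))                            # serves the next story line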

Form filling

A common use of dialog systems is as a replacement for forms. For example, a flight-reservation agent should ask the human about their origin time and place and destination time and place - just as if the human were filling in a form with these four slots.

A simple solution is to use system-initiative, where the dialog system asks the user about each piece of information in turn, and the user must fill them in that exact order (an example of such a dialog appears in a presentation by David Traum).

The opposite of system-initiative is user-initiative, where the user takes the lead and the system responds to whatever the user directs.

A common compromise between the two methods is mixed-initiative, where the system starts by asking questions, but users can barge in and change the dialog direction. The system understands the user even when they speak about details they have not been asked about yet.

However, describing such a system manually, as a state-chart, is very tedious, since the human may first say the origin and then the destination, or vice versa, and in each case may first say the time and then the place, or vice versa.

So, there are DMs that allow the dialog author to just state what information is required, without specifying the exact order - for example, by simply listing the four slots above (origin place, origin time, destination place, destination time).

The DM keeps track of which slots are already filled and which slots are still empty, and navigates the conversation in order to collect the missing information. For example, the DM may ask the human about the origin place first, but if the human adds the destination place, the DM will keep the information and not ask about it again.
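
A minimal slot-filling sketch in Python: the author only lists the required slots, and the DM asks about whichever ones are still empty, keeping any extra information the user volunteers. The slot names and the nlu_result format are illustrative assumptions:

# Sketch of a form-filling DM: the author declares the required slots; the DM
# tracks which are filled and asks about the first missing one. If the user
# volunteers other slots (mixed initiative), they are stored and never asked again.

REQUIRED_SLOTS = ["origin_place", "origin_time", "destination_place", "destination_time"]

class FormFillingDM:
    def __init__(self, required_slots):
        self.slots = {name: None for name in required_slots}

    def update(self, nlu_result):
        # nlu_result: dict of slot -> value extracted by the NLU, e.g.
        # {"origin_place": "Boston", "destination_place": "Tel Aviv"}
        for name, value in nlu_result.items():
            if name in self.slots:
                self.slots[name] = value

    def next_action(self):
        for name, value in self.slots.items():
            if value is None:
                return f"ASK({name})"          # ask about the first missing slot
        return f"BOOK({self.slots})"           # all slots filled: act on the form

dm = FormFillingDM(REQUIRED_SLOTS)
dm.update({"origin_place": "Boston", "destination_place": "Tel Aviv"})
print(dm.next_action())   # ASK(origin_time) -- destination_place is not asked again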

Such DSs were developed at MIT, for example, Wheels (for searching used-car ads), Jupiter (for retrieving weather forecasts), and more.

Simple DMs handle slot filling in a binary way: a slot is either "filled" or "empty". More advanced DMs also keep track of the degree of grounding - how sure the system is that it really understood what the user said: whether the information was "just recently introduced", "introduced again", "acknowledged", "repeated", etc. The author can also specify, for each piece of information, the degree to which it NEEDS to be understood, e.g. sensitive information needs a higher degree. The DM uses this information to control the course of the dialog: for example, if the human said something about a sensitive subject and the system is not sure it understood, the DM will issue a confirmation question. See Roque and Traum (2008).
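
A sketch of this kind of bookkeeping in Python, under the assumptions that grounding is represented as a numeric level per slot and that sensitive slots require a higher level. The levels and thresholds are invented for illustration and are not those of Roque and Traum (2008):

# Sketch of grounding-aware slot filling: each slot records not only a value
# but how strongly it has been grounded; sensitive slots require more grounding
# before the DM stops confirming them.

GROUNDING_LEVELS = {"introduced": 1, "acknowledged": 2, "repeated": 3}

class GroundedSlot:
    def __init__(self, name, required_level):
        self.name = name
        self.required_level = required_level   # higher for sensitive information
        self.value = None
        self.level = 0

    def hear(self, value, evidence):
        self.value = value
        self.level = max(self.level, GROUNDING_LEVELS[evidence])

    def needs_confirmation(self):
        return self.value is not None and self.level < self.required_level

destination = GroundedSlot("destination_place", required_level=1)
destination.hear("Tel Aviv", evidence="introduced")
print(destination.needs_confirmation())   # False: low requirement, grounded enough

credit_card = GroundedSlot("credit_card_number", required_level=3)  # sensitive
credit_card.hear("1234 5678", evidence="introduced")
if credit_card.needs_confirmation():
    print(f"Just to confirm, your card number is {credit_card.value}?")
credit_card.hear("1234 5678", evidence="repeated")
print(credit_card.needs_confirmation())   # False: grounded well enough now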

Information state

The TrindiKit DS, developed during the Trindi project, allows authors to define a complex information state and to write general rules that process this state. Here is a sample rule:

integrateAnswer:
  preconditions: ("If the human gave a relevant answer to a question currently under discussion...")
    in(SHARED.LM, answer(usr, A))
    fst(SHARED.QUD, Q)
    relevant_answer(Q, A)
  effects: ("... then remove it from the Question Under Discussion, and add it to the shared ground")
    pop(SHARED.QUD)
    reduce(Q, A, P)
    add(SHARED.COM, P)

The DM decides, according to the input and the state, which rules are applicable, and applies them to get the new state.
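
A small sketch of the information-state idea in Python: rules are (precondition, effect) pairs over a shared state, and the DM applies whichever rules are applicable. The state layout loosely mirrors the QUD/COM fields of the sample rule above, but the code is only an illustration, not TrindiKit:

# Sketch of an information-state DM: the state holds the latest move, the
# questions under discussion (QUD) and the common ground (COM); rules have a
# precondition and an effect over this state.

state = {
    "latest_move": ("answer", "usr", "to Tel Aviv"),
    "QUD": ["destination?"],            # questions under discussion, most recent first
    "COM": [],                          # shared commitments / common ground
}

def relevant_answer(question, answer):
    # Toy relevance check; a real system would consult a domain model.
    return question == "destination?" and answer.startswith("to ")

def integrate_answer_applicable(state):
    move = state["latest_move"]
    return (move[0] == "answer" and state["QUD"]
            and relevant_answer(state["QUD"][0], move[2]))

def integrate_answer_apply(state):
    question = state["QUD"].pop(0)                             # pop(SHARED.QUD)
    proposition = f"{question} -> {state['latest_move'][2]}"   # reduce(Q, A, P)
    state["COM"].append(proposition)                           # add(SHARED.COM, P)

rules = [(integrate_answer_applicable, integrate_answer_apply)]

for is_applicable, apply_effects in rules:
    if is_applicable(state):
        apply_effects(state)

print(state["QUD"])   # []
print(state["COM"])   # ['destination? -> to Tel Aviv']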

This may help authors re-use general dialog-management rules, based on dialog theories. DSs developed with TrindiKit include GoDiS, MIDAS, EDIS and SRI Autoroute.

The information state approach was developed further in later projects such as Siridus and the Dipper toolkit.

Another example of an information state based dialog manager is FLoReS. It uses a propositional information state to encode the current state and selects the next action using a Markov decision process. This dialog manager is implemented in the jmNL software.

General planning

A generalization of this approach is to let the author define the goals of the agent, and let the DM construct a plan to achieve those goals. The plan is made of operations; each speech act is an operation. Each operation has preconditions and postconditions (effects), for example:

Inform(Speaker, Hearer, Predicate):
  Precondition: Knows(Speaker, Predicate) AND Wants(Speaker, Inform(Speaker, Hearer, Predicate))
  Effect: Knows(Hearer, Predicate)
  Body: Believes(Hearer, Wants(Speaker, Knows(Hearer, Predicate)))

The conversation can be navigated using a general planner or cognitive architecture, such as Soar. The planner maintains the current state and tries to construct a plan that achieves the goal, using the given operations.
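
A toy sketch of plan construction from operations with preconditions and effects, using a simple forward search in Python. It is not Soar; the operation names and facts are invented, and only the general idea of planning over the operation format shown above is illustrated:

# Toy planner sketch: operations have preconditions and effects over a set of
# facts; the planner searches forward for a sequence that reaches the goal.

from collections import deque

OPERATIONS = {
    "ask(user, destination)": ({"wants_to_help"}, {"knows(system, destination)"}),
    "inform(user, flight)":   ({"knows(system, destination)"}, {"knows(user, flight)"}),
}

def plan(initial_facts, goal_fact):
    frontier = deque([(frozenset(initial_facts), [])])
    seen = {frozenset(initial_facts)}
    while frontier:
        facts, steps = frontier.popleft()
        if goal_fact in facts:
            return steps
        for name, (preconditions, effects) in OPERATIONS.items():
            if preconditions <= facts:                     # preconditions satisfied
                new_facts = frozenset(facts | effects)     # apply the effects
                if new_facts not in seen:
                    seen.add(new_facts)
                    frontier.append((new_facts, steps + [name]))
    return None

print(plan({"wants_to_help"}, "knows(user, flight)"))
# ['ask(user, destination)', 'inform(user, flight)']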

A similar approach is taken in SASO-ST [1] (a DS for multi-agent negotiation training). Using Soar allows the incorporation of complex emotional and social models; for example, the agent can decide, based on the human's actions, whether it wants to cooperate with them, avoid them, or even attack them.

A similar approach is taken in TRIPS [2] (a DS for multi-agent collaborative problem solving), which splits dialog management into several cooperating modules.

A different kind of planning is theorem proving. A dialog can be described as an attempt to prove a theorem: the system interacts with the user to obtain the "missing axioms" needed to complete the proof (this is called "backward chaining"). This approach has been implemented in several systems.

The dialog manager can be connected to an expert system, giving it the ability to respond with specific expertise.

Tactical flow-control DM

In addition to following the general structure and goals of the dialog, some DMs also make some tactical conversational decisions - local decisions that affect the quality of conversation.

Error handling

The ASR and NLU modules are usually not 100% sure they understood the user, so they return a confidence score reflecting the quality of understanding. When the confidence is low, the DM should decide whether to assume the interpretation is correct and continue, confirm it with the user (explicitly or implicitly), or ask the user to repeat or rephrase.

Choosing "no-confirmation" may make the dialog proceed more quickly, but may also introduce mistakes that take longer to correct later.
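
A sketch of a simple threshold policy for this decision in Python. The thresholds, prompts and action names are illustrative assumptions, not the strategies of any particular system:

# Sketch of confidence-based error handling: depending on the ASR/NLU
# confidence score, the DM accepts, implicitly confirms, explicitly confirms,
# or asks the user to repeat.

def error_handling_action(hypothesis, confidence):
    if confidence >= 0.9:
        return f"ACCEPT({hypothesis})"                                 # no confirmation
    if confidence >= 0.7:
        return f'SAY("To {hypothesis}. When do you want to leave?")'   # implicit confirmation
    if confidence >= 0.4:
        return f'ASK("Did you say {hypothesis}?")'                     # explicit confirmation
    return 'ASK("Sorry, could you repeat that?")'                      # reject

for score in (0.95, 0.75, 0.5, 0.2):
    print(score, error_handling_action("Tel Aviv", score))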

Error handling was researched extensively in Ravenclaw, which allows the author to manually control the error-handling strategy in each part of the dialog.

Initiative control

Some DSs have several modes of operation. The default mode is user-initiative, where the system just asks "What can I do for you?" and lets the user navigate the conversation; this works well for experienced users. However, if there are many misunderstandings between the user and the system, the DM may decide to switch to mixed-initiative or system-initiative: it asks the user explicit questions and accepts one answer at a time.
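
A sketch of switching initiative after repeated misunderstandings, in Python. The counter threshold and the prompts are assumptions for illustration only:

# Sketch of initiative control: start in user-initiative mode, and switch to
# system-initiative after too many consecutive misunderstandings.

class InitiativeDM:
    def __init__(self, max_misunderstandings=2):
        self.mode = "user-initiative"
        self.misunderstandings = 0
        self.max_misunderstandings = max_misunderstandings

    def handle_input(self, understood):
        if understood:
            self.misunderstandings = 0
        else:
            self.misunderstandings += 1
            if self.misunderstandings >= self.max_misunderstandings:
                self.mode = "system-initiative"    # take control of the dialog
        if self.mode == "user-initiative":
            return "What can I do for you?"
        return "Let's take it one step at a time. Where do you want to fly from?"

dm = InitiativeDM()
print(dm.handle_input(understood=False))
print(dm.handle_input(understood=False))   # second failure: switch to system-initiative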

Pedagogical decisions

Tactical decisions of a different type are made by Cordillera (a tutorial DS for teaching physics, built using TuTalk). At many points during the lesson, the DM should decide, for example, whether to elicit the next step from the student or simply tell it, and whether to ask the student to justify an answer or move on.

These decisions affect the overall quality of learning, which can be measured by comparing pre- and post-learning exams.

Learned tactics

Instead of letting a human expert write a complex set of decision rules, it is more common to use reinforcement learning (RL). The dialog is represented as a Markov decision process (MDP) - a process where, in each state, the DM has to select an action, based on the state and the possible rewards from each action. In this setting, the dialog author should only define the reward function: in tutorial dialogs, for example, the reward is the increase in the student's grade; in information-seeking dialogs, the reward is positive if the human receives the information, with a negative reward for each dialog step.

RL techniques are then used to learn a policy, for example, which kind of confirmation to use in each state. This policy is later used by the DM in real dialogs.
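
A toy sketch of learning such a policy with tabular Q-learning against a crude simulated user, in Python. The states, actions, rewards and probabilities are invented for illustration and are not taken from Lemon and Rieser (2009):

# Toy sketch of RL for dialog policy: learn, for each confidence level, whether
# to confirm or just accept, using tabular Q-learning against a simulator.

import random

STATES = ["low_confidence", "high_confidence"]
ACTIONS = ["confirm", "accept"]

def simulate(state, action):
    """Return the reward for taking the action; accepting a misunderstanding is costly."""
    correct = random.random() < (0.5 if state == "low_confidence" else 0.95)
    if action == "confirm":
        return -1                      # small cost: one extra dialog step
    return 0 if correct else -10       # accepting a wrong value is expensive

q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
alpha, epsilon = 0.1, 0.2

for _ in range(5000):
    state = random.choice(STATES)
    if random.random() < epsilon:
        action = random.choice(ACTIONS)                      # explore
    else:
        action = max(ACTIONS, key=lambda a: q[(state, a)])   # exploit
    reward = simulate(state, action)
    q[(state, action)] += alpha * (reward - q[(state, action)])  # one-step update

policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in STATES}
print(policy)   # typically: confirm when confidence is low, accept when it is high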

A tutorial on this subject was written by Lemon and Rieser (2009).

A different way to learn dialog policies is to imitate humans, using Wizard-of-Oz experiments, in which a human sits in a hidden room and tells the computer what to say; see, for example, Passonneau et al. (2011).

References

Further reading