Automated Pain Recognition

Automated Pain Recognition (APR) is a method for objectively measuring pain and, at the same time, an interdisciplinary research area that comprises elements of medicine, psychology, psychobiology, and computer science. Its focus is the computer-aided, objective recognition of pain on the basis of machine learning.[1][2]

Automated pain recognition allows for the valid, reliable detection and monitoring of pain in people who are unable to communicate verbally. The underlying machine learning models are trained and validated in advance on unimodal or multimodal body signals. Signals used to detect pain may include facial expressions or gestures and may also be of a (psycho-)physiological or paralinguistic nature. To date, the focus has been on identifying pain intensity, but exploratory efforts are also being made to recognize the quality, site, and temporal course of pain.

However, the clinical implementation of this approach is a controversial topic in the field of pain research. Critics of automated pain recognition argue that pain diagnosis can only be performed subjectively by humans.

Background

Pain diagnosis under conditions where verbal reporting is restricted, such as in verbally and/or cognitively impaired people or in patients who are sedated or mechanically ventilated, is based on behavioral observations by trained professionals.[3] However, all known observation procedures, such as the Zurich Observation Pain Assessment (ZOPA)[4] and the Pain Assessment in Advanced Dementia scale (PAINAD), require a great deal of specialist expertise, and they can be compromised by perception- and interpretation-related misjudgments on the part of the observer. Given the differences in design, methodology, evaluation samples, and conceptualization of the phenomenon of pain, the quality criteria of the various tools are difficult to compare. Even if trained personnel could theoretically record pain intensity several times a day using observation instruments, it would not be possible to measure it every minute or second. The goal of automated pain recognition is therefore to use valid, robust pain response patterns that can be recorded multimodally for a temporally dynamic, high-resolution, automated recognition of pain intensity.

Procedure

For automated pain recognition, pain-relevant parameters are usually recorded using non-invasive sensor technology, which captures data on the (physical) responses of the person in pain. This can be achieved with camera technology that captures facial expressions, gestures, or posture, while audio sensors record paralinguistic features. (Psycho-)physiological information such as muscle tone and heart rate can be collected via biopotential sensors (electrodes).[5]

Pain recognition requires the extraction of meaningful features or patterns from the collected data. This is achieved using machine learning techniques that, after training (learning), can provide an assessment of the pain, e.g., "no pain," "mild pain," or "severe pain."
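
The following is a minimal sketch of this train-then-assess step, assuming that feature vectors have already been extracted from the sensor data; the synthetic data, the three-level labeling, and the choice of a support vector machine are illustrative assumptions, not a prescribed implementation.

```python
# Minimal sketch of the train-then-assess step, assuming feature vectors
# have already been extracted from sensor data; data and labels are toy
# stand-ins, and the SVM is just one possible classifier choice.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 8))            # 300 samples, 8 extracted features
# Hypothetical labels 0/1/2 ("no pain"/"mild pain"/"severe pain"),
# derived here from one feature so that the toy problem is learnable.
y = np.digitize(X[:, 0], bins=[-0.5, 0.5])

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X_train, y_train)                # training ("learning")
print("held-out accuracy:", clf.score(X_test, y_test))
print("assessment for a new pattern:", clf.predict(X_test[:1]))
```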

Parameters

Although the phenomenon of pain comprises different components (sensory-discriminative, affective (emotional), cognitive, vegetative, and (psycho-)motor),[6] automated pain recognition currently relies on the measurable parameters of pain responses. These can be divided roughly into the two main categories of "physiological responses" and "behavioral responses".

Physiological responses

In humans, pain almost always initiates autonomic nervous processes that are reflected measurably in various physiological signals.[7]

Physiological signals

Measurements can include electrodermal activity (EDA, also skin conductance), electromyography (EMG), electrocardiography (ECG), blood volume pulse (BVP), electroencephalography (EEG), respiration, and body temperature,[8][9] all of which reflect regulatory mechanisms of the sympathetic and parasympathetic nervous systems. Physiological signals are mainly recorded using special non-invasive surface electrodes (for EDA, EMG, ECG, and EEG), a blood volume pulse sensor (BVP), a respiratory belt (respiration), and a thermal sensor (body temperature). Endocrinological and immunological parameters can also be recorded, but this requires somewhat invasive measures (e.g., blood sampling).
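
As an illustration, a rough sketch of window-based feature extraction from two such channels is given below; the simulated EDA and ECG traces, the sampling rate, and the peak-detection thresholds are assumptions chosen only to make the example self-contained.

```python
# Rough sketch of window-based feature extraction from two simulated
# physiological channels; signal shapes, sampling rate, and thresholds
# are assumptions made only to keep the example self-contained.
import numpy as np
from scipy.signal import find_peaks

fs = 250                                  # assumed sampling rate in Hz
t = np.arange(0, 10, 1 / fs)              # one 10-second analysis window

# Simulated signals: a slowly drifting EDA level and a periodic ECG-like
# trace whose sharp peaks stand in for R-peaks (1.2 Hz, about 72 bpm).
rng = np.random.default_rng(0)
eda = 2.0 + 0.1 * np.cumsum(rng.standard_normal(t.size)) / fs
ecg = np.sin(2 * np.pi * 1.2 * t) ** 63

# EDA features: tonic level and variability over the window
eda_features = {"eda_mean": eda.mean(), "eda_std": eda.std()}

# ECG feature: mean heart rate from detected peak-to-peak intervals
peaks, _ = find_peaks(ecg, height=0.5, distance=fs // 3)
rr = np.diff(peaks) / fs                  # inter-beat intervals in seconds
heart_rate_bpm = 60.0 / rr.mean()

print(eda_features, "HR:", round(heart_rate_bpm, 1), "bpm")
```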

Behavioral responses

Behavioral responses to pain fulfill two functions: protection of the body (e.g., through protective reflexes) and external communication of the pain (e.g., as a cry for help). These responses are particularly evident in facial expressions, gestures, and paralinguistic features.

Facial expressions

The behavioral signals captured comprise facial expression patterns (expressive behavior), which are measured with the aid of video signals. Facial expression recognition is based on the everyday clinical observation that pain often manifests itself in the patient's facial expressions, though not necessarily always, since facial expressions can be inhibited through self-control. Despite the possibility that facial expressions may be influenced consciously, facial expression behavior represents an essential source of information for pain diagnosis and thus also for automated pain recognition. One advantage of video-based facial expression recognition is the contact-free measurement of the face, provided that it can be captured on video; this is not possible in every position (e.g., lying face down) and may be limited by bandages covering the face. Facial expression analysis relies on rapid, spontaneous, and temporary changes in neuromuscular activity that lead to visually detectable changes in the face.
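
A minimal sketch of such contact-free measurement follows: it detects the face in each video frame with OpenCV's stock Haar cascade and uses frame-to-frame intensity change inside the face region as a crude proxy for facial activity. The file name "video.mp4" is a placeholder, and the difference-based activity measure is an illustrative simplification of real facial-expression analysis.

```python
# Contact-free sketch: detect the face per frame with OpenCV's stock Haar
# cascade and use frame-to-frame intensity change inside the face region
# as a crude proxy for facial activity. "video.mp4" is a placeholder.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cap = cv2.VideoCapture("video.mp4")

prev_roi, activity = None, []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    if len(faces) == 0:
        continue                          # face not visible in this frame
    x, y, w, h = faces[0]
    roi = cv2.resize(gray[y:y + h, x:x + w], (64, 64))
    if prev_roi is not None:
        activity.append(cv2.absdiff(roi, prev_roi).mean())
    prev_roi = roi

cap.release()
if activity:
    print("mean facial activity:", sum(activity) / len(activity))
```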

Gestures

Gestures are also captured predominantly using non-contact camera technology. Motor pain responses vary and are strongly dependent on the type and cause of the pain. They range from abrupt protective reflexes (e.g., spontaneous retraction of extremities or doubling up) to agitation (pathological restlessness) and avoidance behavior (hesitant, cautious movements).

Paralinguistic features of language

Among other things, pain leads to nonverbal vocal behavior that manifests itself in sounds such as sighing, gasping, moaning, and whining. Paralinguistic features are usually recorded using highly sensitive microphones.
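
A minimal sketch of extracting such paralinguistic features from a recording is shown below, using the librosa audio library; the file name "vocalization.wav", the sampling rate, and the particular feature set (MFCCs, loudness, pitch) are assumptions for illustration.

```python
# Sketch of paralinguistic feature extraction with librosa; the file name
# "vocalization.wav" and the chosen feature set (MFCCs, loudness, pitch)
# are assumptions for illustration.
import librosa
import numpy as np

y, sr = librosa.load("vocalization.wav", sr=16000)

mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)  # spectral envelope
rms = librosa.feature.rms(y=y)                      # loudness contour
f0 = librosa.yin(y, fmin=60, fmax=500, sr=sr)       # pitch track

# Summarize each time series into one fixed-length feature vector
features = np.concatenate([
    mfcc.mean(axis=1), mfcc.std(axis=1),
    [rms.mean(), rms.std(), f0.mean(), f0.std()],
])
print(features.shape)                               # e.g., (30,)
```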

Algorithms

After the recording, pre-processing (e.g., filtering), and extraction of relevant features, an optional information fusion can be performed. During this step, modalities from different signal sources are merged to generate new or more precise knowledge.
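
One common variant is feature-level (early) fusion, in which the per-modality feature vectors are simply concatenated before classification; the sketch below illustrates this variant, with the vector dimensions chosen arbitrarily. Decision-level (late) fusion, which instead combines the outputs of per-modality classifiers, is an alternative not shown here.

```python
# Sketch of feature-level (early) fusion: feature vectors from different
# modalities are concatenated into one pattern before classification.
# The vector dimensions below are arbitrary illustrative assumptions.
import numpy as np

def fuse(*modalities):
    """Concatenate per-modality feature vectors into a single pattern."""
    return np.concatenate([np.asarray(m, dtype=float).ravel()
                           for m in modalities])

video_features = np.random.rand(20)   # e.g., facial-expression descriptors
audio_features = np.random.rand(10)   # e.g., paralinguistic descriptors
bio_features = np.random.rand(8)      # e.g., EDA/EMG/ECG descriptors

fused = fuse(video_features, audio_features, bio_features)
print(fused.shape)                    # (38,) -> input to a single classifier
```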

Pain is classified using machine learning methods. The method chosen has a significant influence on the recognition rate and depends greatly on the quality and granularity of the underlying data. Similar to the field of affective computing,[10] the following classifiers are currently being used (a comparison sketch follows the list):

Support Vector Machine (SVM): The goal of an SVM is to find a clearly defined optimal hyperplane with the greatest minimal distance to two (or more) classes to be separated. The hyperplane acts as a decision function for classifying an unknown pattern.

Random Forest (RF): RF is based on an ensemble of randomized, largely uncorrelated decision trees. An unknown pattern is judged individually by each tree and assigned to a class. The final classification of the pattern by the RF is then based on a majority decision.

k-Nearest Neighbors (k-NN): The k-NN algorithm assigns an unknown object the class label that occurs most frequently among its k nearest neighbors. The neighbors are determined using a selected distance or similarity measure (e.g., Euclidean distance or the Jaccard coefficient).

Artificial neural networks (ANNs): ANNs are inspired by biological neural networks and model their organizational principles and processes in a very simplified manner. Class patterns are learned by adjusting the weights of the individual neuronal connections.
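
As a comparison, the sketch below trains each of the four classifier families named above on a synthetic multi-class problem using scikit-learn; the dataset, hyperparameters, and cross-validation setup are illustrative assumptions rather than settings taken from the pain-recognition literature.

```python
# Sketch comparing the four classifier families on synthetic data with
# scikit-learn; dataset and hyperparameters are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Stand-in for extracted pain features with three intensity classes
X, y = make_classification(n_samples=400, n_features=20, n_classes=3,
                           n_informative=8, random_state=0)

classifiers = {
    "SVM": SVC(kernel="rbf"),
    "Random Forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "k-NN": KNeighborsClassifier(n_neighbors=5),
    "ANN": MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000,
                         random_state=0),
}
for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: {scores.mean():.2f} +/- {scores.std():.2f}")
```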

Figure: Simplified automated pain recognition process.

Databases

In order to classify pain in a valid manner, it is necessary to create representative, reliable, and valid pain databases that are available to the machine learner for training. An ideal database would be sufficiently large and would consist of natural (not experimental), high-quality pain responses. However, natural responses are difficult to record, can only be obtained to a limited extent, and are in most cases of suboptimal quality. The databases currently available therefore contain experimental or quasi-experimental pain responses, and each database is based on a different pain model. A selection of the most relevant pain databases (as of April 2020) is given in the survey by Werner et al.[11]


References

  1. Südwest Presse Online-Dienste GmbH (2017-04-11). "Forschung: Schmerzen messbar machen". swp.de (in German). Retrieved 2020-04-20.
  2. "Künstliche Intelligenz erkennt den Schmerz". AerzteZeitung.de (in German). 2 March 2017. Retrieved 2020-04-20.
  3. Cheng, Jianguo; Rosenquist, Richard W., eds. (2018). Fundamentals of Pain Medicine. Cham, Switzerland: Springer. ISBN 978-3-319-64922-1. OCLC 1023425599.
  4. Handel, Elisabeth (2010). Praxishandbuch ZOPA: Schmerzeinschätzung bei Patienten mit kognitiven und/oder Bewusstseinsbeeinträchtigungen (in German). Bern: Huber. ISBN 978-3-456-84785-6.
  5. Anbarjafari, Gholamreza; Gorbova, Jelena; Hammer, Rain Eric; Rasti, Pejman; Noroozi, Fatemeh (2018). Machine Learning for Face, Emotion, and Pain Recognition. Bellingham, Washington: SPIE Press. ISBN 978-1-5106-1986-9. OCLC 1035460960.
  6. Kessler, Henrik (2015). Kurzlehrbuch Medizinische Psychologie und Soziologie (in German) (3rd ed.). Stuttgart/New York: Thieme. p. 34. ISBN 978-3-13-136423-4.
  7. Birbaumer, Niels; Schmidt, Robert F. (2006). Biologische Psychologie (in German) (6th, fully revised ed.). Heidelberg: Springer. ISBN 978-3-540-25460-7. OCLC 162267511.
  8. Gruss, S.; et al. (2015). "Pain Intensity Recognition Rates via Biopotential Feature Patterns with Support Vector Machines". PLoS One. 10 (10): 1–14. doi:10.1371/journal.pone.0140330.
  9. Walter, S.; et al. (2014). "Automatic pain quantification using autonomic parameters". Psychology & Neuroscience. 7 (3): 363–380. doi:10.3922/j.psns.2014.041.
  10. Picard, Rosalind W. (2000). Affective Computing (1st MIT Press pbk. ed.). Cambridge, Mass.: MIT Press. ISBN 0-262-66115-2. OCLC 45432790.
  11. Werner, Philipp; Lopez-Martinez, Daniel; Walter, Steffen; Al-Hamadi, Ayoub; Gruss, Sascha; Picard, Rosalind (2019). "Automatic Recognition Methods Supporting Pain Assessment: A Survey". IEEE Transactions on Affective Computing. 13: 530–552. doi:10.1109/TAFFC.2019.2946774. hdl:1721.1/136497. ISSN 1949-3045. S2CID 210588609.