Hatice Gunes

Nationality: Turkish
Alma mater: University of Technology Sydney; Yıldız Technical University
Scientific career
Institutions: University of Cambridge; Queen Mary University of London; Alan Turing Institute
Thesis: Vision-based multimodal analysis of affective face and upper-body behaviour (2007)

Hatice Gunes is a Turkish computer scientist who is Professor of Affective Intelligence & Robotics at the University of Cambridge, where she leads the Affective Intelligence & Robotics Lab. Her research considers human–robot interaction and the development of technologies with emotional intelligence.


Early life and education

Gunes was an undergraduate student at Yıldız Technical University.[citation needed] She moved to the University of Technology Sydney for her doctoral research, where she was awarded the Australian Government International Postgraduate Research Scholarship (IPRS) to focus on vision- and machine-learning-based analysis of affective face and upper-body behaviour.[1][2] Her doctoral research showed that affective face and body displays are simultaneous but not strictly synchronous; that explicit detection of temporal phases (onset, apex, offset) can improve the accuracy of affect recognition; that recognition from fused face and body modalities performs better than recognition from either modality alone; and that synchronized feature-level fusion achieves better performance than decision-level fusion.[3] She created the Bimodal Face and Body Gesture Database (FABO), a collection of labelled videos of posed, affective face and body displays for the automatic analysis of human nonverbal affective behaviour.[4] After earning her doctorate, she was appointed an Australian Research Council postdoctoral fellow and worked on airport and railway security through object and human tracking.[citation needed] In 2008, Gunes moved to Imperial College London, where she worked alongside Maja Pantić in the Intelligent Behaviour Understanding Group (iBUG).[5][6] There she contributed to a project to build a dialogue system that could interact with humans via a virtual character.[7]
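The fusion comparison described above can be illustrated with a minimal sketch: feature-level fusion trains a single classifier on concatenated face and body features, while decision-level fusion trains one classifier per modality and combines their outputs afterwards. The feature dimensions, classifier choice, and randomly generated data below are illustrative assumptions, not details taken from Gunes's thesis or the FABO database.

```python
# Minimal sketch contrasting feature-level and decision-level fusion for
# bimodal (face + body) affect recognition. All data and sizes are
# hypothetical stand-ins, not drawn from the FABO database.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_samples, n_face, n_body, n_classes = 200, 40, 20, 6  # assumed sizes

# Stand-in feature vectors for the face and upper-body modalities, plus labels.
X_face = rng.normal(size=(n_samples, n_face))
X_body = rng.normal(size=(n_samples, n_body))
y = rng.integers(0, n_classes, size=n_samples)

train, test = slice(0, 150), slice(150, None)

# Feature-level fusion: concatenate synchronized face and body features,
# then train a single classifier on the joint representation.
X_joint = np.hstack([X_face, X_body])
feat_clf = SVC().fit(X_joint[train], y[train])
feat_acc = feat_clf.score(X_joint[test], y[test])

# Decision-level fusion: train one classifier per modality and combine
# their class-probability outputs (here by simple averaging).
face_clf = SVC(probability=True).fit(X_face[train], y[train])
body_clf = SVC(probability=True).fit(X_body[train], y[train])
avg_proba = (face_clf.predict_proba(X_face[test]) +
             body_clf.predict_proba(X_body[test])) / 2
dec_pred = face_clf.classes_[np.argmax(avg_proba, axis=1)]
dec_acc = float(np.mean(dec_pred == y[test]))

print(f"feature-level fusion accuracy: {feat_acc:.2f}")
print(f"decision-level fusion accuracy: {dec_acc:.2f}")
```

With random data the sketch only demonstrates the mechanics of the two strategies; it was on real, synchronized face and body features that her experiments found feature-level fusion the stronger approach.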

Research and career

In 2011, Gunes was appointed a lecturer at Queen Mary University of London. She remained there for four years, becoming an associate professor in 2014.[citation needed] She moved to the University of Cambridge in 2016, where she was promoted to Professor of Affective Intelligence and Robotics. In 2019, she was awarded an Engineering and Physical Sciences Research Council fellowship and was appointed a Faculty Fellow of the Alan Turing Institute.[8] Her fellowship considered human–robot interaction and the development of robot emotional intelligence through the study of human–human interactions.[9][10] She investigated the relationships between humans and their companion robots and looked to design robots with enhanced socio-emotional skills.[9]

Gunes was appointed President of the Association for the Advancement of Affective Computing in 2017. She is interested in how technologies, including affective virtual reality and autonomous and tele-presence social robotics, can enhance a sense of wellbeing.[11]

Selected publications


References

  1. Gunes, Hatice (2007). Vision-based multimodal analysis of affective face and upper-body behaviour (Thesis). OCLC 271213807.
  2. Rossi, Alessandra; Holthaus, Patrick; Moros, Sílvia; Scheunemann, Marcus; Lakatos, Gabriella. "Programme". SCRITA 2021. Retrieved 2022-06-24.
  3. Gunes, H.; Piccardi, M. (2009). "Automatic Temporal Segment Detection and Affect Recognition from Face and Body Display". IEEE Transactions on Systems, Man, and Cybernetics - Part B: Cybernetics. 39 (1): 64–84. doi:10.1109/TSMCB.2008.927269. PMID 19068431. S2CID 1921438. Retrieved 2022-06-24.
  4. Gunes, H.; Piccardi, M. (August 2006). "A Bimodal Face and Body Gesture Database for Automatic Analysis of Human Nonverbal Affective Behavior". 18th International Conference on Pattern Recognition (ICPR'06). Vol. 1. pp. 1148–1153. doi:10.1109/ICPR.2006.39. hdl:10453/2540. ISBN 0-7695-2521-0. S2CID 117447.
  5. "i·bug - people". ibug.doc.ic.ac.uk. Retrieved 2022-06-24.
  6. "i·bug - people - Hatice Gunes". ibug.doc.ic.ac.uk. Retrieved 2022-06-24.
  7. "Hatice Gunes's Home Page at Cambridge: Research". www.cl.cam.ac.uk. Retrieved 2022-07-29.
  8. Gunes, Hatice (2019-12-18). "Prof Hatice Gunes". www.cst.cam.ac.uk. Retrieved 2022-06-24.
  9. "Affective Mechanisms for Modelling Lifelong Human-Robot Relationships". 2018.
  10. "Dr Hatice Gunes Data-driven Artificial Social Intelligence: From Social Appropriateness to Fairness". Retrieved 2022-06-24.
  11. "Advancing Wellbeing Seminar Series: Hatice Gunes". MIT Media Lab. Retrieved 2022-06-24.