Sonic interaction design is the study and exploitation of sound as one of the principal channels conveying information, meaning, and aesthetic/emotional qualities in interactive contexts. [1] [2] Sonic interaction design lies at the intersection of interaction design and sound and music computing. If interaction design is about designing objects people interact with, and such interactions are facilitated by computational means, then in sonic interaction design sound mediates the interaction, either as a display of processes or as an input medium.
Research in this area focuses on experimental findings about how humans perceive and respond to sound in interactive contexts. [3]
During closed-loop interactions, users manipulate an interface that produces sound, and the sonic feedback in turn affects their manipulation. In other words, there is a tight coupling between auditory perception and action. [4] Listening to sounds might not only activate a representation of how the sound was made: it might also prepare the listener to react to the sound. Cognitive representations of sounds might be associated with action-planning schemas, and sounds can also unconsciously cue a further reaction on the part of the listener. [5]
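This action-sound coupling can be made concrete with a minimal sketch (illustrative only, not drawn from the cited studies): a normalized control position, such as a slider, is mapped to pitch, and every step of the manipulation immediately yields a short tone. A real interface would synthesize the feedback in real time; here the tones are rendered offline to a WAV file, and the exponential pitch mapping and file name are assumptions.

```python
import math
import struct
import wave

SAMPLE_RATE = 44100

def control_to_frequency(position: float) -> float:
    """Map a normalized control position (0.0-1.0) to a pitch in Hz.

    An exponential mapping (assumed here) makes equal control movements
    sound like equal pitch steps, spanning 220 Hz to 880 Hz.
    """
    return 220.0 * (2.0 ** (position * 2.0))

def render_feedback(position: float, duration: float = 0.15) -> bytes:
    """Synthesize one short sine tone as sonic feedback for one action."""
    freq = control_to_frequency(position)
    n = int(SAMPLE_RATE * duration)
    samples = (
        int(32767 * 0.5 * math.sin(2 * math.pi * freq * i / SAMPLE_RATE))
        for i in range(n)
    )
    return b"".join(struct.pack("<h", s) for s in samples)

# Simulate a user sweeping a slider in ten steps; each action is answered
# by an immediate tone, closing the perception-action loop described above.
with wave.open("feedback.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)
    f.setframerate(SAMPLE_RATE)
    for step in range(10):
        f.writeframes(render_feedback(step / 9.0))
```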
Sonic interactions have the potential to influence the users' emotions: the quality of the sounds affects how pleasant the interaction feels, and the difficulty of the manipulation influences whether or not the user feels in control. [6]
Product design, in the context of sonic interaction design, deals with methods and experiences for designing interactive products with a salient sonic behaviour. Products, in this context, are either tangible and functional objects that are designed to be manipulated, [7] [8] or usable simulations of such objects, as in virtual prototyping. Research and development in this area draws on studies from a range of other disciplines.
Design research for sonic products has inherited a set of practices from a variety of fields; these practices have been tested in contexts where research and pedagogy naturally intermix.
In the context of sonic interaction design, interactive art and music projects design and research aesthetic experiences in which sonic interaction is the focus. The creative and expressive aspects, the aesthetics, are more important than conveying information through sound. Practices include installations, performances, public art and interactions between humans through digitally augmented objects and environments. These often integrate elements such as embedded technology, gesture-sensitive devices, speakers or context-aware systems.
The focus is on the experience, addressing how humans are affected by the sound, and vice versa. Interactive art and music allow researchers to question existing paradigms and models of how humans interact with technology and sound, going beyond paradigms of control (a human controlling a machine). Users are part of a loop that includes both action and perception.
Interactive art and music projects invite explorative actions and playful engagement. There is also a multi-sensory aspect; haptic-audio [18] and audio-visual projects are especially popular. Amongst many other influences, this field is informed by the merging of the roles of instrument-maker, composer and performer. [19]
Artistic research in sonic interaction design concerns productions in the interactive and performing arts that exploit enactive engagement with sound-augmented interactive objects. [20]
Sonification is the data-dependent generation of sound, provided that the transformation is systematic, objective and reproducible, so that it can be used as a scientific method. [21]
For sonic interaction design, sonification provides a set of methods to create interaction sounds that encode relevant data, so that the user can perceive or interpret the conveyed information. Sonification need not represent huge amounts of data in sound; it may convey only one or a few data values. As an example, imagine a light switch that, on activation, produces a short sound depending on the electric power drawn through the cable: more energy-wasting lamps would systematically result in more annoying switch sounds. This example shows that sonification aims to provide information through the systematic transformation of data into sound.
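A minimal sketch of this light-switch example follows, under stated assumptions: the 100 W normalization, the 1 kHz click tone and the linear tone-to-noise crossfade are illustrative choices, arranged so that a higher power draw yields a noisier, less pleasant click.

```python
import math
import random
import struct
import wave

SAMPLE_RATE = 44100

def switch_click(power_watts: float, duration: float = 0.08) -> bytes:
    """Render a short click whose roughness grows with power consumption.

    power_watts is the (hypothetical) measured draw of the connected lamp;
    more power mixes more noise into the click, making it more annoying.
    """
    noise_mix = min(power_watts / 100.0, 1.0)  # 0 W: pure tone; >=100 W: pure noise
    n = int(SAMPLE_RATE * duration)
    frames = bytearray()
    for i in range(n):
        env = 1.0 - i / n                          # fast linear decay, click-like
        tone = math.sin(2 * math.pi * 1000 * i / SAMPLE_RATE)
        noise = random.uniform(-1.0, 1.0)
        sample = env * ((1.0 - noise_mix) * tone + noise_mix * noise)
        frames += struct.pack("<h", int(32767 * 0.5 * sample))
    return bytes(frames)

# A 60 W lamp produces a click that is 60% noise: audibly harsher than
# the near-pure tone an efficient 10 W lamp would trigger.
with wave.open("switch.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)
    f.setframerate(SAMPLE_RATE)
    f.writeframes(switch_click(power_watts=60.0))
```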
The integration of data-driven elements into interaction sound may serve a variety of purposes.
Within the field of sonification, sonic interaction design acknowledges the importance of human interaction for understanding and using auditory feedback. [24] Within sonic interaction design, sonification offers solutions, methods, and techniques that can inspire and guide the design of products or interactive systems.
Computer-supported cooperative work (CSCW) is the study of how people use technology collaboratively, often towards a shared goal. CSCW addresses how computer systems can support collaborative activity and coordination. More specifically, the field of CSCW seeks to analyze and draw connections between currently understood human psychological and social behaviors and available collaborative tools, or groupware. Often the goal of CSCW is to promote the collaborative use of technology and to create new tools that support it. These parallels allow CSCW research to inform future design patterns and assist in the development of entirely new tools.
Context awareness refers, in information and communication technologies, to the capability of taking into account the situation of entities, which may be users or devices but are not limited to those. Location is only the most obvious element of this situation; narrowly defined for mobile devices, context awareness thus generalizes location awareness. Whereas location may determine how certain processes around a contributing device operate, context may be applied more flexibly with mobile users, especially users of smartphones. Context awareness originated as a term in ubiquitous computing (also called pervasive computing), which sought to link changes in the environment with computer systems that would otherwise remain static. The term has also been applied to business theory in relation to contextual application design and business process management issues.
An audio game is an electronic game played on a device such as a personal computer. It is similar to a video game, except that the feedback is audible and tactile rather than visual.
Interaction design, often abbreviated as IxD, is "the practice of designing interactive digital products, environments, systems, and services." While interaction design has an interest in form, its main area of focus rests on behavior. Rather than analyzing how things are, interaction design synthesizes and imagines things as they could be. This element of interaction design is what characterizes IxD as a design field, as opposed to a science or engineering field.
Sonification is the use of non-speech audio to convey information or perceptualize data. Auditory perception has advantages in temporal, spatial, amplitude, and frequency resolution that open possibilities as an alternative or complement to visualization techniques.
Mixed reality (MR) is a term used to describe the merging of a real-world environment and a computer-generated one. Physical and virtual objects may co-exist in mixed reality environments and interact in real time.
Gesture recognition is an area of research and development in computer science and language technology concerned with the recognition and interpretation of human gestures. A subdiscipline of computer vision, it employs mathematical algorithms to interpret gestures.
A tangible user interface (TUI) is a user interface in which a person interacts with digital information through the physical environment. The initial name was Graspable User Interface, which is no longer used. The purpose of TUI development is to empower collaboration, learning, and design by giving physical forms to digital information, thus taking advantage of the human ability to grasp and manipulate physical objects and materials.
Multimodal interaction provides the user with multiple modes of interacting with a system. A multimodal interface provides several distinct tools for input and output of data.
A voice-user interface (VUI) enables spoken human interaction with computers, using speech recognition to understand spoken commands and answer questions, and typically text-to-speech to play a reply. A voice command device is a device controlled with a voice user interface.
Affective design describes the design of products, services, and user interfaces that aim to evoke intended emotional responses from consumers, ultimately improving customer satisfaction. It is often regarded within the domain of technology interaction and computing, in which emotional information is communicated to the computer from the user in a natural and comfortable way. The computer processes the emotional information and adapts or responds to try to improve the interaction in some way. The notion of affective design emerged from the field of human–computer interaction (HCI), specifically from the developing area of affective computing. Affective design serves an important role in user experience (UX), as it contributes to the improvement of the user's personal condition in relation to the computing system. Decision-making, brand loyalty, and consumer connections have all been associated with the integration of affective design. The goals of affective design focus on providing users with an optimal, proactive experience. Overlapping with several fields, applications of affective design include ambient intelligence, human–robot interaction, and video games.
Human–computer interaction (HCI) is research in the design and the use of computer technology, which focuses on the interfaces between people (users) and computers. HCI researchers observe the ways humans interact with computers and design technologies that allow humans to interact with computers in novel ways. A device that allows interaction between a human being and a computer is known as a "human–computer interface".
Sound and music computing (SMC) is a research field that studies the whole sound and music communication chain from a multidisciplinary point of view. By combining scientific, technological and artistic methodologies it aims at understanding, modeling and generating sound and music through computational approaches.
An immersive virtual musical instrument, or immersive virtual environment for music and sound, represents sound processes and their parameters as 3D entities in a virtual reality, so that they can be perceived not only through auditory feedback but also visually in 3D, and possibly through tactile as well as haptic feedback. It uses 3D interface metaphors consisting of interaction techniques such as navigation, selection and manipulation (NSM), and builds on the trend in electronic musical instruments to develop new ways to control sound and perform music, as explored in conferences such as NIME.
Audification is an auditory display technique for representing a sequence of data values as sound, described by definition as a "direct translation of a data waveform to the audible domain." Audification interprets a data sequence, usually a time series, as an audio waveform in which input data are mapped to sound pressure levels. Various signal processing techniques are used to assess data features. The technique allows the listener to hear periodic components as frequencies. Audification typically requires large data sets with periodic components.
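This direct translation can be sketched as follows, with a hypothetical audify helper; the sample rate, output file name and synthetic input series are assumptions. Each data value becomes one audio sample after normalization, so a periodic component with period P samples is heard as a tone at sample_rate / P hertz.

```python
import math
import struct
import wave

def audify(series, sample_rate=8000, path="audified.wav"):
    """Directly translate a sequence of data values into an audio waveform.

    Each value becomes one 16-bit audio sample after normalization to the
    range [-1, 1]: the 'direct translation to the audible domain'.
    """
    lo, hi = min(series), max(series)
    span = (hi - lo) or 1.0                      # avoid division by zero
    with wave.open(path, "wb") as f:
        f.setnchannels(1)
        f.setsampwidth(2)
        f.setframerate(sample_rate)
        for v in series:
            norm = 2.0 * (v - lo) / span - 1.0   # map value into [-1, 1]
            f.writeframes(struct.pack("<h", int(32767 * norm)))

# A synthetic series with a period of 20 samples, played back at
# 8000 samples per second, is heard as a 400 Hz tone (8000 / 20).
data = [math.sin(2 * math.pi * i / 20) for i in range(16000)]
audify(data)
```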
Feminist HCI is a subfield of human-computer interaction (HCI) that applies feminist theory, critical theory and philosophy to social topics in HCI, including scientific objectivity, ethical values, data collection, data interpretation, reflexivity, and unintended consequences of HCI software. The term was originally used in 2010 by Shaowen Bardzell, and although the concept and original publication are widely cited, as of 2020 Bardzell's proposed frameworks had rarely been applied.
Lisa Anthony is an Associate Professor in the Department of Computer & Information Science & Engineering (CISE) at the University of Florida. She is also the director of the Intelligent Natural Interaction Technology Laboratory. Her research interests revolve around developing natural user interfaces to allow for greater human-computer interaction, specifically for children as they develop their cognitive and physical abilities.
Data sonification is the presentation of data as sound using sonification. It is the auditory equivalent of the more established practice of data visualization.
Stefania Serafin is a professor at the Department of Architecture, Design and Media Technology at Aalborg University in Copenhagen.
Bruno Zamborlin is an AI researcher, entrepreneur and artist based in London, working in the field of human-computer interaction. His work focuses on converting physical objects into touch-sensitive, interactive surfaces using vibration sensors and artificial intelligence. In 2013 he founded Mogees Limited, a start-up that transforms everyday objects into musical instruments and games using a vibration sensor and a mobile phone. With HyperSurfaces, he converts physical surfaces of any material, shape and form into data-enabled interactive surfaces using a vibration sensor and a coin-sized chipset. As an artist, he has created art installations around the world; his most recent work, a series of "sound furnitures", was showcased at the Italian Pavilion of the Venice Biennale 2023. He has regularly performed with UK-based electronic music duo Plaid. He is also an honorary visiting research fellow at Goldsmiths, University of London.