Sonic interaction design

Sonic interaction design is the study and exploitation of sound as one of the principal channels conveying information, meaning, and aesthetic and emotional qualities in interactive contexts. [1] [2] It sits at the intersection of interaction design and sound and music computing. If interaction design is about designing objects people interact with, and such interactions are facilitated by computational means, then in sonic interaction design sound mediates the interaction, either as a display of processes or as an input medium.

Research areas

Perceptual, cognitive, and emotional study of sonic interactions

Research in this area focuses on experimental scientific findings about human sound reception in interactive contexts. [3]

During closed-loop interactions, users manipulate an interface that produces sound, and the sonic feedback in turn affects their manipulation. In other words, there is a tight coupling between auditory perception and action. [4] Listening to a sound might not only activate a representation of how the sound was made: it might also prepare the listener to react to it. Cognitive representations of sounds might be associated with action-planning schemas, and sounds can also unconsciously cue a further reaction on the part of the listener. [5]
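
For illustration, the following minimal Python sketch simulates such a closed loop: a manipulation value drives a sound parameter, and the heard cue in turn corrects the next manipulation. All names, mappings, and ranges here are invented for the sketch and do not come from the cited studies.

    # Minimal perception-action loop: a manipulation value produces a sound
    # cue, and the (simulated) listener adjusts the manipulation in response.

    def sound_feedback(pressure: float) -> float:
        """Map a manipulation pressure (0..1) to a cue frequency in Hz."""
        return 200.0 + 1800.0 * pressure

    def user_response(pressure: float, cue_hz: float,
                      target_hz: float = 1100.0) -> float:
        """Simulated user: nudge the pressure so the heard cue nears a target."""
        correction = 0.5 * (target_hz - cue_hz) / 1800.0
        return max(0.0, min(1.0, pressure + correction))

    pressure = 0.1
    for step in range(5):
        cue = sound_feedback(pressure)           # action -> sound (display)
        pressure = user_response(pressure, cue)  # sound -> corrected action
        print(f"step {step}: cue = {cue:6.1f} Hz, next pressure = {pressure:.2f}")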

Sonic interactions have the potential to influence the users’ emotions: the quality of the sounds affects the pleasantness of the interaction, and the difficulty of the manipulation influences whether the user feels in control or not. [6]

Product sound design

Product design in the context of sonic interaction design deals with methods and experiences for designing interactive products with a salient sonic behaviour. Products, in this context, are either tangible, functional objects designed to be manipulated, [7] [8] or usable simulations of such objects, as in virtual prototyping. Research and development in this area relies on studies from other disciplines, such as product sound quality, [9] film sound design, [10] and game sound design. [11]

In design research for sonic products, a set of practices has been inherited from a variety of fields and tested in contexts where research and pedagogy naturally intermix. Among these practices are vocal sketching, [12] theatrical strategies for sonic interaction design, [13] basic-design explorations of continuous sonic interaction, [14] and video brainstorming and prototyping. [15] Sound design methods may also borrow the analytic and communication tools of film sound studies [16] and narrative and performative strategies for interactive commodities. [17]

Interactive art and music

In the context of sonic interaction design, interactive art and music projects design and research aesthetic experiences in which sonic interaction is the focus. The creative and expressive aspects – the aesthetics – are more important than conveying information through sound. Practices include installations, performances, public art, and interactions between humans through digitally augmented objects and environments. These often integrate elements such as embedded technology, gesture-sensitive devices, speakers, or context-aware systems.

The experience is the focus, addressing how humans are affected by sound, and vice versa. Interactive art and music allow researchers to question existing paradigms and models of how humans interact with technology and sound, going beyond paradigms of control (a human controlling a machine). Users are part of a loop that includes action and perception.

Interactive art and music projects invite explorative actions and playful engagement. There is also a multi-sensory aspect: haptic-audio [18] and audio-visual projects are especially popular. Among many other influences, this field is informed by the merging of the roles of instrument-maker, composer, and performer. [19]

Artistic research in sonic interaction design concerns productions in the interactive and performing arts that exploit enactive engagement with sound-augmented interactive objects. [20]

Sonification

Sonification is the data-dependent generation of sound, provided that the transformation is systematic, objective, and reproducible, so that it can serve as a scientific method. [21]

For sonic interaction design, sonification provides a set of methods for creating interaction sounds that encode relevant data, so that the user can perceive or interpret the conveyed information. Sonification need not represent huge amounts of data in sound; it may convey only one or a few data values. For example, imagine a light switch that, on activation, produces a short sound depending on the electric power drawn through the cable: more energy-wasting lamps would systematically result in more annoying switch sounds. The example shows how sonification conveys information through a systematic transformation of data into sound.
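
A rough Python sketch of this light-switch example, assuming a simple, systematic mapping from power draw to the noisiness of a short synthesized click (the 100 W normalization, the 50 ms duration, and the file names are invented for illustration):

    # Render a switch click whose harsh, noisy component grows with the
    # electric power drawn through the circuit: an efficient lamp yields a
    # clean tone, a wasteful one a mostly-noise click.
    import math, random, struct, wave

    RATE = 44100

    def switch_click(watts: float, path: str) -> None:
        noise_share = min(watts / 100.0, 1.0)  # 0 W -> pure tone, >=100 W -> noise
        n = int(0.05 * RATE)                   # 50 ms click
        frames = bytearray()
        for i in range(n):
            env = 1.0 - i / n                  # linearly decaying envelope
            tone = math.sin(2 * math.pi * 1000.0 * i / RATE)
            noise = random.uniform(-1.0, 1.0)
            sample = env * ((1.0 - noise_share) * tone + noise_share * noise)
            frames += struct.pack("<h", int(32767 * sample))
        with wave.open(path, "wb") as f:
            f.setnchannels(1)
            f.setsampwidth(2)
            f.setframerate(RATE)
            f.writeframes(bytes(frames))

    switch_click(8.0, "efficient_lamp.wav")    # mostly a clean 1 kHz tone
    switch_click(75.0, "wasteful_lamp.wav")    # mostly harsh noise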

The integration of data-driven elements in interaction sound may serve different purposes: continuous auditory feedback can close the loop in multimodal human-machine interaction, [22] and the sonification of movement can affect both how actions are perceived and how they are performed. [23]
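
As an illustration of the movement-sonification idea, a small sketch that maps movement speed onto pitch, so that faster motion is immediately audible as a higher tone; the speed range and the two-octave pitch range are assumptions made for the example:

    # Map a tracked movement speed (m/s) onto a feedback pitch (Hz). An
    # exponential mapping is used so equal speed steps sound like equal
    # pitch steps.

    def speed_to_pitch(speed: float, lo_hz: float = 220.0,
                       hi_hz: float = 880.0, max_speed: float = 2.0) -> float:
        t = max(0.0, min(speed / max_speed, 1.0))  # normalize and clamp
        return lo_hz * (hi_hz / lo_hz) ** t

    for speed in (0.0, 0.5, 1.0, 2.0):
        print(f"{speed:.1f} m/s -> {speed_to_pitch(speed):.0f} Hz")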

Within the field of sonification, sonic interaction design acknowledges the importance of human interaction for understanding and using auditory feedback. [24] Within sonic interaction design, sonification offers solutions, methods, and techniques to inspire and guide the design of products and interactive systems.

Related Research Articles

Computer-supported cooperative work (CSCW) is the study of how people utilize technology collaboratively, often towards a shared goal. CSCW addresses how computer systems can support collaborative activity and coordination. More specifically, the field of CSCW seeks to analyze and draw connections between currently understood human psychological and social behaviors and available collaborative tools, or groupware. Often the goal of CSCW is to help promote and utilize technology in a collaborative way, and help create new tools to succeed in that goal. These parallels allow CSCW research to inform future design patterns or assist in the development of entirely new tools.

Context awareness refers, in information and communication technologies, to a capability to take into account the situation of entities, which may be users or devices but are not limited to those. Location is only the most obvious element of this situation. Narrowly defined for mobile devices, context awareness thus generalizes location awareness. Whereas location may determine how certain processes around a contributing device operate, context may be applied more flexibly with mobile users, especially with users of smart phones. Context awareness originated as a term from ubiquitous computing, or so-called pervasive computing, which sought to link changes in the environment with otherwise static computer systems. The term has also been applied to business theory in relation to contextual application design and business process management issues.

An audio game is an electronic game played on a device such as a personal computer. It is similar to a video game, except that the feedback is audible and tactile rather than visual.

Interaction design, often abbreviated as IxD, is "the practice of designing interactive digital products, environments, systems, and services." While interaction design has an interest in form, its main area of focus rests on behavior. Rather than analyzing how things are, interaction design synthesizes and imagines things as they could be. This element of interaction design is what characterizes IxD as a design field, as opposed to a science or engineering field.

Sonification is the use of non-speech audio to convey information or perceptualize data. Auditory perception has advantages in temporal, spatial, amplitude, and frequency resolution that open possibilities as an alternative or complement to visualization techniques.

Mixed reality (MR) is a term used to describe the merging of a real-world environment and a computer-generated one. Physical and virtual objects may co-exist in mixed reality environments and interact in real time.

Gesture recognition is an area of research and development in computer science and language technology concerned with the recognition and interpretation of human gestures. A subdiscipline of computer vision, it employs mathematical algorithms to interpret gestures.

A tangible user interface (TUI) is a user interface in which a person interacts with digital information through the physical environment. The initial name was Graspable User Interface, which is no longer used. The purpose of TUI development is to empower collaboration, learning, and design by giving physical forms to digital information, thus taking advantage of the human ability to grasp and manipulate physical objects and materials.

Multimodal interaction provides the user with multiple modes of interacting with a system. A multimodal interface provides several distinct tools for input and output of data.

A voice-user interface (VUI) enables spoken human interaction with computers, using speech recognition to understand spoken commands and answer questions, and typically text to speech to play a reply. A voice command device is a device controlled with a voice user interface.

Affective design describes the design of products, services, and user interfaces that aim to evoke intended emotional responses from consumers, ultimately improving customer satisfaction. It is often regarded within the domain of technology interaction and computing, in which emotional information is communicated to the computer from the user in a natural and comfortable way. The computer processes the emotional information and adapts or responds to try to improve the interaction in some way. The notion of affective design emerged from the field of human–computer interaction (HCI), specifically from the developing area of affective computing. Affective design serves an important role in user experience (UX) as it contributes to the improvement of the user's personal condition in relation to the computing system. Decision-making, brand loyalty, and consumer connections have all been associated with the integration of affective design. The goals of affective design focus on providing users with an optimal, proactive experience. Its applications, which overlap with several fields, include ambient intelligence, human–robot interaction, and video games.

Human–computer interaction (HCI) is research in the design and the use of computer technology, which focuses on the interfaces between people (users) and computers. HCI researchers observe the ways humans interact with computers and design technologies that allow humans to interact with computers in novel ways. A device that allows interaction between a human being and a computer is known as a "human–computer interface".

Sound and music computing (SMC) is a research field that studies the whole sound and music communication chain from a multidisciplinary point of view. By combining scientific, technological and artistic methodologies it aims at understanding, modeling and generating sound and music through computational approaches.

An immersive virtual musical instrument, or immersive virtual environment for music and sound, represents sound processes and their parameters as 3D entities of a virtual reality, so that they can be perceived not only through auditory feedback but also visually in 3D, and possibly through tactile as well as haptic feedback. It uses 3D interface metaphors consisting of interaction techniques such as navigation, selection and manipulation (NSM). It builds on the trend in electronic musical instruments to develop new ways to control sound and perform music, as explored in conferences like NIME.

Audification is an auditory display technique for representing a sequence of data values as sound. By definition, it is a "direct translation of a data waveform to the audible domain." Audification interprets a data sequence, usually a time series, as an audio waveform, where input data are mapped to sound pressure levels. Various signal processing techniques are used to assess data features. The technique allows the listener to hear periodic components as frequencies. Audification typically requires large data sets with periodic components.
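
Following the definition above, a minimal audification sketch in Python: a data sequence is scaled and written directly as audio samples, with no mapping beyond peak normalization (the decaying oscillation is a synthetic stand-in for measured data):

    # Write a data series directly to a WAV file: each value becomes one
    # audio sample after peak normalization.
    import math, struct, wave

    def audify(series, path, rate=8000):
        peak = max(abs(v) for v in series) or 1.0
        with wave.open(path, "wb") as f:
            f.setnchannels(1)
            f.setsampwidth(2)
            f.setframerate(rate)
            f.writeframes(b"".join(struct.pack("<h", int(32767 * v / peak))
                                   for v in series))

    # Synthetic stand-in for measured data: a decaying oscillation.
    data = [math.exp(-i / 4000.0) * math.sin(i / 3.0) for i in range(16000)]
    audify(data, "audified.wav")   # two seconds of audio at 8 kHz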

Feminist HCI is a subfield of human-computer interaction (HCI) that applies feminist theory, critical theory and philosophy to social topics in HCI, including scientific objectivity, ethical values, data collection, data interpretation, reflexivity, and unintended consequences of HCI software. The term was originally used in 2010 by Shaowen Bardzell, and although the concept and original publication are widely cited, as of 2020 Bardzell's proposed frameworks have been rarely used since.

Lisa Anthony is an Associate Professor in the Department of Computer & Information Science & Engineering (CISE) at the University of Florida. She is also the director of the Intelligent Natural Interaction Technology Laboratory. Her research interests revolve around developing natural user interfaces to allow for greater human-computer interaction, specifically for children as they develop their cognitive and physical abilities.

Data sonification is the presentation of data as sound using sonification. It is the auditory equivalent of the more established practice of data visualization.

Stefania Serafin is a professor at the Department of Architecture, Design and Media technology at Aalborg University in Copenhagen.

Bruno Zamborlin is an AI researcher, entrepreneur and artist based in London, working in the field of human-computer interaction. His work focuses on converting physical objects into touch-sensitive, interactive surfaces using vibration sensors and artificial intelligence. In 2013 he founded Mogees Limited, a start-up to transform everyday objects into musical instruments and games using a vibration sensor and a mobile phone. With HyperSurfaces, he converts physical surfaces of any material, shape and form into data-enabled interactive surfaces using a vibration sensor and a coin-sized chipset. As an artist, he has created art installations around the world, with his most recent work comprising a unique series of "sound furnitures" that was showcased at the Italian Pavilion of the Venice Biennale 2023. He regularly performed with UK-based electronic music duo Plaid. He is also honorary visiting research fellow at Goldsmiths, University of London.

References

  1. Davide Rocchesso, Stefania Serafin, Frauke Behrendt, Nicola Bernardini, Roberto Bresin, Gerhard Eckel, Karmen Franinović, Thomas Hermann, Sandra Pauletto, Patrick Susini, and Yon Visell, (2008). Sonic interaction design: sound, information and experience. In: CHI '08 Extended Abstracts on Human Factors in Computing Systems (Florence, Italy, April 05 – 10, 2008). CHI '08. ACM, New York, NY, 3969–3972. doi : 10.1145/1358628.1358969
  2. Davide Rocchesso and Stefania Serafin, (2009). "Sonic Interaction Design". Editorial of Special Issue. International Journal of Human–Computer Studies 67(11) (Nov. 2009): 905–906. doi : 10.1016/j.ijhcs.2009.09.009
  3. Guillaume Lemaitre, Olivier Houix, Yon Visell, Karmen Franinović, Nicolas Misdariis, and Patrick Susini, (2009). "Toward the design and evaluation of continuous sound in tangible interfaces: The Spinotron". International Journal of Human–Computer Studies 67(11) (Nov. 2009): 976–993. doi : 10.1016/j.ijhcs.2009.07.002
  4. Salvatore M. Aglioti and Mariella Pazzaglia (2010). "Representing actions through their sound". Experimental Brain Research 206(2): 141–151. doi : 10.1007/s00221-010-2344-x
  5. Marzia De Lucia, Christian Camen, Stephanie Clarke, and Micah M. Murray (2009). "The role of actions in auditory object discrimination". Neuroimage 48(2): 475–485. doi : 10.1016/j.neuroimage.2009.06.041
  6. Guillaume Lemaitre, Olivier Houix, Karmen Franinović, Yon Visell, and Patrick Susini (2009). "The Flops glass: a device to study the emotional reactions arising from sonic interactions". In Proc. Sound and Music Computing Conference, Porto, Portugal. Available: Archived 2012-02-25 at the Wayback Machine
  7. Daniel Hug (2008). Genie in a Bottle: Object–Sound Reconfigurations for Interactive Commodities. In: Proceedings of Audiomostly 2008, 3rd Conference on Interaction With Sound (2008). Available: online
  8. Karmen Franinović, Daniel Hug and Yon Visell (June 2007). Sound Embodied: Explorations of Sonic Interaction Design for Everyday Objects in a Workshop Setting. In: Proceedings of the 13th International Conference on Auditory Display, Montréal, Canada, June 26 – 29, 2007, pp. 334–341. Available: online
  9. Richard H. Lyon, (2003). "Product sound quality-from perception to design". Sound and Vibration 37(3): 18–22. Available: online
  10. See filmsound.org: Learning Space dedicated to the Art of Film Sound Design
  11. See About gamesound.org
  12. Inger Ekman and Michal Rinott, (2010). Using vocal sketching for designing sonic interactions, Aarhus, Denmark: Designing Interactive Systems archive, Proceedings of the 8th ACM Conference on Designing Interactive Systems, ISBN 978-1-4503-0103-9. Available: online.
  13. Sandra Pauletto, Daniel Hug, Stephen Barrass and Mary Luckhurst (2009). Integrating Theatrical Strategies into Sonic Interaction Design. In: Proceedings of Audio Mostly 2009 – 4th Conference on Interaction with Sound (2009), Glasgow, 6 p. Available: PDF Archived 2011-08-27 at the Wayback Machine and online
  14. Davide Rocchesso, Pietro Polotti, and Stefano delle Monache, (28 December 2009). "Designing Continuous Sonic Interaction". International Journal of Design 3(3). Available: online and PDF
  15. Wendy E. Mackay and Anne Laure Fayard, (1999). Video brainstorming and prototyping: techniques for participatory design, Pittsburgh, Pennsylvania: Conference on Human Factors in Computing Systems, CHI '99 extended abstracts on Human factors in computing systems, ISBN 1-58113-158-5. Available: online
  16. Michel Chion, (1994). Audio-Vision: sound on screen. New York: Columbia University Press, ISBN 0-231-07898-6, ISBN 0-231-07899-4. Book review on filmsound.org
  17. Daniel Hug, (2010). "Investigating Narrative and Performative Sound Design Strategies for Interactive Commodities". Lecture Notes in Computer Science, Volume 5954/2010: 12-40, doi : 10.1007/978-3-642-12439-6_2. Available: online Archived 2012-12-17 at archive.today .
  18. The fifth International Workshop on Haptic and Audio Interaction Design (HAID) on September 16–17, 2010 in Copenhagen, Denmark. http://media.aau.dk/haid10/?page_id=46
  19. International Conference on New Interfaces for Musical Expression http://www.nime.org/
  20. John Thompson, JoAnn Kuchera-Morin, Marcos Novak, Dan Overholt, Lance Putnam, Graham Wakefield, and Wesley Smith, (2009). "The Allobrain: An interactive, stereographic, 3D audio, immersive virtual world". International Journal of Human-Computer Studies 67(11) (Nov. 2009): 934–946. doi : 10.1016/j.ijhcs.2009.05.005
  21. Sonification – A Definition http://sonification.de/son/definition
  22. Tobias Grosshauser and Thomas Hermann, (2010). "Multimodal closed-loop Human Machine Interaction" Proc. Interactive Sonification Workshop, Stockholm, http://interactive-sonification.org/ISon2010/proceedings
  23. Effenberg, Alfred O. (2005). "Movement sonification: Effects on perception and action". IEEE MultiMedia. 12 (2): 53–59. doi:10.1109/MMUL.2005.31. S2CID   5211383.
  24. Thomas Hermann, and Andy Hunt, (2005). "Guest Editors' Introduction: An Introduction to Interactive Sonification". IEEE MultiMedia 12(2): 20-24. doi : 10.1109/MMUL.2005.26

Further reading