Sound and music computing

Sound and music computing (SMC) is a research field that studies the whole sound and music communication chain from a multidisciplinary point of view. By combining scientific, technological and artistic methodologies it aims at understanding, modeling and generating sound and music through computational approaches.

History

The Sound and Music Computing research field can be traced back to the 1950s, when a few experimental composers, together with some engineers and scientists, independently and in different parts of the world, began exploring the use of the new digital technologies for music applications. Since then the SMC research field has had a fruitful history, and different terms have been used to identify it: Computer Music and Music Technology are probably the most widely used, with "Sound and Music Computing" being a more recent term. In 1974, the research community established the International Computer Music Association and the International Computer Music Conference. In 1977 the Computer Music Journal was founded. The Center for Computer Research in Music and Acoustics (CCRMA) at Stanford University was created in the early 1970s and the Institute for Research and Coordination in Acoustics/Music (IRCAM) in Paris in the late 1970s.

The term "Sound and Music Computing" was first proposed in the mid-1990s [1] and was included in the ACM Computing Classification System. Under this name, the Sound and Music Computing Conference was started in 2004; in the same year the European Commission funded a roadmapping initiative that resulted in the SMC Roadmap [2] and in the Sound and Music Computing Summer School.

With increasing research specialization within the SMC field, a number of focused conferences have been created. Particularly relevant are the International Conference on Digital Audio Effects, established in 1998, the International Conference on Music Information Retrieval (ISMIR), established in 2000, and the International Conference on New Interfaces for Musical Expression (NIME), established in 2001.

Subfields

The current SMC research field can be grouped into a number of subfields that focus on specific aspects of the sound and music communication chain.

Areas of application

SMC is an application-driven research field.

Related Research Articles

Audio signal processing is a subfield of signal processing that is concerned with the electronic manipulation of audio signals. Audio signals are electronic representations of sound waves—longitudinal waves which travel through air, consisting of compressions and rarefactions. The energy contained in audio signals is typically measured in decibels. As audio signals may be represented in either digital or analog format, processing may occur in either domain. Analog processors operate directly on the electrical signal, while digital processors operate mathematically on its digital representation.
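
To make concrete what "operating mathematically on a digital representation" looks like, here is a minimal Python sketch (assuming NumPy is available; the sample rate, gain, and filter coefficient are illustrative choices, not values from the source) that applies a decibel gain change and a simple one-pole low-pass filter to an array of samples.

```python
import numpy as np

def apply_gain(samples: np.ndarray, gain_db: float) -> np.ndarray:
    """Scale a digital audio signal by a gain given in decibels."""
    return samples * 10.0 ** (gain_db / 20.0)

def one_pole_lowpass(samples: np.ndarray, alpha: float = 0.1) -> np.ndarray:
    """One-pole low-pass filter: y[n] = y[n-1] + alpha * (x[n] - y[n-1])."""
    out = np.empty_like(samples)
    acc = 0.0
    for n, x in enumerate(samples):
        acc += alpha * (x - acc)
        out[n] = acc
    return out

# One second of a 440 Hz sine at a 44.1 kHz sample rate.
sr = 44100
t = np.arange(sr) / sr
signal = np.sin(2 * np.pi * 440.0 * t)
quieter = apply_gain(signal, -6.0)    # roughly halves the amplitude
smoothed = one_pole_lowpass(quieter)  # attenuates high frequencies
```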

IRCAM is a French institute dedicated to the research of music and sound, especially in the fields of avant-garde and electro-acoustical art music. It is situated next to, and is organisationally linked with, the Centre Pompidou in Paris. The extension of the building was designed by Renzo Piano and Richard Rogers. Much of the institute is located underground, beneath the fountain to the east of the buildings.

Music information retrieval (MIR) is the interdisciplinary science of retrieving information from music. MIR is a small but growing field of research with many real-world applications. Those involved in MIR may have a background in academic musicology, psychoacoustics, psychology, signal processing, informatics, machine learning, optical music recognition, computational intelligence or some combination of these.
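
As an illustration of a basic MIR pipeline, the sketch below uses the open-source librosa library (an assumption, as is the placeholder file name) to estimate the tempo of a recording and compute a chromagram, a common mid-level feature for chord and key analysis.

```python
import librosa

# Load a recording (placeholder path) as a mono floating-point signal.
y, sr = librosa.load("example.wav")

# Estimate tempo and beat positions from an onset-strength envelope.
tempo, beat_frames = librosa.beat.beat_track(y=y, sr=sr)

# Chromagram: energy per pitch class over time.
chroma = librosa.feature.chroma_stft(y=y, sr=sr)

print("Estimated tempo (BPM):", tempo)
print("Chroma feature shape:", chroma.shape)
```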

Sonification is the use of non-speech audio to convey information or perceptualize data. Auditory perception has advantages in temporal, spatial, amplitude, and frequency resolution that open possibilities as an alternative or complement to visualization techniques.
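
A minimal sonification sketch in Python (assuming NumPy; the pitch range, note duration, and data values are illustrative) maps each value of a data series onto the frequency of a sine tone and writes the result to a WAV file, so that rises and falls in the data become rises and falls in pitch.

```python
import wave
import numpy as np

def sonify(values, sr=44100, note_dur=0.25, fmin=220.0, fmax=880.0):
    """Map each data value linearly onto a pitch and render it as a sine tone."""
    values = np.asarray(values, dtype=float)
    lo, hi = values.min(), values.max()
    norm = (values - lo) / (hi - lo) if hi > lo else np.zeros_like(values)
    freqs = fmin + norm * (fmax - fmin)  # higher value -> higher pitch
    t = np.arange(int(sr * note_dur)) / sr
    return np.concatenate([0.5 * np.sin(2 * np.pi * f * t) for f in freqs])

signal = sonify([3.0, 5.0, 4.0, 9.0, 2.0])  # illustrative data series
pcm = (signal * 32767).astype(np.int16)     # 16-bit PCM samples
with wave.open("sonified.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)
    w.setframerate(44100)
    w.writeframes(pcm.tobytes())
```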

The following outline is provided as an overview of and topical guide to human–computer interaction.

Human-centered computing (HCC) studies the design, development, and deployment of mixed-initiative human-computer systems. It emerged from the convergence of multiple disciplines concerned both with understanding human beings and with the design of computational artifacts. Human-centered computing is closely related to human-computer interaction and information science: it is usually concerned with systems and practices of technology use, whereas human-computer interaction focuses more on the ergonomics and usability of computing artifacts, and information science on the practices surrounding the collection, manipulation, and use of information.

New Interfaces for Musical Expression, also known as NIME, is an international conference dedicated to scientific research on the development of new technologies and their role in musical expression and artistic performance.

Eduardo Reck Miranda is a Brazilian composer of chamber and electroacoustic pieces who is most notable in the United Kingdom for his scientific research into computer music, particularly in the field of human-machine interfaces in which brain waves replace keyboards and voice commands, allowing disabled people to express themselves musically.

Computer audition (CA) or machine listening is the general field of study of algorithms and systems for audio understanding by machine. Since the notion of what it means for a machine to "hear" is very broad and somewhat vague, computer audition attempts to bring together several disciplines that originally dealt with specific problems or had a concrete application in mind. The engineer Paris Smaragdis, interviewed in Technology Review, describes such systems as "software that uses sound to locate people moving through rooms, monitor machinery for impending breakdowns, or activate traffic cameras to record accidents."
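
As a toy example of the kind of machine listening described above, the following sketch (plain NumPy; the frame size and threshold are illustrative, and the test signal is synthetic) detects sound events by flagging frames whose RMS energy rises above a threshold.

```python
import numpy as np

def detect_event_starts(samples, sr, frame_len=1024, threshold_db=-30.0):
    """Return times (s) where frame RMS energy first exceeds a threshold."""
    n_frames = len(samples) // frame_len
    frames = samples[: n_frames * frame_len].reshape(n_frames, frame_len)
    rms = np.sqrt(np.mean(frames ** 2, axis=1))
    rms_db = 20 * np.log10(np.maximum(rms, 1e-10))
    active = rms_db > threshold_db
    # A transition from inactive to active marks the start of an event.
    starts = np.flatnonzero(np.diff(active.astype(int)) == 1) + 1
    return starts * frame_len / sr

# Illustrative input: silence with a short burst of noise in the middle.
sr = 16000
x = np.zeros(sr)
x[6000:8000] = 0.5 * np.random.randn(2000)
print(detect_event_starts(x, sr))  # prints a frame-quantized onset near 0.32 s
```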

ACM Multimedia (ACM-MM) is the Association for Computing Machinery (ACM)'s annual conference on multimedia, sponsored by the SIGMM special interest group on multimedia in the ACM. SIGMM specializes in the field of multimedia computing, from underlying technologies to applications, theory to practice, and servers to networks to devices.

Albert Stanley "Al" Bregman is a Canadian professor and researcher in experimental psychology, cognitive science, and Gestalt psychology, primarily in the perceptual organization of sound.

Elizabeth D. "Beth" Mynatt is the Dean of the Khoury College of Computer Sciences at Northeastern University. She is former executive director of the Institute for People and Technology, director of the GVU Center at Georgia Tech, and Regents' and Distinguished Professor in the School of Interactive Computing, all at the Georgia Institute of Technology.

Stanford University has many centers and institutes dedicated to the study of various specific topics. These centers and institutes may be housed within a department; within a school but across departments; organized as an independent laboratory, institute or center reporting directly to the dean of research and outside any school; or semi-independent of the university itself.

Computational musicology is an interdisciplinary research area between musicology and computer science that includes any discipline using computers to study music. It includes sub-disciplines such as mathematical music theory, computer music, systematic musicology, music information retrieval, digital musicology, sound and music computing, and music informatics. As this area of research is defined by the tools that it uses and its subject matter, research in computational musicology intersects with both the humanities and the sciences. The use of computers to study and analyze music generally began in the 1960s, although musicians had been using computers to assist composition since the 1950s. Today, computational musicology encompasses a wide range of research topics dealing with the multiple ways music can be represented.
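
As a small, self-contained example of the kind of analysis this covers, the sketch below (the function name and example melody are illustrative, not from the source) computes a pitch-class histogram, a simple statistic used in computational key and style studies, from a list of MIDI note numbers.

```python
from collections import Counter

PITCH_CLASSES = ["C", "C#", "D", "D#", "E", "F",
                 "F#", "G", "G#", "A", "A#", "B"]

def pitch_class_histogram(midi_notes):
    """Count how often each of the 12 pitch classes occurs in a note list."""
    counts = Counter(note % 12 for note in midi_notes)
    return {PITCH_CLASSES[pc]: counts.get(pc, 0) for pc in range(12)}

# Illustrative melody encoded as MIDI note numbers (60 = middle C).
melody = [60, 62, 64, 65, 67, 69, 71, 72, 67, 64, 60]
print(pitch_class_histogram(melody))
```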

Sonic interaction design is the study and exploitation of sound as one of the principal channels conveying information, meaning, and aesthetic/emotional qualities in interactive contexts. Sonic interaction design is at the intersection of interaction design and sound and music computing. If interaction design is about designing objects people interact with, and such interactions are facilitated by computational means, in sonic interaction design, sound is mediating interaction either as a display of processes or as an input medium.

The International Society for Music Information Retrieval (ISMIR) is an international forum for research on the organization of music-related data. It started in 2000 as an informal group, steered by an ad hoc committee, which established a yearly symposium; hence the acronym ISMIR, which originally stood for International Symposium on Music Information Retrieval. The symposium became a conference in 2002 while retaining the acronym, and ISMIR was incorporated in Canada on July 4, 2008.

The Sound and Music Computing (SMC) Conference is the forum for international exchanges around the core interdisciplinary topics of Sound and Music Computing. The conference is held annually to facilitate the exchange of ideas in this field.

Xavier Serra is a researcher in the field of Sound and Music Computing and professor at the Pompeu Fabra University (UPF) in Barcelona. He is the founder and director of the Music Technology Group at the UPF.

Joshua Reiss is a British author, academic, inventor and entrepreneur. He is best known for his work in intelligent audio technologies and his co-authorship of the book Audio Effects: Theory, Implementation and Application.

Stefania Serafin is a professor at the Department of Architecture, Design and Media Technology at Aalborg University in Copenhagen.

References

  1. Camurri, A., De Poli, G., and Rocchesso, D. (1995). A taxonomy for Sound and Music Computing. Computer Music Journal, 19(2):4–5.
  2. The S2S2 Consortium (2007). A Roadmap for Sound and Music Computing. Version 1.0. ISBN 978-90-811896-1-3.