Networked music performance

A networked music performance or network musical performance is a real-time interaction over a computer network that enables musicians in different locations to perform as if they were in the same room. [1] These interactions can include performances, rehearsals, improvisation or jamming sessions, and situations for learning such as master classes. [2] Participants may be connected by "high fidelity multichannel audio and video links" [3] as well as MIDI data connections [1] and specialized collaborative software tools. While not intended to be a replacement for traditional live stage performance, networked music performance supports musical interaction when co-presence is not possible and allows for novel forms of music expression. [2] Remote audience members and possibly a conductor may also participate. [3]

History

One of the earliest examples of a networked music performance experiment was composer John Cage's 1951 piece “Imaginary Landscape No. 4 for Twelve Radios”. [4] The piece “used radio transistors as a musical instrument. The transistors were interconnected thus influencing each other.” [4] [5]

In the late 1970s, as personal computers were becoming more available and affordable, groups like the League of Automatic Music Composers began to experiment with linking multiple computers, electronic instruments, and analog circuitry to create novel forms of music. [6]

The 1990s saw several important experiments in networked performance. In 1993, the University of Southern California Information Sciences Institute began experimenting with networked music performance over the Internet. [3] The Hub, a band formed by original members of the League of Automatic Music Composers, experimented in 1997 with sending MIDI data over Ethernet to distributed locations. [6] However, “it was more difficult than imagined to debug all of the software problems on each of the different machines with different operating systems and CPU speeds in different cities”. [6] In 1998, a three-way audio-only performance between musicians in Warsaw, Helsinki, and Oslo was dubbed “Mélange à trois”. [3] [7] These early distributed performances all faced problems such as network delay, difficulty synchronizing signals, echo, and trouble with the acquisition and rendering of non-immersive audio and video. [3]

The development of over-provisioned high-speed Internet backbones, such as Internet2, made high-quality audio links possible beginning in the early 2000s. [4] One of the first research groups to take advantage of the improved network performance was the SoundWIRE group at Stanford University's CCRMA. [8] That was soon followed by projects such as the Distributed Immersive Performance experiments, [3] SoundJack, [4] and DIAMOUSES. [2]

Awareness in musical performance

Workspace awareness in a face-to-face situation is gathered through consequential communication, feedthrough, and intentional communication. [9] A traditional music performance setting is an example of very tightly coupled, synergistic collaboration in which participants have a high level of workspace awareness. “Each player must not only be conscious of his or her own part, but also of the parts of other musicians. The other musicians' gestures, facial expressions and bodily movements, as well as the sounds emitted by their instruments [are] clues to meanings and intentions of others”. [10] Research has indicated that musicians are also very sensitive to the acoustic response of the environment in which they are performing. [3] Ideally, a networked music performance system would facilitate the high level of awareness that performers experience in a traditional performance setting.

Technical issues in networked music performance

Bandwidth demand, latency sensitivity, and a strict requirement for audio stream synchronization are the factors that make networked music performance a challenging application. [11] These factors are described in more detail below.

Bandwidth

High definition audio streaming, which is used to make a networked music performance as realistic as possible, is considered to be one of the most bandwidth-demanding uses of today's networks. [11]
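A back-of-envelope calculation shows why uncompressed multichannel audio is so demanding. The figures below (channel counts, sample rates, bit depths) are illustrative assumptions, not values from any particular system:

```python
# Rough bandwidth estimate for uncompressed PCM audio streaming.
# Packet and protocol overhead are ignored; real demand is higher.

def audio_bitrate_mbps(channels: int, sample_rate_hz: int, bit_depth: int) -> float:
    """Raw PCM bitrate in megabits per second."""
    return channels * sample_rate_hz * bit_depth / 1e6

# CD-quality stereo: 2 channels x 44.1 kHz x 16 bits
print(audio_bitrate_mbps(2, 44_100, 16))   # ~1.41 Mbit/s

# A "high definition" 8-channel stream: 96 kHz x 24 bits
print(audio_bitrate_mbps(8, 96_000, 24))   # ~18.4 Mbit/s
```

Sustaining tens of megabits per second continuously, in both directions and with low jitter, is what distinguishes this workload from ordinary media streaming, which can buffer and compress freely.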

Latency

One of the major issues with networked music performance is that latency is introduced into the audio as it is processed by a participant's local system and sent across the network. For interaction in a networked music performance to feel natural, the latency generally must be kept below 30 milliseconds, the bound of human perception. [12] If there is too much delay in the system, it will make performance very difficult since musicians adjust their playing to coordinate the performance based on the sounds they hear created by other players. [1] However, the characteristics of the piece being played, the musicians, and the types of instruments used ultimately define the tolerance. [3] Synchronization cues may be used in a network music performance system that is designed for long latency situations. [1]
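The latency budget can be sketched as the sum of propagation delay across the network and the buffering delay added at each end. The distance, buffer size, and sample rate below are assumptions chosen for illustration:

```python
# Illustrative one-way latency budget for a networked music link.
# All numeric inputs are example assumptions.

SPEED_IN_FIBER_KM_PER_MS = 200  # light travels ~200,000 km/s in fiber

def propagation_ms(distance_km: float) -> float:
    """Best-case propagation delay over fiber; routing adds more."""
    return distance_km / SPEED_IN_FIBER_KM_PER_MS

def buffering_ms(frames_per_buffer: int, sample_rate_hz: int) -> float:
    """Delay contributed by one audio I/O buffer."""
    return 1000 * frames_per_buffer / sample_rate_hz

# Example: performers 600 km apart, 128-frame buffers at 48 kHz on each end.
total = propagation_ms(600) + 2 * buffering_ms(128, 48_000)
print(round(total, 2))  # ~8.33 ms, within the ~30 ms perceptual bound
```

The sketch makes the trade-off visible: smaller audio buffers reduce delay but increase the risk of dropouts, and beyond a certain physical distance no amount of tuning can bring propagation delay under the perceptual bound.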

Audio stream synchronization

Both end systems and networks must synchronize multiple audio streams from separate locations to form a consistent presentation of the music. [11] This is a challenging problem for today's systems.
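One common approach to this alignment is timestamp-based playout: each packet carries its capture time, and every receiver schedules playback at capture time plus a shared fixed delay, so sound captured together is heard together. The packet layout and delay value below are assumptions for a minimal sketch, not taken from any system in the article:

```python
# Minimal sketch of timestamp-based playout alignment for remote audio
# streams. Assumes sender clocks are already synchronized (e.g. via NTP).

from dataclasses import dataclass

@dataclass
class Packet:
    stream_id: str
    capture_time_ms: float  # sender-side capture timestamp
    samples: bytes

PLAYOUT_DELAY_MS = 20.0  # fixed jitter-buffer delay; a chosen constant

def playout_time(pkt: Packet) -> float:
    """Schedule every packet at capture time plus a common delay so that
    simultaneously captured audio is played back simultaneously."""
    return pkt.capture_time_ms + PLAYOUT_DELAY_MS

a = Packet("warsaw", 1000.0, b"...")
b = Packet("oslo", 1000.0, b"...")
assert playout_time(a) == playout_time(b)  # aligned despite different paths
```

The fixed delay trades latency for robustness: packets arriving later than their scheduled playout time must be dropped or concealed, which is why the latency and synchronization requirements interact so tightly.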

Objectives of a networked music performance system

The objectives of a networked music performance system follow from the issues above: deliver high-fidelity audio within a tolerable latency bound, keep the distributed streams synchronized, and preserve as much as possible the workspace awareness of a traditional performance setting.

Current research

SoundWIRE at CCRMA, Stanford University

The SoundWIRE research group explores several research areas in the use of networks for music performance including: multi-channel audio streaming, physical models and virtual acoustics, the sonification of network performance, psychoacoustics, and networked music performance practice. [7] The group has developed a software system, JackTrip, that supports multi-channel, high quality, uncompressed streaming audio for networked music performance over the internet. [7]
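Tools in this family generally send uncompressed PCM frames as UDP datagrams, trading reliability for minimal delay. The following sketch shows that general approach with a deliberately tiny ad-hoc header; it is not JackTrip's actual wire format, and the header fields are assumptions:

```python
# Generic sketch of uncompressed audio framing over UDP, the broad
# approach taken by tools such as JackTrip. Header layout is invented
# for illustration only.

import socket
import struct

def pack_frame(seq: int, channels: int, pcm: bytes) -> bytes:
    # 6-byte header: 32-bit sequence number + 16-bit channel count.
    return struct.pack("!IH", seq, channels) + pcm

def unpack_frame(datagram: bytes):
    seq, channels = struct.unpack("!IH", datagram[:6])
    return seq, channels, datagram[6:]

# Loopback demonstration: send one frame and receive it back.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
pcm = b"\x00\x01" * 128                     # 128 16-bit mono samples
tx.sendto(pack_frame(7, 1, pcm), rx.getsockname())
seq, ch, payload = unpack_frame(rx.recv(2048))
assert (seq, ch, payload) == (7, 1, pcm)
tx.close()
rx.close()
```

UDP is preferred over TCP here because a retransmitted packet would arrive too late to be played anyway; the sequence number lets the receiver detect loss and conceal it rather than wait.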

The Sonic Arts Research Centre

The Sonic Arts Research Centre (SARC) at Queen's University Belfast has carried out network performances since 2006 and has been active in the use of networks as both collaborative and performance tools. [13] The network team at SARC, led by Prof. Pedro Rebelo and Dr. Franziska Schroeder, works with varying set-ups of performers, instruments, and compositional strategies. A group of artists and researchers has emerged around this field of distributed creativity at SARC, helping to create a broader knowledge base and focus for activities. As a result, since 2007 SARC has had a dedicated team of staff and students with knowledge and experience of network performance, which SARC refers to as "distributed creativity".[ citation needed ]

Regular performances, workshops, and collaborations with institutions such as the SoundWIRE group at CCRMA, Stanford University, and RPI, [14] led by composer and performer Pauline Oliveros, as well as with the University of São Paulo, have helped strengthen this emerging community of researchers and practitioners. The field is related to research on distributed creativity.[ citation needed ]

Distributed Immersive Performance (DIP) experiments

The Distributed Immersive Performance project is based at the Integrated Media Systems Center at the University of Southern California. [15] Their experiments explore the challenges of creating a seamless environment for remote, synchronous collaboration. [3] The experiments use 3D audio with correct spatial sound localization as well as HD or DV video projected onto wide screen displays to create an immersive virtual space. [3] There are interaction sites set up at various locations on the University of Southern California campus and at several partner locations such as the New World Symphony in Miami Beach, Florida. [3]

DIAMOUSES

The DIAMOUSES project is coordinated by the Music Informatics Lab at the Technological Education Institution of Crete in Hellas. [16] It supports a wide range of networked music performance scenarios with a customizable platform that handles the broadcasting and synchronization of audio and video signals across a network. [2]

Wireless Music Studio (WeMUST)

The A3Lab team at Università Politecnica delle Marche conducts research on the use of the wireless medium for uncompressed audio networking in the networked music performance context. [17] A mix of open-source software, ARM platforms, and dedicated wireless equipment has been documented, especially for outdoor use, where buildings of historical importance or difficult environments (e.g., the sea) can be explored for the performance. A premiere of the system was conducted with musicians playing a Stockhausen composition on different boats off the coast of Ancona, Italy. The project also aims at shifting music computing from laptops to embedded devices. [18]

References

  1. Lazzaro, J.; Wawrzynek, J. (2001). NOSSDAV '01: Proceedings of the 11th International Workshop on Network and Operating Systems Support for Digital Audio and Video. New York: ACM Press. pp. 157–166. doi:10.1145/378344.378367. ISBN 1581133707.
  2. Alexandraki, C.; Koutlemanis, P.; Gasteratos, P.; Valsamakis, N.; Akoumianakis, D.; Milolidakis, G.; Vellis, G.; Kotsalis, D. (2008). "Towards the implementation of a generic platform for networked music performance: The DIAMOUSES approach". Proceedings of the International Computer Music Conference (ICMC 2008). pp. 251–258.
  3. Sawchuk, A.; Chew, E.; Zimmermann, R.; Papadopoulos, C.; Kyriakakis, C. (2003). ETP '03: Proceedings of the 2003 ACM SIGMM Workshop on Experiential Telepresence. New York: ACM Press. pp. 110–120. doi:10.1145/982484.982506. ISBN 1581137753.
  4. Carôt, A.; Renaud, A.; Rebelo, P. (2007). "Networked music performance: state of the art". AES 30th International Conference. Audio Engineering Society.
  5. Pritchett, J. (1993). The Music of John Cage. Cambridge: Cambridge University Press.
  6. Bischoff, J.; Brown, C. "Crossfade". Retrieved 2009-11-26.
  7. "SoundWIRE research group at CCRMA, Stanford University". Archived from the original on 2004-02-22. Retrieved 2009-11-23.
  8. Chafe, C.; Wilson, S.; Leistikow, R.; Chisholm, D.; Scavone, G. (2000). "A simplified approach to high quality music and sound over IP". Proceedings of the COST G-6 Conference on Digital Audio Effects (DAFX-00).
  9. Gutwin, C.; Greenberg, S. (2001). "The Importance of Awareness for Team Cognition in Distributed Collaboration". Report 2001-696-19. Department of Computer Science, University of Calgary, Alberta, Canada. pp. 1–33.
  10. Malhotra, V. (1981). "The Social Accomplishment of Music in a Symphony Orchestra: A Phenomenological Analysis". Qualitative Sociology. 4 (2): 102–125. doi:10.1007/bf00987214. S2CID 145680081.
  11. Gu, X.; Dick, M.; Noyer, U.; Wolf, L. (2004). GlobeCom Workshops 2004: IEEE Global Telecommunications Conference Workshops. IEEE. pp. 176–185. doi:10.1109/GLOCOMW.2004.1417570. ISBN 0-7803-8798-8.
  12. Kurtisi, Z.; Gu, X.; Wolf, L. (2006). "Enabling network-centric music performance in wide-area networks". Communications of the ACM. 49 (11): 52–54. doi:10.1145/1167838.1167862. S2CID 1245128.
  13. Schroeder, Franziska; Rebelo, Pedro (August 2007). "Addressing the Network: Performative Strategies for Playing Apart". Proceedings of the International Computer Music Conference (ICMC 2007), Denmark. pp. 133–140. Retrieved 31 August 2022.
  14. "About Us – The Center For Deep Listening". Rensselaer Polytechnic Institute. Retrieved 31 August 2022.
  15. "Distributed Immersive Performance". Retrieved 2009-11-23.
  16. "DIAMOUSES". Retrieved 2009-11-22.
  17. "A3Lab – WeMUST Research page". Retrieved 2015-02-24.
  18. Gabrielli, L.; Bussolotto, M.; Squartini, S. (2014). "Reducing the Latency in Live Music Transmission with the BeagleBoard xM Through Resampling". EDERC 2014, Milan, Italy. IEEE.