A networked music performance or network musical performance is a real-time interaction over a computer network that enables musicians in different locations to perform as if they were in the same room. [1] These interactions can include performances, rehearsals, improvisation or jamming sessions, and situations for learning such as master classes. [2] Participants may be connected by "high fidelity multichannel audio and video links" [3] as well as MIDI data connections [1] and specialized collaborative software tools. While not intended to be a replacement for traditional live stage performance, networked music performance supports musical interaction when co-presence is not possible and allows for novel forms of music expression. [2] Remote audience members and possibly a conductor may also participate. [3]
One of the earliest examples of a networked music performance experiment was composer John Cage's 1951 piece “Imaginary Landscape No. 4 for Twelve Radios”. [4] The piece “used radio transistors as a musical instrument. The transistors were interconnected thus influencing each other.” [4] [5]
In the late 1970s, as personal computers were becoming more available and affordable, groups like the League of Automatic Music Composers began to experiment with linking multiple computers, electronic instruments, and analog circuitry to create novel forms of music. [6]
The 1990s saw several important experiments in networked performance. In 1993, the University of Southern California Information Sciences Institute began experimenting with networked music performance over the Internet. [3] The Hub, a band formed by original members of The League of Automatic Music Composers, experimented in 1997 with sending MIDI data over Ethernet to distributed locations. [6] However, “it was more difficult than imagined to debug all of the software problems on each of the different machines with different operating systems and CPU speeds in different cities”. [6] In 1998, there was a three-way audio-only performance between musicians in Warsaw, Helsinki, and Oslo dubbed “Mélange à trois”. [3] [7] These early distributed performances all faced problems such as network delay, difficulty synchronizing signals, echo, and trouble acquiring and rendering immersive audio and video. [3]
The development of over-provisioned high-speed Internet backbones, such as Internet2, made high-quality audio links possible beginning in the early 2000s. [4] One of the first research groups to take advantage of the improved network performance was the SoundWIRE group at Stanford University's CCRMA. [8] That was soon followed by projects such as the Distributed Immersive Performance experiments, [3] SoundJack, [4] and DIAMOUSES. [2]
Workspace awareness in a face-to-face situation is gathered through consequential communication, feedthrough, and intentional communication. [9] A traditional music performance setting is an example of very tightly coupled, synergistic collaboration in which participants have a high level of workspace awareness. “Each player must not only be conscious of his or her own part, but also of the parts of other musicians. The other musicians' gestures, facial expressions and bodily movements, as well as the sounds emitted by their instruments [are] clues to meanings and intentions of others”. [10] Research has indicated that musicians are also very sensitive to the acoustic response of the environment in which they are performing. [3] Ideally, a networked music performance system would facilitate the high level of awareness that performers experience in a traditional performance setting.
Bandwidth demand, latency sensitivity, and a strict requirement for audio stream synchronization are the factors that make networked music performance a challenging application. [11] These factors are described in more detail below.
High definition audio streaming, which is used to make a networked music performance as realistic as possible, is considered to be one of the most bandwidth-demanding uses of today's networks. [11]
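To see why, consider a back-of-the-envelope calculation of the raw bit rate of uncompressed PCM streams. The sample rates, bit depths, and channel counts below are illustrative assumptions, not figures from the cited sources:

```python
# Back-of-the-envelope bandwidth for uncompressed PCM audio streams.
# All parameters are illustrative assumptions, not figures from the
# cited sources.

def pcm_bandwidth_mbps(sample_rate_hz: int, bits_per_sample: int, channels: int) -> float:
    """Raw PCM payload rate in megabits per second (excluding packet overhead)."""
    return sample_rate_hz * bits_per_sample * channels / 1_000_000

# One stereo stream at 48 kHz / 24-bit:
print(pcm_bandwidth_mbps(48_000, 24, 2))      # 2.304 Mbit/s
# A four-site performance, each site sending 8 channels at 96 kHz / 24-bit:
print(4 * pcm_bandwidth_mbps(96_000, 24, 8))  # 73.728 Mbit/s
```

Even before packet overhead, a modest multichannel session reaches tens of megabits per second, which is why such streams were impractical before over-provisioned backbones became available.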
One of the major issues in networked music performance is the latency introduced into the audio as it is processed by each participant's local system and sent across the network. For interaction in a networked music performance to feel natural, the latency generally must be kept below 30 milliseconds, the bound of human perception. [12] If there is too much delay in the system, performance becomes very difficult, since musicians adjust their playing based on the sounds they hear from the other players in order to coordinate the performance. [1] However, the characteristics of the piece being played, the musicians, and the types of instruments used ultimately define the tolerance. [3] Synchronization cues may be used in a networked music performance system that is designed for long-latency situations. [1]
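A rough budget shows how quickly those 30 milliseconds are consumed. The sketch below is illustrative only; the block size, jitter-buffer depth, and distance are assumptions rather than figures from the cited sources:

```python
# Sketch of an end-to-end latency budget for a networked music performance.
# All figures are illustrative assumptions; real systems vary widely.

SAMPLE_RATE = 48_000   # Hz
BLOCK_SIZE = 128       # samples per audio interface buffer

def buffer_ms(samples: int, rate: int = SAMPLE_RATE) -> float:
    """Delay contributed by buffering a given number of samples."""
    return 1000 * samples / rate

capture = buffer_ms(BLOCK_SIZE)            # sound card input buffering
playback = buffer_ms(BLOCK_SIZE)           # sound card output buffering
jitter_buffer = buffer_ms(2 * BLOCK_SIZE)  # receiver queue absorbing network jitter

# Propagation: light in fiber travels at roughly 2/3 c, about 200 km per ms.
distance_km = 800                          # e.g. between two cities
propagation = distance_km / 200

total = capture + playback + jitter_buffer + propagation
print(f"{total:.1f} ms of a ~30 ms budget")  # ~14.7 ms before OS/driver overhead
```

Under these assumptions roughly half the perceptual budget is spent before operating-system, driver, and routing delays are counted, which is why performers separated by continental distances cannot stay under the threshold no matter how the endpoints are tuned.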
Both end systems and networks must synchronize multiple audio streams from separate locations to form a consistent presentation of the music. [11] This is a challenging problem for today's systems.
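One common approach to this alignment problem is to timestamp packets at each sender against a shared clock and have the receiver hold every stream in a queue until a common playout deadline passes, so that audio from all sites renders the same musical instant together. The following is a minimal sketch of that idea, not a description of any particular system; the class and method names are illustrative:

```python
# Minimal sketch of timestamp-based alignment of several incoming audio
# streams. Assumes senders stamp packets against a synchronized clock
# (e.g. via NTP or PTP); names and structure are illustrative.

import heapq

class StreamAligner:
    def __init__(self, playout_delay_ms: float):
        self.playout_delay_ms = playout_delay_ms
        # Min-heap ordered by capture timestamp: (timestamp_ms, site, samples)
        self.queue: list[tuple[float, str, bytes]] = []

    def on_packet(self, timestamp_ms: float, site: str, samples: bytes) -> None:
        """Buffer an incoming packet from any site until its deadline."""
        heapq.heappush(self.queue, (timestamp_ms, site, samples))

    def due_packets(self, now_ms: float):
        """Yield packets whose shared playout deadline has arrived."""
        while self.queue and self.queue[0][0] + self.playout_delay_ms <= now_ms:
            yield heapq.heappop(self.queue)
```

The fixed playout delay trades added latency for consistency: every site's audio is rendered the same interval after capture, regardless of which network path it took.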
The objectives of a networked music performance system can be summarized as: reproducing sound (and often video) with enough fidelity that remote players feel as if they were in the same room, keeping end-to-end latency low enough for natural interaction, and keeping the audio streams from all participating sites synchronized.
The SoundWIRE research group explores several research areas in the use of networks for music performance, including multi-channel audio streaming, physical models and virtual acoustics, the sonification of network performance, psychoacoustics, and networked music performance practice. [7] The group has developed a software system, JackTrip, that supports multi-channel, high-quality, uncompressed streaming audio for networked music performance over the Internet. [7]
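The core technique behind uncompressed streaming tools of this kind can be sketched as a paced stream of sequence-numbered UDP packets. The following illustrates the general approach, not JackTrip's actual wire protocol; the header layout, destination address, and audio parameters are assumptions:

```python
# Illustrative sender of uncompressed PCM audio over UDP, the general
# technique used by low-latency streaming tools. This is NOT JackTrip's
# actual protocol; header layout and parameters are assumptions.

import socket
import struct
import time

DEST = ("192.0.2.10", 4464)   # placeholder address; 4464 is JackTrip's default port
SAMPLE_RATE, CHANNELS, BLOCK = 48_000, 2, 128

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
silent_block = b"\x00" * (BLOCK * CHANNELS * 2)  # one block of 16-bit PCM silence

for seq in range(100):  # send 100 packets (~0.27 s of audio)
    # Sequence number and capture timestamp let the receiver detect loss,
    # reorder packets, and schedule playout.
    header = struct.pack("!IQ", seq, time.monotonic_ns())
    sock.sendto(header + silent_block, DEST)
    time.sleep(BLOCK / SAMPLE_RATE)  # pace packets at the audio block rate
```

UDP is the natural transport here: a retransmitted packet would arrive too late to play anyway, so senders favor steady pacing over guaranteed delivery and leave loss concealment to the receiver.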
The Sonic Arts Research Centre (SARC) at Queen's University Belfast has played a major role in carrying out network performances since 2006 and has been active in the use of networks as both collaborative and performance tools. [13] The network team at SARC, led by Prof Pedro Rebelo and Dr Franziska Schroeder, works with varying set-ups of performers, instruments, and compositional strategies. A group of artists and researchers has emerged around this field of distributed creativity at SARC, which has helped create a broader knowledge base and focus for activities. As a result, since 2007 SARC has had a dedicated team of staff and students with knowledge and experience of network performance, which SARC refers to as "distributed creativity".[citation needed]
Regular performances, workshops, and collaborations with institutions such as the SoundWIRE group at CCRMA, Stanford University, and RPI, [14] led by composer and performer Pauline Oliveros, as well as with the University of São Paulo, have helped strengthen this emerging community of researchers and practitioners. The field is related to research on distributed creativity.[citation needed]
The Distributed Immersive Performance project is based at the Integrated Media Systems Center at the University of Southern California. [15] Their experiments explore the challenges of creating a seamless environment for remote, synchronous collaboration. [3] The experiments use 3D audio with correct spatial sound localization as well as HD or DV video projected onto wide screen displays to create an immersive virtual space. [3] There are interaction sites set up at various locations on the University of Southern California campus and at several partner locations such as the New World Symphony in Miami Beach, Florida. [3]
The DIAMOUSES project is coordinated by the Music Informatics Lab at the Technological Educational Institute of Crete in Greece. [16] It supports a wide range of networked music performance scenarios with a customizable platform that handles the broadcasting and synchronization of audio and video signals across a network. [2]
The A3Lab team at the Polytechnic University of the Marches conducts research on the use of the wireless medium for uncompressed audio networking in the NMP context. [17] A mix of open-source software, ARM platforms, and dedicated wireless equipment has been documented, especially for outdoor use, where buildings of historical importance or difficult environments (e.g. the sea) can be explored for performance. A premiere of the system was conducted with musicians playing a Stockhausen composition on different boats off the coast of Ancona, Italy. The project also aims to shift music computing from laptops to embedded devices. [18]
Computer music is the application of computing technology in music composition, to help human composers create new music or to have computers independently create music, such as with algorithmic composition programs. It includes the theory and application of new and existing computer software technologies and basic aspects of music, such as sound synthesis, digital signal processing, sound design, sonic diffusion, acoustics, electrical engineering, and psychoacoustics. The field of computer music can trace its roots back to the origins of electronic music, and the first experiments and innovations with electronic instruments at the turn of the 20th century.
A digital audio workstation is an electronic device or application software used for recording, editing and producing audio files. DAWs come in a wide variety of configurations from a single software program on a laptop, to an integrated stand-alone unit, all the way to a highly complex configuration of numerous components controlled by a central computer. Regardless of configuration, modern DAWs have a central interface that allows the user to alter and mix multiple recordings and tracks into a final produced piece.
ChucK is a concurrent, strongly timed audio programming language for real-time synthesis, composition, and performance, which runs on Linux, Mac OS X, Microsoft Windows, and iOS. It is designed to favor readability and flexibility for the programmer over other considerations such as raw performance. It natively supports deterministic concurrency and multiple, simultaneous, dynamic control rates. Another key feature is the ability to live code: adding, removing, and modifying code on the fly, while the program is running, without stopping or restarting. It has a highly precise timing/concurrency model, allowing for arbitrarily fine granularity. It offers composers and researchers a powerful and flexible programming tool for building and experimenting with complex audio synthesis programs, and real-time interactive control.
Audio Stream Input/Output (ASIO) is a computer audio interface driver protocol for digital audio specified by Steinberg, providing high data throughput, synchronization, and low latency between a software application and a computer's audio interface or sound card.
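As a hedged illustration of how an application requests a low-latency path to the audio interface, the sketch below uses the Python sounddevice library (a binding to PortAudio, which can sit on top of ASIO drivers on Windows); the sample rate and block size are assumptions, and actual device support varies per machine:

```python
# Illustrative low-latency duplex stream using the Python `sounddevice`
# library (PortAudio binding; on Windows, PortAudio can use ASIO drivers).
# Sample rate and block size are assumptions; adjust to the hardware.

import sounddevice as sd

def passthrough(indata, outdata, frames, time, status):
    """Copy input straight to output, monitoring round-trip latency."""
    if status:
        print(status)       # report underruns/overruns
    outdata[:] = indata     # direct input-to-output loopback

# Small block size plus "low" latency hint asks the driver for its
# tightest buffering, which is where ASIO-class drivers matter.
with sd.Stream(samplerate=48_000, blocksize=64, latency="low",
               channels=2, callback=passthrough):
    sd.sleep(5_000)         # run the loopback for five seconds
```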
Live coding, sometimes referred to as on-the-fly programming, just in time programming and conversational programming, makes programming an integral part of the running program.
Planet CCRMA is a collection of open-source tools and applications designed for music composition, sound synthesis, and audio processing, distributed as Red Hat packages that help set up and optimize a Red Hat-based workstation for audio work. It is primarily built on the Linux operating system and integrates various software packages, making it suitable for music researchers, composers, and educators.
David Pope Anderson is an American research scientist at the Space Sciences Laboratory, at the University of California, Berkeley, and an adjunct professor of computer science at the University of Houston. Anderson leads the SETI@home, BOINC, Bossa, and Bolt software projects.
Distributed creativity is a sociocultural framework for understanding how creativity emerges from the interactions of people, objects and their environment. It is a response to cognitive accounts of creativity exemplified by the widely used four Ps framework. According to Vlad Petre Glǎveanu, "instead of an individual, an object or a place in which to 'locate' creativity, [the] aim here is to distribute it between people, objects and places."
ACM Multimedia (ACM-MM) is the Association for Computing Machinery (ACM)'s annual conference on multimedia, sponsored by the SIGMM special interest group on multimedia in the ACM. SIGMM specializes in the field of multimedia computing, from underlying technologies to applications, theory to practice, and servers to networks to devices.
Group information management (GIM) is an extension of personal information management (PIM) "as it functions in more public spheres" as a result of people's efforts to share and co-manage information, and has been a topic of study for researchers in PIM, human–computer interaction (HCI), and computer supported cooperative work (CSCW). People acquire, organize, maintain, retrieve and use information items to support individual needs, but these PIM activities are often embedded in group or organizational contexts and performed with sharing in mind. The act of sharing moves personal information into spheres of group activity and also creates tensions that shape what and how the information is shared. The practice and the study of GIM focuses on this interaction between personal information and group contexts.
Sonic interaction design is the study and exploitation of sound as one of the principal channels conveying information, meaning, and aesthetic/emotional qualities in interactive contexts. Sonic interaction design is at the intersection of interaction design and sound and music computing. If interaction design is about designing objects people interact with, and such interactions are facilitated by computational means, in sonic interaction design, sound is mediating interaction either as a display of processes or as an input medium.
A distributed operating system is system software over a collection of independent, networked, communicating, and physically separate computational nodes. They handle jobs which are serviced by multiple CPUs. Each individual node holds a specific software subset of the global aggregate operating system. Each subset is a composite of two distinct service provisioners. The first is a ubiquitous minimal kernel, or microkernel, that directly controls that node's hardware. Second is a higher-level collection of system management components that coordinate the node's individual and collaborative activities. These components abstract microkernel functions and support user applications.
Gareth Loy is an American author, composer, musician and mathematician. He is the author of the two-volume series about the intersection of music and mathematics titled Musimathics. He was an early practitioner of music synthesis at Stanford, and wrote the first software compiler for the Systems Concepts Digital Synthesizer. More recently, he has published the freeware music programming language Musimat, designed specifically for subjects covered in Musimathics, available as a free download. Although Musimathics was first published in 2006 and 2007, the series continues to evolve with updates by the author and publishers. The texts are being used in numerous mathematics and music classes at both the graduate and undergraduate level, with more current reviews noting that the originally targeted academic distribution is now reaching a much wider audience. Music synthesis pioneer Max Mathews stated that his books are a "guided tour-de-force of the mathematics of physics and music. He has always been a brilliantly clear writer. In Musimathics, he is also an encyclopedic writer. He covers everything needed to understand existing music and musical instruments, or to create new music or new instruments. His book and John R. Pierce's famous The Science of Musical Sound belong on everyone's bookshelf, and the rest of the shelf can be empty." John Chowning states, in regard to Nekyia and the Samson Box, "After completing the software, Loy composed Nekyia, a beautiful and powerful composition in four channels that fully exploited the capabilities of the Samson Box. As an integral part of the community, Loy has paid back many times over all that he learned, by conceiving the (Samson) system with maximal generality such that it could be used for research projects in psychoacoustics as well as for hundreds of compositions by a host of composers having diverse compositional strategies."
Sound and music computing (SMC) is a research field that studies the whole sound and music communication chain from a multidisciplinary point of view. By combining scientific, technological and artistic methodologies it aims at understanding, modeling and generating sound and music through computational approaches.
The International Society for Music Information Retrieval (ISMIR) is an international forum for research on the organization of music-related data. It started as an informal group steered by an ad hoc committee in 2000, which established a yearly symposium, whence "ISMIR", then standing for International Symposium on Music Information Retrieval. It was turned into a conference in 2002 while retaining the acronym. ISMIR was incorporated in Canada on July 4, 2008.
Sha Xin Wei is a media philosopher and professor at the School of Arts, Media + Engineering in the Herberger Institute for Design and the Arts + Ira A. Fulton Schools of Engineering at Arizona State University. He has created ateliers such as the Synthesis Center at Arizona State University, the Topological Media Lab at Concordia University, and Weightless Studio in Montreal for experiential experiments and experimental experience.
Carl Gutwin is a Canadian computer scientist, professor and the director of the Human–computer interaction (HCI) Lab at the University of Saskatchewan. He is also a co-theme leader in the SurfNet research network and was a past holder of a Canada Research Chair in Next-Generation Groupware. Gutwin is known for his contributions in HCI ranging from the technical aspects of systems architectures, to the design and implementation of interaction techniques, and to social theory as applied to design. Gutwin was papers co-chair at CHI 2011 and was a conference co-chair of Computer Supported Cooperative Work (CSCW) 2010.
Saul Greenberg is a computer scientist, a Faculty Professor and Professor Emeritus at the University of Calgary. He was awarded ACM Fellowship in 2012 for contributions to computer supported cooperative work and ubiquitous computing.
The Internet of Musical Things is a research area that aims to bring Internet of Things connectivity to musical and artistic practices. Moreover, it encompasses concepts coming from music computing, ubiquitous music, human-computer interaction, artificial intelligence, augmented reality, virtual reality, gaming, participative art, and new interfaces for musical expression. From a computational perspective, IoMusT refers to local or remote networks embedded with devices capable of generating and/or playing musical content.