SDIF

SDIF (Sound Description Interchange Format) is a standard for the well-defined and extensible interchange of a variety of sound descriptions. SDIF consists of a fixed framework plus a large, extensible collection of sound description types, including time-domain descriptions (analogous to conventional audio file formats), sinusoidal models, other spectral models, and higher-level models. SDIF was jointly developed by IRCAM and CNMAT.
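To make the fixed framework concrete, the sketch below packs a simplified SDIF-style frame (a typed, time-tagged container holding one or more data matrices). The field order, the big-endian packing, the `1TRC` (sinusoidal tracks) signature, and the data-type code used here are a hypothetical simplification for illustration, not a conforming implementation; consult the SDIF specification for the exact layout.

```python
import struct

def pack_frame(signature: bytes, time: float, stream_id: int, matrices: list) -> bytes:
    """Pack a simplified SDIF-style frame (hypothetical field order, big-endian).

    Each matrix is a (signature, rows, cols, values) tuple of float32 data.
    """
    body = struct.pack(">dii", time, stream_id, len(matrices))
    for sig, rows, cols, values in matrices:
        data = struct.pack(f">{rows * cols}f", *values)
        data += b"\x00" * (-len(data) % 8)  # pad matrix data to an 8-byte boundary
        # 1 = placeholder data-type code for float32 (not the real SDIF code)
        body += struct.pack(">4sIII", sig, 1, rows, cols) + data
    # frame size counts everything after the signature and size fields themselves
    return struct.pack(">4sI", signature, len(body)) + body

# One sinusoidal-track row: index, frequency, amplitude, phase.
frame = pack_frame(b"1TRC", 0.5, 0, [(b"1TRC", 1, 4, [1.0, 440.0, 0.1, 0.0])])
```

A real SDIF file is then just a sequence of such frames, sorted by time, preceded by an opening header frame.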

Additive synthesis is a sound synthesis technique that creates timbre by adding sine waves together.
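A minimal sketch of additive synthesis in Python; the harmonic recipe below (five harmonics of 220 Hz weighted 1/k) is only an illustrative choice:

```python
import math

def additive_tone(partials, sample_rate=44100, duration=1.0):
    """Sum sine partials; each partial is a (frequency_hz, amplitude) pair."""
    n = int(sample_rate * duration)
    return [sum(amp * math.sin(2 * math.pi * freq * t / sample_rate)
                for freq, amp in partials)
            for t in range(n)]

# Approximate a sawtooth-like timbre from the first five harmonics of 220 Hz,
# with harmonic k weighted by 1/k.
tone = additive_tone([(220 * k, 1.0 / k) for k in range(1, 6)], duration=0.01)
```

Sinusoidal-model SDIF types describe exactly this kind of data: lists of partials, each with a frequency and amplitude that evolve over time.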

IRCAM

IRCAM is a French institute for the science of music and sound and for avant-garde electro-acoustic art music. It is situated next to, and organisationally linked with, the Centre Georges Pompidou in Paris. The extension of the building was designed by Renzo Piano and Richard Rogers. Much of the institute is located underground, beneath the fountain to the east of the buildings.

Related Research Articles

The Moving Picture Experts Group (MPEG) is a working group of authorities formed by ISO and IEC to set standards for audio and video compression and transmission. It was established in 1988 on the initiative of Hiroshi Yasuda and Leonardo Chiariglione, who has chaired the group since its inception. The first MPEG meeting was held in May 1988 in Ottawa, Canada. As of late 2005, MPEG had grown to include approximately 350 members per meeting from various industries, universities, and research institutions. MPEG's official designation is ISO/IEC JTC 1/SC 29/WG 11 – Coding of moving pictures and audio.

Waveform Audio File Format is a Microsoft and IBM audio file format standard for storing an audio bitstream on PCs. It is an application of the Resource Interchange File Format (RIFF) bitstream format method for storing data in "chunks", and is thus also similar to the 8SVX and AIFF formats used on Amiga and Macintosh computers, respectively. It is the main format used on Microsoft Windows systems for raw and typically uncompressed audio. The usual bitstream encoding is the linear pulse-code modulation (LPCM) format.

Audio Interchange File Format (AIFF) is an audio file format standard used for storing sound data for personal computers and other electronic audio devices. The format was developed by Apple Inc. in 1988 based on Electronic Arts' Interchange File Format and is most commonly used on Apple Macintosh computer systems.

The XML Metadata Interchange (XMI) is an Object Management Group (OMG) standard for exchanging metadata information via Extensible Markup Language (XML).

Open Sound Control (OSC) is a protocol for networking sound synthesizers, computers, and other multimedia devices for purposes such as musical performance or show control. OSC's advantages include interoperability, accuracy, flexibility and enhanced organization and documentation.
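To make the wire format concrete, here is a minimal OSC message encoder covering only int32, float32, and string arguments; the real OSC 1.0 specification also defines blobs, bundles, and further types, so treat this as an illustrative sketch:

```python
import struct

def _osc_str(s: str) -> bytes:
    """OSC strings are null-terminated and padded to a 4-byte boundary."""
    b = s.encode("ascii") + b"\x00"
    return b + b"\x00" * (-len(b) % 4)

def osc_message(address: str, *args) -> bytes:
    """Encode a basic OSC message: address, type-tag string, big-endian args."""
    tags, payload = ",", b""
    for a in args:
        if isinstance(a, int):
            tags += "i"; payload += struct.pack(">i", a)
        elif isinstance(a, float):
            tags += "f"; payload += struct.pack(">f", a)
        elif isinstance(a, str):
            tags += "s"; payload += _osc_str(a)
        else:
            raise TypeError(f"unsupported OSC argument type: {type(a)}")
    return _osc_str(address) + _osc_str(tags) + payload

msg = osc_message("/synth/freq", 440.0)
```

The resulting bytes can be sent as a single UDP datagram to an OSC-speaking synthesizer with the standard `socket` module.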

The Extensible Data Format (XDF) is an XML standard developed by NASA, meant to be used across scientific disciplines. In many ways it is akin to XSIL, the Extensible Scientific Interchange Language. NASA provides two XDF APIs, in Perl and in Java.

The Digital Audio Stationary Head or DASH standard is a reel-to-reel, digital audio tape format introduced by Sony in early 1982 for high-quality multitrack studio recording and mastering, as an alternative to analog recording methods. DASH is capable of recording two channels of audio on a quarter-inch tape, and 24 or 48 tracks on 12-inch-wide (13 mm) tape on open reels of up to 14 inches. The data is recorded on the tape linearly, with a stationary recording head, as opposed to the DAT format, where data is recorded helically with a rotating head, in the same manner as a VCR. The audio data is encoded as linear PCM and boasts strong cyclic redundancy check (CRC) error correction, allowing the tape to be physically edited with a razor blade as analog tape would, e.g. by cutting and splicing, and played back with no loss of signal. In a two-track DASH recorder, the digital data is recorded onto the tape across nine data tracks: eight for the digital audio data and one for the CRC data; there is also provision for two linear analog cue tracks and one additional linear analog track dedicated to recording time code.

A phase vocoder is a type of vocoder which can scale both the frequency and time domains of audio signals by using phase information. The computer algorithm allows frequency-domain modifications to a digital sound file.
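A bare-bones phase-vocoder time stretch can be sketched with NumPy. This is an illustrative simplification (Hann-windowed STFT, phase accumulation from measured bin phase increments, simple overlap-add, no phase locking), not production code:

```python
import numpy as np

def time_stretch(x, rate, n_fft=1024, hop=256):
    """Phase-vocoder time stretch: rate > 1 shortens, rate < 1 lengthens."""
    win = np.hanning(n_fft)
    frames = np.array([np.fft.rfft(win * x[i:i + n_fft])
                       for i in range(0, len(x) - n_fft, hop)])
    # expected phase advance per hop for each FFT bin
    omega = 2 * np.pi * np.arange(n_fft // 2 + 1) * hop / n_fft
    positions = np.arange(0, len(frames) - 1, rate)
    phase = np.angle(frames[0])
    y = np.zeros(len(positions) * hop + n_fft)
    for k, pos in enumerate(positions):
        i = int(pos)
        mag = np.abs(frames[i])
        # deviation of the measured phase increment from the expected one
        dphi = np.angle(frames[i + 1]) - np.angle(frames[i]) - omega
        dphi -= 2 * np.pi * np.round(dphi / (2 * np.pi))  # wrap to [-pi, pi]
        phase = phase + omega + dphi  # accumulate synthesis phase
        y[k * hop:k * hop + n_fft] += win * np.fft.irfft(mag * np.exp(1j * phase))
    return y
```

Because magnitudes and phases are modified per frame in the frequency domain, duration changes without the pitch shift that naive resampling would cause.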

Spectral music is a compositional technique developed in the 1970s, using computer analysis of the quality of timbre in acoustic music or artificial timbres derived from synthesis.

The Extensible Metadata Platform (XMP) is an ISO standard, originally created by Adobe Systems Inc., for the creation, processing and interchange of standardized and custom metadata for digital documents and data sets.

MPEG-4 Part 11, Scene description and application engine, was published as ISO/IEC 14496-11 in 2005. MPEG-4 Part 11 is also known as BIFS, XMT, and MPEG-J.

The Center for New Music and Audio Technologies (CNMAT) is a multidisciplinary research center within the Department of Music at the University of California, Berkeley. The Center's goal is to provide a common ground where music, cognitive science, computer science, and other disciplines meet to investigate, invent, and implement creative tools for composition, performance, and research. It was founded in the 1980s by composer Richard Felciano.

Spectral flatness or tonality coefficient, also known as Wiener entropy, is a measure used in digital signal processing to characterize an audio spectrum. Spectral flatness is typically measured in decibels, and provides a way to quantify how noise-like a sound is, as opposed to being tone-like.
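The measure is the ratio of the geometric mean to the arithmetic mean of the power spectrum, which the following sketch computes directly (assuming strictly positive power values, since the geometric mean involves logarithms):

```python
import math

def spectral_flatness(power_spectrum):
    """Geometric mean / arithmetic mean of a strictly positive power spectrum.

    Returns a value in (0, 1]: 1.0 for a perfectly flat (noise-like) spectrum,
    values near 0 for a tone-like spectrum dominated by a few peaks.
    """
    n = len(power_spectrum)
    geometric = math.exp(sum(math.log(p) for p in power_spectrum) / n)
    arithmetic = sum(power_spectrum) / n
    return geometric / arithmetic

flat = spectral_flatness([1.0] * 8)               # white-noise-like: 1.0
peaky = spectral_flatness([100.0] + [1e-6] * 7)   # single peak: near 0
```

On the decibel scale this becomes 10 * log10(flatness), so a perfectly flat spectrum measures 0 dB and tone-like spectra go strongly negative.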

The spectral centroid is a measure used in digital signal processing to characterise a spectrum. It indicates where the "center of mass" of the spectrum is located. Perceptually, it has a robust connection with the impression of "brightness" of a sound.
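The "center of mass" interpretation corresponds to a magnitude-weighted mean of the bin frequencies, as in this small sketch:

```python
def spectral_centroid(freqs, magnitudes):
    """Magnitude-weighted mean frequency of a spectrum, in the units of freqs."""
    total = sum(magnitudes)
    if total == 0:
        raise ValueError("spectrum has no energy")
    return sum(f * m for f, m in zip(freqs, magnitudes)) / total

# Equal energy at 100 Hz and 300 Hz puts the centroid at 200 Hz.
c = spectral_centroid([100.0, 300.0], [1.0, 1.0])
```

Shifting energy toward higher bins raises the centroid, matching the perceptual association with "brighter" sounds.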

Sound and music computing (SMC) is a research field that studies the whole sound and music communication chain from a multidisciplinary point of view. By combining scientific, technological and artistic methodologies it aims at understanding, modeling and generating sound and music through computational approaches.
