Physical modelling synthesis


Physical modelling synthesis refers to sound synthesis methods in which the waveform of the sound to be generated is computed using a mathematical model: a set of equations and algorithms that simulate a physical source of sound, usually a musical instrument.


General methodology

The model attempts to replicate the laws of physics that govern sound production and typically has several parameters. Some are constants that describe the physical materials and dimensions of the instrument, while others are time-dependent functions that describe the player's interaction with it, such as plucking a string or covering toneholes.

For example, to model the sound of a drum, there would be a mathematical model of how striking the drumhead injects energy into a two-dimensional membrane. Incorporating this, a larger model would simulate the properties of the membrane (mass density, stiffness, etc.), its coupling with the resonance of the cylindrical body of the drum, and the conditions at its boundaries (a rigid termination to the drum's body), describing its movement over time and thus its generation of sound.
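As an illustration, the following is a minimal sketch of one way such a membrane model can be discretised, using an explicit finite-difference approximation of the two-dimensional wave equation with a rigid boundary and a crude initial-displacement "strike". The grid size, wave speed, damping and strike position are arbitrary illustrative values, not measurements of any particular drum.

```python
import numpy as np

def strike_membrane(n=64, c=0.5, damping=0.001, steps=8000, strike=(32, 32)):
    """Explicit finite-difference simulation of a square membrane clamped at its rim.

    n        -- grid points per side (the boundary stays at zero: a rigid termination)
    c        -- wave speed in grid units per time step (keep c <= ~0.7 for stability)
    damping  -- simple frequency-independent loss term
    strike   -- grid point where the strike injects energy as an initial displacement
    """
    u_prev = np.zeros((n, n))          # displacement one time step ago
    u = np.zeros((n, n))               # current displacement
    u[strike] = 1.0                    # crude strike model: an initial displacement
    output = np.zeros(steps)
    for t in range(steps):
        # discrete Laplacian over the interior points
        lap = (u[:-2, 1:-1] + u[2:, 1:-1] + u[1:-1, :-2] + u[1:-1, 2:]
               - 4.0 * u[1:-1, 1:-1])
        u_next = np.zeros_like(u)
        u_next[1:-1, 1:-1] = ((2.0 - damping) * u[1:-1, 1:-1]
                              - (1.0 - damping) * u_prev[1:-1, 1:-1]
                              + (c ** 2) * lap)
        u_prev, u = u, u_next
        output[t] = u[n // 4, n // 4]  # "listen" at a fixed point on the membrane
    return output
```

The constants of the sketch (grid resolution, wave speed, damping) play the role of the material and dimensional parameters described above, while the strike, here reduced to an initial displacement, stands in for the time-dependent excitation.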

Similar stages can be modelled in an instrument such as the violin, although here the excitation is provided by the slip-stick behavior of the bow against the string; further stages include the width of the bow, the resonance and damping behavior of the strings, the transfer of string vibrations through the bridge and, finally, the resonance of the soundboard in response to those vibrations.

The same approach has also been applied to simulate voice and speech sounds.[1] In this case the synthesizer includes mathematical models of the vocal fold oscillation and the associated laryngeal airflow, and of the consequent acoustic wave propagation along the vocal tract. It may also contain an articulatory model that controls the vocal tract shape in terms of the position of the lips, tongue and other organs.

Physical modelling was not a new concept in acoustics and synthesis: it had already been implemented using finite difference approximations of the wave equation by Hiller and Ruiz in 1971.[citation needed] However, it was not until the development of the Karplus–Strong algorithm, its subsequent refinement and generalization into the highly efficient digital waveguide synthesis by Julius O. Smith III and others,[citation needed] and the increase in DSP power in the late 1980s[2] that commercial implementations became feasible.

Yamaha contracted with Stanford University in 1989 [3] to jointly develop digital waveguide synthesis; subsequently, most patents related to the technology are owned by Stanford or Yamaha.

The first commercially available physical modelling synthesizer made using waveguide synthesis was the Yamaha VL1 in 1994. [4] [5]

While the efficiency of digital waveguide synthesis made physical modelling feasible on common DSP hardware and native processors, the convincing emulation of physical instruments often requires the introduction of non-linear elements, scattering junctions, etc. In these cases, digital waveguides are often combined with FDTD, [6] finite element or wave digital filter methods, increasing the computational demands of the model. [7]

Technologies associated with physical modelling

Examples of physical modelling synthesis include the Karplus–Strong string synthesis algorithm, digital waveguide synthesis, banded waveguide synthesis, finite-difference time-domain (FDTD) methods, and articulatory synthesis of the voice. Commercial implementations include the Yamaha VL1 and the Korg Z1.

Related Research Articles

Additive synthesis is a sound synthesis technique that creates timbre by adding sine waves together.
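As a minimal sketch of the idea, the short Python function below sums a handful of harmonically related sine partials; the partial amplitudes chosen here are arbitrary.

```python
import numpy as np

def additive_tone(f0=220.0, partial_amps=(1.0, 0.5, 0.33, 0.25), sr=44100, dur=1.0):
    """Sum sine partials at integer multiples of f0 (amplitudes are illustrative)."""
    t = np.arange(int(sr * dur)) / sr
    tone = sum(a * np.sin(2 * np.pi * f0 * (k + 1) * t)
               for k, a in enumerate(partial_amps))
    return tone / np.max(np.abs(tone))  # normalise to the range -1..1
```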

Digital synthesizer

A digital synthesizer is a synthesizer that uses digital signal processing (DSP) techniques to make musical sounds, in contrast to older analog synthesizers, which produce music using analog electronics, and samplers, which play back digital recordings of acoustic, electric, or electronic instruments. Some digital synthesizers emulate analog synthesizers, while others include sampling capability in addition to digital synthesis.

Electronic musical instrument

An electronic musical instrument or electrophone is a musical instrument that produces sound using electronic circuitry. Such an instrument sounds by outputting an electrical, electronic or digital audio signal that ultimately is plugged into a power amplifier which drives a loudspeaker, creating the sound heard by the performer and listener.

Frequency modulation synthesis

Frequency modulation synthesis is a form of sound synthesis in which the frequency of a waveform is varied by a modulating signal: the instantaneous frequency of the carrier oscillator is altered in accordance with the amplitude of the modulator.
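A minimal two-operator sketch, written in the phase-modulation form commonly used in digital implementations, might look as follows; the carrier and modulator frequencies and the modulation index are arbitrary example values.

```python
import numpy as np

def fm_tone(fc=440.0, fm=440.0, index=2.0, sr=44100, dur=1.0):
    """Two-operator FM: a modulating sine varies the phase of a carrier sine.

    fc    -- carrier frequency in Hz
    fm    -- modulator frequency in Hz
    index -- modulation index (larger values spread energy into more sidebands)
    """
    t = np.arange(int(sr * dur)) / sr
    return np.sin(2 * np.pi * fc * t + index * np.sin(2 * np.pi * fm * t))
```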

Subtractive synthesis is a method of sound synthesis in which overtones of an audio signal are attenuated by a filter to alter the timbre of the sound.
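A minimal sketch of the approach: start from a harmonically rich sawtooth and attenuate its overtones with a simple one-pole low-pass filter. The oscillator frequency and cutoff below are arbitrary example values.

```python
import numpy as np

def subtractive_tone(f0=110.0, cutoff=1000.0, sr=44100, dur=1.0):
    """Naive sawtooth oscillator followed by a one-pole low-pass filter."""
    t = np.arange(int(sr * dur)) / sr
    saw = 2.0 * (t * f0 - np.floor(0.5 + t * f0))   # sawtooth in the range -1..1
    a = np.exp(-2.0 * np.pi * cutoff / sr)          # one-pole low-pass coefficient
    out = np.zeros_like(saw)
    y = 0.0
    for i, x in enumerate(saw):
        y = (1.0 - a) * x + a * y                   # attenuate overtones above the cutoff
        out[i] = y
    return out
```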

Karplus–Strong string synthesis is a method of physical modelling synthesis that loops a short waveform through a filtered delay line to simulate the sound of a hammered or plucked string or some types of percussion.
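The algorithm is compact enough to sketch directly. In the version below the delay-line length sets the pitch, the initial buffer is a burst of white noise, and the decay constant is an arbitrary example value.

```python
import numpy as np

def karplus_strong(frequency=220.0, sr=44100, dur=1.0, decay=0.996):
    """Loop a short noise burst through a delay line with a two-point averaging filter."""
    delay = int(sr / frequency)                 # delay-line length determines the pitch
    buf = np.random.uniform(-1.0, 1.0, delay)   # the "short waveform": white noise
    out = np.zeros(int(sr * dur))
    for i in range(len(out)):
        out[i] = buf[i % delay]
        # averaging adjacent samples acts as a gentle low-pass filter, so the upper
        # harmonics die away faster, much as on a real plucked string
        buf[i % delay] = decay * 0.5 * (buf[i % delay] + buf[(i + 1) % delay])
    return out
```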

Music technology (electronic and digital)

Digital music technology encompasses the use of digital instruments to produce, perform or record music. These instruments vary, including computers, electronic effects units, software, and digital audio equipment. Digital music technology is used in performance, playback, recording, composition, mixing, analysis and editing of music, by professionals in all parts of the music industry.

Wavetable synthesis is a sound synthesis technique used to create quasi-periodic waveforms often used in the production of musical tones or notes.

Digital waveguide synthesis is the synthesis of audio using a digital waveguide. Digital waveguides are efficient computational models for physical media through which acoustic waves propagate. For this reason, digital waveguides constitute a major part of most modern physical modeling synthesizers.
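A minimal sketch of the idea for an ideal plucked string: two delay lines carry the left- and right-travelling waves, which reflect with inversion and a small loss at the two terminations. The loss factor, pluck position and pickup position below are arbitrary illustrative values.

```python
import numpy as np

def waveguide_string(frequency=220.0, sr=44100, dur=1.0, loss=0.995, pluck_pos=0.3):
    """Ideal string modelled as two delay lines (travelling-wave components)."""
    n = int(sr / (2 * frequency))    # samples along the string; the 2n-sample loop sets the pitch
    # triangular initial displacement around the pluck point, split between the two rails
    shape = np.maximum(0.0, 1.0 - np.abs(np.linspace(0.0, 1.0, n) - pluck_pos) / 0.1)
    right = shape / 2.0              # right-going travelling wave
    left = shape / 2.0               # left-going travelling wave
    pickup = n // 5                  # observation point along the string
    out = np.zeros(int(sr * dur))
    for i in range(len(out)):
        out[i] = right[pickup] + left[pickup]
        # reflect at the terminations (inversion plus a small loss), then advance
        # each rail by one sample in its direction of travel
        new_right_in = -loss * left[0]    # reflection at one end (the "nut")
        new_left_in = -loss * right[-1]   # reflection at the other end (the "bridge")
        right = np.concatenate(([new_right_in], right[:-1]))
        left = np.concatenate((left[1:], [new_left_in]))
    return out
```

Practical waveguide instruments replace the fixed loss factor with filters that model frequency-dependent damping and add the non-linear excitation elements discussed above.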

Yamaha DX7

The Yamaha DX7 is a synthesizer manufactured by Yamaha Corporation from 1983 to 1989. It was the first successful digital synthesizer and is one of the best-selling synthesizers in history, selling more than 200,000 units.

John Chowning

John M. Chowning is an American composer, musician, inventor, and professor best known for his work at Stanford University, where he founded the Center for Computer Research in Music and Acoustics (CCRMA) in 1975 and developed the digital implementation of FM synthesis as well as digital sound spatialization.

Sound Blaster 16

The Sound Blaster 16 is a series of sound cards by Creative Technology, first released in June 1992 for PCs with an ISA or PCI slot. It was the successor to the Sound Blaster Pro series and introduced CD-quality digital audio to the Sound Blaster line. For optional wavetable synthesis, the Sound Blaster 16 also added an expansion header for add-on MIDI daughterboards, called the Wave Blaster connector, and a game port for connecting external MIDI sound modules.

Synthesizer

A synthesizer is an electronic musical instrument that generates audio signals. Synthesizers typically create sounds by generating waveforms through methods including subtractive synthesis, additive synthesis and frequency modulation synthesis. These sounds may be altered by components such as filters, which cut or boost frequencies; envelopes, which control articulation, or how notes begin and end; and low-frequency oscillators, which modulate parameters such as pitch, volume, or filter characteristics affecting timbre. Synthesizers are typically played with keyboards or controlled by sequencers, software or other instruments, and may be synchronized to other equipment via MIDI.

Banded waveguide synthesis is a physical modelling synthesis method for efficiently simulating the sounds of dispersive sounding objects, that is, objects with strongly inharmonic resonant frequencies. It can be used to model the sound of instruments based on elastic solids, such as vibraphone and marimba bars, singing bowls and bells, as well as instruments with inharmonic partials such as membranes or plates; simulations of tabla drums and cymbals, for example, have been implemented with this method. Because banded waveguides retain the dynamics of the system, complex non-linear excitations can be implemented. The method was originally invented in 1999 by Georg Essl and Perry Cook to synthesize the sound of bowed vibraphone bars.

Gnuspeech is an extensible text-to-speech software package that produces artificial speech output based on real-time articulatory speech synthesis by rules. It converts text strings into phonetic descriptions, aided by a pronouncing dictionary, letter-to-sound rules, and rhythm and intonation models; transforms the phonetic descriptions into parameters for a low-level articulatory speech synthesizer; and uses these to drive an articulatory model of the human vocal tract, producing output suitable for normal computer sound devices at the rate of adult speech or faster.

David Aaron Jaffe is an American composer who has written over ninety works for orchestra, chorus, chamber ensembles, and electronics. He is best known for using technology as an electronic-music or computer-music composer in works such as Silicon Valley Breakdown. He is also known for his development of computer music algorithmic innovations, such as the physical modeling of plucked and bowed strings, as well as for his development of music software such as the NeXT Music Kit and the Universal Audio UAD-2/Apollo/LUNA Recording System.

Korg Kronos

The Kronos is a music workstation manufactured by Korg that combines nine different synthesizer sound engines with a sequencer, digital recorder, effects, a color touchscreen display and a keyboard. Korg's flagship synthesizer series at the time, the Kronos was announced at the winter NAMM Show in Anaheim, California, in January 2011.

Korg Z1

The Korg Z1 is a digital synthesizer released by Korg in 1997. The Z1 built upon the foundation set by the monophonic Prophecy released two years prior by offering 12-note polyphony and featuring expanded oscillator options, a polyphonic arpeggiator and an XY touchpad for enhanced performance interaction. It was the world's first multitimbral physical modelling synthesizer.

Korg OASYS PCI

The Korg OASYS PCI is a DSP-based PCI card for PC and Mac released in 1999. It offers many synthesis engines, from sampling and subtractive to FM and physical modelling. Because of its high price and low polyphony, production was stopped in 2001; about 2,000 cards were produced.

The Nautilus is a music workstation manufactured by Korg, a successor to Kronos 2, which comes with Kronos' nine different synthesizer sound engines and other similar features. It was announced in November 2020 with availability in January 2021.

References

Footnotes

  1. Englert, Marina; Madazio, Glaucya; Gielow, Ingrid; Lucero, Jorge; Behlau, Mara (2017). "Perceptual Error Analysis of Human and Synthesized Voices". Journal of Voice. 31 (4): 516.e5–516.e18. doi:10.1016/j.jvoice.2016.12.015. PMID 28089485.
  2. Vicinanza, D. (2007). "ASTRA Project on the Grid". Archived from the original on 2013-11-04. Retrieved 2013-10-23.
  3. Johnstone, B. (1993). "Wave of the Future". http://www.harmony-central.com/Computer/synth-history.html Archived 2012-04-18 at the Wayback Machine.
  4. Wood, S. G. (2007). Objective Test Methods for Waveguide Audio Synthesis. Master's thesis, Brigham Young University. http://contentdm.lib.byu.edu/cdm4/item_viewer.php?CISOROOT=/ETD&CISOPTR=976&CISOBOX=1&REC=19 Archived 2011-06-11 at the Wayback Machine.
  5. "Yamaha VL1". Sound On Sound. July 1994. Archived from the original on 8 June 2015.
  6. The NESS project. http://www.ness.music.ed.ac.uk
  7. Webb, C.; Bilbao, S. "On the limits of real-time physical modelling synthesis with a modular environment". http://www.physicalaudio.co.uk
