Electronic music technology is the use of electro-acoustic, analog, or digital instruments, computers, electronic effects units, software, or digital audio equipment by a musician, composer, sound engineer, DJ, or record producer to make, perform, or record music. The term usually refers to the electronic devices, electronic and digital instruments, computer hardware, and computer software used in the performance, playback, recording, composition, mixing, analysis, and editing of music.
A computer is a machine that can be instructed to carry out sequences of arithmetic or logical operations automatically via computer programming. Modern computers can follow generalized sets of operations, called programs, which enable them to perform an extremely wide range of tasks. A "complete" computer, including the hardware, the operating system, and the peripheral equipment required for "full" operation, can be referred to as a computer system. The term may also be used for a group of computers that are connected and work together, in particular a computer network or computer cluster.
An effects unit or effects pedal is an electronic or digital device that alters the sound of a musical instrument or other audio source. Common effects include distortion/overdrive, often used with electric guitar in electric blues and rock music; dynamic effects such as volume pedals and compressors, which affect loudness; filters such as wah-wah pedals and graphic equalizers, which modify frequency ranges; modulation effects, such as chorus, flangers and phasers; pitch effects such as pitch shifters; and time effects, such as reverb and delay, which create echoing sounds and emulate the sound of different spaces.
Computer software, or simply software, is a collection of data or computer instructions that tell the computer how to work. This is in contrast to physical hardware, from which the system is built and actually performs the work. In computer science and software engineering, computer software is all information processed by computer systems, programs and data. Computer software includes computer programs, libraries and related non-executable data, such as online documentation or digital media. Computer hardware and software require each other and neither can be realistically used on its own.
Music technology is connected to both artistic and technological creativity. Musicians and music technology experts constantly strive to devise new forms of expression through music, and they are physically creating new devices and software to enable them to do so. Although in the 2010s, the term is most commonly used in reference to modern electronic devices and computer software such as digital audio workstations and other digital sound manipulation software, electronic and digital musical technologies have precursors in the analog music technologies of the early 20th century, such as the electromechanical Hammond organ, which was invented in 1929. In the 2010s, the ontological range of music technology has greatly increased, and it may now be electronic, digital, software-based or indeed even purely conceptual.
A digital audio workstation (DAW) is an electronic device or application software used for recording, editing and producing audio files. DAWs come in a wide variety of configurations from a single software program on a laptop, to an integrated stand-alone unit, all the way to a highly complex configuration of numerous components controlled by a central computer. Regardless of configuration, modern DAWs have a central interface that allows the user to alter and mix multiple recordings and tracks into a final produced piece.
Electric music technology refers to musical instruments and recording devices that use electrical circuits, often combined with mechanical technologies. Examples of electric musical instruments include the electro-mechanical electric piano, the electric guitar, the electro-mechanical Hammond organ, and the electric bass. None of these electric instruments produces a sound that is audible to the performer or audience in a performance setting unless it is connected to an instrument amplifier and loudspeaker cabinet, which make it loud enough for the performers and audience to hear. The amplifier and loudspeaker are separate from the instrument in the case of the electric guitar, the electric bass, some electric organs, and most electric pianos; other electric organs and electric pianos include the amplifier and speaker cabinet within the instrument's main housing.
The Hammond organ is an electric organ, invented by Laurens Hammond and John M. Hanert and first manufactured in 1935. Various models have been produced, most of which use sliding drawbars to specify a variety of sounds. Until 1975, Hammond organs generated sound by creating an electric current from rotating a metal tonewheel near an electromagnetic pickup, and then strengthening the signal with an amplifier so it can drive a speaker cabinet. The organ is commonly used with, and associated with, the Leslie speaker.
Music technology is taught at many different educational levels, including college diplomas and university degrees at the undergraduate and graduate level. The study of music technology is usually concerned with the creative use of technology for creating new sounds, performing, recording, programming sequencers or other music-related electronic devices, and manipulating, mixing and reproducing music. Music technology programs train students for careers in "...sound engineering, computer music, audio-visual production and post-production, mastering, scoring for film and multimedia, audio for games, software development, and multimedia production." Those wishing to develop new music technologies often train to become audio engineers working in R&D. Due to the increasing role of interdisciplinary work in music technology, individuals developing new music technologies may also have backgrounds or training in computer programming, computer hardware design, acoustics, record producing or other fields.
Sound recording and reproduction is an electrical, mechanical, electronic, or digital inscription and re-creation of sound waves, such as spoken voice, singing, instrumental music, or sound effects. The two main classes of sound recording technology are analog recording and digital recording.
Programming is a form of music production and performance in which electronic devices and computer software, such as sequencers, workstations, hardware synthesizers, and samplers, are used to generate the sounds of musical instruments. Programming has been used in most electronic music and hip hop music since the 1990s. It is also frequently used in "modern" pop and rock music from various regions of the world, and sometimes in jazz and contemporary classical music.
A music sequencer is a device or application software that can record, edit, or play back music, by handling note and performance information in several forms, typically CV/Gate, MIDI, or Open Sound Control (OSC), and possibly audio and automation data for DAWs and plug-ins.
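The note and performance information handled by a sequencer can be sketched as a simple ordered event list. The Python sketch below is purely illustrative; the `NoteEvent` fields and `play_order` helper are assumptions for demonstration, not part of any real sequencer's API:

```python
from dataclasses import dataclass

@dataclass
class NoteEvent:
    # Times are in beats; pitch is a MIDI note number (60 = middle C).
    start: float
    duration: float
    pitch: int
    velocity: int  # 0-127, how hard the note is struck

def play_order(events):
    """Return events sorted by start time, as a sequencer schedules playback."""
    return sorted(events, key=lambda e: e.start)

track = [
    NoteEvent(start=1.0, duration=0.5, pitch=64, velocity=100),
    NoteEvent(start=0.0, duration=1.0, pitch=60, velocity=90),
]
ordered = play_order(track)
```

Because the events are data rather than audio, a sequencer can freely edit, quantize, or transpose them before they are sent to a sound generator.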
In the 2010s, electronic and digital music technologies are widely used to assist in music education for training students in high school, college and university music programs. Electronic keyboard labs are used for cost-effective beginner group piano instruction in colleges and universities.
Music education is a field of study associated with the teaching and learning of music. It touches on all learning domains, including the psychomotor domain, the cognitive domain, and, in particular and significant ways, the affective domain, including music appreciation and sensitivity. Music training from preschool through post-secondary education is common in most nations because involvement with music is considered a fundamental component of human culture and behavior. Cultures from around the world have different approaches to music education, largely due to the varying histories and politics. Studies show that teaching music from other cultures can help students perceive unfamiliar sounds more comfortably, and they also show that musical preference is related to the language spoken by the listener and the other sounds they are exposed to within their own culture.
An electronic keyboard or digital keyboard is an electronic musical instrument, an electronic or digital derivative of keyboard instruments. Broadly speaking, the term electronic keyboard or just a keyboard can refer to any type of digital or electronic keyboard instrument. These include synthesizers, digital pianos, stage pianos, electronic organs and digital audio workstations. However, an electronic keyboard is more specifically a synthesizer with a built-in low-wattage power amplifier and small loudspeakers.
Early pioneers included Luigi Russolo, Halim El-Dabh, Pierre Schaeffer, Pierre Henry, Edgard Varèse, Karlheinz Stockhausen, Ikutaro Kakehashi, and King Tubby. Music technology has been, and continues to be, used in many modernist and contemporary experimental music situations to create new sound possibilities.
Luigi Carlo Filippo Russolo was an Italian Futurist painter, composer, builder of experimental musical instruments, and the author of the manifesto The Art of Noises (1913). He is often regarded as one of the first noise music experimental composers with his performances of noise music concerts in 1913–14 and then again after World War I, notably in Paris in 1921. He designed and constructed a number of noise-generating devices called Intonarumori.
Halim Abdul Messieh El-Dabh was an Egyptian American composer, musician, ethnomusicologist, and educator whose career spanned six decades. He is particularly known as an early pioneer of electronic music. In 1944 he composed one of the earliest known works of tape music, or musique concrète. From the late 1950s to the early 1960s he produced influential work at the Columbia-Princeton Electronic Music Center.
Pierre Henri Marie Schaeffer was a French composer, writer, broadcaster, engineer, musicologist and acoustician. His innovative work in both the sciences—particularly communications and acoustics—and the various arts of music, literature and radio presentation after the end of World War II, as well as his anti-nuclear activism and cultural criticism garnered him widespread recognition in his lifetime.
A synthesizer (also spelled synthesiser) is an electronic musical instrument that generates audio signals that are converted to sound through instrument amplifiers and loudspeakers or headphones. Synthesizers may imitate existing sounds, such as traditional instruments (piano, flute), vocals, or natural sounds such as ocean waves, or generate novel electronic timbres that did not exist before. They are often played with a musical keyboard, but they can be controlled via a variety of other input devices, including music sequencers, instrument controllers, fingerboards, guitar synthesizers, wind controllers, and electronic drums. Synthesizers without built-in controllers are often called sound modules, and are controlled via USB, MIDI, or CV/gate using a controller device, often a MIDI keyboard.
An electronic musical instrument is a musical instrument that produces sound using electronic circuitry. Such an instrument sounds by outputting an electrical, electronic or digital audio signal that ultimately is plugged into a power amplifier which drives a loudspeaker, creating the sound heard by the performer and listener.
An instrument amplifier is an electronic device that converts the often barely audible or purely electronic signal of a musical instrument into a larger electronic signal that can be fed to a loudspeaker. Instrument amplifiers are used with musical instruments such as the electric guitar, electric bass, electric organ, synthesizer, and drum machine to give the signal from the pickup or other sound source enough power, via a power amplifier stage, to drive one or more loudspeakers so that performers and audience can hear it.
Synthesizers use various methods to generate a signal. Among the most popular waveform synthesis techniques are subtractive synthesis, additive synthesis, wavetable synthesis, frequency modulation synthesis, phase distortion synthesis, physical modelling synthesis and sample-based synthesis. Less common synthesis types include subharmonic synthesis, a form of additive synthesis via subharmonics (used by the Mixtur-Trautonium), and granular synthesis, a sample-based technique operating on grains of sound that generally produces soundscapes or clouds. In the 2010s, synthesizers are used in many genres of pop, rock and dance music, and contemporary classical composers of the 20th and 21st centuries write compositions for synthesizer.
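As a minimal illustration of one of these techniques, additive synthesis builds a tone by summing sine-wave partials at integer multiples of a fundamental frequency. The sketch below is illustrative only (the function name and parameters are assumptions, not any particular synthesizer's design):

```python
import math

def additive_wave(freq, harmonics, sample_rate=8000, n_samples=8000):
    """Additive synthesis: sum sine partials at integer multiples of freq.
    `harmonics` maps harmonic number -> amplitude."""
    samples = []
    for n in range(n_samples):
        t = n / sample_rate
        s = sum(amp * math.sin(2 * math.pi * freq * h * t)
                for h, amp in harmonics.items())
        samples.append(s)
    return samples

# One second of a 220 Hz tone: strong fundamental, weaker 2nd/3rd harmonics.
wave = additive_wave(220.0, {1: 1.0, 2: 0.5, 3: 0.25})
```

Changing the relative amplitudes of the harmonics changes the timbre, which is the essential control offered by additive instruments.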
The development of the modern synthesizer was spurred on by the invention of the transistor in 1947. Lightweight transistors made it possible to build synthesizers that were far more portable and complex, and a new breed of instruments appeared, mainly in America. American inventor Robert Moog's synthesizer designs of the 1960s were a significant advance over their predecessors, owing partly to newly available technologies such as semiconductors. These instruments were less expensive, became available worldwide, and could produce a vast range of complex sounds; later versions often incorporated automatic rhythm units, called drum machines. They became more popular than any synthesizer before them. The release of Wendy Carlos' album Switched-On Bach in 1968 brought Moog's synthesizer to the general public's attention, demonstrating that the synthesizer could be used not only to create strange sounds but also to make beautiful music.
In the 1970s, American domination of the synthesizer market gave way to the Japanese, with synthesizers made by Yamaha Corporation, Roland Corporation, Korg, Kawai and other companies. Yamaha's DX7 was one of the first mass-market, relatively inexpensive synthesizer keyboards. The DX7, an FM-synthesis-based digital synthesizer manufactured from 1983 to 1989, was the first commercially successful digital synthesizer, and its distinctive sound can be heard on many recordings, especially pop music from the 1980s. The monotimbral, 16-note polyphonic DX7 was the moderately priced model of the DX series of keyboard synthesizers; over 200,000 original DX7s were made, and it remains one of the best-selling synthesizers of all time. The most iconic bass synthesizer is the Roland TB-303, widely used in acid house music. Other classic synthesizers include the Moog Minimoog, ARP Odyssey, Yamaha CS-80, Korg MS-20, Sequential Circuits Prophet-5, Fairlight CMI, PPG Wave, Roland Alpha Juno, Nord Modular and Korg M1.
A drum machine is an electronic musical instrument designed to imitate the sound of drums, cymbals, other percussion instruments, and often basslines. Drum machines either play back prerecorded samples of drums and cymbals or synthesized re-creations of drum/cymbal sounds in a rhythm and tempo that is programmed by a musician. Drum machines are most commonly associated with electronic dance music genres such as house music, but are also used in many other genres. They are also used when session drummers are not available or if the production cannot afford the cost of a professional drummer. In the 2010s, most modern drum machines are sequencers with a sample playback (rompler) or synthesizer component that specializes in the reproduction of drum timbres. Though features vary from model to model, many modern drum machines can also produce unique sounds, and allow the user to compose unique drum beats and patterns.
Electro-mechanical drum machines were first developed in 1949, with the invention of the Chamberlin Rhythmate. Transistorized electronic drum machines later appeared in the 1960s. The Ace Tone Rhythm Ace, created by Ikutaro Kakehashi, began appearing in popular music from the late 1960s, followed by drum machines from Korg and Kakehashi's later company, Roland Corporation, from the early 1970s. Sly and the Family Stone's 1971 album There's a Riot Goin' On helped to popularize the sound of early drum machines, along with Timmy Thomas' 1972 R&B hit "Why Can't We Live Together" and George McCrae's 1974 disco hit "Rock Your Baby", which used early Roland rhythm machines.
Early drum machines sounded drastically different than the drum machines that gained their peak popularity in the 1980s and defined an entire decade of pop music. The most iconic drum machine was the Roland TR-808, widely used in hip hop and dance music. Other classic drum machines include the Alesis HR-16, Korg Mini Pops 120, E-MU SP-12, Elektron SPS1 Machinedrum, Roland CR-78, PAiA Programmable Drum Set, LinnDrum, Roland TR-909 and Oberheim DMX.
Digital sampling technology, introduced in the 1980s, has become a staple of music production in the 2000s. Devices that use sampling record a sound digitally (often a musical instrument being played, such as a piano or flute) and replay it when a key or pad on a controller device (e.g., an electronic keyboard or electronic drum pad) is pressed or triggered. Samplers can alter the sound using various audio effects and audio processing. Sampling has its roots in France, in the sound experiments carried out by musique concrète practitioners.
In the 1980s, when the technology was still in its infancy, digital samplers cost tens of thousands of dollars and were used only by top recording studios and musicians, putting them out of the price range of most musicians. Early samplers include the 12-bit Toshiba LMD-649 and the 8-bit Emulator I in 1981. The latter's successor, the Emulator II (released in 1984), listed for $8,000. Other samplers released with high price tags include the Kurzweil K2000 and K2500.
The first affordable sampler, the AKAI S612, became available in the mid-1980s and retailed for US$895. Other companies soon released affordable samplers, including the Mirage Sampler, Oberheim DPX-1, and more by Korg, Casio, Yamaha, and Roland. Some important hardware samplers include the Akai Z4/Z8, Ensoniq ASR-10, Roland V-Synth, Casio FZ-1, Kurzweil K250, Akai MPC60, Ensoniq Mirage, Akai S1000, E-mu Emulator, and Fairlight CMI.
One of the biggest uses of sampling technology was by hip-hop DJs and performers in the 1980s. Before affordable sampling technology was readily available, DJs would use a technique pioneered by Grandmaster Flash, manually repeating certain parts of a song by juggling between two separate turntables; this can be considered an early precursor of sampling. In turn, this turntablism technique originates in Jamaican dub music of the 1960s and was introduced to American hip hop in the 1970s.
In the 2000s, most professional recording studios use digital technologies, and in the 2010s many samplers exist only in software. This new generation of digital samplers is capable of reproducing and manipulating sounds, and new genres of music have formed that would be impossible without sampling. Advanced sample libraries have made possible complete performances of orchestral compositions that sound similar to a live performance, and modern sound libraries let musicians use the sounds of almost any instrument in their productions.
MIDI has been the musical instrument industry standard interface from the 1980s through to the present day. It dates back to June 1981, when Roland Corporation founder Ikutaro Kakehashi proposed the concept of standardization between different manufacturers' instruments, as well as computers, to Oberheim Electronics founder Tom Oberheim and Sequential Circuits president Dave Smith. In October 1981, Kakehashi, Oberheim and Smith discussed the concept with representatives from Yamaha, Korg and Kawai. In 1983, the MIDI standard was unveiled by Kakehashi and Smith.
MIDI was released at the 1983 NAMM show in Los Angeles. A demonstration at the convention showed two previously incompatible analog synthesizers, the Sequential Circuits Prophet-600 and the Roland Jupiter-6, communicating with each other, enabling a player to play one keyboard while getting the output from both. This was a massive breakthrough in the 1980s, as it allowed synths to be accurately layered in live shows and studio recordings. MIDI enables different electronic instruments and electronic music devices to communicate with each other and with computers, and its advent spurred a rapid expansion of the sales and production of electronic instruments and music software.
In 1985, several of the top keyboard manufacturers created the MIDI Manufacturers Association (MMA). This newly founded association standardized the MIDI protocol by generating and disseminating all the documents about it. With the development of the MIDI File Format Specification by Opcode, every music software company's MIDI sequencer software could read and write each other's files.
Since the 1980s, the personal computer has developed into the ideal system for utilizing the vast potential of MIDI. This has created a large consumer market for MIDI-equipped hardware and software, such as electronic keyboards, MIDI sequencers and digital audio workstations. With universal MIDI protocols, electronic keyboards, sequencers, and drum machines can all be connected together.
Some universally accepted varieties of MIDI software applications include music instruction software, MIDI sequencing software, music notation software, hard disk recording/editing software, patch editor/sound library software, computer-assisted composition software, and virtual instruments. Current developments in computer hardware and specialized software continue to expand MIDI applications.
The joining of computer and synthesizer technology changed the way music is made, and is one of the fastest-changing aspects of music technology today. Max Mathews, a telecommunications engineer at Bell Telephone Laboratories' Acoustic and Behavioral Research Department, was responsible for some of the first digital music technology in the 1950s. Mathews also pioneered a cornerstone of music technology: analog-to-digital conversion.
The first generation of professional, commercially available computer music instruments, or workstations as some companies later called them, were sophisticated, elaborate systems that cost a great deal of money when they first appeared, ranging from $25,000 to $200,000. The two most popular were the Fairlight CMI and the Synclavier.
It was not until the advent of MIDI that general-purpose computers started to play a role in music production. Following the widespread adoption of MIDI, computer-based MIDI editors and sequencers were developed. MIDI-to-CV/Gate converters were then used to enable analogue synthesizers to be controlled by a MIDI sequencer.
Falling personal computer prices turned most users away from the more expensive workstations. Advances in technology have increased the speed of hardware processing and the capacity of memory units, and software developers continue to write new, more powerful programs for sequencing, recording, notating, and mastering music.
Music sequencer software, such as Pro Tools, Logic Audio and many others, is the most widely used form of contemporary music technology in the 2000s. Such programs allow the user to record acoustic sounds with a microphone, mix tracks, and record or edit MIDI musical sequences, which may then be organized along a timeline and edited on the flat-panel display of a computer or digital audio workstation. Recorded musical segments can be copied and duplicated ad infinitum, without any loss of fidelity or added noise (a major contrast with analog recording, in which every copy leads to a loss of fidelity and added noise). Digital music can be edited and processed using a multitude of audio effects. Contemporary classical music sometimes uses computer-generated sounds, either pre-recorded or generated and manipulated live, in conjunction or juxtaposition with classical acoustic instruments such as the cello or violin. Classical and other notated types of music are frequently written with scorewriter software.
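The contrast between lossless digital copying and generation loss in analog copying can be illustrated with a toy model; the noise term standing in for tape hiss is a deliberate simplification, not a realistic model of analog media:

```python
import random

def analog_copy(samples, noise=0.01, rng=random.Random(0)):
    """Simulated analog dub: each generation adds a little random noise."""
    return [s + rng.uniform(-noise, noise) for s in samples]

master = [0.0, 0.5, -0.5, 0.25]   # stand-in for recorded audio samples

digital_gen10 = list(master)
for _ in range(10):
    digital_gen10 = list(digital_gen10)   # digital copy: bit-identical

tape_gen10 = master
for _ in range(10):
    tape_gen10 = analog_copy(tape_gen10)  # analog copy: noise accumulates
```

After ten generations the digital copy is still identical to the master, while the simulated analog copy has drifted, which is the property the paragraph above describes.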
Many musicians and artists use 'patcher'-type programs, such as Pd, Bidule, Max/MSP, Kyma and AudioMulch, as well as (or instead of) digital audio workstations or sequencers, and a significant number of people still use more "traditional" software-only approaches such as Csound or the Composers Desktop Project. Music technology includes many forms of music reproduction. Music and sound technology refer to the use of sound engineering in a commercial, experimental or amateur hobbyist manner. Music technology and sound technology are sometimes classed as the same thing, but they actually refer to different fields of work: sound engineering refers primarily to the use of sound technology for sound recording or in sound reinforcement systems used in concerts and live shows.
In 1967, John Chowning, at Stanford University, accidentally discovered frequency modulation (FM) synthesis when experimenting with extreme vibrato effects in MUSIC-V. ... By 1971 he was able to use FM synthesis to synthesize musical instrument sounds, and this technique was later used to create the Yamaha DX synthesizer, the first commercially successful digital synthesizer, in the early 1980s.
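Chowning's technique can be sketched in a few lines: the phase of a carrier sine wave is modulated by a second sine wave, and the modulation index controls how many audible sidebands (and thus how bright a timbre) result. The function below is a minimal illustration, not Yamaha's implementation:

```python
import math

def fm_sample(t, carrier_hz, mod_hz, index):
    """One sample of Chowning-style FM: a carrier whose phase is
    modulated by a sine at mod_hz, scaled by the modulation index."""
    return math.sin(2 * math.pi * carrier_hz * t
                    + index * math.sin(2 * math.pi * mod_hz * t))

# index = 0 reduces FM to a plain sine carrier; larger indices add sidebands.
sr = 8000
plain = [fm_sample(n / sr, 440.0, 110.0, 0.0) for n in range(64)]
bright = [fm_sample(n / sr, 440.0, 110.0, 5.0) for n in range(64)]
```

Because only two oscillators and one multiply are involved per operator pair, FM was cheap to implement in early digital hardware, which helped make instruments like the DX7 affordable.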
The first digital synthesizer to make it into the studios of everyone else, the Yamaha DX7, became one of the most commercially successful synthesizers of all time.
By the time the first commercially successful digital instrument, the Yamaha DX7 (lifetime sales of two hundred thousand, a figure that may refer to the whole DX series), appeared in 1983 ...
FIRSTMAN has existed since 1972 and has its origins in Japan, where the company is known under the brand name HILLWOOD. In 1973, HILLWOOD also built what was effectively FIRSTMAN's first synthesizer, and the company MULTIVOX had its instruments built by HILLWOOD from 1976 to 1980. The FIRSTMAN SQ-10 (1980) was a monophonic synthesizer, probably with a built-in sequencer; its keyboard spans 37 keys and its sound generation is based on two VCOs.
A digital synthesizer is a synthesizer that uses digital signal processing (DSP) techniques to make musical sounds. This is in contrast to older analog synthesizers, which produce music using analog electronics, and samplers, which play back digital recordings of acoustic, electric, or electronic instruments. Some digital synthesizers emulate analog synthesizers; others include sampling capability in addition to digital synthesis.
MIDI is a technical standard that describes a communications protocol, digital interface, and electrical connectors that connect a wide variety of electronic musical instruments, computers, and related audio devices for playing, editing and recording music. A single MIDI link through a MIDI cable can carry up to sixteen channels of information, each of which can be routed to a separate device or instrument. This could be sixteen different digital instruments, for example.
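The structure of such channel messages can be illustrated with a short sketch that builds and decodes a 3-byte Note On message; per the MIDI 1.0 specification, the low four bits of the status byte carry the channel number, and data bytes are limited to 0-127:

```python
def note_on(channel, note, velocity):
    """Build a 3-byte MIDI Note On message.
    channel is 0-15; note and velocity are 0-127."""
    if not (0 <= channel < 16 and 0 <= note < 128 and 0 <= velocity < 128):
        raise ValueError("value out of range for MIDI")
    # 0x90 is the Note On status nibble; OR in the channel bits.
    return bytes([0x90 | channel, note, velocity])

def parse_channel(message):
    """Recover the channel (0-15) from a channel-voice status byte."""
    return message[0] & 0x0F

# Note On for middle C, velocity 100, on channel 10 (zero-based channel 9).
msg = note_on(channel=9, note=60, velocity=100)
```

This channel nibble is what allows a single MIDI cable to address up to sixteen separate instruments, as described above.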
Roland Corporation is a Japanese manufacturer of electronic musical instruments, electronic equipment and software. It was founded by Ikutaro Kakehashi in Osaka on April 18, 1972. In 2005, Roland's headquarters relocated to Hamamatsu in Shizuoka Prefecture. It has factories in Taiwan, Japan, and the USA. As of March 31, 2010, it had 2,699 employees. In 2014, Roland was subject to a management buyout led by its CEO, Junichi Miki, supported by Taiyo Pacific Partners.
Physical modelling synthesis refers to sound synthesis methods in which the waveform of the sound to be generated is computed using a mathematical model, a set of equations and algorithms to simulate a physical source of sound, usually a musical instrument.
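One well-known example of this approach is Karplus-Strong synthesis, which models a plucked string as a burst of noise circulating through a damped delay line; the delay length determines the pitch. The sketch below is a minimal illustration of the idea, not any particular product's algorithm:

```python
import random

def karplus_strong(freq, sample_rate=8000, n_samples=4000, seed=42):
    """Karplus-Strong plucked-string synthesis: a noise burst circulates
    through a delay line whose output is averaged (a simple low-pass),
    modelling energy loss in a vibrating string."""
    rng = random.Random(seed)
    period = int(sample_rate / freq)      # delay-line length sets the pitch
    buf = [rng.uniform(-1, 1) for _ in range(period)]
    out = []
    for _ in range(n_samples):
        out.append(buf[0])
        # Average the two oldest samples; 0.996 adds a little extra damping.
        new = 0.996 * 0.5 * (buf[0] + buf[1])
        buf = buf[1:] + [new]
    return out

string = karplus_strong(220.0)   # half a second of a decaying 220 Hz pluck
```

The averaging step is the "physical model" in miniature: it plays the role of the string's internal friction, so high frequencies die away faster than the fundamental, just as on a real string.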
An analog synthesizer is a synthesizer that uses analog circuits and analog signals to generate sound electronically.
A software synthesizer, also known as a softsynth or software instrument, is a computer program or plug-in that generates digital audio, usually for music. Computer software that can create sounds or music is not new, but advances in processing speed now allow softsynths to accomplish the same tasks that previously required the dedicated hardware of a conventional synthesizer. Softsynths are usually cheaper and more portable than dedicated hardware, and easier to interface with other music software such as music sequencers.
A music workstation is an electronic musical instrument providing the facilities of a sound module, a music sequencer and (usually) a musical keyboard.
A sampler is an electronic or digital musical instrument which uses sound recordings of real instrument sounds, excerpts from recorded songs or found sounds. The samples are loaded or recorded by the user or by a manufacturer. These sounds are then played back by means of the sampler program itself, a MIDI keyboard, sequencer or another triggering device to perform or compose music. Because these samples are usually stored in digital memory, the information can be quickly accessed. A single sample may often be pitch-shifted to different pitches to produce musical scales and chords.
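The pitch-shifting described above can be sketched as simple resampling: playing stored samples back at a different rate raises or lowers the pitch (and changes the duration), as on early hardware samplers. The helper below is illustrative only:

```python
def pitch_shift(sample, semitones):
    """Naive sampler-style pitch shift: resample by 2**(semitones/12).
    Reading the sample faster raises the pitch and shortens playback."""
    ratio = 2 ** (semitones / 12)
    n_out = int(len(sample) / ratio)
    # Nearest-neighbour read-out; real samplers interpolate between samples.
    return [sample[int(i * ratio)] for i in range(n_out)]

original = list(range(100))            # stand-in for recorded audio samples
octave_up = pitch_shift(original, 12)  # one octave up = double the read rate
```

This is why a single recorded note can be spread across a keyboard: each key simply reads the same stored data at a different rate.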
A rompler is an electronic music instrument that plays pre-fabricated sounds based on audio samples. In contrast to samplers, romplers do not record audio and have limited or no capability for generating original sounds. The term rompler is a portmanteau of the terms ROM and sampler. Both may have additional sound editing features, such as layering several waveforms and modulation with ADSR envelopes, filters and LFOs.
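One such modulation source, a linear ADSR envelope, can be sketched as a piecewise function of time; the function signature below is an illustrative assumption, not any instrument's API:

```python
def adsr(t, attack, decay, sustain, release, note_off):
    """Amplitude (0-1) of a linear ADSR envelope at time t (seconds).
    `sustain` is a level; attack/decay/release are durations, and the
    release phase begins at time `note_off`."""
    if t < attack:                           # attack: ramp 0 -> 1
        return t / attack
    if t < attack + decay:                   # decay: ramp 1 -> sustain level
        frac = (t - attack) / decay
        return 1.0 - frac * (1.0 - sustain)
    if t < note_off:                         # sustain: hold the level
        return sustain
    frac = (t - note_off) / release          # release: ramp sustain -> 0
    return max(0.0, sustain * (1.0 - frac))

# Halfway through a 0.1 s attack, the envelope is at half amplitude.
level = adsr(t=0.05, attack=0.1, decay=0.2, sustain=0.7, release=0.3, note_off=1.0)
```

Multiplying a stored waveform by such an envelope is how a rompler shapes a static sample into something resembling a played note.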
Sequential is an American synthesizer company founded in 1974 as Sequential Circuits by Dave Smith. In 1978, Sequential released the Prophet-5, the first programmable polyphonic synthesizer, used by artists including Michael Jackson, Madonna, and John Carpenter. Sequential was also pivotal to the development of MIDI in 1982, which connects electronic instruments from different manufacturers.
A MIDI controller is any hardware or software that generates and transmits Musical Instrument Digital Interface (MIDI) data to MIDI-enabled devices, typically to trigger sounds and control parameters of an electronic music performance.
A sound module is an electronic musical instrument without a human-playable interface such as a piano-style musical keyboard. Sound modules have to be operated using an externally connected device, often a MIDI controller, of which the most common type is the musical keyboard. Controllers are devices that provide the human-playable interface and may or may not produce sounds of their own. Another common way of controlling a sound module is through a sequencer, computer hardware or software designed to record and play back control information for sound-generating hardware. Connections between sound modules, controllers, and sequencers are generally made with MIDI, a standardized protocol designed for this purpose that includes special ports (jacks) and cables.
Dave Smith is an American engineer and musician and the founder of the synthesizer company Sequential. Smith was responsible for the Prophet-5, the first commercial polyphonic, microprocessor-controlled synthesizer, and later for early multitimbral synthesizers. He is also referred to as the "Father of MIDI" for his role in the development of MIDI, now a standard interface protocol for electronic instruments and recording/pro audio equipment.
Ikutaro Kakehashi, also known by the nickname Taro, was a Japanese engineer, inventor and entrepreneur. He founded the musical instrument manufacturers Ace Tone, Roland Corporation, and Boss Corporation, and the audiovisual electronics company ATV Corporation.
A wind controller, sometimes referred to as a "wind synth", or "wind synthesizer", is a wind instrument capable of controlling one or more music synthesizers or other devices. Wind controllers are most commonly played and fingered like a woodwind instrument, usually the saxophone, with the next most common being brass fingering, particularly the trumpet. Models have been produced that play and finger like other acoustic instruments such as the recorder or the tin whistle. One form of wind controller, the hardware-based variety, uses electronic sensors to convert fingering, breath pressure, bite pressure, finger pressure, and other gesture information into control signals. Another form of wind controller uses software to convert the acoustic sound of an unmodified wind instrument directly into MIDI messages. In either case, the control signals or MIDI messages generated by the wind controller are used to control internal or external devices such as analog synthesizers or MIDI-compatible synthesizers, softsynths, sequencers, or even lighting systems.
The timeline of music technology lists the major dates in the history of music technology: inventions in electric music technology from the 1800s to the early 1900s, and electronic and digital music technologies from 1917 to the 2010s.