Music and artificial intelligence (AI) concerns the development of music software programs which use AI to generate music. [1] As with applications in other fields, AI in music also simulates mental tasks. A prominent feature is the capability of an AI algorithm to learn from past data, as in computer accompaniment technology, wherein the AI listens to a human performer and provides accompaniment. [2] Artificial intelligence also drives interactive composition technology, wherein a computer composes music in response to a live performance. Other AI applications in music cover not only composition, production, and performance but also how music is marketed and consumed. Several music player programs have also been developed to use voice recognition and natural language processing technology for music voice control. Current research includes the application of AI in music composition, performance, theory and digital sound processing.
Erwin Panofsky proposed that in all art there exist three levels of meaning: primary meaning, or the natural subject; secondary meaning, or the conventional subject; and tertiary meaning, the intrinsic content of the subject. [3] [4] AI music explores the first of these, creating music without the "intention" that usually lies behind it, leaving composers who listen to machine-generated pieces feeling unsettled by the lack of apparent meaning. [5]
Artificial intelligence finds its beginnings in music with the transcription problem: accurately recording a performance into musical notation as it is played. Père Engramelle's schematic of a "piano roll", a mode of automatically recording note timing and duration in a way which could be easily transcribed to proper musical notation by hand, was first implemented by German engineers J.F. Unger and J. Hohlfield in 1752. [6]
In 1957, the ILLIAC I (Illinois Automatic Computer) produced the "Illiac Suite for String Quartet", a completely computer-generated piece of music. The computer was programmed to accomplish this by composer Lejaren Hiller and mathematician Leonard Isaacson. [5] : v–vii In 1960, Russian researcher Rudolf Zaripov published the world's first paper on algorithmic music composition using the Ural-1 computer. [7]
In 1965, inventor Ray Kurzweil developed software capable of recognizing musical patterns and synthesizing new compositions from them. The computer first appeared on the quiz show I've Got a Secret. [8]
By 1983, Yamaha Corporation's Kansei Music System had gained momentum, and a paper was published on its development in 1989. The software utilized music information processing and artificial intelligence techniques to essentially solve the transcription problem for simpler melodies, although higher-level melodies and musical complexities are regarded even today as difficult deep-learning tasks, and near-perfect transcription is still a subject of research. [6] [9]
In 1997, an artificial intelligence program named Experiments in Musical Intelligence (EMI) appeared to outperform a human composer at the task of composing a piece of music to imitate the style of Bach. [10] EMI would later become the basis for a more sophisticated algorithm called Emily Howell, named for its creator.
In 2002, the music research team at the Sony Computer Science Laboratory Paris, led by French composer and scientist François Pachet, designed the Continuator, an algorithm uniquely capable of resuming a composition after a live musician stopped. [11]
Emily Howell would continue to make advancements in musical artificial intelligence, publishing its first album From Darkness, Light in 2009. [12] Since then, many more pieces composed with artificial intelligence have been published by various groups.
In 2010, Iamus became the first AI to produce a fragment of original contemporary classical music in its own style: "Iamus' Opus 1". Located at the Universidad de Málaga (University of Málaga) in Spain, the computer can generate a fully original piece in a variety of musical styles. [13] [5] : 468–481 In August 2019, a large dataset consisting of 12,197 MIDI songs, each with its lyrics and melody, [14] was created to investigate the feasibility of neural melody generation from lyrics using a deep conditional LSTM-GAN method.
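The cited study pairs a lyrics-conditioned LSTM generator with a GAN discriminator. The sketch below is only a minimal, hypothetical illustration of the conditioning idea in PyTorch; the class name, dimensions, and output heads are assumptions, and the adversarial discriminator and training loop are omitted entirely.

```python
# Hypothetical sketch of a lyrics-conditioned melody model (not the cited paper's code).
# A GAN discriminator and real training data would be needed for the full LSTM-GAN setup.
import torch
import torch.nn as nn

class LyricsToMelodyLSTM(nn.Module):
    def __init__(self, vocab_size, n_pitches=128, n_durations=16,
                 embed_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)        # syllable/word token ids
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.pitch_head = nn.Linear(hidden_dim, n_pitches)      # one MIDI pitch class per token
        self.duration_head = nn.Linear(hidden_dim, n_durations) # one quantized duration per token

    def forward(self, lyric_tokens):
        hidden, _ = self.lstm(self.embed(lyric_tokens))
        return self.pitch_head(hidden), self.duration_head(hidden)

# Toy usage: a batch of two "lyric" sequences, eight syllable ids each.
model = LyricsToMelodyLSTM(vocab_size=1000)
tokens = torch.randint(0, 1000, (2, 8))
pitch_logits, duration_logits = model(tokens)
print(pitch_logits.shape, duration_logits.shape)  # (2, 8, 128) and (2, 8, 16)
```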
With progress in generative AI, models capable of creating complete musical compositions (including lyrics) from a simple text description have begun to emerge. Two notable web applications in this field are Suno AI, launched in December 2023, and Udio, which followed in April 2024. [15]
Developed at Princeton University by Ge Wang and Perry Cook, ChucK is a text-based, cross-platform language. [16] By extracting and classifying the theoretical techniques it finds in musical pieces, the software is able to synthesize entirely new pieces from the techniques it has learned. [17] The technology is used by SLOrk (Stanford Laptop Orchestra) [18] and PLOrk (Princeton Laptop Orchestra).
Jukedeck was a website that let people use artificial intelligence to generate original, royalty-free music for use in videos. [19] [20] The team started building the music generation technology in 2010, [21] formed a company around it in 2012, [22] and launched the website publicly in 2015. [20] The technology used was originally a rule-based algorithmic composition system, [23] which was later replaced with artificial neural networks. [19] The website was used to create over 1 million pieces of music, and brands that used it included Coca-Cola, Google, UKTV, and the Natural History Museum, London. [24] In 2019, the company was acquired by ByteDance. [25] [26] [27]
MorpheuS [28] is a research project by Dorien Herremans and Elaine Chew at Queen Mary University of London, funded by a Marie Skłodowska-Curie EU project. The system uses an optimization approach based on a variable neighborhood search algorithm to morph existing template pieces into novel pieces with a set level of tonal tension that changes dynamically throughout the piece. This optimization approach allows for the integration of a pattern detection technique in order to enforce long-term structure and recurring themes in the generated music. Pieces composed by MorpheuS have been performed at concerts in both Stanford and London.
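MorpheuS itself relies on a spiral-array tension model and pattern detection; the following is only a toy sketch of the underlying variable neighborhood search idea, with a made-up tension surrogate and hypothetical function names standing in for the project's actual components.

```python
# Toy sketch of a variable neighborhood search (not the MorpheuS implementation):
# perturb single notes of a template melody until a crude "tension" value
# approaches a target; MorpheuS uses a spiral-array tension model instead.
import random

def tension(melody):
    # crude surrogate: average absolute interval between consecutive pitches
    return sum(abs(a - b) for a, b in zip(melody, melody[1:])) / max(len(melody) - 1, 1)

def cost(melody, target):
    return abs(tension(melody) - target)

def vns(template, target, neighborhoods=(1, 3, 5), iters=2000, seed=0):
    rng = random.Random(seed)
    best, best_cost, k = list(template), cost(template, target), 0
    for _ in range(iters):
        candidate = list(best)
        i = rng.randrange(len(candidate))
        candidate[i] += rng.choice([-neighborhoods[k], neighborhoods[k]])  # move within neighborhood k
        c = cost(candidate, target)
        if c < best_cost:
            best, best_cost, k = candidate, c, 0   # improvement: restart at the smallest neighborhood
        else:
            k = (k + 1) % len(neighborhoods)       # otherwise widen the neighborhood
    return best

template = [60, 62, 64, 65, 67, 65, 64, 62]  # MIDI pitches of a toy template phrase
print(vns(template, target=4.0))
```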
Created in February 2016 in Luxembourg, AIVA is a program that produces soundtracks for any type of media. The algorithms behind AIVA are based on deep learning architectures. [29] AIVA has also been used to compose a rock track called "On the Edge", [30] as well as a pop tune, "Love Sick", [31] in collaboration with singer Taryn Southern, [32] for the creation of her 2018 album "I am AI".
Google's Magenta team has published several AI music applications and technical papers since its launch in 2016. [33] In 2017 it released the NSynth algorithm and dataset, [34] along with an open-source hardware musical instrument designed to make the algorithm easier for musicians to use. [35] The instrument was used by notable artists such as Grimes and YACHT on their albums. [36] [37] In 2018, the team released a piano improvisation app called Piano Genie. This was later followed by Magenta Studio, a suite of five MIDI plugins that allow music producers to elaborate on existing music in their DAW. [38] In 2023, its machine learning team published a technical paper on GitHub describing MusicLM, a private text-to-music generator which it had developed. [39] [40]
Riffusion is a neural network, designed by Seth Forsgren and Hayk Martiros, that generates music using images of sound rather than audio. [41] It was created as a fine-tuning of Stable Diffusion, an existing open-source model for generating images from text prompts, on spectrograms. [41] This results in a model which uses text prompts to generate image files, which can be put through an inverse Fourier transform and converted into audio files. [42] While these files are only several seconds long, the model can also use latent space between outputs to interpolate different files together. [41] [43] This is accomplished using a functionality of the Stable Diffusion model known as img2img. [44]
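Riffusion's published pipeline has its own spectrogram scaling and STFT parameters; the sketch below only illustrates the general image-to-audio step under stated assumptions (the file name, dynamic range, and sample rate are invented), using Griffin-Lim phase estimation because the generated image carries magnitude but no phase.

```python
# Illustrative sketch of the image-to-audio step (assumed inputs; not Riffusion's exact code).
# "spectrogram.png" stands for a generated greyscale spectrogram image; the 80 dB range
# and 22050 Hz sample rate are assumptions. Griffin-Lim estimates the missing phase.
import numpy as np
import librosa
import soundfile as sf
from PIL import Image

SAMPLE_RATE = 22050  # assumption

# 1. Load the generated image and map pixel brightness back to spectral magnitudes.
img = Image.open("spectrogram.png").convert("L")
pixels = np.flipud(np.asarray(img, dtype=np.float32)).copy() / 255.0  # low frequencies at row 0 (assumed layout)
magnitude = librosa.db_to_amplitude(pixels * 80.0 - 80.0)             # assume an 80 dB dynamic range

# 2. The image carries no phase, so reconstruct a waveform with Griffin-Lim
#    (an iterative procedure built on repeated inverse Fourier transforms).
audio = librosa.griffinlim(magnitude, n_iter=32)

# 3. Write the resulting clip to disk.
sf.write("riffusion_clip.wav", audio, SAMPLE_RATE)
```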
The resulting music has been described as "de otro mundo" (otherworldly), [45] although unlikely to replace man-made music. [45] The model was made available on December 15, 2022, with the code also freely available on GitHub. [42] It is one of many models derived from Stable Diffusion. [44]
Riffusion is classified within a subset of AI text-to-music generators. In December 2022, Mubert [46] similarly used Stable Diffusion to turn descriptive text into music loops. In January 2023, Google published a paper on its own text-to-music generator called MusicLM. [47] [48]
Spike AI is an AI-based audio plug-in, developed by Spike Stent in collaboration with his son Joshua Stent and friend Henry Ramsey, that analyzes tracks and provides suggestions to increase clarity and other aspects during mixing. Communication is done using a chatbot trained on Spike Stent's personal data. The plug-in integrates into digital audio workstations. [49] [50]
Artificial intelligence can change how producers create music by generating variations of a track that follow a prompt given by the creator. These prompts allow the AI to follow the particular style the artist is aiming for. [5]
AI has also been applied to musical analysis, where it has been used for feature extraction, pattern recognition, and music recommendation. [51]
Artificial intelligence has had a major impact on the composition sector, as it has influenced the ideas of composers and producers and has the potential to make the industry more accessible to newcomers. In music, it is already being used in collaboration with producers. Artists use such software to help generate ideas and bring out musical styles by prompting the AI to follow specific requirements that fit their needs. Future compositional impacts of the technology include style emulation and fusion, and revision and refinement. The development of such software can make the music industry more accessible to newcomers. [5] Software such as ChatGPT has been used by producers for these tasks, while other software such as Ozone 11 has been used to automate time-consuming and complex activities such as mastering. [52]
In the United States, the current legal framework tends to apply traditional copyright laws to AI, despite its differences from the human creative process. [53] However, music outputs solely generated by AI are not granted copyright protection. In the Compendium of U.S. Copyright Office Practices, the Copyright Office has stated that it would not grant copyrights to "works that lack human authorship" and that "the Office will not register works produced by a machine or mere mechanical process that operates randomly or automatically without any creative input or intervention from a human author." [54] In February 2022, the Copyright Review Board rejected an application to copyright AI-generated artwork on the basis that it "lacked the required human authorship necessary to sustain a claim in copyright." [55]
The situation in the European Union (EU) is similar to that in the US, because its legal framework also emphasizes the role of human involvement in a copyright-protected work. [56] According to the European Union Intellectual Property Office and the recent jurisprudence of the Court of Justice of the European Union, the originality criterion requires a work to be the author's own intellectual creation, reflecting the personality of the author through the creative choices made during its production, which presupposes a distinct level of human involvement. [56] The reCreating Europe project, funded by the European Union's Horizon 2020 research and innovation program, delves into the challenges posed by AI-generated content, including music, and advocates legal certainty and balanced protection that encourages innovation while respecting copyright norms. [56] The recognition of AIVA marks a significant departure from traditional views on authorship and copyright in music composition, allowing an AI artist to release music and earn royalties. This acceptance makes AIVA a pioneering instance of an AI being formally acknowledged within music production. [57]
The recent advancements in artificial intelligence made by groups such as Stability AI, OpenAI, and Google have drawn a large number of copyright claims against generative technology, including AI music. Should these lawsuits succeed, the machine learning models behind these technologies would have their datasets restricted to the public domain. [58]
A more nascent development of AI in music is the application of audio deepfakes to cast the lyrics or musical style of a pre-existing song onto the voice or style of another artist. This has raised many concerns regarding the legality of the technology, as well as the ethics of employing it, particularly in the context of artistic identity. [59] It has also raised the question of to whom the authorship of these works is attributed. As AI cannot hold authorship of its own, current speculation suggests that there will be no clear answer until further rulings are made regarding machine learning technologies as a whole. [60] Preventative measures have recently started to be developed by Google and Universal Music Group, which have looked into royalties and credit attribution that would allow producers to replicate the voices and styles of artists. [61]
In 2023, an artist known as ghostwriter977 created a musical deepfake called "Heart on My Sleeve" that cloned the voices of Drake and The Weeknd by feeding an assortment of vocal-only tracks from the respective artists into a deep-learning algorithm, creating an artificial model of each artist's voice onto which original reference vocals with original lyrics could be mapped. [62] The track was submitted for Grammy consideration for Best Rap Song and Song of the Year. [63] It went viral, gained traction on TikTok, and received a positive response from the audience, leading to its official release on Apple Music, Spotify, and YouTube in April 2023. [64] Many believed the track was fully composed by AI software, but the producer claimed the songwriting, production, and original vocals (pre-conversion) were still done by him. [62] It was later withdrawn from Grammy consideration because it did not follow the guidelines necessary to be considered for a Grammy award. [64] The track was ultimately removed from all music platforms by Universal Music Group. [64] The song was a watershed moment for AI voice cloning, and models have since been created for hundreds, if not thousands, of popular singers and rappers.
In 2013, country music singer Randy Travis suffered a stroke which left him unable to sing. In the meantime, vocalist James Dupré toured on his behalf, singing his songs for him. Travis and longtime producer Kyle Lehning released a new song in May 2024 titled "Where That Came From", Travis's first new song since his stroke. The recording uses AI technology to re-create Travis's singing voice, having been composited from over 40 existing vocal recordings alongside those of Dupré. [65] [66]
Computer music is the application of computing technology in music composition, to help human composers create new music or to have computers independently create music, such as with algorithmic composition programs. It includes the theory and application of new and existing computer software technologies and basic aspects of music, such as sound synthesis, digital signal processing, sound design, sonic diffusion, acoustics, electrical engineering, and psychoacoustics. The field of computer music can trace its roots back to the origins of electronic music, and the first experiments and innovations with electronic instruments at the turn of the 20th century.
Algorithmic composition is the technique of using algorithms to create music.
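As a toy illustration of the idea (not any particular system), a first-order Markov chain over scale degrees is one of the simplest algorithmic composition techniques; all transition choices below are invented values.

```python
# Toy example only: algorithmic composition via a first-order Markov chain
# over C-major scale degrees (all transition choices are invented).
import random

TRANSITIONS = {0: [1, 2, 4], 1: [0, 2], 2: [1, 3, 4], 3: [2, 4],
               4: [0, 3, 5], 5: [4, 6], 6: [0, 5]}
SCALE = [60, 62, 64, 65, 67, 69, 71]  # C major as MIDI note numbers

def compose(length=16, start=0, seed=42):
    rng = random.Random(seed)
    degree, melody = start, []
    for _ in range(length):
        melody.append(SCALE[degree])
        degree = rng.choice(TRANSITIONS[degree])
    return melody

print(compose())  # e.g. a 16-note melody as MIDI pitches
```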
Generative music is a term popularized by Brian Eno to describe music that is ever-different and changing, and that is created by a system.
David "Dave" Cope is an American author, composer, scientist, and Dickerson Emeriti Professor of Music at UC Santa Cruz. His primary area of research involves artificial intelligence and music; he writes programs and algorithms that can analyze existing music and create new compositions in the style of the original input music. He taught the groundbreaking summer Workshop in Algorithmic Computer Music (WACM), which was open to the public, as well as a general education course entitled Artificial Intelligence and Music for enrolled UCSC students. Cope is also co-founder and CTO Emeritus of Recombinant Inc., a music technology company.
In video games, artificial intelligence (AI) is used to generate responsive, adaptive or intelligent behaviors, primarily in non-playable characters (NPCs), similar to human-like intelligence. Artificial intelligence has been an integral part of video games since their inception in 1948, first seen in the game Nim. AI in video games is a distinct subfield and differs from academic AI: it serves to improve the game-player experience rather than to advance machine learning or decision making. During the golden age of arcade video games the idea of AI opponents was largely popularized in the form of graduated difficulty levels, distinct movement patterns, and in-game events dependent on the player's input. Modern games often implement existing techniques such as pathfinding and decision trees to guide the actions of NPCs. AI is often used in mechanisms which are not immediately visible to the user, such as data mining and procedural content generation. One of the most infamous examples of this NPC technology and gradual difficulty levels can be found in the game Mike Tyson's Punch-Out!! (1987).
Artificial intelligence (AI) has been used in applications throughout industry and academia. In a manner analogous to electricity or computers, AI serves as a general-purpose technology. AI programs emulate perception and understanding, and are designed to adapt to new information and new situations. Machine learning has been used for various scientific and commercial purposes including language translation, image recognition, decision-making, credit scoring, and e-commerce.
Computational creativity is a multidisciplinary endeavour that is located at the intersection of the fields of artificial intelligence, cognitive psychology, philosophy, and the arts.
Pop music automation is a field of study among musicians and computer scientists with a goal of producing successful pop music algorithmically. It is often based on the premise that pop music is especially formulaic, unchanging, and easy to compose. The idea of automating pop music composition is related to many ideas in algorithmic music, Artificial Intelligence (AI) and computational creativity.
AIVA is an electronic composer recognized by the SACEM.
Google AI is a division of Google dedicated to artificial intelligence. It was announced at Google I/O 2017 by CEO Sundar Pichai.
Deepfakes are images, videos, or audio which are edited or generated using artificial intelligence tools, and which may depict real or non-existent people. They are a type of synthetic media and a modern form of media prank.
Digital cloning is an emerging technology that uses deep-learning algorithms to manipulate existing audio, photos, and videos to produce hyper-realistic results. One of the impacts of such technology is that hyper-realistic videos and photos make it difficult for the human eye to distinguish what is real from what is fake. Furthermore, with various companies making such technologies available to the public, they can bring various benefits as well as potential legal and ethical concerns.
Artificial intelligence art is visual artwork created or enhanced through the use of artificial intelligence (AI) programs.
Synthetic media is a catch-all term for the artificial production, manipulation, and modification of data and media by automated means, especially through the use of artificial intelligence algorithms, such as for the purpose of misleading people or changing an original meaning. Synthetic media as a field has grown rapidly since the creation of generative adversarial networks, primarily through the rise of deepfakes as well as music synthesis, text generation, human image synthesis, speech synthesis, and more. Though experts use the term "synthetic media," individual methods such as deepfakes and text synthesis are sometimes not referred to as such by the media but instead by their respective terminology. Significant attention arose towards the field of synthetic media starting in 2017 when Motherboard reported on the emergence of AI-altered pornographic videos that insert the faces of famous actresses. Potential hazards of synthetic media include the spread of misinformation, further loss of trust in institutions such as media and government, the mass automation of creative and journalistic jobs, and a retreat into AI-generated fantasy worlds. Synthetic media is an applied form of artificial imagination.
Audio deepfake technology, also referred to as voice cloning or deepfake audio, is an application of artificial intelligence designed to generate speech that convincingly mimics specific individuals, often synthesizing phrases or sentences they have never spoken. Initially developed with the intent to enhance various aspects of human life, it has practical applications such as generating audiobooks and assisting individuals who have lost their voices due to medical conditions. Additionally, it has commercial uses, including the creation of personalized digital assistants, natural-sounding text-to-speech systems, and advanced speech translation services.
Stable Diffusion is a deep learning, text-to-image model released in 2022 based on diffusion techniques. The generative artificial intelligence technology is the premier product of Stability AI and is considered to be a part of the ongoing artificial intelligence boom.
Jukedeck was a British technology company founded in 2012. It built a website that let users create royalty-free music using artificial intelligence.
Generative artificial intelligence is a subset of artificial intelligence that uses generative models to produce text, images, videos, or other forms of data. These models learn the underlying patterns and structures of their training data and use them to produce new data based on the input, which often comes in the form of natural language prompts.
In the 2020s, the rapid advancement of deep learning-based generative artificial intelligence models raised questions about whether copyright infringement occurs when such models are trained or used. This includes text-to-image models such as Stable Diffusion and large language models such as ChatGPT. As of 2023, there were several pending U.S. lawsuits challenging the use of copyrighted data to train AI models, with defendants arguing that this falls under fair use.