Hit Song Science is a term coined by Mike McCready and trademarked by the company he co-founded, Polyphonic HMI. It concerns the possibility of predicting, before a song is distributed, whether it will be a hit, using automated means such as machine learning software.
The scientific nature of Hit Song Science is a subject of debate in the music information retrieval (MIR) community. Early studies claimed that machine learning techniques could extract information from audio signals and lyrics that explains popularity. [1] However, a larger-scale evaluation [2] contradicted the central claim of Hit Song Science, i.e. that the popularity of a music title can be learned effectively from known audio features. Other factors, including the well-known cumulative advantage or preferential attachment effects, further undermine [3] the possibility of practical applications. Nevertheless, automatic prediction techniques are the basis of hit counseling businesses (HSS Technology). Recent work by Herremans et al. [4] has shown that audio features can indeed be used to outperform a random oracle when distinguishing top 10 hits from songs peaking at positions 30-40. [5]
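As an illustration of the kind of experiment debated above, the sketch below trains a simple classifier to separate top 10 hits from lower-charting songs using precomputed audio features. It is a minimal sketch only: the file song_features.csv, its columns and the top10 label are hypothetical, and scikit-learn's logistic regression stands in for whatever models the cited studies actually used.

```python
# Hypothetical experiment: can audio features separate top-10 hits from
# songs that peaked at positions 30-40?
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Hypothetical dataset: one row per song, audio descriptors plus a binary
# label (1 = peaked in the top 10, 0 = peaked at positions 30-40).
data = pd.read_csv("song_features.csv")
X = data.drop(columns=["top10"])
y = data["top10"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y
)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# An AUC above 0.5 on held-out songs would indicate that the features carry
# some predictive signal, which is precisely the point the cited studies debate.
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```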
A technology proposing to exploit Hit Song Science was introduced in 2003 by an artificial intelligence company based in Barcelona, Spain, called Polyphonic HMI. Polyphonic HMI has since spun off a new Delaware C corporation, Music Intelligence Solutions, Inc., which used to run uPlaya, a site geared toward music professionals. In 2006, however, one of the company's founders, Mike McCready, left to pursue another direction in the digital music space. The idea of Hit Song Science has generated responses from many in the music industry, including Chuck D, [6] Robert Lamm, Stratton Leopold, [7] Gregg Scholl of The Orchard, [8] and officials at The Sync Agency [9] as well as Blue Infinity Music. Prior to McCready's departure, Hit Song Science was profiled by NBC, [10] the BBC [11] and various other major news outlets. The plotline of an episode of "Numb3rs" [12] was inspired by the technology. Music Intelligence Solutions, Inc., is using Hit Song Science as a basis for several contests run in partnership with organizations such as AllHipHop.com, Urban Latino and American Songwriter magazine, and claims to have predicted the commercial success of Norah Jones's debut album, Come Away with Me (which won a Grammy for Best Album), and Ben Novak's debut single, "Turn Your Car Around", which reached the number 12 spot in the UK Top 40 charts. [13]
Similar technologies are now emerging from companies such as Mixcloud, Music Xray, and BandMetrics. Mixcloud is working with Queen Mary Technologies.
Computer music is the application of computing technology in music composition, to help human composers create new music or to have computers independently create music, such as with algorithmic composition programs. It includes the theory and application of new and existing computer software technologies and basic aspects of music, such as sound synthesis, digital signal processing, sound design, sonic diffusion, acoustics, electrical engineering and psychoacoustics. The field of computer music can trace its roots back to the origins of electronic music, and the first experiments and innovations with electronic instruments at the turn of the 20th century.
Raymond Kurzweil is an American computer scientist, author, inventor, and futurist. He is involved in fields such as optical character recognition (OCR), text-to-speech synthesis, speech recognition technology, and electronic keyboard instruments. He has written books on health, artificial intelligence (AI), transhumanism, the technological singularity, and futurism. Kurzweil is a public advocate for the futurist and transhumanist movements and gives public talks to share his optimistic outlook on life extension technologies and the future of nanotechnology, robotics, and biotechnology.
Linear predictive coding (LPC) is a method used mostly in audio signal processing and speech processing for representing the spectral envelope of a digital signal of speech in compressed form, using the information of a linear predictive model.
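The following sketch illustrates the autocorrelation method of LPC on a synthetic frame: the coefficients of a linear predictive model are fitted so that each sample is approximated as a weighted sum of the previous samples, which captures the spectral envelope. The synthetic signal and the prediction order of 10 are illustrative choices, not part of any particular codec.

```python
# Minimal sketch of LPC via the autocorrelation method on one audio frame.
import numpy as np
from scipy.linalg import solve_toeplitz

def lpc_coefficients(frame, order):
    """Return LPC coefficients a[1..order] for one windowed audio frame."""
    # Autocorrelation of the frame at lags 0..order.
    r = np.array([np.dot(frame[:len(frame) - k], frame[k:]) for k in range(order + 1)])
    # Solve the Toeplitz normal equations R a = r (Levinson-Durbin style).
    return solve_toeplitz((r[:-1], r[:-1]), r[1:])

# Synthetic "speech-like" frame: two damped resonances plus a little noise.
sr = 8000
t = np.arange(0, 0.03, 1 / sr)
frame = np.sin(2 * np.pi * 700 * t) + 0.5 * np.sin(2 * np.pi * 1220 * t)
frame += 0.01 * np.random.randn(len(t))
frame *= np.hamming(len(frame))

a = lpc_coefficients(frame, order=10)
print("LPC coefficients:", np.round(a, 3))
```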
A ringtone, ring tone or ring is the sound made by a telephone to indicate an incoming call. Originally referring to and made by the electromechanical striking of bells, the term now refers to any sound on any device alerting of a new incoming call—up to and including recordings of original telephone bells.
Music information retrieval (MIR) is the interdisciplinary science of retrieving information from music. MIR is a small but growing field of research with many real-world applications. Those involved in MIR may have a background in academic musicology, psychoacoustics, psychology, signal processing, informatics, machine learning, optical music recognition, computational intelligence or some combination of these.
Algorithmic composition is the technique of using algorithms to create music.
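A toy example of algorithmic composition is sketched below: a first-order random process over a C major scale, favouring stepwise motion, generates a short melody. The scale, the transition rule and the note names are invented purely for illustration.

```python
# Minimal sketch of algorithmic composition: a random walk over a C major
# scale that mostly moves to neighbouring scale degrees.
import random

SCALE = ["C4", "D4", "E4", "F4", "G4", "A4", "B4", "C5"]

def next_note(current):
    """Pick the next note: stay put or step to an adjacent scale degree."""
    i = SCALE.index(current)
    candidates = [max(i - 1, 0), i, min(i + 1, len(SCALE) - 1)]
    return SCALE[random.choice(candidates)]

def compose(length=16, start="C4"):
    melody = [start]
    while len(melody) < length:
        melody.append(next_note(melody[-1]))
    return melody

print(" ".join(compose()))
```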
Lip sync or lip synch is a technical term for matching a speaking or singing person's lip movements with sung or spoken vocals.
Polyphonic HMI is a music analysis company jointly founded in Barcelona, Spain, by Mike McCready and an artificial intelligence firm called Grupo AIA. Its principal product is called "Hit Song Science" (HSS), which uses various statistical and signal processing techniques to help record companies predict whether a particular song will have commercial success.
Pitch correction is an electronic effects unit or audio software that changes the intonation of an audio signal so that all pitches will be notes from the equally tempered system. Pitch correction devices do this without affecting other aspects of the sound. Pitch correction first detects the pitch of an audio signal, then calculates the desired change and modifies the audio signal accordingly. The widest use of pitch corrector devices is in Western popular music on vocal lines.
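The three steps described above can be sketched for a short monophonic recording as follows, assuming the librosa library and its bundled example clip; real pitch correctors operate frame by frame and with far more sophistication.

```python
# Minimal sketch of pitch correction for a monophonic clip:
# detect the pitch, snap to the nearest equal-tempered note, shift the audio.
import numpy as np
import librosa

y, sr = librosa.load(librosa.example("trumpet"), duration=2.0)

# 1. Detect the pitch contour (fundamental frequency in Hz).
f0 = librosa.yin(y, fmin=librosa.note_to_hz("C2"),
                 fmax=librosa.note_to_hz("C7"), sr=sr)
median_f0 = np.median(f0)

# 2. Calculate the desired change: the distance in semitones to the nearest
#    note of the equal-tempered scale.
midi = librosa.hz_to_midi(median_f0)
n_steps = np.round(midi) - midi

# 3. Modify the signal accordingly.
y_corrected = librosa.effects.pitch_shift(y, sr=sr, n_steps=float(n_steps))
print(f"Detected {median_f0:.1f} Hz, shifting by {n_steps:+.2f} semitones")
```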
W710 is a mobile phone produced by Sony Ericsson.
Ford Sync is a factory-installed, integrated in-vehicle communications and entertainment system that allows users to make hands-free telephone calls, control music and perform other functions with the use of voice commands. The system consists of applications and user interfaces developed by Ford and other third-party developers. The first two generations run on the Windows Embedded Automotive operating system designed by Microsoft, while the third and fourth generations run on the QNX operating system from BlackBerry Limited. Future versions will run on the Android operating system from Google.
Music Xray is a music tech company based in New York City. The company's official name is Platinum Blue Music Intelligence Inc., but it began operating under the name "Music Xray" in July 2009.
Artificial intelligence and music (AIM) is a common subject in the International Computer Music Conference, the Computing Society Conference and the International Joint Conference on Artificial Intelligence. The first International Computer Music Conference (ICMC) was held in 1974 at Michigan State University. Current research includes the application of AI in music composition, performance, theory and digital sound processing.
Mike McCready is an American entrepreneur in the music industry, CEO of Music Xray, a blogger on Huffington Post, and a musician.
Harmonic pitch class profiles (HPCP) are a group of features that a computer program extracts from an audio signal, based on a pitch class profile—a descriptor proposed in the context of a chord recognition system. HPCP features are an enhanced pitch distribution representation: sequences of feature vectors that, to a certain extent, describe tonality by measuring the relative intensity of each of the 12 pitch classes of the equal-tempered scale within an analysis frame. The twelve pitch spelling attributes are often also referred to as chroma, and HPCP features are closely related to what are called chroma features or chromagrams.
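The closely related chroma representation can be computed with off-the-shelf tools, as in the sketch below, which uses librosa's chroma feature as a stand-in for a full HPCP implementation; the example clip and the decision to report only the strongest pitch class per frame are illustrative choices.

```python
# Minimal sketch of a chroma-style representation: for each analysis frame,
# spectral energy is folded onto the 12 pitch classes of the equal-tempered scale.
import librosa

y, sr = librosa.load(librosa.example("trumpet"), duration=5.0)
chroma = librosa.feature.chroma_stft(y=y, sr=sr)   # shape: (12, n_frames)

pitch_classes = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
# Report the most prominent pitch class for the first few frames.
for frame in range(5):
    strongest = chroma[:, frame].argmax()
    print(f"frame {frame}: strongest pitch class = {pitch_classes[strongest]}")
```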
The International Society for Music Information Retrieval (ISMIR) is an international forum for research on the organization of music-related data. It started as an informal group steered by an ad hoc committee in 2000, which established a yearly symposium, whence the acronym ISMIR, which originally stood for International Symposium on Music Information Retrieval. It was turned into a conference in 2002 while retaining the acronym. ISMIR was incorporated in Canada on July 4, 2008.
Textual entailment (TE), also known as Natural Language Inference (NLI), in natural language processing is a directional relation between text fragments. The relation holds whenever the truth of one text fragment follows from another text. In the TE framework, the entailing and entailed texts are termed text (t) and hypothesis (h), respectively. Textual entailment is not the same as pure logical entailment – it has a more relaxed definition: "t entails h" (t ⇒ h) if, typically, a human reading t would infer that h is most likely true. (Alternatively: t ⇒ h if and only if, typically, a human reading t would be justified in inferring the proposition expressed by h from the proposition expressed by t.) The relation is directional because even if "t entails h", the reverse "h entails t" is much less certain.
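In practice, textual entailment is often approximated with a pretrained natural language inference model. The sketch below assumes the Hugging Face transformers library and the publicly available roberta-large-mnli checkpoint; the example sentences are generic NLI-style illustrations, and the label names printed are whatever that model produces.

```python
# Minimal sketch of scoring the directional relation "t entails h" with a
# pretrained natural-language-inference model (assumed checkpoint below).
from transformers import pipeline

nli = pipeline("text-classification", model="roberta-large-mnli")

t = "A soccer game with multiple males playing."   # text (premise)
h = "Some men are playing a sport."                # hypothesis

# Entailment is directional, so score t -> h and h -> t separately.
print("t => h:", nli({"text": t, "text_pair": h}))
print("h => t:", nli({"text": h, "text_pair": t}))
```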
François Pachet is a French scientist, composer and director of the Spotify Creator Technology Research Lab. Before joining Spotify he led Sony Computer Science Laboratory in Paris. He is one of the pioneers of computer music closely linked to artificial intelligence, especially in the field of machine improvisation and style modelling. He was elected an ECCAI Fellow in 2014.
Artificial empathy or computational empathy is the development of AI systems—such as companion robots or virtual agents—that can detect emotions and respond to them in an empathic way.
Dorien Herremans is a Belgian computer music researcher. Herremans is currently an assistant professor at the Singapore University of Technology and Design, and a research scientist at the Institute of High Performance Computing, A*STAR. She also works as a certified instructor for the NVIDIA Deep Learning Institute and is director of SUTD Game Lab. Before going to SUTD, she was a recipient of the Marie Sklodowska-Curie Postdoctoral Fellowship at the Centre for Digital Music (C4DM) at Queen Mary University of London, where she worked on the project MorpheuS: Hybrid Machine Learning – Optimization techniques To Generate Structured Music Through Morphing And Fusion. She received her Ph.D. in Applied Economics on the topic of Computer Generation and Classification of Music through Operations Research Methods. She graduated as a commercial engineer in management information systems at the University of Antwerp in 2005. After that, she worked as a Drupal consultant and was an IT lecturer at the Les Roches University in Bluche, Switzerland. She also worked as a teaching and research assistant at the University of Antwerp, in the domain of operations management, supply chain management and operations research.