In music cognition and musical analysis, the study of melodic expectation considers the engagement of the brain's predictive mechanisms in response to music. [1] For example, if the ascending scale fragment "do-re-mi-fa-sol-la-ti-..." is heard, listeners familiar with Western music will have a strong expectation to hear or supply one more note, "do", to complete the octave.
Melodic expectation can be considered at the esthesic level, [2] in which case the focus lies on the listener and their response to music. [1] It can be considered at the neutral level, [2] in which case the focus switches to the actual musical content, such as the "printed notes themselves". [3] At the neutral level, the observer may consider logical implications projected onto future elements by past elements [4] [5] or derive statistical observations from information theory. [6]
The notion of melodic expectation has given rise to a body of studies in which authors often introduce their own terminology rather than adopting the literature's. [5] The result is a large number of different terms that all refer to the phenomenon of musical expectation: [5] [7]
Expectation is also mentioned in relation to concepts originating from the field of information theory, such as entropy. [6] [8] [11] [16] [29] [30] [31] [32] Hybridizing information theory with the humanities yields yet other notions, particularly variations on entropy modified to describe musical content. [36]
Consideration of musical expectation can be sorted into four trends. [5]
Leonard Meyer's Emotion and Meaning in Music [38] is the classic text in music expectation.[ citation needed ] Meyer's starting point is the belief that the experience of music (as a listener) is derived from one's emotions and feelings about the music, which themselves are a function of relationships within the music itself. Meyer writes that listeners bring with them a vast body of musical experiences that, as one listens to a piece, conditions one's response to that piece as it unfolds. Meyer argued that music's evocative power derives from its capacity to generate, suspend, prolong, or violate these expectations.
Meyer models listener expectation on two levels. On a perceptual level, Meyer draws on Gestalt psychology to explain how listeners build mental representations of auditory phenomena. Above this raw perceptual level, Meyer argues that learning shapes (and re-shapes) one's expectations over time.
Narmour's (1992) Implication-Realization (I-R) model is a detailed formalization based on Meyer's work on expectation.[ citation needed ] A fundamental difference between Narmour's models and most theories of expectation lies in the author's conviction that a genuine theory should be formulated in falsifiable terms. According to Narmour, prior knowledge of musical expectation is based too heavily upon percepts, introspection and internalization, which bring insoluble epistemological problems. [3] The theory focuses on how implicative intervals set up expectations for certain realizations to follow. The I-R model includes two primary factors: proximity and direction. [3] [4] [24] [25] Lerdahl extended the system by developing a tonal pitch space and adding a stability factor (based on Lerdahl's prior work) and a mobility factor. [39]
Mainly developed at IRISA since 2011 by Frédéric Bimbot and Emmanuel Deruty, the system & contrast (S&C) model of implication [5] [40] [41] [42] [43] derives from the two fundamental hypotheses underlying the I-R model. [4] It is rooted in Narmour's conviction that any model of expectation should be expressed in logical, falsifiable terms. [3] It operates at the neutral level and differs from the I-R model in several regards:
Margulis's 2005 model [15] further extends the I-R model. First, Margulis added a melodic attraction factor, drawn from some of Lerdahl's work. Second, while the I-R model relies on a single (local) interval to establish an implication (an expectation), Margulis attempts to model intervallic (local) expectation as well as more deeply schematic (global) expectation. For this, Margulis relies on Lerdahl's and Jackendoff's Generative Theory of Tonal Music [34] to provide a time-span reduction. At each hierarchical level (a different time scale) in the reduction, Margulis applies her model. These separate levels of analysis are combined through averaging, with each level weighted according to values derived from the time-span reduction. Finally, Margulis's model is explicit and realizable, and yields quantitative output. The output – melodic expectation at each time instant – is a single function of time.
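The weighted combination of levels described above can be sketched as follows. This is only an illustrative reading of the averaging step, not Margulis's actual implementation; the expectation values and level weights used in the example are placeholders, not figures from the 2005 model.

```python
# Hedged sketch: combining expectation ratings from several time-span-reduction
# levels into one value at a single time instant, via a weighted average.
# Both the per-level expectations and the weights below are illustrative.

def combine_levels(level_expectations, level_weights):
    """Return the weighted average of per-level expectation ratings."""
    if len(level_expectations) != len(level_weights):
        raise ValueError("one weight per hierarchical level is required")
    total_weight = sum(level_weights)
    weighted_sum = sum(e * w for e, w in zip(level_expectations, level_weights))
    return weighted_sum / total_weight

# Example: three levels (surface, intermediate, background), with the surface
# level weighted most heavily.
expectation = combine_levels([0.8, 0.5, 0.2], [3.0, 2.0, 1.0])  # → 0.6
```

Evaluating this at every time instant would yield the single function of time the model outputs.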
Margulis's model describes three distinct types of listener reactions, each derived from listener-experienced tension:
Rhythm generally means a "movement marked by the regulated succession of strong and weak elements, or of opposite or different conditions". This general meaning of regular recurrence or pattern in time can apply to a wide variety of cyclical natural phenomena having a periodicity or frequency of anything from microseconds to several seconds, to several minutes or hours, or, at the most extreme, even many years.
In music, harmony is the concept of combining different sounds together in order to create new, distinct musical ideas. Theories of harmony seek to describe or explain the effects created by distinct pitches or tones coinciding with one another; harmonic objects such as chords, textures and tonalities are identified, defined, and categorized in the development of these theories. Harmony is broadly understood to involve both a "vertical" dimension (frequency-space) and a "horizontal" dimension (time-space), and often overlaps with related musical concepts such as melody, timbre, and form.
Musical analysis is the study of musical structure in either compositions or performances. According to music theorist Ian Bent, music analysis "is the means of answering directly the question 'How does it work?'". The method employed to answer this question, and indeed exactly what is meant by the question, differs from analyst to analyst, and according to the purpose of the analysis. According to Bent, "its emergence as an approach and method can be traced back to the 1750s. However it existed as a scholarly tool, albeit an auxiliary one, from the Middle Ages onwards."
The tritone paradox is an auditory illusion in which a sequentially played pair of Shepard tones separated by an interval of a tritone, or half octave, is heard as ascending by some people and as descending by others. Different populations tend to favor one of a limited set of different spots around the chromatic circle as central to the set of "higher" tones. Roger Shepard in 1963 had argued that such tone pairs would be heard ambiguously as either ascending or descending. However, psychology of music researcher Diana Deutsch in 1986 discovered that when the judgments of individual listeners were considered separately, their judgments depended on the positions of the tones along the chromatic circle. For example, one listener would hear the tone pair C–F♯ as ascending and the tone pair G–C♯ as descending. Yet another listener would hear the tone pair C–F♯ as descending and the tone pair G–C♯ as ascending. Furthermore, the way these tone pairs were perceived varied depending on the listener's language or dialect.
Alfred Whitford (Fred) Lerdahl is the Fritz Reiner Professor Emeritus of Musical Composition at Columbia University, and a composer and music theorist best known for his work on musical grammar and cognition, rhythmic theory, pitch space, and cognitive constraints on compositional systems. He has written many orchestral and chamber works, three of which were finalists for the Pulitzer Prize for Music: Time after Time in 2001, String Quartet No. 3 in 2010, and Arches in 2011.
In scale (music) theory, a maximally even set (scale) is one in which every generic interval comes in either one or two consecutive integer specific-interval sizes: in other words, a scale whose notes (pcs) are "spread out as much as possible." This property was first described by John Clough and Jack Douthett, who also introduced the maximally even algorithm. For a chromatic cardinality c and a pc-set cardinality d, this algorithm constructs a maximally even set.
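Clough and Douthett's algorithm can be sketched in a few lines. The formulation below uses their "J-function" reading, in which the k-th element of the set is the floor of (c·k + m)/d for an offset parameter m; the function name and the choice of m = 5 in the example are this sketch's own.

```python
# Sketch of the maximally even algorithm: element k of the set is
# floor((c*k + m) / d), where c is the chromatic cardinality, d the
# pc-set cardinality, and m a rotation/offset parameter.

def maximally_even(c, d, m=0):
    return [(c * k + m) // d for k in range(d)]

# With c = 12, d = 7, m = 5, the algorithm yields the diatonic major
# collection:
maximally_even(12, 7, 5)  # → [0, 2, 4, 5, 7, 9, 11]

# With d = 5 it yields a maximally even pentatonic set:
maximally_even(12, 5)     # → [0, 2, 4, 7, 9]
```

Different values of m rotate the set within the chromatic, producing transpositions of the same maximally even pattern.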
In music, tessitura is the most acceptable and comfortable vocal range for a given singer. It is the range in which a given type of voice presents its best-sounding timbre. This broad definition is often interpreted to refer specifically to the pitch range that most frequently occurs within a given part of a musical piece. Hence, in musical notation, tessitura is the ambitus, or a narrower part of it, in which that particular vocal part lies—whether high or low, etc.
"Cognitive Constraints on Compositional Systems" is an essay by Fred Lerdahl that cites Pierre Boulez's Le Marteau sans maître (1955) as an example of "a huge gap between compositional system and cognized result," though he "could have illustrated just as well with works by Milton Babbitt, Elliott Carter, Luigi Nono, Karlheinz Stockhausen, or Iannis Xenakis". To explain this gap, and in hopes of bridging it, Lerdahl proposes the concept of a musical grammar, "a limited set of rules that can generate indefinitely large sets of musical events and/or their structural descriptions". He divides this further into compositional grammar and listening grammar, the latter being one "more or less unconsciously employed by auditors, that generates mental representations of the music". He divides the former into natural and artificial compositional grammars. While the two have historically been fruitfully mixed, a natural grammar arises spontaneously in a culture while an artificial one is a conscious invention of an individual or group in a culture; the gap can arise only between listening grammar and artificial grammars. To begin to understand the listening grammar, Lerdahl and Ray Jackendoff created a theory of musical cognition, A Generative Theory of Tonal Music. That theory is outlined in the essay.
The Implication-Realization (I-R) model of melodic expectation was developed by Eugene Narmour as an alternative to Schenkerian analysis centered less on music analysis and more on cognitive aspects of expectation. The model is one of the most significant modern theories of melodic expectation, going into great detail about how certain melodic structures arouse particular expectations.
In music theory, pitch-class space is the circular space representing all the notes in a musical octave. In this space, there is no distinction between tones that are separated by an integral number of octaves. For example, C4, C5, and C6, though different pitches, are represented by the same point in pitch class space.
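The octave equivalence described above amounts to a mod-12 reduction. As a minimal illustration (representing pitches by MIDI note numbers is an assumption of this sketch, not part of the definition):

```python
# Mapping pitches (given as MIDI note numbers) onto the 12 points of
# pitch-class space: octave-equivalent pitches share a residue mod 12.

def pitch_class(midi_note):
    return midi_note % 12

# C4 (MIDI 60), C5 (72), and C6 (84) all occupy the same point:
[pitch_class(n) for n in (60, 72, 84)]  # → [0, 0, 0]
```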
Music psychology, or the psychology of music, may be regarded as a branch of both psychology and musicology. It aims to explain and understand musical behaviour and experience, including the processes through which music is perceived, created, responded to, and incorporated into everyday life. Modern music psychology is primarily empirical; its knowledge tends to advance on the basis of interpretations of data collected by systematic observation of and interaction with human participants. Music psychology is a field of research with practical relevance for many areas, including music performance, composition, education, criticism, and therapy, as well as investigations of human attitude, skill, performance, intelligence, creativity, and social behavior.
Cognitive musicology is a branch of cognitive science concerned with computationally modeling musical knowledge with the goal of understanding both music and cognition.
A generative theory of tonal music (GTTM) is a theory of music conceived by American composer and music theorist Fred Lerdahl and American linguist Ray Jackendoff and presented in the 1983 book of the same title. It constitutes a "formal description of the musical intuitions of a listener who is experienced in a musical idiom" with the aim of illuminating the unique human capacity for musical understanding.
Culture in music cognition refers to the impact that a person's culture has on their music cognition, including their preferences, emotion recognition, and musical memory. Musical preferences are biased toward culturally familiar musical traditions beginning in infancy, and adults' classification of the emotion of a musical piece depends on both culturally specific and universal structural features. Additionally, individuals' musical memory abilities are greater for culturally familiar music than for culturally unfamiliar music. The sum of these effects makes culture a powerful influence in music cognition.
Research into music and emotion seeks to understand the psychological relationship between human affect and music. The field, a branch of music psychology, covers numerous areas of study, including the nature of emotional reactions to music, how characteristics of the listener may determine which emotions are felt, and which components of a musical composition or performance may elicit certain reactions.
In music cognition, melodic fission is a phenomenon in which one line of pitches is heard as two or more separate melodic lines. This occurs when a phrase contains groups of pitches at two or more distinct registers or with two or more distinct timbres.
David Huron is a Canadian Arts and Humanities Distinguished Professor at the Ohio State University, in both the School of Music and the Center for Cognitive and Brain Sciences. His teaching and publications focus on the psychology of music and music cognition. In 2017, Huron was awarded the Society for Music Perception and Cognition Achievement Award.
Robert O. Gjerdingen is a scholar of music theory and music perception, and is an emeritus professor at Northwestern University. His most influential work focuses on the application of ideas from cognitive science, especially theories about schemas, as an analytical tool in an attempted "archaeology" of style and composition methods in galant European music of the eighteenth century. Gjerdingen received his PhD from the University of Pennsylvania in 1984 after studying with Leonard B. Meyer and Eugene Narmour. His 2007 book Music in the Galant Style, an authoritative study on galant schemata, received the Wallace Berry award from the Society for Music Theory in 2009 and has become influential in the field of music theory. Gjerdingen was also editor of the journal Music Perception from 1998 to 2002.
The speech-to-song illusion is an auditory illusion discovered by Diana Deutsch in 1995. A spoken phrase is repeated several times, without altering it in any way, and without providing any context. This repetition causes the phrase to transform perceptually from speech into song. Though most noticeable with non-tone languages, such as English and German, the illusion can also occur with tone languages, such as Thai and Mandarin.
Lola L. Cuddy is a Canadian psychologist recognized for her contributions to the field of music psychology. She is a professor emerita in the Department of Psychology at Queen's University in Kingston, Ontario.