Melodic fission

Melodic fission occurring in mm. 1-2 of the Allemande from J.S. Bach's violin partita in B minor (BWV 1002). Red and blue denote the two separate streams.

In music cognition, melodic fission (also known as melodic or auditory streaming, or stream segregation) is a phenomenon in which one line of pitches (an auditory stream) is heard as two or more separate melodic lines. This occurs when a phrase contains groups of pitches at two or more distinct registers or with two or more distinct timbres.

The term appears to stem from a 1973 paper by W. J. Dowling. [2] In music analysis and, more specifically, in Schenkerian analysis, the phenomenon is more often termed compound melody. [3]

In psychophysics, auditory scene analysis is the process by which the brain separates and organizes sounds into perceptually distinct groups, known as auditory streams.

The counterpart to melodic fission is melodic fusion. [4]

Contributing factors

Register

Listeners tend to perceive fast melodic sequences that contain tones from two different registers as two melodic lines. [5] The greater the registral distance between groups of tones in a melody, the more likely they are to be heard as two separate, interrupted streams rather than as one continuous stream. [6] [7] Studies that interleave two melodies have found that the closer the melodies are in register, the more difficult it is for listeners to perceptually separate them. [8] Tempo also matters: the registral distance at which a melody is still perceived as a single stream increases as the tempo of the melody decreases. [9]
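
The interaction of register and tempo can be heard in a simple synthesized demonstration. The following Python sketch is illustrative only, not a reconstruction of any of the cited experiments; the frequencies, tone durations, repeat counts, and output file name are all arbitrary assumptions. It writes a WAV file containing the same alternating low-high sequence twice: once at a fast tempo, where the wide registral gap tends to split into two perceived streams, and once at a slow tempo, where the same gap is more readily heard as a single line.

```python
# Minimal sketch of the classic interleaved-register demonstration.
# All parameter values below are illustrative assumptions.
import numpy as np
import wave

SR = 44100  # sample rate in Hz

def tone(freq_hz, dur_s, amp=0.3):
    """Return a sine tone with short linear fades to avoid clicks."""
    t = np.linspace(0.0, dur_s, int(SR * dur_s), endpoint=False)
    y = amp * np.sin(2.0 * np.pi * freq_hz * t)
    fade = int(0.005 * SR)
    env = np.ones_like(y)
    env[:fade] = np.linspace(0.0, 1.0, fade)
    env[-fade:] = np.linspace(1.0, 0.0, fade)
    return y * env

def interleaved_sequence(low_hz, high_hz, tone_dur_s, repeats):
    """Alternate low and high tones (L H L H ...) as one nominal melody."""
    pair = np.concatenate([tone(low_hz, tone_dur_s), tone(high_hz, tone_dur_s)])
    return np.tile(pair, repeats)

# Wide registral gap (two octaves) at a fast tempo: tends to split into two streams.
fast_split = interleaved_sequence(330.0, 1320.0, tone_dur_s=0.1, repeats=20)
# Same gap at a slow tempo: more likely to be heard as one continuous line.
slow_fused = interleaved_sequence(330.0, 1320.0, tone_dur_s=0.4, repeats=5)

signal = np.concatenate([fast_split, np.zeros(SR), slow_fused])
pcm = (np.clip(signal, -1.0, 1.0) * 32767).astype(np.int16)

with wave.open("fission_demo.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)
    f.setframerate(SR)
    f.writeframes(pcm.tobytes())
```

Opening the resulting file in any audio player and comparing the two passages gives a rough sense of how tempo interacts with registral distance.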

Timbre

The more distinct the timbres of groups of pitches within one stream, the greater the likelihood that listeners will separate them into different streams. [10] As with register, slower tempos increase the chance that timbrally distinct pitches are perceived as one continuous stream. [11] Timbral difference may override registral similarity in the perception of segregated streams. [12] Additionally, quick and contrasting attack times in groups of tones lead to fission. [13]
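
A similar sketch can illustrate timbre-based segregation. In this hypothetical example (again not drawn from the cited studies; all parameter values are assumptions), the interleaved tones lie close together in register but alternate between a pure sine tone and a harmonically rich tone, a contrast that encourages hearing two streams even though the pitches are similar.

```python
# Minimal sketch of timbre-based segregation: same register, contrasting timbres.
# All parameter values below are illustrative assumptions.
import numpy as np
import wave

SR = 44100  # sample rate in Hz

def envelope(n, fade_s=0.005):
    """Linear fade-in/out envelope of length n samples."""
    env = np.ones(n)
    fade = int(fade_s * SR)
    env[:fade] = np.linspace(0.0, 1.0, fade)
    env[-fade:] = np.linspace(1.0, 0.0, fade)
    return env

def sine_tone(freq_hz, dur_s, amp=0.3):
    t = np.linspace(0.0, dur_s, int(SR * dur_s), endpoint=False)
    return amp * np.sin(2.0 * np.pi * freq_hz * t) * envelope(t.size)

def bright_tone(freq_hz, dur_s, amp=0.3, harmonics=8):
    """Sum of harmonics with 1/k amplitudes: a rough sawtooth-like timbre."""
    t = np.linspace(0.0, dur_s, int(SR * dur_s), endpoint=False)
    y = sum(np.sin(2.0 * np.pi * k * freq_hz * t) / k for k in range(1, harmonics + 1))
    y /= np.max(np.abs(y))
    return amp * y * envelope(t.size)

# Pitches a minor third apart (close in register), alternating timbres, fast tempo.
dur = 0.1
seq = np.concatenate([np.concatenate([sine_tone(440.0, dur), bright_tone(523.25, dur)])
                      for _ in range(20)])

pcm = (np.clip(seq, -1.0, 1.0) * 32767).astype(np.int16)
with wave.open("timbre_fission_demo.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)
    f.setframerate(SR)
    f.writeframes(pcm.tobytes())
```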

Volume

Differences in volume between groups of pitches can also lead to stream segregation. [14] The greater the difference in volume between groups of tones, the greater the likelihood of melodic fission. In addition, when two streams are perceptually segregated because of differences in volume, the quieter stream is perceived as continuous but interrupted by the louder stream. [15]

Repetition

Perception of separate streams builds as a melodic sequence is repeated over time, first rapidly and then at a decreasing rate. [16] However, a few factors can impede this process and "reset" fission perception, including silence between presentations of the melody, [17] a change in the signal location (right or left ear) of the melody, and abrupt changes in volume. [18] [19] [20]

See also

Timbre
Pitch (music)
Octave illusion
Illusory continuity of tones
Deutsch's scale illusion
Attentional blink
Kappa effect
Auditory scene analysis
Albert Bregman
Roberta Klatzky
Musical memory
Illusory conjunctions
Neuroscience of music
Change deafness
Representational momentum
Object-based attention
Perceptual load theory
Lola L. Cuddy
Michael Kubovy

References

  1. Davis, Stacey (2006). "Implied Polyphony in the Solo String Works of J. S. Bach: A Case for the Perceptual Relevance of Structural Expression". Music Perception 23, p. 429.
  2. Dowling, W. J. (1973). "The perception of interleaved melodies". Cognitive Psychology 5, pp. 322-337. Bregman, A. S. & Campbell, J. (1971), "Primary auditory stream segregation and perception of order in rapid sequences of tones", Journal of Experimental Psychology 89, pp. 244-249, had spoken of "auditory stream segregation".
  3. The term appears to have been coined by Walter Piston (1947), Counterpoint, New York: Norton, under the form "compound melodic line" (London edition, 1947, p. 23). In the context of Schenkerian analysis, see for instance Forte & Gilbert (1982), Introduction to Schenkerian Analysis, Chapter 3, pp. 67-80. See also Schenkerian analysis. Manfred Bukofzer (1947), Music in the Baroque Era, New York: Norton, had spoken of "implied polyphony".
  4. Saighoe, Francis (1991). "Resultant Melodies: A Psycho-Structural Analysis". Journal of the Ghana Teacher's Association 1, pp. 30-39.
  5. Deutsch, Diana (2012). Psychology of Music. St. Louis, Missouri: Academic Press.
  6. Dowling, W. J. (1973). "The Perception of Interleaved Melodies". Cognitive Psychology 5, pp. 322-337.
  7. Dowling, W. J.; Lung, K. M.; Herrbold, S. (1987). "Aiming Attention in Pitch and Time in the Perception of Interleaved Melodies". Perception & Psychophysics 41, pp. 642-656.
  8. Bey, C.; McAdams, S. (2003). "Postrecognition of Interleaved Melodies as an Indirect Measure of Auditory Stream Formation". Journal of Experimental Psychology: Human Perception and Performance 29, pp. 267-279.
  9. Van Noorden, Leon (1975). Temporal Coherence in the Perception of Tone Sequences. Eindhoven, The Netherlands: Technische Hogeschool Eindhoven.
  10. Wessel, David (1979). "Timbre Space as a Musical Control Structure". Computer Music Journal 3, pp. 45-52. doi:10.2307/3680283.
  11. Warren, R. M.; Obusek, C. J.; Farmer, R. M.; Warren, R. P. (1969). "Auditory Sequence: Confusion of Patterns Other than Speech or Music". Science 164, pp. 586-587. doi:10.1126/science.164.3879.586.
  12. Deutsch, Diana (2012). Psychology of Music. St. Louis, Missouri: Academic Press.
  13. Iverson, P. (1995). "Auditory Stream Segregation by Music Timbre: Effects of Static and Dynamic Acoustic Attributes". Journal of Experimental Psychology: Human Perception and Performance 21, pp. 751-763. doi:10.1037/0096-1523.21.4.751.
  14. Dowling, W. J. (1973). "The Perception of Interleaved Melodies". Cognitive Psychology 5, pp. 322-337.
  15. Van Noorden, Leon (1975). Temporal Coherence in the Perception of Tone Sequences. Eindhoven, The Netherlands: Technische Hogeschool Eindhoven.
  16. Anstis, S. M.; Saida, S. (1985). "Adaptation to Auditory Streaming of Frequency Modulated Tones". Journal of Experimental Psychology: Human Perception and Performance 11, pp. 257-271. doi:10.1037/0096-1523.11.3.257.
  17. Beauvois, M. W.; Meddis, R. (1997). "Time Decay of Auditory Stream Biasing". Perception & Psychophysics 59, pp. 81-86. doi:10.3758/bf03206850.
  18. Anstis, S. M.; Saida, S. (1985). "Adaptation to Auditory Streaming of Frequency Modulated Tones". Journal of Experimental Psychology: Human Perception and Performance 11, pp. 257-271. doi:10.1037/0096-1523.11.3.257.
  19. Rogers, W. L.; Bregman, A. S. (1993). "An Experimental Evaluation of Three Theories of Auditory Scene Analysis". Perception & Psychophysics 53, pp. 179-189. doi:10.3758/bf03211728.
  20. Rogers, W. L.; Bregman, A. S. (1998). "Cumulation of the Tendency to Segregate Auditory Streams: Resetting by Changes in Location and Loudness". Perception & Psychophysics 60, pp. 1216-1227. doi:10.3758/bf03206171.