Culture in music cognition refers to the impact that a person's culture has on their music cognition, including their preferences, emotion recognition, and musical memory. Musical preferences are biased toward culturally familiar musical traditions beginning in infancy, and adults' classification of the emotion of a musical piece depends on both culturally specific and universal structural features. [1] [2] [3] Additionally, individuals' musical memory abilities are greater for culturally familiar music than for culturally unfamiliar music. [4] [5] The sum of these effects makes culture a powerful influence in music cognition.
Culturally bound preferences for and familiarity with music begin in infancy and continue through adolescence and adulthood. [1] [6] People tend to prefer and remember music from their own cultural tradition. [1] [3]
A preference for culturally familiar metrical styles is already in place within the first months of life. [1] The looking times of 4- to 8-month-old Western infants indicate that they prefer Western meter in music, while Turkish infants of the same age prefer both Turkish and Western meters (Western meter not being entirely unfamiliar in Turkish culture). Both groups preferred either meter over an arbitrary meter. [1]
In addition to influencing preference for meter, culture affects people's ability to correctly identify music styles. Adolescents from Singapore and the UK rated familiarity and preference for excerpts of Chinese, Malay, and Indian music styles. [6] Neither group demonstrated a preference for the Indian music samples, although the Singaporean teenagers recognized them. Participants from Singapore showed higher preference for and ability to recognize the Chinese and Malay samples; UK participants showed little preference or recognition for any of the music samples, as those types of music are not present in their native culture. [6]
An individual's musical experience may affect how they form preferences for music from their own culture and other cultures. [7] In one study, American and Japanese non-music majors both indicated a preference for Western music, although the Japanese participants were more receptive to Eastern music. Participants were further divided into a group with little musical experience and a group that had received supplemental musical training. Although both American and Japanese participants disliked formal Eastern styles and preferred Western styles, those with greater musical experience showed a wider range of preference responses not specific to their own culture. [7]
Bimusicalism is a phenomenon in which people who are well versed in the music of two different cultures exhibit sensitivity to both musical traditions. [8] In a study of participants familiar with Western music, Indian music, or both, the bimusical participants (exposed to both Indian and Western styles) showed no bias toward either style in recognition tasks and did not judge one style to be more tense than the other. In contrast, the Western-only and Indian-only participants recognized music from their own culture more successfully and felt that the other culture's music was more tense overall. These results indicate that everyday exposure to music from two cultures can produce cognitive sensitivity to the music styles of both. [8]
Bilingualism can also shape preferences for the language of a song's lyrics. [9] When monolingual (English-speaking) and bilingual (Spanish- and English-speaking) sixth graders listened to the same song in instrumental, English, and Spanish versions, bilingual students preferred the Spanish version, while monolingual students more often preferred the instrumental version; the children's self-reported distraction was the same for all excerpts. The bilingual (Spanish-speaking) students also identified most closely with the Spanish version. [9] Thus, the language of a song's lyrics interacts with a listener's culture and language abilities to affect preferences.
The cue-redundancy model of emotion recognition in music differentiates between universal, structural auditory cues and culturally bound, learned auditory cues. [2] [3]
Structural cues that span all musical traditions include dimensions such as pace (tempo), loudness, and timbre. [10] Fast tempo, for example, is typically associated with happiness, regardless of a listener's cultural background.
Culture-specific cues rely on knowledge of the conventions of a particular musical tradition. [2] [11] Ethnomusicologists have observed that many cultures reserve certain songs for certain occasions, and that these occasions are signaled by cues learned by members of the culture. [12] A particular timbre may be interpreted as reflecting one emotion by Western listeners and a different emotion by Eastern listeners. [3] [13] Other cues are culturally bound in a broader sense: rock and roll, for example, is widely understood as rebellious music associated with teenagers, reflecting the ideals and beliefs of that youth culture. [14]
According to the cue-redundancy model, individuals exposed to music from their own cultural tradition use both psychophysical and culturally bound cues to identify its emotionality. [10] Conversely, perception of intended emotion in unfamiliar music relies solely on universal, psychophysical properties. [2] Japanese listeners accurately categorize angry, joyful, and sad musical excerpts from familiar traditions (Japanese and Western samples) and from a relatively unfamiliar tradition (Hindustani). [2] Simple, fast melodies receive joyful ratings from these participants; simple, slow samples receive sad ratings; and loud, complex excerpts are perceived as angry. [2] Strong relationships between emotional judgments and structural acoustic cues suggest the importance of universal musical properties in categorizing unfamiliar music. [2] [3]
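The universal-cue side of the model lends itself to a simple illustration. The sketch below is a toy rule set only, assuming made-up thresholds for tempo, loudness, and complexity; it is not the analysis used in the studies cited above.

```python
# Toy illustration of the cue-redundancy model's universal (psychophysical) cues.
# Thresholds and labels are illustrative assumptions, not values from the cited studies.

def classify_emotion(tempo_bpm: float, loudness_db: float, complexity: float) -> str:
    """Map simple structural cues to an emotion label.

    complexity is an arbitrary 0-1 score (0 = very simple, 1 = very complex).
    """
    if loudness_db > 80 and complexity > 0.6:
        return "angry"      # loud, complex excerpts tend to be judged angry
    if tempo_bpm >= 120 and complexity <= 0.6:
        return "joyful"     # simple, fast melodies tend to be judged joyful
    if tempo_bpm < 80 and complexity <= 0.6:
        return "sad"        # simple, slow melodies tend to be judged sad
    return "ambiguous"      # culture-specific cues would be needed here

print(classify_emotion(tempo_bpm=140, loudness_db=65, complexity=0.3))  # joyful
print(classify_emotion(tempo_bpm=60,  loudness_db=55, complexity=0.2))  # sad
print(classify_emotion(tempo_bpm=110, loudness_db=90, complexity=0.8))  # angry
```

On this view, the "ambiguous" cases are precisely where culturally learned cues would be needed to resolve the intended emotion.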
When Korean and American participants judged the intended emotion of Korean folk songs, the American group identified happy and sad songs as accurately as the Korean listeners did. [10] Surprisingly, the Americans were more accurate than the Korean group in judging anger. The latter result implies that cultural differences in anger perception can arise independently of familiarity, while the similar American and Korean judgments of happiness and sadness point to the role of universal auditory cues in emotional perception. [10]
Categorization of unfamiliar music varies with the intended emotion. [2] [13] Timbre mediates Western listeners' recognition of angry and peaceful Hindustani songs: [13] flute timbre supports the detection of peace, whereas string timbre aids anger identification. Happy and sad assessments, on the other hand, rely primarily on relatively "low-level" structural information such as tempo. Both low-level cues (e.g., slow tempo) and timbre aid the detection of peaceful music, but only timbre cues anger recognition. [13] Communication of peace therefore takes place at multiple structural levels, while anger appears to be conveyed almost exclusively by timbre. Similarities between aggressive vocalizations and angry music (e.g., roughness) may contribute to the salience of timbre in anger assessments. [15]
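Timbre differences of the kind described above are often summarized with simple spectral descriptors. The sketch below computes the spectral centroid, a standard "brightness" measure, for two synthetic tones; the tones and their harmonic weightings are illustrative assumptions, and this is not the analysis performed in the cited study.

```python
# Illustrative spectral-centroid computation for two synthetic tones.
# A brighter (string-like) tone has more high-harmonic energy and a higher centroid
# than a duller (flute-like) tone; the tones here are synthetic examples only.
import numpy as np

def spectral_centroid(signal: np.ndarray, sr: int) -> float:
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    return float(np.sum(freqs * spectrum) / np.sum(spectrum))

sr = 44100
t = np.arange(sr) / sr          # one second of audio
f0 = 220.0                      # fundamental frequency

# "Dull" tone: energy concentrated in a few low harmonics (flute-like spectrum).
dull = sum((1.0 / k) * np.sin(2 * np.pi * k * f0 * t) for k in range(1, 4))
# "Bright" tone: substantial energy spread over many harmonics (string-like spectrum).
bright = sum((1.0 / np.sqrt(k)) * np.sin(2 * np.pi * k * f0 * t) for k in range(1, 20))

print(f"dull centroid:   {spectral_centroid(dull, sr):.0f} Hz")
print(f"bright centroid: {spectral_centroid(bright, sr):.0f} Hz")
```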
The stereotype theory of emotion in music (STEM) suggests that cultural stereotyping may affect the emotion perceived in music. STEM argues that for some listeners with low expertise, emotion perception in music is based on stereotyped associations the listener holds about the encoding culture of the music (i.e., the culture representative of a particular genre, such as Brazilian culture encoded in bossa nova). [16] STEM extends the cue-redundancy model: in addition to positing two sources of emotional cues, it explains some cultural cues specifically in terms of stereotyping. In particular, STEM predicts that emotion perceived in music depends to some extent on cultural stereotypes of the genre being heard.
Because musical complexity is a psychophysical dimension, the cue-redundancy model predicts that complexity is perceived independently of experience. However, South African and Finnish listeners assign different complexity ratings to identical African folk songs. [17] Thus, the cue-redundancy model may be overly simplistic in its distinctions between structural feature detection and cultural learning, at least in the case of complexity.
When listening to music from within one's own cultural tradition, repetition plays a key role in emotion judgments. American listeners who hear classical or jazz excerpts multiple times rate the elicited and conveyed emotion of the pieces as higher relative to participants who hear the pieces once. [18]
Methodological limitations of previous studies preclude a complete understanding of the roles of psychophysical cues in emotion recognition. Divergent mode and tone cues elicit "mixed affect", demonstrating the potential for mixed emotional percepts. [19] Use of dichotomous scales (e.g., simple happy/sad ratings) may mask this phenomenon, as these tasks require participants to report a single component of a multidimensional affective experience.
Enculturation is a powerful influence on music memory. Both long-term and working memory systems are critically involved in the appreciation and comprehension of music. Long-term memory enables the listener to develop musical expectations based on previous experience, while working memory is necessary to relate pitches to one another within a phrase, between phrases, and throughout a piece. [20]
Neuroscientific evidence suggests that memory for music is, at least in part, special and distinct from other forms of memory. [21] The neural processes of music memory retrieval share much with the neural processes of verbal memory retrieval, as indicated by functional magnetic resonance imaging studies comparing the brain areas activated during each task. [5] Both musical and verbal memory retrieval activate the left inferior frontal cortex, which is thought to be involved in executive function, especially executive function of verbal retrieval, and the posterior middle temporal cortex, which is thought to be involved in semantic retrieval. [5] [22] [23] However, musical semantic retrieval also bilaterally activates the superior temporal gyri containing the primary auditory cortex. [5]
Despite the universality of music, enculturation has a pronounced effect on individuals' memory for music. Evidence suggests that people develop their cognitive understanding of music from their cultures. [4] People are best at recognizing and remembering music in the style of their native culture, and their music recognition and memory is better for music from familiar but nonnative cultures than it is for music from unfamiliar cultures. [4] Part of the difficulty in remembering culturally unfamiliar music may arise from the use of different neural processes when listening to familiar and unfamiliar music. For instance, brain areas involved in attention, including the right angular gyrus and middle frontal gyrus, show increased activity when listening to culturally unfamiliar music compared to novel but culturally familiar music. [20]
Enculturation affects music memory in early childhood, before a child's cognitive schemata for music are fully formed, perhaps beginning as early as one year of age. [24] [25] Like adults, children are better able to remember novel music from their native culture than from unfamiliar ones, although they are less capable than adults of remembering more complex music. [24]
Children's developing music cognition may be influenced by the language of their native culture. [26] For instance, children in English-speaking cultures develop the ability to identify pitches from familiar songs at 9 or 10 years old, while Japanese children develop the same ability at age 5 or 6. [26] This difference may be due to the Japanese language's use of pitch accents, which encourages better pitch discrimination at an early age, rather than the stress accents upon which English relies. [26]
Enculturation also biases listeners' expectations such that they expect to hear tones that correspond to culturally familiar modal traditions. [27] For example, Western participants presented with a series of pitches followed by a test tone not present in the original series were more likely to mistakenly indicate that the test tone was originally present if the tone was derived from a Western scale than if it was derived from a culturally unfamiliar scale. [27] Recent research indicates that deviations from expectations in music may prompt out-group derogation. [28]
Despite the powerful effects of musical enculturation, evidence indicates that cognitive comprehension of, and affinity for, music from other cultures is somewhat plastic. One long-term instance of such plasticity is bimusicalism, a musical phenomenon akin to bilingualism. Bimusical individuals frequently listen to music from two cultures and do not show the biases in recognition memory and perception of tension displayed by individuals whose listening experience is limited to a single musical tradition. [8]
Other evidence suggests that some changes in music appreciation and comprehension can occur over a short period of time. For instance, after half an hour of passive exposure to original melodies composed in an unfamiliar musical system, the Bohlen–Pierce scale, Western participants demonstrated increased recognition memory for and greater affinity toward melodies in that grammar. [29] This suggests that even very brief exposure to unfamiliar music can rapidly affect music perception and memory.
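The Bohlen–Pierce scale's unfamiliarity is easy to see numerically: its equal-tempered form divides a 3:1 frequency ratio (the "tritave") into 13 equal steps, rather than dividing the 2:1 octave into 12 steps as Western equal temperament does. The comparison below is a minimal sketch; the 220 Hz base frequency is an arbitrary illustrative choice.

```python
# Compare step frequencies of 12-tone equal temperament (octave, ratio 2, 12 steps)
# with the equal-tempered Bohlen-Pierce scale (tritave, ratio 3, 13 steps).
# The 220 Hz base frequency is an arbitrary illustrative choice.
base = 220.0

western_12tet = [base * 2 ** (k / 12) for k in range(13)]   # 12 steps spanning one octave
bohlen_pierce = [base * 3 ** (k / 13) for k in range(14)]   # 13 steps spanning one tritave

print("12-TET (Hz):        ", [round(f, 1) for f in western_12tet])
print("Bohlen-Pierce (Hz): ", [round(f, 1) for f in bohlen_pierce])
# Apart from the base pitch, the Bohlen-Pierce steps do not coincide with Western
# equal-tempered pitches, which is one reason melodies in this system sound
# unfamiliar to Western-enculturated listeners.
```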