Cognitive musicology

Cognitive musicology is a branch of cognitive science concerned with computationally modeling musical knowledge with the goal of understanding both music and cognition. [1]

Cognitive musicology can be differentiated from other branches of music psychology by its methodological emphasis: it uses computer modeling to study music-related knowledge representation, an approach with roots in artificial intelligence and cognitive science. The use of computer models provides an exacting, interactive medium in which to formulate and test theories. [2]

This interdisciplinary field investigates topics such as the parallels between language and music in the brain. Biologically inspired models of computation are often included in research, such as neural networks and evolutionary programs. [3] This field seeks to model how musical knowledge is represented, stored, perceived, performed, and generated. By using a well-structured computer environment, the systematic structures of these cognitive phenomena can be investigated. [4]

Even while enjoying a simple melody, multiple brain processes synchronize to comprehend what is heard. After the stimulus enters and is transduced by the ear, it reaches the auditory cortex, part of the temporal lobe, which begins processing the sound by assessing its pitch and volume. From there, processing diverges across the different aspects of music. Rhythm, for instance, is typically processed and regulated by the left frontal cortex, the left parietal cortex, and the right cerebellum. Tonality, the organization of musical structure around a central chord, is assessed by the prefrontal cortex and cerebellum (Abram, 2015). Music engages many brain functions that play an integral role in other higher functions such as motor control, memory, language, reading, and emotion. Research has shown that music can serve as an alternative route to these functions when they are inaccessible through non-musical stimuli because of a disorder. Cognitive musicology therefore also explores how music can provide alternative transmission routes for information processing in the brain in conditions such as Parkinson's disease and dyslexia.

Notable researchers

The polymath Christopher Longuet-Higgins, who coined the term "cognitive science", is one of the pioneers of cognitive musicology. Among other things, he is noted for the computational implementation of an early key-finding algorithm. [5] Key finding is an essential element of tonal music, and the key-finding problem has attracted considerable attention in the psychology of music over the past several decades. Carol Krumhansl and Mark Schmuckler proposed an empirically grounded key-finding algorithm which bears their names. [6] Their approach is based on key-profiles which were painstakingly determined by what has come to be known as the probe-tone technique. [7] This algorithm has successfully been able to model the perception of musical key in short excerpts of music, as well as to track listeners' changing sense of key movement throughout an entire piece of music. [8] David Temperley, whose early work within the field of cognitive musicology applied dynamic programming to aspects of music cognition, has suggested a number of refinements to the Krumhansl-Schmuckler Key-Finding Algorithm. [9]

Otto Laske was a champion of cognitive musicology. [10] A collection of papers that he co-edited served to heighten the visibility of cognitive musicology and to strengthen its association with AI and music. [11] The foreword of this book reprints a free-wheeling interview with Marvin Minsky, one of the founding fathers of AI, in which he discusses some of his early writings on music and the mind. [12] AI researcher turned cognitive scientist Douglas Hofstadter has also contributed a number of ideas pertaining to music from an AI perspective. [13] Musician Steve Larson, who worked for a time in Hofstadter's lab, formulated a theory of "musical forces" derived by analogy with physical forces. [14] Hofstadter [15] also weighed in on David Cope's experiments in musical intelligence, [16] which take the form of a computer program called EMI that produces music in the style of, say, Bach, Chopin, or Cope.

Cope's programs are written in Lisp, which turns out to be a popular language for research in cognitive musicology. Desain and Honing have exploited Lisp in their efforts to tap the potential of microworld methodology in cognitive musicology research. [17] Also working in Lisp, Heinrich Taube has explored computer composition from a wide variety of perspectives. [18] There are, of course, researchers who chose to use languages other than Lisp for their research into the computational modeling of musical processes. Robert Rowe, for example, explores "machine musicianship" through C++ programming. [19] A rather different computational methodology for researching musical phenomena is the toolkit approach advocated by David Huron. [20] At a higher level of abstraction, Geraint Wiggins has investigated general properties of music knowledge representations such as structural generality and expressive completeness. [21]

Although a great deal of cognitive musicology research features symbolic computation, notable contributions have been made from the biologically inspired computational paradigms. For example, Jamshed Bharucha and Peter Todd have modeled music perception in tonal music with neural networks. [22] Al Biles has applied genetic algorithms to the composition of jazz solos. [23] Numerous researchers have explored algorithmic composition grounded in a wide range of mathematical formalisms. [24] [25]
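As a rough illustration of the genetic-algorithm idea (far simpler than Biles's actual GenJam system, and not based on its code), one can evolve a short melody toward a naive fitness function that rewards in-scale notes and small melodic leaps; every parameter here is an arbitrary choice for demonstration:

```python
# Toy genetic algorithm for melody generation: melodies are lists of
# MIDI pitches; fitness rewards notes in the C-major scale and stepwise
# motion. Selection is elitist truncation; children get one mutation.
import random

SCALE = {0, 2, 4, 5, 7, 9, 11}          # C-major pitch classes
LENGTH, POP, GENS = 16, 40, 60

def fitness(mel):
    in_scale = sum(p % 12 in SCALE for p in mel)
    smooth = sum(abs(a - b) <= 4 for a, b in zip(mel, mel[1:]))
    return in_scale + smooth             # maximum: LENGTH + (LENGTH - 1)

def mutate(mel):
    mel = mel.copy()
    mel[random.randrange(LENGTH)] = random.randint(55, 79)
    return mel

def crossover(a, b):
    cut = random.randrange(1, LENGTH)
    return a[:cut] + b[cut:]

random.seed(0)
pop = [[random.randint(55, 79) for _ in range(LENGTH)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:POP // 2]             # keep the fitter half (elitism)
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP - len(parents))]
    pop = parents + children

best = max(pop, key=fitness)
print(fitness(best))
```

GenJam's crucial departure from this sketch is its fitness function: a human mentor rates the evolving phrases interactively instead of a fixed formula scoring them.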

Within cognitive psychology, among the most prominent researchers is Diana Deutsch, who has engaged in a wide variety of work ranging from studies of absolute pitch and musical illusions to the formulation of musical knowledge representations to relationships between music and language. [26] [27] [28] Equally important is Aniruddh D. Patel, whose work combines traditional methodologies of cognitive psychology with neuroscience. Patel is also the author of a comprehensive survey of cognitive science research on music. [29]

The AI approach to music perception and cognition based on finding structures in data without knowing the structures, analogous to segregating objects in an abstract painting without assigning meaningful labels to them, was pioneered by Andranik Tangian. The idea is to find the least complex data representations in the sense of Kolmogorov, i.e. those requiring the least memory storage, which can be regarded as saving brain energy. Two effects illustrate that perception is data representation rather than "physical" recognition: polyphonic voices produced by a loudspeaker, a single physical body, and a single tone produced by several physical bodies, namely organ register pipes tuned as a chord and activated by a single key. This data-representation approach enables the recognition of interval relations in chords and the tracing of polyphonic voices with no reference to pitch (thereby explaining the predominance of interval hearing over absolute hearing), and it breaks the vicious circle of recognizing rhythm under variable tempo. [30] [31] [32] [33]

Perhaps the most significant contribution to viewing music from a linguistic perspective is the Generative Theory of Tonal Music (GTTM) proposed by Fred Lerdahl and Ray Jackendoff. [34] Although GTTM is presented at the algorithmic level of abstraction rather than the implementational level, their ideas have found computational manifestations in a number of computational projects, [35] [36] in particular, to structuralize musical performance and to adjust meaningful performance timing. [37]

For the German-speaking area, Laske's conception of cognitive musicology has been advanced by Uwe Seifert in his book Systematische Musiktheorie und Kognitionswissenschaft. Zur Grundlegung der kognitiven Musikwissenschaft ("Systematic music theory and cognitive science. The foundation of cognitive musicology") [38] and subsequent publications.

Music and language acquisition skills

Both music and speech rely on sound processing and require the interpretation of several sound features, such as timbre, pitch, and duration, and their interactions (Elzbieta, 2015). An fMRI study revealed that Broca's and Wernicke's areas, two areas known to be activated during speech and language processing, were activated while subjects listened to unexpected musical chords (Elzbieta, 2015). This relation between language and music may explain why exposure to music has been found to accelerate the development of behaviors related to language acquisition. The widely known Suzuki method of music education emphasizes learning music by ear over reading musical notation, and preferably begins formal lessons between the ages of 3 and 5 years. One fundamental argument for this approach points to a parallelism between natural speech acquisition and purely auditory musical training, as opposed to training that relies on visual cues. There is evidence that children who take music classes acquire skills that help them in language acquisition and learning (Oechslin, 2015), an ability that relies heavily on the dorsal pathway. Other studies show an overall enhancement of verbal intelligence in children taking music classes. Since both activities tap into several integrated brain functions and share brain pathways, it is understandable that strength in music acquisition might correlate with strength in language acquisition.

Music and pre-natal development

Extensive prenatal exposure to a melody has been shown to induce neural representations that last for several months. In a study by Partanen in 2013, mothers in the learning group listened to the 'Twinkle, twinkle, little star' melody five times per week during their last trimester. After birth, and again at the age of 4 months, infants in both the control and learning groups were played a modified melody in which some of the notes were changed. Both at birth and at 4 months, infants in the learning group showed stronger event-related potentials to the unchanged notes than the control group. Since listening to music at a young age can already establish lasting neural representations, exposure to music could help strengthen brain plasticity in areas involved in language and speech processing. [39] [40]

Music therapy effect on cognitive disorders

If neural pathways can be stimulated through an enjoyable activity, they are more likely to remain accessible. This helps explain why music is so powerful and can be used in such a wide range of therapies. Music that a person enjoys elicits an engaging response that we are all familiar with. Listening to music is not perceived as a chore because it is enjoyable, yet the brain is still learning and using the same functions it would when speaking or acquiring language. Music therefore has the potential to be a very productive form of therapy, largely because it is stimulating, entertaining, and rewarding. Using fMRI, Menon and Levitin found for the first time that listening to music strongly modulates activity in a network of mesolimbic structures involved in reward processing, including the nucleus accumbens and the ventral tegmental area (VTA), as well as the hypothalamus and insula, all of which are thought to be involved in regulating autonomic and physiological responses to rewarding and emotional stimuli (Gold, 2013).

Pitch perception is positively correlated with phonemic awareness and reading abilities in children (Flaugnacco, 2014). Likewise, the ability to tap to a rhythmic beat correlates with performance on reading and attention tests (Flaugnacco, 2014). These are only a fraction of the studies linking reading skills with rhythmic perception: a meta-analysis of 25 cross-sectional studies found a significant association between music training and reading skills (Butzlaff, 2000). Given such an extensive correlation, researchers have naturally investigated whether music could serve as an alternative pathway to strengthen reading abilities in people with developmental disorders such as dyslexia. Dyslexia is characterized by a long-lasting difficulty in reading acquisition, specifically in text decoding; reading is slow and inaccurate despite adequate intelligence and instruction. The difficulties appear to stem from a phonological core deficit that impacts reading comprehension, memory, and prediction abilities (Flaugnacco, 2014). Music training has been shown to modify reading and phonological abilities even when these skills are severely impaired: by improving temporal processing and rhythm abilities through training, phonological awareness and reading skills in children with dyslexia were improved. The OPERA hypothesis proposed by Patel (2011) states that because music places higher demands than speech on shared sensory and cognitive processes, it drives adaptive plasticity in the same neural networks involved in language processing.

Parkinson's disease is a complex neurological disorder that negatively impacts both motor and non-motor functions. It is caused by the degeneration of dopaminergic (DA) neurons in the substantia nigra, which in turn leads to a DA deficiency in the basal ganglia (Ashoori, 2015). These dopamine deficiencies have been shown to cause symptoms such as resting tremor, rigidity, akinesia, and postural instability, and they are also associated with impairments of an individual's internal timing (Ashoori, 2015). Rhythm is a powerful sensory cue that has been shown to help regulate motor timing and coordination when the brain's internal timing system is deficient. Some studies have shown that musically cued gait training significantly improves multiple deficits of Parkinson's disease, including gait, motor timing, and perceptual timing. Ashoori's study involved 15 non-demented patients with idiopathic Parkinson's disease who had no prior musical training and maintained their dopamine therapy during the trials. There were three 30-minute training sessions per week for one month, in which participants walked to the beat of German folk music without explicit instructions to synchronize their footsteps to it. Compared with pre-training gait performance, the patients showed significant improvement in gait velocity and stride length during the training sessions. The improvement was sustained for one month after training, indicating a lasting therapeutic effect. Even though synchronization was never explicitly instructed, the patients' gait automatically synchronized with the rhythm of the music, and the lasting effect suggests that the training may have influenced internal timing in a way that could not be accessed by other means.

References

  1. Laske, Otto (1999). Navigating New Musical Horizons (Contributions to the Study of Music and Dance) . Westport: Greenwood Press. ISBN   978-0-313-30632-7.
  2. Laske, O. (1999). AI and music: A cornerstone of cognitive musicology. In M. Balaban, K. Ebcioglu, & O. Laske (Eds.), Understanding music with AI: Perspectives on music cognition. Cambridge: The MIT Press.
  3. Graci, C (2009). "A brief tour of the learning sciences featuring a cognitive tool for investigating melodic phenomena". Journal of Educational Technology Systems. 38 (2): 181–211. doi:10.2190/et.38.2.i. S2CID   62657981.
  4. Hamman, M., 1999. "Structure as Performance: Cognitive Musicology and the Objectification of Procedure," in Otto Laske: Navigating New Musical Horizons, ed. J. Tabor. New York: Greenwood Press.
  5. Longuet-Higgins, C. (1987) Mental Processes: Studies in cognitive science. Cambridge, MA, US: The MIT Press.
  6. Krumhansl, Carol (1990). Cognitive Foundations of Musical Pitch. Oxford: Oxford University Press. ISBN 978-0-19-505475-0.
  7. Krumhansl, C.; Kessler, E. (1982). "Tracing the dynamic changes in perceived tonal organisation in a spatial representation of musical keys". Psychological Review. 89 (4): 334–368. doi:10.1037/0033-295x.89.4.334. PMID   7134332.
  8. Schmuckler, M. A.; Tomovski, R. (2005). "Perceptual tests of musical key-finding". Journal of Experimental Psychology: Human Perception and Performance. 31 (5): 1124–1149. CiteSeerX   10.1.1.582.4317 . doi:10.1037/0096-1523.31.5.1124. PMID   16262503.
  9. Temperley, David (2001). The Cognition of Basic Musical Structures. Cambridge: MIT Press. ISBN   978-0-262-20134-6.
  10. Tabor, Jerry, ed. (1999). Otto Laske: Navigating New Musical Horizons. Westport: Greenwood Press. ISBN 978-0-313-30632-7.
  11. Balaban, Mira (1992). Understanding Music with AI. Menlo Park: AAAI Press. ISBN   978-0-262-52170-3.
  12. Minsky, M (1981). "Music, mind, and meaning". Computer Music Journal. 5 (3): 28–44. doi:10.2307/3679983. JSTOR   3679983.
  13. Hofstadter, Douglas (1999). Gödel, Escher, Bach . New York: Basic Books. ISBN   978-0-465-02656-2.
  14. Larson, S (2004). "Musical Forces and Melodic Expectations: Comparing Computer Models with Experimental Results". Music Perception. 21 (4): 457–498. doi:10.1525/mp.2004.21.4.457.
  15. Cope, David (2004). Virtual Music. Cambridge: The MIT Press. ISBN   978-0-262-53261-7.
  16. Cope, David (1996). Experiments in Musical Intelligence. Madison: A-R Editions. ISBN   978-0-89579-337-9.
  17. Honing, H (1993). "A microworld approach to formalizing musical knowledge". Computers and the Humanities. 27 (1): 41–47. doi:10.1007/bf01830716. hdl: 2066/74729 . S2CID   1375183.
  18. Taube, Heinrich (2004). Notes from the Metalevel. New York: Routledge. ISBN   978-90-265-1975-8.
  19. Rowe, Robert (2004). Machine Musicianship. Cambridge: MIT Press. ISBN 978-0-262-68149-0.
  20. Huron, D. (2002). Music Information Processing Using the Humdrum Toolkit: Concepts, Examples, and Lessons. "Computer Music Journal, 26" (2), 11–26.
  21. Wiggins, G.; et al. (1993). "A Framework for the Evaluation of Music Representation Systems". Computer Music Journal. 17 (3): 31–42. CiteSeerX   10.1.1.558.8136 . doi:10.2307/3680941. JSTOR   3680941.
  22. Bharucha, J. J., & Todd, P. M. (1989). Modeling the perception of tonal structure with neural nets. Computer Music Journal, 44−53
  23. Biles, J. A. 1994. "GenJam: A Genetic Algorithm for Generating Jazz Solos." Proceedings of the 1994 International Computer Music Conference. San Francisco: International Computer Music Association
  24. Nierhaus, Gerhard (2008). Algorithmic Composition. Berlin: Springer. ISBN   978-3-211-75539-6.
  25. Cope, David (2005). Computer Models of Musical Creativity. Cambridge: MIT Press. ISBN   978-0-262-03338-1.
  26. Deutsch, Diana (1999). The Psychology of Music. Boston: Academic Press. ISBN   978-0-12-213565-1.
  27. Deutsch, Diana, ed. (2013). The Psychology of Music, 3rd Edition. San Diego, California: Academic Press. ISBN   978-0123814609.
  28. Deutsch, D. (2019). Musical Illusions and Phantom Words: How Music and Speech Unlock Mysteries of the Brain. Oxford University Press. ISBN   9780190206833. LCCN   2018051786.
  29. Patel, Aniruddh (2008). Music, Language, and the Brain. Oxford: Oxford University Press.
  30. Tanguiane (Tangian), Andranick (1993). Artificial Perception and Music Recognition. Lecture Notes in Artificial Intelligence. Vol. 746. Berlin-Heidelberg: Springer. ISBN   978-3-540-57394-4.
  31. Tanguiane (Tangian), Andranick (1994). "A principle of correlativity of perception and its application to music recognition". Music Perception. 11 (4): 465–502. doi:10.2307/40285634. JSTOR   40285634.
  32. Tanguiane (Tangian), Andranick (1995). "Towards axiomatization of music perception". Journal of New Music Research. 24 (3): 247–281. doi:10.1080/09298219508570685.
  33. Tangian, Andranik (2001). "How do we think: modeling interactions of memory and thinking". Cognitive Processing. 2: 117–151.
  34. Lerdahl, Fred; Ray Jackendoff (1996). A Generative Theory of Tonal Music. Cambridge: MIT Press. ISBN   978-0-262-62107-6.
  35. Katz, Jonah; David Pesetsky (May 2009). "The Recursive Syntax and Prosody of Tonal Music" (PDF). Recursion: Structural Complexity in Language and Cognition. Conference at UMass Amherst.
  36. Hamanaka, Masatoshi; Hirata, Keiji; Tojo, Satoshi (2006). "Implementing 'A Generative Theory of Tonal Music'". Journal of New Music Research. 35 (4): 249–277. doi:10.1080/09298210701563238. S2CID   62204274.
  37. Tangian, Andranik (1999). "Towards a generative theory of interpretation for performance modeling". Musicae Scientiae. 3 (2): 237–267. doi:10.1177/102986499900300205. S2CID   145716284.
  38. Uwe Seifert: Systematische Musiktheorie und Kognitionswissenschaft. Zur Grundlegung der kognitiven Musikwissenschaft. Orpheus Verlag für systematische Musikwissenschaft, Bonn 1993
  39. "Music Therapy for Health and Wellness". Psychology Today. Retrieved 21 June 2013.
  40. "How Music Helps with Mental Health – Mind Boosting Benefits of Music Therapy". www.myaudiosound.co.uk. Retrieved 21 May 2019.

Further reading