From Course 9.71
In this paper, Koelsch and colleagues (2002) aimed to investigate the neural correlates of music. In particular, they wanted to test whether the cortical network underlying music processing overlaps with the regions involved in language processing. They noted that some single temporal and frontal areas related to language had been found to be involved in music processing, but only for “one-part stimuli” (melodies). The network comprising both Broca’s and Wernicke’s areas had not yet been linked to music processing. Koelsch and colleagues speculated that the use of one-part stimuli in previous experiments was the reason these areas had not appeared active during music processing. Therefore, they used multipart stimuli (chords) in an fMRI study to investigate the neural correlates of music with respect to the known “language network” in the human brain.
Koelsch et al. (2002) designed four experimental conditions using chord sequences: in-key, clusters, modulations, and deviant instruments. They based their conditions on the principles of Western tonal music (the major-minor tonal system). Thus, the in-key condition was simply a sequence of chords in a major key, the cluster condition was a dissonant tone-cluster, the modulation condition was a sequence of chords in the minor key, and the deviant-instrument condition consisted of major chords played by an instrument other than piano. The subjects, ten non-musicians, were instructed to detect deviant instruments and clusters, pressing a right button when no cluster had occurred since the last response and a left button for deviant instruments. The researchers designed the experiment this way to focus the participants’ attention on the tonal and harmonic aspects of the stimuli. In addition, the detection of clusters required no direct motor response, and the task-irrelevant modulations were investigated as well.
The analysis of the fMRI data in this study revealed that the clusters, deviant instruments, and modulations activated a broadly similar cortical network relative to in-key blocks. The authors noted that the regions in this network are also well known to be involved in the processing of language. This result, in combination with some previous studies, led Koelsch and colleagues to conclude that the cortical language network is less domain-specific than previously believed.
The first shortcoming I noticed in this conclusion is that, even assuming a correct analysis of the fMRI data, there is not enough evidence to claim that this “neural activation overlap” means that language and music processing share the same network. For instance, two physically distinguishable networks of neurons could be spread across very similar regions of the brain without being the same ensemble. Alternatively, some neurons in these regions could act as integration centers for both the language and the music networks. In neither case, however, would the language network itself serve to process music.
Moreover, even if the idea of a single network for music and language processing seems reasonable in terms of a system for the acoustical analysis of tones, their appeal to musical semantics to explain the activation of Wernicke’s area in cluster, modulating, and deviant-instrument sequences seems weak. This is because fluctuations of chords and instruments do not have a predetermined meaning in the way that signifiers in a language do. Similarly, Koelsch and colleagues concluded that the activation of BA 44 resulted from the syntactic processing of dissonant and minor chords. It is true that there are rules governing the construction of melodies in Western music, but deviant instruments also activated the areas attributed to music-syntactic processing, which makes this a weak argument as well.
With respect to the group analysis of the fMRI data, a one-sample t-test was performed across the contrast images of all subjects. The problem with this type of analysis is that averaging across participants, whose anatomy is not exactly the same, blurs brain activations and can generate a false overlap of closely adjacent responses that do not actually overlap. A better approach, in this case, would be to define regions of interest in each individual and then compare across subjects. Furthermore, a multi-voxel pattern analysis would help identify the fine-grained patterns of neural activation for language and music within overlapping regions, testing whether the networks for these two domains are distinct or shared.
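The point about multi-voxel pattern analysis can be illustrated with a toy simulation (entirely hypothetical data, not from Koelsch et al.): two conditions can evoke identical mean activation in a region, so a univariate contrast sees “overlap,” while their distinct voxel-level patterns remain decodable with a simple correlation-based nearest-centroid classifier.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical region of interest with 50 voxels. Each condition has its
# own spatial pattern, but both are mean-centered so that the average
# activation (what a univariate t-test compares) is identical.
n_voxels = 50
pattern_music = rng.normal(0, 1, n_voxels)
pattern_lang = rng.normal(0, 1, n_voxels)
pattern_music -= pattern_music.mean()
pattern_lang -= pattern_lang.mean()

def simulate_trials(pattern, n_trials, noise=0.5):
    """Generate noisy single-trial responses around a condition's pattern."""
    return pattern + rng.normal(0, noise, (n_trials, len(pattern)))

train_music = simulate_trials(pattern_music, 20)
train_lang = simulate_trials(pattern_lang, 20)
test_music = simulate_trials(pattern_music, 20)

# Nearest-centroid classifier: correlate each test trial with the mean
# training pattern of each condition and pick the higher correlation.
centroid_m = train_music.mean(axis=0)
centroid_l = train_lang.mean(axis=0)

def classify(trial):
    r_m = np.corrcoef(trial, centroid_m)[0, 1]
    r_l = np.corrcoef(trial, centroid_l)[0, 1]
    return "music" if r_m > r_l else "language"

accuracy = np.mean([classify(t) == "music" for t in test_music])
print(f"decoding accuracy on held-out music trials: {accuracy:.2f}")
```

Above-chance decoding within a shared region would indicate distinct patterns for the two domains; chance-level decoding would be more consistent with a genuinely shared network.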
Another weakness of this study is that the authors did not collect their own comparative data for language. I think they should have included tasks analogous to syntax and semantics in language, which is what they were trying to test in music processing, even if such comparisons are debatable. Finally, I think it would have been better to test the hypothesis of shared networks for music and language with a simpler task. For instance, the speech-to-song illusion would have been a good start, because the stimulus stays the same while the perception changes over time. Such a task could also have prevented differences in neural response driven by acoustic differences, as is the case in many of these music-speech studies.
Koelsch, S., Gunter, T.C., v Cramon, D.Y., Zysset, S., Lohmann, G., and Friederici, A.D. (2002). Bach speaks: a cortical “language-network” serves the processing of music. Neuroimage 17, 956-966.
Peretz, I., Vuvan, D., Lagrois, M.E., and Armony, J.L. (2015). Neural overlap in processing music and speech. Philos Trans R Soc Lond B Biol Sci 370, 20140090.
Tierney, A., Dick, F., Deutsch, D., and Sereno, M. (2013). Speech versus song: multiple pitch-sensitive areas revealed by a naturally occurring musical illusion. Cereb Cortex 23, 249-254.