
Notes and Neurons: In Search of the Common Chorus

Is our response to music hard-wired or culturally determined? Is the reaction to rhythm and melody universal or influenced by environment? John Schaefer, scientist Daniel Levitin, and musical artist Bobby McFerrin engage in live performances and cross-cultural demonstrations to illustrate music’s noteworthy interaction with the brain and our emotions.



MCFERRIN: (Vocalizing)  

[00:00:05] MCFERRIN: (Vocalizing)  

[00:09:24] MCFERRIN: Okay. I got to sing a blues tune. I have to.

MCFERRIN: (Singing)

JOHN SCHAEFER, RADIO HOST: Well, I’m not sure what we can follow Bobby McFerrin with, but we’ll come up with something. The program is Notes and Neurons. And what just happened here? I mean, Bobby McFerrin sat at center stage and performed a couple of pieces of music that none of us had ever heard before. And yet you were picking up on these very clear cues and were not just understanding what Bobby was doing, but actually becoming part of the process of making music. How does something like that happen? How does our brain know how to process and decode the cues that Bobby is giving you? That’s what we’re going to be talking about on the stage here tonight with a very distinguished panel that of course includes, to my far left, the one and only Bobby McFerrin. To his right is Lawrence Parsons, who’s a neuroscientist at Sheffield University in the UK. Daniel Levitin is a neuroscientist at McGill University in Canada and the author of two best-selling books, This Is Your Brain on Music and The World in Six Songs, which comes out in paperback next month. And Jamshed Bharucha, to my immediate left, is the provost at Tufts University, a neuroscientist, and my guest just this afternoon on WNYC’s Soundcheck. So Dan, let me start with you. How does something like this happen? Why does neuroscience concern itself with questions like this these days?

[00:15:02] DANIEL LEVITIN, PSYCHOLOGIST: It might not be obvious why a neuroscientist, someone who studies how neurons communicate with one another and how they assemble into circuits that govern the brain, would be interested in music. But the answer is that neuroscience is interested in human behavior in all of its aspects and all of its manifestations, and one might argue that there are few things more human than music. Music is a very important part of our lives, and it has been part of the daily lives of humans as far back as we know, as far back as you can look in history and pre-history. And neuroscience has, just in the last 20 years, developed the tools necessary to really look at the living brain in action and ask some of the most interesting and thought-provoking questions about where music happens in the brain, and how.

SCHAEFER: Jamshed, we like to say music is the universal language. Is it truly universal, or is it a series of dialects where, you know, it might be difficult for a person from one culture to understand the quote-unquote language of the music of another?

JAMSHED BHARUCHA, COGNITIVE NEUROSCIENTIST: It is often said that music is the universal language, particularly of the emotions, and there’s some truth to that, but it’s not entirely true. There are aspects of music that are clearly culturally very, very specific, aspects that really only people who have grown up in that culture, or have been immersed in it, can share.

SCHAEFER: Bobby, the blues is specific to a culture. But what you started with tonight could have come from anywhere or nowhere. I mean, are you sort of consciously going for something universal?

BOBBY MCFERRIN, MUSICIAN: No, I’m not really sure exactly what’s going to happen in the very beginning. The joy that I get from simply watching things as they come out and develop is really what I like to begin with. I like to start some place that is unfamiliar to me and simply watch where it goes. I’m sure I’m going to learn quite a few things tonight listening to you all talk, but I have a feeling that most everyone here has, some place inside of them, a sincere desire to participate, to become part of something, which is a very strong need, I think, in all of us: to be part of some kind of community, to have some kind of relationship, whether it’s with another person or with music. And so I find it very, very easy, regardless of what country or culture I’m in, to give an invitation for people to sing, and they readily jump on it. They might be a little hesitant at first, because usually in performances, you know, the artist does not invite a room of strangers to just come in and play. They might be a little hesitant, but I sincerely believe that everyone wants to, and that gives me tremendous joy.

SCHAEFER: Jamshed is that one of the universals, is that one of the things that you can, that we can agree is a universal?

BHARUCHA: Oh, absolutely. One of music’s primordial functions is to breed and foster social cohesion, and really get people’s brain states aligned so that they can form a larger community than themselves.

SCHAEFER: Larry, one of the things Bobby just said was, you know, somewhere inside them people want to be involved. When we get involved in music, we often think that the part of our body that’s getting involved is a little bit lower than the brain, and it can be different body parts depending on the different types of music. Can you explain to us (it’s a family program and we’re going to keep it that way) how it is that the brain is really what we’re talking about when we talk about how we respond to music?

LAWRENCE PARSONS, NEUROSCIENTIST: It does remind me a bit of somebody who said, maybe it was Woody Allen, that the sexiest organ in your body is your brain. Well, the same is true of music, actually. And that’s probably why neuroscientists are so interested in it, neuroscientists being interested in the central nervous system, which is the brain. What we’ll do is just go through a few things that we’ve learned about the brain basis of music, music in all its possible manifestations. And when we say music, as is probably clear from what we just saw from Bobby, we mean music and dance. There’s a physicality to it, and also a participatory element and a social element to it. But just looking at a single person’s brain in order to understand this, we can summarize on this slide, which is obviously a schematic summary of the different regions of the brain that are involved in this.

[00:20:11] PARSONS: And two basic things are important to keep in mind. One is that there is no single part of the brain that’s small and specialized for all the things that we call music. And the second is that almost every part of the brain as we know it, including the central as well as the peripheral nervous system, or more technically the cortex, the subcortex, and the cerebellum, all of those areas, both left and right in each of us, gets involved in some way in some element of some kinds of musical experiences. So in a single phrase, music is a whole-nervous-system activity. Even when you’re listening in the most passive way, you’re often, as we say, implicitly involved with your whole body in the musical experience. And when you’re playing, even if you’re doing something of the sort that those folks over there are going to do later, or have been doing, even then you get your whole body involved. But your body is only organized by the activity of the cells in the brain, which Jamshed mentioned earlier. So to briefly summarize, we’ll walk through the different stations, which all interact quickly and dynamically over time, more or less in what we call parallel, all circulating and activating at the same time dynamically, to create the things that we think of as music, dance, and all the emotional and social experiences that go with them. So what we have here are areas of the brain that are specialized for auditory processing, for physical body movement, for visual processing; visual is a part of the music experience. Your own personality, the memories and associations you have with the sounds that you hear, the expectations you have; as you’ve learned to understand the music that you’re familiar with, you have a sense of where it’s going, what will happen next.

And you’re surprised often, you’re intrigued, you’re seduced. And then there’s a wide range of emotional experiences and manifestations related to music. Music speaks a particular kind of emotional language that’s different from other emotions. You can have emotions when you meet the first love of your life, when you’re hungry, a wide range of kinds of emotions. Music seems to tap partially into those kinds of emotional experiences, but then it has other kinds of components: deep (primordial is a good word), deep social and primordial, abstract kinds of emotions. So to go through it just briefly: this is a small section of immediate auditory regions, and it goes all the way down here for musical auditory structure in particular. This is a set of areas here on the inside, in the medial temporal and other areas, that represent the memories and associations and past general knowledge. Here’s another set of areas, in what we call the prefrontal and inferior frontal regions, that are driving and maintaining these representations of expectancy. What should happen next, in a broad sense of planning? What do you think is going to happen? If you’re performing, you have to improvise or carry out a scripted piece of performance, and these areas are involved in supporting that kind of knowledge and performance as well. Then you have areas that are involved in organizing movement. If you’re dancing, if you’re playing the music, you’re doing both. There’s a lot of sensory-motor activity: you move your body, you sense that it’s moving, and you change the way you’re moving in order to connect with what you’re sensing and what you intended to do. All of these things happen in these different brain areas. And then I think the last is the sensory area as well, and visual representation. In this slide we’re looking at the outside of the brain, and this represents what we call the medial surface, as if you took the middle part of the brain and looked at the medial surface. And these represent, in particular, several of the basic brain areas that subserve emotion. So these areas here, these areas here, and these areas here are different aspects of different kinds of emotion. We’ll talk a little bit later about how the brain differentiates emotional states, and what we feel, as we get closer to our in-house performance.

MCFERRIN: I have a question. How do we experience music differently? Say, for example, you got tickets for a concert; you know who you’re going to go see; it begins at 8:00. That’s one kind of musical experience that most of us have had. But what’s the difference? Do we experience music differently if you suddenly come upon it, you’re walking in a park and all of a sudden you find a band playing? Do you experience the music differently, physically? Because on one hand you know pretty much what you’re going to get; you have expectations about the performance. Maybe you’re even familiar with the music, but it’s a different orchestra, or it’s a classical concert with a different conductor, whatever. Versus suddenly coming upon music completely unexpected, and it might be music that you’re unfamiliar with. I mean, is there a difference in experiencing music that way? Or do we take it in, process it, the same way?

[00:25:58] PARSONS: Well, in terms of brain function it’s all happening in the same way. You’re certainly influenced by the context and your expectations, but there’s a lot of implicit, unconscious processing, so if you hear a beat, even a non-musician will start tapping a foot. And that’s probably happening before it gets up here, to where you’re analyzing the melody and the harmony and so on. That response might go back to things like walking and basic primordial functions.

SCHAEFER: Let me just, if I can, follow up on Bobby’s question. You talked about expectations, and I’m looking at that slide, all the different parts of the brain that are at work when we’re listening to music. When you’re talking about memories and associations, expectations, I think what Bobby is getting at is: if you don’t have time to erect those in your mind before the experience of the music has hit you, does that change your experience of the music, because your mind hasn’t had the chance to erect those edifices?

LEVITIN: I think the expectations happen on two time courses, right? So there’s a long-term expectation: having bought your tickets a month in advance, you’re sort of mentally preparing yourself for the night of the concert. You might listen to the music of the performer or composer ahead of time, and you’re sort of playing in your head what you think is going to happen; this is a sort of long-term expectation. And then at the moment of the concert, whether it’s music you’ve heard before or not, whether you’re a musician or not, whether you know the difference between a C sharp and a G and an H or not, right? You’ve got these short-term expectations going.


LEVITIN: You have these short-term expectations that are sort of partly innate and partly acculturated. You don’t always know what’s going to happen, but because there’s typically a pulse, you know when the next event is going to be, and your brain is trying to form predictions about the micro-timing: are they going to play it a little early? Are they going to play it a little late? Are they going to play around with the time? All this kind of rhythmic play. We’ll talk about that, but I think the other thing that’s interesting about Bobby’s question is that it illustrates that there are really different kinds of people who come to music and life with very different mindsets. There are people who approach life in a more open and experimental way. They hear music wherever it is, and they’re not surprised by it. They hear it in the wind rustling through the trees, they hear it in the birds singing, they come upon it in the park. They were ready for it. And there are other people who have to steel themselves up for the event, right? In psychobabble we would call it a difference in openness to new experience, or in readiness to accept something at the spur of the moment.

MCFERRIN: I must say, I’m sorry, I must say that when I saw Miles Davis’s band for the first time in Los Angeles, in 1971 or whenever it was, I literally, physically felt different when I left.

LEVITIN: That wasn’t just what was being smoked in the auditorium?

MCFERRIN: No, it was a small club in Los Angeles, and I felt that physically I was different, because even though I knew I was going to go see Miles on a particular day of the week, I had no idea what he was going to play or what the group was, and I literally, literally left the club a changed person. I was never the same, because I had never heard music like that. I had never experienced music like that. I mean, Miles walked out, they improvised for an hour and, you know, played all kinds of stuff all over the place, and I felt like my molecular structure literally had been transformed.

MCFERRIN: And I never experienced music like that ever again, and I never played music that way. I mean, it made me question and reexamine everything that I ever did musically, you know, after that.

LEVITIN: You may have had a big burst of dopamine production during it. And I think your physiology did change. What neuroscientists think of as learning is new connections in the brain.

PARSONS: Well, I’ll say, you know, as an artist, as a young artist, you were prepared for that kind of experience, that change; it hit you upside the head and drove you in a new direction.

[00:30:24] SCHAEFER: Yeah. Well, if we’re going to talk about, you know, the things that might be universal or the things that might not be... Jamshed, on the show earlier today, you referred to yourself as a cultural relativist. So I guess that implies there are some things in music that are not universal, that will be specific to cultures. I suppose if we’re going to talk about these questions, we should at some point get to the basics, and that is the way sound hits our ear. Dan, in one of your books, you provocatively answer the question, “If a tree falls in the forest and no one’s there to hear it, does it make a sound?” with a definitive no, it does not. Why do you say that?

LEVITIN: Well, I think that sound is a construction of the brain. What you have in the physical world, and what a physicist can measure, are the vibrations of molecules and the intensity of that vibration. But that’s not sound the way we commonly think of it. When we think of sound, we think of being able to distinguish a car horn from a bird song, or the ocean waves from a cello. And you know, that’s sound, and it requires a brain to interpret it, I would say.

SCHAEFER: So it needs to be the action of those vibrations upon your eardrums.

LEVITIN: So I would say it’s the end of a chain of events, of neural processing, that delivers you this impression of sound.

SCHAEFER: All right, so there are lots of different types of sound waves. And of course when we hear music, we’re not usually hearing a single sound wave; we’re hearing combinations of them. And if we’re going to talk again about what’s universal, maybe we should start with the basic building blocks: the simple octave, the so-called perfect intervals, the fourth and the fifth. And you know, between the musicians, and Dan, your sax, and Jamshed, the violin, and Bobby, the whole Bobby, we should be able to demonstrate a little of what we’re talking about. So let’s start with the octave.

LEVITIN: So pitch, rhythm, and timbre are three of the fundamental building blocks of music that composers and performers use to create musical sound. And pitch is just a fancy word for changes in the frequency of a sound. Amber can play a note here for us. Now, what she’s done here to change the sound is she’s pressed halfway along the string, and it sounds higher. We call that an octave. And if you notice where her finger is, it’s exactly half the length of the string. And if she changes that ratio in a particular way, she can make an interval that we call the fifth. The fifth and the octave are two near cultural universals. Virtually every musical system that we know of, not only now but going back to the Greeks and beyond, the Hebrews, the Hittites, Arabian music, Persian music, back in the BC era, so far as we know, all had the octave, and almost all of them had the fifth. Another building block of our Western music is the fourth. Then we have another interval, the third. Can you play a descending minor third? Now, this descending minor third has some very interesting properties that Jamshed is going to talk about.
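[The string-length relationship Levitin describes can be sketched in a few lines of code. This is a minimal illustration, assuming an ideal string where frequency is inversely proportional to the vibrating length; the 220 Hz open-string value is an invented example, not a figure from the demonstration.]

```python
# For an ideal string, frequency scales inversely with vibrating length:
# halving the string doubles the frequency (an octave, a 2:1 ratio),
# and the perfect fifth is a 3:2 ratio.
def transpose(freq_hz, numerator, denominator):
    """Multiply a frequency by a simple whole-number ratio."""
    return freq_hz * numerator / denominator

A3 = 220.0  # illustrative open-string frequency in Hz

octave = transpose(A3, 2, 1)  # stop the string at half its length
fifth  = transpose(A3, 3, 2)  # stop the string at two-thirds of its length

print(octave)  # 440.0
print(fifth)   # 330.0
```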

BHARUCHA: Well, my colleague Meagan Curtis and I have done some research on whether the intervals within speech themselves can convey emotion in a way that’s analogous to the way in which music conveys emotion. When you listen to people speak, you don’t really hear music, you don’t hear the pitch in very clear ways, but it turns out that with technology today, we can analyze the voice and we can see musical intervals in it at times. And so we had actresses say short sentences with different emotions, happiness, anger, sadness, and pleasantness, and we analyzed the musical intervals. And it turns out that the descending minor third, as Amber played, is prevalent in sad speech. So this is a descending minor third.

[00:35:14] BHARUCHA: We had actresses say words like, “OK.” So you’ve heard a lot of “OK.” Now, I’m singing that, of course, but spoken naturally and analyzed by a computer, you can see these intervals come out. As opposed to angry speech, where what we found was a preponderance of ascending minor seconds, or semitones. OK. Now, the positive emotions of happiness and pleasantness did not produce any consistent interval code at all. So it’s only the negative emotions. And in terms of evolution, that makes some sense. It’s very important that you detect negative emotions, because there are consequences. You may not have to detect positive emotions.
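[The measurement behind this kind of analysis uses the standard formula for converting a frequency ratio into semitones: 12 times the base-2 logarithm of the ratio. A descending minor third is then -3 semitones, an ascending minor second +1. A small sketch; the example frequencies below are invented for illustration, not data from the study.]

```python
import math

def interval_in_semitones(f_start_hz, f_end_hz):
    """Signed musical interval between two pitches, in equal-tempered
    semitones; negative values mean the pitch descends."""
    return 12 * math.log2(f_end_hz / f_start_hz)

# A descending minor third (-3 semitones), e.g. roughly G4 down to E4:
sad_step = interval_in_semitones(392.0, 329.63)
print(round(sad_step))  # -3

# An ascending minor second (+1 semitone), the pattern found in angry speech:
angry_step = interval_in_semitones(440.0, 466.16)
print(round(angry_step))  # 1
```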

MCFERRIN: So what does this voice convey? In a world where scientists-

LEVITIN: Well, I’m glad you brought that up, because one of the other components of all this is timbre. One can perform a note, the same note, with many different timbres, and great performers on any instrument learn how to coax out these different sounds. And the official definition of timbre, for those of you familiar with the term, is that it’s everything you hear when the musician is not changing the pitch or the duration of the note. That’s the best the physicists could come up with; that’s actually the official definition. And what we mean is, when a trumpet and a cello are playing the same note, it’s what allows you to distinguish them. Or even when somebody performs a note with different tonal color. And I was thinking you might demonstrate for us some notes played with the same pitch, same duration, but with different timbres.

MCFERRIN: You mean so like. (Vocalizing) As one. (Vocalizing)

LEVITIN: And each is associated with a particular emotion.


LEVITIN: And there’s this kind of signaling that composers and performers and audience members know as part of a common cultural vocabulary: if you do certain things, it’s meant to convey certain emotions. And these are largely, but not entirely, based on metaphors from animal cries and baby cries and natural sounds. And Amber, I think, can make two different timbres on the same note, on the cello, probably. I don’t mean to put you on the spot. The final component that we were going to talk about, just briefly, is rhythm. One can play the same notes with different rhythms, that is, different temporal patterns. And Amber’s going to demonstrate that now.

SCHAEFER: So, the dotted rhythm. We’ve been talking so far about intervals, pitch, timbre, and we’ve been talking about them mostly from a Western standpoint. I mean, you know, the major and minor thirds, those are Western constructs. There are pure mathematical thirds, but we don’t use them in the West; for those you’d have to go to certain non-Western cultures. Before we leave rhythm, Jamshed: in the Indian classical tradition, rhythm seems to be built differently. There’s a kind of expanded sense of time in the way rhythm happens in Indian classical music.

BHARUCHA: Sure. There are some significant cultural differences. In Indian classical music, the rhythmic cycles tend to be more extended, and they can also be multiples of more complex numbers. In Western music, most of the rhythms, or the meters, are duple meters, in groups of two or four, or sometimes triple meters, in groups of three. In Indian classical music, you might have groups of seven, you might have groups of thirteen, longer cycles. And perhaps I can ask Naren to illustrate first a simple rhythm and then a more complex meter. (Instrumentation) So that’s based on two and multiples of two. And then maybe a more complex meter. (Instrumentation) So that’s just one cycle, and then he repeats those cycles.
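[The contrast Bharucha draws can be caricatured as accent patterns built from beat groupings: Western duple meter groups beats in twos, while a seven-beat Indian cycle can be built from uneven groups such as 3+2+2. A toy sketch; the specific groupings chosen here are my own illustration, not taken from the performance.]

```python
def cycle_accents(groupings):
    """Mark the first beat of each group in a repeating rhythmic cycle.
    Returns a string like 'X..X.X.' with X on accented beats and
    '.' on unaccented ones."""
    pattern = []
    for group in groupings:
        pattern.append("X" + "." * (group - 1))
    return "".join(pattern)

print(cycle_accents([2, 2]))     # X.X.     four beats grouped in twos (duple)
print(cycle_accents([3, 2, 2]))  # X..X.X.  a seven-beat cycle, grouped 3+2+2
```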

[00:40:44] SCHAEFER: And within the cycle, you begin to vary the strokes, and it very quickly becomes a very complex and impressive feat to play these instruments. Let’s talk a little bit about scale as well, because scale forms are quite different. If we could ask Parag: you play the Indian sarod; can you play a C major scale on that? Just a regular Western scale? (Instrumentation) Now, if we start on the same note, what happens with… I mean, let’s choose a…

BHARUCHA: Well, in Western music, both classical and popular, the major and minor scales are by far the most prevalent. But in many other cultures there are many, many other scales that are still in use. And perhaps, Parag, you could play a couple that are distinctly different from the major scale. (Instrumentation) Perhaps you could just play a short segment of music actually based on that scale. (Instrumentation)

SCHAEFER: That scale is quite foreign, a very dissimilar scale to the Western C major scale. And yet, this goes back to what Larry was talking about almost as soon as we got started: the expectation. How quickly the mind adjusts, having heard the first part of the scale, to kind of fill in the blanks. And you’ve done some work, in both India and here in the States, on this.

BHARUCHA: That’s right. So when you hear a fragment of music, the brain fills in a much larger representation that’s based on cultural expectations. And these expectations are implemented by the brain automatically and very, very fast, and they’re almost impossible to suppress. So if you hear several notes that are part of an Indian scale, but not the other notes that are typically present, the brain actually puts them in. And what Meagan Curtis and I found is that people misremember notes that are supposed to be in a scale but not actually played, if it’s from a familiar culture. So let me play one scale. This scale is called Bhairav, and it’s a scale found in Indian classical music. Now, that second note is distinctly non-Western. And what we found is that when you play a fragment of this scale that does not include that note, so, for example, Western listeners’ brains do not fill in this note. And we wanted to see if there is evidence that Indian listeners do fill in this note. And so we’ve conducted another experiment, which is going on simultaneously in India and here in the United States, and I’m going to show you a couple of clips of the experiment actually being conducted. The first clip is the experiment going on in Bangalore with a collaborator of ours, Shantala Hegde. You will first see a singer performing some music, and then you’ll see her being a subject in an experiment in which she hears a brief melody and then she’s supposed to complete the melody. And then, in the simultaneous experiment that we’re doing here in the United States, at Tufts, we have an American singer trying to complete those same melodies. And what you will see is that the American singer’s brain, you won’t see the brain, but you will see evidence that the brain is really leading her.

[00:45:10] BHARUCHA: There are a couple of singers, and the brain is leading them to expect different notes and therefore to produce different notes, but not including that second note that I played. Let’s call it the Re flat. You don’t have to know the technical aspects of music; just remember that note is the Re flat, because I’ll be talking about it a little bit later. One of the things you’ll see in these clips is that while the American singers start off completing the melody in a very Western mode, toward the end of the experiment they show a little bit of acculturation, and you’ll see one of the singers actually starting to insert that Re flat, but that’s very late in the process. So if I could have the first video clip, please.


HEGDE: I’ll be taking you through an experiment, and I will be playing a few melodies from a music device here. They’re short melodies, and I would like you to listen to each melody and complete it.

CURTIS: So in this experiment, you’re going to hear a few notes on each trial, just the beginning of a melody. And what I would like you to do is to elaborate on that musical phrase for just a few notes and end on a note that you think is a good resting point for that phrase. And so anytime you’re ready, we can start the experiment.

CURTIS: This is the program that we use to analyze the data, and what you’re seeing here is a spectrogram; it also shows the fundamental frequency contour in blue. In the analysis, we look at the pitch classes that were used in the original stimulus, and we compare them to the pitch classes used by the singer to determine whether they used the same ones. Oftentimes we find that they fill in pitches that were not contained in the original stimulus. Even if the participant has heard a melody that contains notes from the Indian scale, they are actually very good at filling in Western pitches that are a little bit inconsistent with what you would hear in the Indian scale.

BHARUCHA: So for the most part, the Indian singers were quite automatically inserting that Re flat, whereas the American singers were not. They were inserting a Re natural, which is the tone you would expect in a major or minor Western scale, except toward the end, when that one American singer started trying to sound non-Western and very quickly inserted that Re flat. And so it is possible, in periods of time much shorter than an entire lifetime, to develop some degree of acculturation.

[00:50:25] SCHAEFER: Does that say something about that individual singer or can you generalize more about a certain versatility or plasticity of our brain in terms of responding to music we haven’t heard before?

BHARUCHA: Well, let me show you an example of a brain model, a computational model, which is very theoretical, of how brain circuits might form as a result of exposure to a lifetime of music, or even a short period of music. If I could borrow the pointer, please. So each one of these ovals here represents a sort of theoretical neuron in the brain, and there are neurons that are tuned to different pitches. So here we have do re mi fa sol la ti do, just as an example of neurons that are going to become active when those particular frequencies are present in the music. At the top, we have another set of neurons that we call expectation representations. They are probably in a different part of the brain, and they represent what the brain expects, or remembers, or perhaps misremembers, as part of a gestalt or holistic representation of the musical scale. And these neurons in the first layer and the second layer are connected by all of these lines, which are synapses. Now, let’s say you start off not having any cultural exposure, and then you get some. What happens is that these synaptic connections, represented by the lines, get stronger or weaker. Some become excitatory, some become inhibitory, and the next slide shows what happens as a result of cultural learning. The red lines are the synaptic connections that have gotten stronger; the green lines are the synaptic connections that have gotten a lot weaker. And the pattern of red and green lines really constitutes the brain’s representation of culture, in this particular context of tonal culture.
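[The two-layer network Bharucha sketches can be caricatured in a few lines: pitch-class input units connect to an expectation layer through weights strengthened (excitatory, the red lines) or weakened (inhibitory, the green lines) by cultural exposure, so a scale fragment activates the scale tones that were never played. Everything below, the weight values and the single-scale setup, is my own illustrative simplification, not the actual model.]

```python
# A toy version of the two-layer expectation network: 12 pitch-class
# input units with weights shaped by "cultural exposure" to one scale.

MAJOR = [0, 2, 4, 5, 7, 9, 11]   # pitch classes of a major scale on C

# Exposure strengthens connections to scale tones (excitatory, +1)
# and weakens connections to non-scale tones (inhibitory, -1).
weights = [1.0 if pc in MAJOR else -1.0 for pc in range(12)]

def expectations(heard):
    """Given heard pitch classes, return the pitch classes the network
    fills in: learned scale tones that were not actually played."""
    activation = sum(weights[pc] for pc in heard)
    if activation <= 0:          # fragment doesn't match the learned scale
        return []
    return [pc for pc in range(12) if weights[pc] > 0 and pc not in heard]

# Hearing do-mi-sol (C, E, G) makes the network expect the rest of the scale:
print(expectations([0, 4, 7]))   # [2, 5, 9, 11] -> re, fa, la, ti
```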

SCHAEFER: If I can just ask: so in this example, if we’re talking do re mi, let’s just call mi an E for the sake of argument. If your cultural scale leads you to expect an E flat or an E natural, it also leads you to not expect the reverse. That’s what those green crossings do, lead you to not expect the other note.

BHARUCHA: Exactly. And these crossed green lines here… whoops, I’m sorry, I need to go back a slide. Thank you. These crossed lines here mean that if you hear a Re flat, you’re not going to expect a Re, and if you hear a Re, you’re not going to expect a Re flat. And the same with Mi flat and Mi. What you’ll see in a later slide is that the Western brain sometimes shows an equal expectation for Mi flat and Mi, because if you play a fragment of the Indian scale Bhairav, it thinks it’s both major and minor, and I’ll show you that in a minute.

LEVITIN: The key point here, if I could just add, is that your brain isn't born hardwired for the music of a particular culture. It's not because you were born in India or in the United States that you're more likely to learn one or the other. As far as the brain is concerned, the brain is essentially configured to learn any of the world's musics, or any of the world's languages. It's exposure. It's the actual input that influences the shape that the neural circuits take.

BHARUCHA: Exactly. This particular neural net model can potentially learn any cultural system. And in fact, we've done simulations using this model of the ancient Greek modes, which used to exist in the West but no longer do, to try to understand how the ancient Greeks might have heard music and the kinds of expectations they might have had. So now, if this is the network that has learned the Indian scales, and you present it with a fragment that's just mi fa sol la flat, it expects the re flat, up there at the top. If you instead have- Sorry, this is so big in the back, it's hard for me to pay attention.

SCHAEFER: Cover one eye and read the bottom line.

BHARUCHA: We see the Indian brain expecting the re flat based on the fragment that has been played. OK, so the expectation is up there in the left-hand corner. Now, a simulation of Western exposure would be in the next slide, where the network has been exposed only to major and minor scales. You don't see the same sharp crossing of the green lines here, for example. Now, what if you expose this Western-trained neural net to the same fragment, mi fa sol la flat? You don't get the re flat expected at all; you still get the re. And as I said earlier on, you get some expectation of both mi flat and mi. As you can see, both of them are slightly colored red, which means there's a little bit of ambiguity. As soon as the Western computer simulation hears the flat la, it thinks, aha, that must be minor.

MCFERRIN: So Julie Andrews didn't expect "doe, a deer, a female deer."

BHARUCHA: Could you sing that now with a re flat? So this is just a computer simulation of how the brain actually learns a culture. The brain is the organ of culture, and it's like a sponge. It really absorbs the patterns that are in your culture, encodes them in the brain, and then uses those encoded patterns as filters for the music that you hear.

SCHAEFER: And I guess one of the great difficulties in a world that has been changed by Internet access and, you know, jet-age travel, is that it's very hard now to find a culture whose people have been so isolated that you can actually expose them to Western music for the first time and do some kind of real theoretical and practical work.

BHARUCHA: Nowadays, if you want to do cross-cultural work and find a tribe that has never been exposed to music from outside, you come to the United States.

SCHAEFER: Well, it really goes back to both expectation and the quality, Dan, of the intervals as well. I mean, you know, those minor seconds, which can denote anger in some cases, played together as this kind of crushing dissonance. And it raises the whole question of what is consonant and what is dissonant. Is that distinct from culture to culture?

LEVITIN: It's hard to say, because there are some cases of minor seconds where they don't sound aggressive. When Thelonious Monk plays them, they sound cheerful and almost comical. It's not just that you're playing a minor second; you're playing it within a certain context of melody, harmony, rhythm, tempo, a certain timbre. Now, Tom Fritz's experiment is very impressive, and it has given an answer that many of us have wondered about for a long time. But going the other direction: I've been listening to a lot of Chinese opera, and I can't tell what emotion is being represented in it at all. I like the music and I find it interesting and beautiful, but I don't know, maybe it's just me.

SCHAEFER: It’s not just you. I have the same, you know, maybe it’s just the two of us.

MCFERRIN: Can I try something? Talking about expectations. Expectations.

MCFERRIN: What's interesting about that is that regardless of where I am, every audience gets it. It doesn't matter, you know. It's just the pentatonic scale, for some reason.

[01:01:33] LEVITIN: If you’re looking for a job in neuroscience.

SCHAEFER: Just to phrase the question scientifically, Larry: what the hell just happened? Expectation, I mean. How does the brain know that? And how does it fall into the pentatonic scale, of all the intervals that are out there?

PARSONS: On the one hand, he takes a spatial metaphor, which is essentially the keyboard, and he builds it up. Notice he starts with a small interval and then slowly expands it out, and the audience sort of builds their expectations about what they're going to sing. And then Bobby builds a melody, harmonizes, and improvises on top of it. So it starts with some shared knowledge and shared expectations, and then it's improvised collectively. I think that's how we would think of it as a behavior; that's the behavioral side. And then there are elements of joint action, shared cognition.

BHARUCHA: It's also related to motion. Music is integrally related to movement, and I think the fact that Bobby was so clear in his movements really enabled the audience to establish that spatial representation in a kind of solidified way.

SCHAEFER: If this were not New York City, if this were Bangalore, and people were expecting that second to be flatted, would they have been singing a slightly different tune?

PARSONS: Well if he moved in the same way they’d be singing the same tune but with slightly different syllables.

LEVITIN: So I wonder if you’ve ever jumped back to make the sharps and flats.

SCHAEFER: Let me raise two other related questions. One is people who grow up in a culture with a pitched language like Chinese, where the pitch of speech will change the meaning of a word. And the other is people who grow up in a culture with a click language, where there's a rhythm, like Xhosa and those other southern African languages I can't even pronounce the names of.

PARSONS: The original languages.

SCHAEFER: So in both of those cases, there is music inherent in the speech, isn’t there? Or at least there’s musicality. I mean there’s pitch in one and there’s rhythm in the other.

PARSONS: Music, or what we call prosody (tone changes, tone contours), is implicit in speech. I mean, the general thing is that music, in its primordial or evolutionary path, was deeply embedded in dance and mimicry. So before language, maybe, we would mimic animals, mimic physical events, mimic our own actions, mimic the gods, whatever sorts of things we needed to dramatize. We would find ways to convey them. And there would be this very broad, rich bandwidth of music, dance, language, protolanguage, gesture, facial gestures, and it would all be participatory. This complicated soup of shared human social experiences is the source of human culture, in some ways. When we think about a music concert, we just take a little slice of that, in a sense, and pull it out. And then there's an audience that's supposed to sit there. You know, Bobby's an exception. There's an audience that sits there passively while some performer, either a music performer or a dance performer, does his thing or her thing while they watch the expert. And that's, evolutionarily speaking, a very unnatural situation.

[01:05:48] SCHAEFER: That's a question for another panel, and maybe another World Science Festival, but, you know, how and why did we begin separating out all these different components of what seems to have been a kind of proto-communication that involved the body and speech and music and whatever? Can we look at infants as a kind of tabula rasa, a blank slate, to give us some clue of what early human communication might have been like? The assumption is that their brains don't have the greens and the reds yet. Everything is open; all possibilities are open, right?

PARSONS: It's important to keep in mind that humans are special. No other creature really cares about consonance and dissonance. The things we prefer, we prefer because we're human; there's nothing in the sound itself that other animals, other species, would prefer.

BHARUCHA: I would be a contrarian here. I don't think these data show that it's hardwired. I think they show that it could have been learned very, very early, but we don't know for sure that…

SCHAEFER: In utero, for example. How would that happen? Just through exposure to…

LEVITIN: The thing that's difficult about doing experiments, in physics or in neuroscience, is controlling all the variables. In the developing fetus, by the age of 12 weeks the auditory system is fully functional, and it can hear sound through the amniotic fluid. It sounds like what music sounds like if you put your ears underwater in the bathtub: you get mostly low-frequency information, but that's enough to track chord progressions and bass notes. So just-born infants have already had months of exposure.

SCHAEFER: It would be great to be able to do an experiment like this right here, right now. We can make a step in that direction. Bobby, you’re going to get hooked up to, to do…

MCFERRIN: I’m gonna get hooked up.

LEVITIN: Like you were that night of the Miles Davis concert.

SCHAEFER: Dan, you talked about having to control variables and how difficult that can be. So what are the ways in which we measure some of the things you're talking about in these experiments? I mean, for example, what we're about to do now features galvanic skin response, things like that. Is that sort of the general… Larry?

PARSONS: What I'm describing, at least initially, is a way of understanding how the brain represents the emotions you experience in music, and the way we've been studying it as neuroscientists is on this slide. Yeah. Well, before that, I'll just mention briefly a little bit about evolutionary theory. The reason that something is pleasing to us, as an animal, as a human, is because it's good for us and our survival. So the reason we have these positive associations, and maybe this whole range of emotion, when we listen to music and dance, is because those experiences are important to our survival. Things that are important to our survival are linked by our history into our reward system, to make us seek them out and experience them again, and again, and again. So why is music adaptive? Or is it adaptive? This is a point of controversy, and you'll see, if you see the film, that there are different views about it. There are speculations; there's not yet clear evidence for how to decide whether it's adaptive or not, but these are the kinds of speculations. People speculate that music allows us to regulate our emotions together, and that's a way of harmonizing ourselves socially as a group. That it allows us to choose our mates. That it allows us to bond with our offspring, the ones we're caring for as caretakers. That it allows us to cohere as a group before a battle or hunt or some other kind of ritual we might have. That it manages situations of social complexity, where it's ambiguous or difficult to figure out how to proceed.

[01:10:27] PARSONS: We can use music in that sense. Coalitions: we signal to other groups, by singing and dancing together in a coherent way, that we're organized, that we can't be screwed with, that you can't come and take our belongings or our land, because we're a coherent, signaling group. And then finally, the most interesting from a neuroscience point of view, is that music is a source of play, and play allows different parts of the brain, different neural systems, to wire up at critical periods of development, and those things confer an advantage. There's a set of findings on the benefits of being a musician early in life for intellectual development and intellectual abilities later in life that tap this idea. So these are all speculations about how music has an adaptive nature. So it's adaptive, and then it has an emotional component as well. And the next slide is an example of some of the kind of brain activity you see. Again, this is just average people lying in a scanner, listening to music from 1920s Turkey that they'd never heard before but really liked. And as a consequence, you get a set of brain areas activated that are active for things like reward situations: drugs, sexual experiences, food, all kinds of gratifications. And here they are, active just for novel music you've never heard before, a bit like walking along and hearing some music you didn't expect. You can also plot the kinds of brain areas that are active when you have the experiences that are sometimes called chills, chill experiences, where you have this kind of shiver effect with something you really care for, and you get all the emotional areas.

PARSONS: And then you get the somatosensory areas that represent motion and sensing your body move, and memory, all of these kinds of areas that are richly embedded. And then finally, an important feature about emotional experiences in general, which shows up in music as well, is that we can distinguish subtle changes between, and we're just choosing gross verbal terms here, happy, sad, angry, and danceable. There's a whole range of terms you could use to describe music, but it's much more complicated than that, and language fails to describe musical experience. Yet people can distinguish these subtle emotional states, and their discriminations depend on subtle differences in brain activity that we can all feel, even if we can't always describe them. Scientists, people like me and the rest of the three of us here, work at trying to distinguish how those things happen in the brain. So they happen in the brain, but they're also connected from the central nervous system to the peripheral nervous system. Changes in the peripheral nervous system affect perspiration and other properties of your skin, which can be detected by what's called the galvanic skin response, GSR. And what we're going to do right now is wire up Bobby and two other volunteers and let them listen to some music. While they listen, we're going to show you an image of the skin conductance for each of them, in three separate panels on the screen. Then we'll chat with them about their experiences, about what they felt, and their verbal descriptions of it. And we'll go back and forth between them across the four different samples of music and give you an informal demonstration of the kind of objective scientific measures that we use to study music and emotion.

SCHAEFER: All right. So shall we-do we have our two volunteers? Have they been dragged kicking and screaming from the audience up onto the stage yet?  All right. Here we go.

[01:15:37] LEVITIN: You might ask yourself, why don't we just ask people what they're feeling when they listen to music? Why all these wires and all this other stuff? One reason is that people often try to please the experimenter; they say what they think the experimenter wants to hear. And for the most part, we trust that people are giving us honest answers. But if we can find something that is not under conscious control, that's often considered to be a purer measure. And for the most part, people can't control how much their hands sweat; their hands sweat or they don't. It's part of what we call the autonomic nervous system for a reason: it's automatic. Unless you're a true Zen master, it's not something you can typically control.

SCHAEFER: And so in addition to the skin response, pulse rate? Is that also…?

PARSONS: Breathing, pulse rate. There’s a range of things we can use which are outside of the brain which reflect these changes.

LEVITIN: That are governed by automatic processes. EMG, whether you're tensing your muscles or not, subconsciously. It's part of the fight-or-flight response, which the galvanic skin response is also part of. This is the sympathetic nervous system gearing up to take some action.

SCHAEFER: And the chill effect, that shudder that runs down your spine when you're really, really lucky. Is there a way to measure that? Aside from the images in the brain, is there a skin response or pulse response that goes with that?

PARSONS: There is probably a pulse response. Nobody's studied it closely, as far as I know, and we don't understand why it happens. It's obviously a complicated experience, and as far as I know, people like John have figured out what seems to be a musical undercurrent to it, but no one understands why it actually occurs.

SCHAEFER: So Mitchell’s doing the galvanic skin response. Is he measuring pulse rate as well?

MITCHELL: No, we’re just doing GSR.

MCFERRIN: And why the headphones?

MITCHELL: I wanted to create a little bubble for the subjects so that there’s no distractions.

MCFERRIN: We’re going to hear it too, right?

MITCHELL: We’re going to hear what they’re going to hear except for the white noise that they’re hearing right now. So, Bobby, please enjoy

SCHAEFER: Enjoy your white noise, Bobby. What’s the point of the white noise, Mitchell?

MITCHELL: The white noise is, I'm basically starting the recording right now, and I am recording their baseline physiological state, because if we want to do comparisons between our three participants, we need to normalize for some physiological differences. One thing I want to point out is that with GSR there's a little lag involved. From the time our participants hear something and process it, to the neural signal getting to the fingers and actually producing sweat, takes about one to three seconds. So keep that in mind when you're looking at the signal. Can you play track one?
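The baseline-and-lag bookkeeping Mitchell describes can be sketched roughly like this; the readings, sample rate, and function name are hypothetical, not the actual rig:

```python
def normalize_gsr(signal, baseline, lag_samples=0):
    """Express a GSR trace relative to the subject's white-noise baseline,
    optionally shifting for the ~1-3 s conductance lag, so that different
    subjects' physiology can be compared on one scale."""
    base_mean = sum(baseline) / len(baseline)
    # shift the trace back by the lag so values line up with the stimulus
    aligned = signal[lag_samples:] if lag_samples else signal
    # above zero = more aroused than baseline; below zero = more relaxed
    return [s - base_mean for s in aligned]

# hypothetical readings in microsiemens: a baseline recorded during white
# noise, then a trace recorded while music plays (sampled at, say, 1 Hz)
baseline = [2.0, 2.1, 1.9, 2.0]
trace = [2.0, 2.0, 2.3, 2.6, 2.5, 2.4]
relative = normalize_gsr(trace, baseline, lag_samples=2)
# relative[0] now corresponds to the first moment of the music, two samples
# (~2 s) before the sweat response actually registered at the fingers
```

This is why the on-screen bars can go below zero, as they do later in the demonstration: the plotted values are relative to each participant's own white-noise baseline, not absolute conductance.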

SCHAEFER: Can you guys hear us?

[01:20:03] PARSONS: While you're chatting, we're going to do some data analysis. Let me start with you. How did you react to that music? Had you heard it before?

AUDIENCE: I’ve heard this song before. Yes.

PARSONS: Did you like it? How did you feel?

AUDIENCE: I liked it. I mean, it makes me think of the intro to half of the 80’s movies I can think of. I like 80’s movies.

PARSONS: And how about you?

AUDIENCE: I’ve never heard of that music before. But yeah, I also liked it and I’ve found that as I was listening to it in my head, I was trying to follow the melody because there was some repetition. I liked it.

MCFERRIN: Yes, I liked it. Yeah. I found myself listening to the different textures, different voices.

PARSONS: What's interesting, the thing to notice here, is that we have a non-musician, an amateur musician, which most people kind of are, and you're from a different culture, since you hadn't heard this piece. Bobby is a musician as well as a member of our culture. So we have a mix of individual backgrounds, and presumably that's reflected in some of their responses, in what they say as well as… So this is the average response during the last piece. You can see this is the response here, Bobby's response here: the average over the whole interval of listening to the music, with the response as the middle value. So now, with a little bit of preparation.

SCHAEFER: Can I just ask, what is that response measuring?

PARSONS: It's sort of an excitability measure, in a sense; it's just one dimension. Are you surprised? I mean, the sorts of things Bobby described as an expert listener are not reflected in this signal. This is a response to the emotional content. And of course, as members of the audience, you all had an idea of what this music might be about, but as we go through the other selections, you'll see there's an experimental design behind the contrast between the different pieces. So why don't we go to the next one. We'll just play round robin again. So, what kinds of feelings or emotions or reactions did you have to that piece?

AUDIENCE: I guess a contemplativeness tinged with a little bit of stress. I listen to classical music station when I study, and I am taking the GRE in three weeks.

PARSONS: You’re taking a test.

AUDIENCE: So I was actually listening to some music earlier today while studying for the GRE.

PARSONS: So you can see that there are very specific associations which are kind of unpredictable to particular pieces

AUDIENCE: For me, I play a lot of classical pieces on the piano, so I guess I can say I'm more familiar with piano sounds, and I enjoy them very much. I think it was very soothing and made me happy.

PARSONS: Do you actually know the piece?


[01:25:09] MCFERRIN: I enjoyed that a lot when I was a kid. Well, first of all, I grew up in a home where there was a lot of classical music; both parents were classically trained singers. And whenever I was sick, my mother would give me two things: she'd give me music and medicine, because she knew that the medicine would take care of the aches and pains, and the music would get me to relax and focus, you know, take my attention away from how bad I was feeling. So music was like medicine, in a way.

PARSONS: So when you were listening here, it brought up those memories. So you can see quite different experiences. Let's see how it looks. If you haven't noticed, all the bars are going below zero.

MITCHELL: That's right. So this is with respect to the white noise; the white noise is somewhat of a neutral state. We can see here that, across the board, our participants were actually relaxing while they were listening to this piece.

PARSONS: We can say that for this one he's the extreme, whereas Bobby was the extreme for Prince's "Delirious." Any other questions? So we'll move to the next selection. How badly does this interact with the GRE experience?

AUDIENCE: Probably not well.

PARSONS: What were your reactions to that?

AUDIENCE: Irritating noise.

PARSONS: Was it music?

AUDIENCE: Debatable.

PARSONS: Was it familiar?

AUDIENCE: No, it wasn’t.

PARSONS: So irritation was the core feeling you had. Did you have a sense of it going anywhere, like resolving? Did you have an expectancy of it moving in a particular direction, or meaning something to you?

AUDIENCE: No, I wouldn't say that. I was hoping it would move towards…

PARSONS: Do you think the composer had an idea that he was trying to convey or she was trying to convey?

AUDIENCE: Probably.

PARSONS: Did you recognize something in it? If you had one word. So you didn’t recognize it. OK, well let’s, let’s move on. How did you react?

AUDIENCE: I pretty much agree with him that it was an annoying, irritating sound. I was just waiting for it to end.

PARSONS: So irritation that’s the keyword there.

AUDIENCE: Well, in the beginning when it first started, I was surprised. It wasn’t a pleasant feeling for me I guess.

PARSONS: Was it shocking or did you feel…

AUDIENCE: I was expecting sort of a melody or music of some kind, but yeah, I would say shocking and surprising in a not pleasant way.

MCFERRIN: I was waiting for…

PARSONS: So we had an idea of the kind of emotion that piece of music would elicit. Except for the one cultural reference, none of the people actually used the sort of canonical term we were expecting, but they seemed to concur on the reaction in some ways. So let's see what the data look like on this from the GSR.

[01:30:25] PARSONS: So again, they're excited, in a way. There's a difference between the three, judging by this, but you can see it's quite different from what we saw for the contemplative piece that preceded it.

MCFERRIN: Yeah. I have a little headache.

SCHAEFER: So this is the same scale as the first piece. But all three are much higher up than they were in the first piece.

PARSONS: Yeah, I think so. After the fourth selection we'll compute some statistics that average across the three listeners, and then show you the difference between each of the four selections and their responses, so we get a clearer sense of how they vary. But now we'll go to the very last of the pieces. Start again. Give us your reactions to that.

AUDIENCE: I liked it. It's different from anything I normally listen to, and I wasn't quite sure what to expect at any point. It was very peaks-and-valleys, but it was really nice.

PARSONS: And did you have a sense of what feelings the singer or the composer was conveying?  

AUDIENCE: I think it sounded mostly happy, but I could be wrong.  

AUDIENCE: I agree with him for the most part, but I sort of recognized that it might have an Asian element, possibly Chinese opera.

PARSONS: You haven’t heard it before?

AUDIENCE: I’ve heard Chinese opera before, but I wouldn’t say that I’m really familiar with that.

PARSONS: Could you read the emotion or the changes in emotion as you heard it?

AUDIENCE: Hmm. Not really.

PARSONS: Could you understand any of the language?



[01:34:05] MCFERRIN: Well, I found it interesting. I'm not sure if I liked it or not; I'd have to listen to a little more of it to see where it was going to go. It was a little bit irritating in a way, but at the same time I actually wanted to hear more of it, because I wanted to hear where it was going to go. So I was just curious about it. If it had gone on the same way, with the same intensity, for five or ten more minutes, I think I would have gotten tired of it.

PARSONS: Yeah. So you can see, again, he's listening for the musical structure, maybe because it's quite novel for his listening experience and the musical side of the production is of interest to him, whereas you two seemed to have slightly different reactions again. So let's look at the average for this one, and then we'll pause briefly and let him check it out. OK. So one thing we might notice is that, for the first time, Bobby shows the least response overall; his is the lowest of the responses for this last piece. So again, there's this quite specific interaction, as we say in science, between each of their individual backgrounds and makeup and the particular piece, and it changes across the four different kinds of pieces, which we thought of as having different cultural and/or emotional valences, or general tones, apart from all the musical subtleties of listening to the piece. So now, if you can compute the average. We're approaching the end of our event tonight, but we're going to show a little bit of the average across the three.

MITCHELL: These were the more emotionally relevant pieces; the other one was more cultural, right.

PARSONS: So we thought that Prince's "Delirious" would be something danceable, very physical, or sort of, you know, sexual. I'm from Los Angeles. This piece, of course, was meant to be more contemplative, peaceful, slightly sad. And then, of course, the stressful or angry one, I'm not sure exactly what single verbal label we had in mind there, was quite different. But averaged across our three listeners, who have different experiences and different reactions, sometimes shared, sometimes not, you can see that we get a distinct pattern, showing again how scientists approach the problem: studying both people's overt, conscious awareness of their emotional reactions to music and these subtle, implicit, as we say, reactions to music. So I think we're done with that.
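The cross-listener averaging Parsons summarizes amounts to something like the following; the numbers and piece labels are made up for illustration, not the measured data from the demonstration:

```python
# Hypothetical mean baseline-relative GSR values, one per listener per piece
# (positive = more aroused than the white-noise baseline, negative = more relaxed)
responses = {
    "Delirious":     [0.40, 0.15, 0.55],
    "contemplative": [-0.20, -0.10, -0.30],
    "noise piece":   [0.60, 0.50, 0.45],
    "Chinese opera": [0.35, 0.30, 0.05],
}

# Average across the three listeners to compare the four selections
averages = {piece: sum(vals) / len(vals) for piece, vals in responses.items()}
most_arousing = max(averages, key=averages.get)
# With these made-up numbers the noise piece averages highest, and the
# contemplative piece is the only one below the white-noise baseline.
```

Collapsing the three listeners to one mean per piece is what lets the panel compare the four selections directly, at the cost of hiding the individual differences (Bobby's low response to the opera, for instance) that the discussion keeps returning to.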

SCHAEFER: Mitchell, thank you very much. Thank you guys too.

SCHAEFER: We've been talking a lot about music, and we're going to conclude with some actual music. This will give a couple of our panelists a chance to join the musicians, whom I have not properly or formally introduced so far. So let me introduce you to our cellist, Amber Docters van Leeuwen. The tabla player, Naren Budhakar. And Parag Chordia plays the sarod. Our final performance will necessitate a little bit of musical chairs: Dan and Bobby are gonna move over here, and Jamshed and I are going to move over there, and we'll have a little musical encore.

[01:39:19] SCHAEFER: Now we all know what to listen for, right?  


SCHAEFER: Thanks to the musicians and our panelists, Larry, Dan, Jamshed, Bobby, of course. I want to thank you folks as well for coming to our panel tonight. It’s been much more than a panel. Much more we could talk about, but we’ll leave it there for the moment. Again, thank you all and good night.
