ROUGH EDITED COPY
EHDI
PACIFIC SALON 6/7
"MUSIC AS A TOOL FOR IMPROVING AUDITION"
PRESENTER: SHANDA BRASHEARS
3/15/16
9:40-10:10 A.M.
REMOTE CART/CAPTIONING PROVIDED BY:
ALTERNATIVE COMMUNICATION SERVICES, LLC
PO BOX 278
LOMBARD, IL 60148
1-800-335-0911
acscaptions.com
* * * * *
(This text is being provided in a rough draft format. Communication Access Realtime Translation (CART) is provided in order to facilitate communication accessibility and may not be a totally verbatim record of the proceedings.)
> Good morning. Come on in and get settled. I'm happy to see everyone this morning. That was a wonderful plenary session. I was super-excited to hear somebody from a musical family talk about American Sign Language in such a very inspiring way.
I'll give everybody a minute to come in and get settled.
Our interpreters as well.
So my name is Shanda Brashears. I'm an audiologist at the duPont Hospital for Children in Wilmington, Delaware. Let me give you a little background on why this topic is important to me. I was born and raised in New Orleans, one of the most musical places there is. Yay!
And I was born into a musical family. My parents were both born into musical families too, and they learned music like they learned English, Italian and Cajun French. They wanted me to have a little bit more of a formal education in music, so I started studying classical piano and music theory when I was five, and it paid off for my parents ten-fold, because as a church choir director, music teacher and wannabe rock star I was able to put myself through undergrad and graduate school as a musician.
So I'm going to talk a little bit about the different skills in music and language that overlap. Of course, auditory memory is one that we're all very familiar with. We all learned our ABCs with, you know, a song. The ABC song. And my kids learned all of their state capitals, which I never managed to memorize because I didn't have a catchy tune. They had a catchy tune in elementary school, and three years later they still knew their state capitals from that song. We'll talk later about why it is that we are able to remember things in long-term memory more easily when we set them to music.
So motherese is the technique that moms, and dads too, use when they speak to babies. It's a sing-songy way of talking to infants. It has a little bit of a higher pitch. There's a lot more variation and melodic intonation when you talk to a baby in this way, and a lot of research has supported that when you talk to a baby in this way, they will attend to the signal more quickly and for a longer period of time than with regular adult speech. In a moment we're going to talk more about pitch perception and the perception of rhythm and timing and how those skills overlap between music and language perception.
And fluency is probably one of the areas where music has most successfully been shown to intervene positively and treat people who have problems with stuttering. I have a couple of videos that we're going to go over at the end, if we have time. Otherwise these links are going to be on the EHDI website in my talk, where you can just go on to YouTube and search for stutterers singing. I've never seen a report of a stutterer who stutters while singing. It's virtually impossible.
We're going to talk about that in a little while.
Let's discuss pitch perception first. In speech and hearing we like to oversimplify things, and we all learned that vowels are low frequency and consonants are high frequency. But, in fact, if you look at the spectrum of an ah versus an oo, you see that the oo is not as intense and is slightly lower in frequency, but the shape of it is very, very similar. They're very similar. So if you have trouble perceiving pitch, you may very likely not be able to discriminate an ah versus an oo sound without the visual input. And the ee is also really quite similar to these other two. So a musical correlate, and a fun game that you can play with kids without having to be a musician, is to learn how to identify timbre. That's the word used in music to describe the way one instrument sounds different from another instrument. So if you have, like in this example, a trombone versus a French horn, you can have them play exactly the same note. That means that the fundamental frequency is the same, but just as in speech, the harmonics differ from one instrument to the other. This is a different way to practice the same skill of discriminating subtle differences in frequency or pitch. You can do this game with prerecorded music. You don't have to play the instruments yourselves, and you don't have to have the musical instruments in your home or in your school or clinic setting. Once a child learns what the different instruments sound like, then you can practice discriminating between them. It's fun, and it also improves their pitch discrimination for language purposes.
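To make that concrete, here is a minimal sketch in Python (my illustration, not from the talk) of two notes that share a fundamental frequency but differ in their harmonics, which is all that timbre is acoustically. The harmonic weights are made up for the example; real instrument spectra are measured, not guessed.

```python
# Two notes with the same fundamental but different harmonic weights:
# the pitch matches, the timbre does not. Weights here are invented.
import numpy as np

SR = 44100                      # sample rate in Hz
t = np.arange(SR) / SR          # one second of samples
f0 = 220.0                      # shared fundamental (A3)

def note(harmonic_weights):
    """Sum of weighted harmonics of f0, normalized to avoid clipping."""
    wave = sum(w * np.sin(2 * np.pi * f0 * (k + 1) * t)
               for k, w in enumerate(harmonic_weights))
    return wave / np.max(np.abs(wave))

bright = note([1.0, 0.8, 0.7, 0.6, 0.5])    # strong upper harmonics
mellow = note([1.0, 0.3, 0.1, 0.05, 0.02])  # energy mostly at f0

# Both spectra peak at 220 Hz, so the note is "the same," but the
# relative harmonic levels differ, which the ear hears as timbre.
for name, wave in [("bright", bright), ("mellow", mellow)]:
    spectrum = np.abs(np.fft.rfft(wave))
    freqs = np.fft.rfftfreq(len(wave), 1 / SR)
    print(name, "peaks at", round(freqs[int(np.argmax(spectrum))]), "Hz")
```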
Timing is something that we probably don't talk enough about in audiology. We measure people's ability to hear soft sounds at various frequencies, but we don't often measure their ability to discriminate subtle differences in timing. There is a test called the Random Gap Detection Test with which you can get somebody's threshold for how quick of a change they can perceive. An example in speech where you need good temporal resolution is the difference between a bah and a pah. So I used the cues for that, to give somebody that extra visual information, because if they can't hear that 65 millisecond difference at the beginning (the voiced consonant-to-vowel change in the bah versus the unvoiced one in the pah), it looks the same on the mouth. So if you can't hear that 65 millisecond difference and you're not getting a visual cue, discriminating these two things is going to be very difficult if not impossible for some hearing impaired people.
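The Random Gap Detection Test is a real clinical test; the sketch below is only my toy analog of that kind of stimulus, two tone bursts separated by a silent gap whose duration you vary to find a listener's temporal resolution threshold. All parameter values are assumptions for illustration.

```python
# Toy gap-detection stimulus: burst + silent gap + burst. The listener
# reports hearing one sound or two; the threshold is the smallest gap
# reliably heard as two. Durations and frequency are illustrative only.
import numpy as np

SR = 44100  # sample rate in Hz

def tone_burst(freq=1000.0, dur=0.017):
    """Short tone burst with 2 ms linear on/off ramps to avoid clicks."""
    t = np.arange(int(SR * dur)) / SR
    burst = np.sin(2 * np.pi * freq * t)
    ramp = int(0.002 * SR)
    env = np.ones_like(burst)
    env[:ramp] = np.linspace(0.0, 1.0, ramp)
    env[-ramp:] = np.linspace(1.0, 0.0, ramp)
    return burst * env

def gap_stimulus(gap_ms):
    """Two bursts separated by gap_ms milliseconds of silence."""
    gap = np.zeros(int(SR * gap_ms / 1000))
    return np.concatenate([tone_burst(), gap, tone_burst()])

# Step through candidate gaps from easy to hard.
for gap_ms in [65, 40, 20, 10, 5, 2]:
    stim = gap_stimulus(gap_ms)
    print(f"{gap_ms:>2} ms gap -> {len(stim) / SR * 1000:.0f} ms stimulus")
```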
I had the pleasure of working with two different string players many years ago who were hearing impaired. One had a bilateral conductive hearing loss, and his parents were so astute: they figured that if he played an instrument like the viola or violin (he learned both), he would be able to hear the instrument without any technology, by bone conduction, and this was a very rewarding experience for him and his family and the orchestra he played in. The other was a young lady who wanted to play the cello; she has a sensorineural hearing loss and is binaurally aided. The orchestra teacher did not want either of them in her ensemble until they could prove competency in their instrument. Nobody else has to prove competency to play in the orchestra, but these two kids did. I helped them out, and one of the first things I wanted to prove they could do is tune their instruments without using an electronic tuner. Of course, when teaching a kid how to do this, it's great to have an electronic tuner that shows you red if you're far away from the pitch you're trying to tune to, yellow when you're close, and green when you're there. But you can also listen for the very subtle timing differences that manifest in a phenomenon called beating. Let's say you play an A440 and you've got that target pitch you're trying to tune to. If your instrument is close to that, the waveforms, being close in frequency, actually slap against each other, and that's why we call it beating, but kids call it the wah-wahs. As you tune your instrument toward that target pitch, the wah-wah-wah smooths out, and that's how you know your instrument is in tune. I was able to teach these kids to do that; they practiced with the tuner and were finally able to do it auditorily on their own with amplification.
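For reference, the beating those kids learned to hear has a standard textbook identity behind it (ordinary acoustics, not something specific to the talk). Adding two equal-amplitude tones at nearby frequencies f1 and f2 gives

$$\sin(2\pi f_1 t) + \sin(2\pi f_2 t) = 2\cos\!\left(2\pi\,\frac{f_1 - f_2}{2}\,t\right)\sin\!\left(2\pi\,\frac{f_1 + f_2}{2}\,t\right),$$

a tone at the average frequency whose loudness swells and fades at the beat rate |f1 - f2|. So a string at 437 Hz played against an A440 reference wah-wahs about three times per second, and the beats slow toward zero as the string comes into tune.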
So are there any audiologists in the room? All right. Fantastic!
So I'm going to give you some magical musical tips for visual reinforcement audiometry. For those who don't know, this is the technique we use to test infants and toddlers in the sound booth, and what we first have to do, in order to get them to respond to the soft sounds that we're trying to assess threshold with, is to condition them to turn. So we talk to the baby, and I just really urge you to stay away from the bah-bah-bah... Hi, Johnny! Over here! Look! Good job. It's a butterfly!
It's a baby. Talk to it.
Don't go, bah... bah...
You've done motherese, super-excited, and they're still not turning. When all else fails... sing. Twinkle-twinkle. For a hyper baby, I give them wheels on the bus. Sometimes I have to try hard not to dance so I'm not visually cueing them, but you can see kids...
♪ the wheels on the bus go round and round ♪
They start doing this.
Or give them...
♪ itsy bitsy spider... ♪
They may or may not turn, but a lot of babies start giving you itsy bitsy spider fingers.
So, you know, babies will turn to speech much more quickly and readily than they'll turn to tones and they will turn to singing much more quickly than they will turn to speech. Try it in your clinic.
So then when it's time to condition them to the tones, instead of just pulsing it or giving them a continuous tone, I like to give them a rhythm, and the one that I gave you here is bah-bah-bah
So playing that, when they turn to it, with the visual reinforcer I give them... bah-bah-bah
And you can condition normally developing babies with this method in one try. If you have an infant or toddler only giving you five to ten trials, you don't want to waste three of them on conditioning. In the baby's brain, the connection between the auditory stimulus and the visual reinforcer is being made by pairing that rhythm. And then, of course, you don't want to go... bah-bah-bah every single time.
These are simple rhythms. You can change them every time or every other time to keep it novel and interesting.
Okay. Fun is over.
Just kidding.
This slide is up here to show we don't just have afferent pathways; we have efferent pathways from the brain to the ear. This is what I learned in grad school, but it's not complete, because it only has efferents going to the level of the midbrain. We've since learned that the efferents do innervate the hair cells, and we can measure the strength of the efferent auditory system at the level of the cochlea. The test we use is called efferent suppression of otoacoustic emissions. This is not used widely. We do use it at Nemours, when there's hypersensitivity to sound, to show whether there's a physiological basis to the hyperacusis; those with the disorder have an extraordinarily large amount of efferent suppression. When working at the Kresge lab, I became interested in a body of literature from France in which they were using contralateral suppression. Basically, you measure somebody's otoacoustic emissions, put broadband noise in the other ear at 65 decibels, and record the otoacoustic emissions again. In normal listeners they will reduce by 1 to 1.5 decibels. At Kresge they were developing a technique to measure not only the contralateral efferent pathway but also the ipsilateral and binaural pathways, so we do it with a noise burst before the click stimulus.
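The arithmetic of the suppression measurement itself is simple enough to sketch. The emission levels below are hypothetical numbers I chose for illustration; only the 65 dB noise level and the 1 to 1.5 dB normal range come from the talk.

```python
# Contralateral suppression arithmetic: emission level without noise
# minus emission level with 65 dB SPL broadband noise in the other ear.
quiet_oae_db = 12.4   # hypothetical OAE level, noise off (dB SPL)
noise_oae_db = 11.1   # hypothetical OAE level, noise on  (dB SPL)

suppression_db = quiet_oae_db - noise_oae_db
print(f"Efferent suppression: {suppression_db:.1f} dB")

# Per the talk, normal listeners show roughly 1 to 1.5 dB of suppression;
# larger values suggest a stronger efferent system.
if suppression_db > 1.5:
    print("Above the typical range (stronger efferent suppression)")
elif suppression_db >= 1.0:
    print("Within the typical 1-1.5 dB range")
else:
    print("Below the typical range")
```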
They wanted to show this was a new technique, and they wanted to figure out what the auditory efferent system does. So they tested a lot of normal hearing adults and found that really acute auditory skills, such as being able to pick tones out of noise, discriminating fine differences in pitch, and loudness adaptation, which is the ability to accurately perceive loudness over a long period of time, were more prevalent in people who have a large amount of efferent suppression. And by the second study they started to realize, almost by accident, that a lot of the people who had these great auditory skills, not surprisingly, were musicians. So then they said, let's just see if musicians have more efferent suppression than non-musicians, and in fact they did, on a contralateral test. And then, of course, conveniently for the musicians, another benefit of a strong auditory efferent system is protection from noise-induced hearing loss, up to a point.
So I got so interested in this that I decided to study it myself, using the ipsilateral and binaural as well as contralateral paradigms. I studied the Louisiana Philharmonic Orchestra and compared them to normal hearing, non-musician, age-matched controls, and for all conditions the musicians had statistically significantly more efferent suppression than the non-musicians. It was the most thought-provoking AAA paper for 2013, so I won an award for the one and only first-author publication I ever did. I don't know why I haven't gotten around to another one. I guess life got in the way.
So since that time, functional MRI has come out, and music has been shown to activate more different centers of the brain than just about any other activity researchers have been able to find. And you know, we think we don't like math, but actually our brains crave pattern recognition. They crave predictability, and music is organized in a very mathematical fashion that our brains latch on to and really love.
And another oversimplification that I think we all learned growing up, or when we were studying speech and hearing, or just in our travels, is that language is left-brained and music and math are right-brained. But, in fact, music is whole-brained. It lights up more parts of the brain than anything else, and this is actually why stutterers don't stutter when they sing. There is enough overlap and enough activity in the part of the brain contralateral to where language is specifically processed that if a part of your brain is impaired and causing you to stutter, and you put your speech to song, enough other parts of the brain are activated that they can sort of pull that impaired part along. Then, of course, if you have never experienced fluency, you don't know how to produce it, but if you give somebody the opportunity of producing fluent speech through song, you then have a chance of generalizing that behavior to spoken word.
There's also a lot of evidence here for music as a form of bilingualism, and hopefully we'll get to a little more about that in a moment.
And then this slide -- I'm sorry to breeze through the fMRI slide so quickly, but we don't have a lot of time, and quite frankly I'm not a neurologist and I don't understand them 1,000%. This slide is just to demonstrate that we have a lot of areas of the brain for processing melodies and other musical things that overlap with the language areas. So if you are...
[ no audio ]
> We don't always think of it that way. We can improve kids' cortical auditory function.
This is a fun concept. It's hard to find somebody who says, I don't like music. Most people like music, and maybe you don't like all types of music, but there's probably at least some type of music or singer or something about it that you do like, and the reason for that is partly physiological. Researchers have been able to show, since these fMRI studies started coming out, that our brains release dopamine; it's a pleasurable response to music that can be measured in the brain. And that mesolimbic reward system that we can tap into with music is why we're able to remember things that we learn from music for such a long period of time. Whenever we have an emotional response to something, our brains assign importance to that thing. So if we can do something that is fun and releases dopamine in our brains, we're going to do it again and again, and we're going to remember it. It's going to stick with us, sometimes for our entire lifetime.
Another interesting part about this is that when you play music for normal hearing listeners, children or adults, if it's a song that they are familiar with and that they like, the dopamine release actually starts before the pleasurable part. So let's say you play somebody music and they have a little dopamine release, and the chorus comes on, and that's their favorite part and they know all the words, so they get a dopamine burst. Once they're familiar with the song, the big dopamine burst, because the chorus is coming, actually happens several seconds beforehand. It's an anticipatory response, pattern recognition: we know what to expect and, in fact, it's being right that we like so much.
So here are a few areas of neurological disorders in which music has been effective as a treatment, and hearing loss is probably one of the less studied areas on this list. Within the realm of hearing loss, I would say that auditory processing disorders are at least becoming a bigger area where therapists are starting to use music to treat kids who have problems processing auditory information, and typically this would be a normal hearing population. A lot more could be done with kids who have sensorineural or neural hearing loss to incorporate music into their therapeutic interventions.
And I just want to throw out some terminology here for a second, because the term "sensorineural" has been bothering me for a long time. The sensory hair cells in the cochlea are the outer hair cells, and they're responsible for helping us hear soft sounds, up to about 65 decibels. If you have nothing but outer hair cell loss, you're going to have absent or abnormal otoacoustic emissions, but you're probably going to have normal middle ear reflexes, because once the sound is loud enough, your auditory system is functioning pretty normally. You're going to be a very good hearing aid user, and you might not have too many problems with pitch perception or temporal resolution, perceiving timing changes. But as soon as you go past the outer hair cells, to the inner hair cells, the junction between the inner hair cells and the auditory nerve, the auditory nerve itself, or anywhere higher up in the brain, that's neural.
So of course there are people who have both sensory and neural impairments within their auditory system, and sensorineural is the right thing to call their hearing loss. But we have enough tools in audiology that we can be more specific than that. If it's just an outer hair cell loss, we should call it a sensory hearing loss. If it's just a neural loss and the outer hair cells are present, we now call that auditory neuropathy. But it's also more complicated than that. Some kids have problems with parts of their auditory system that are higher up than what auditory neuropathy refers to, which is the inner hair cells, the junction with the auditory nerve, or the auditory nerve itself. If it's higher up than that, you shouldn't call it auditory neuropathy. We should call it what it is, and we have the tools to do that a lot better than we're doing right now.
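As a toy summary of the taxonomy being argued for here, a small sketch follows. The decision logic is my own simplification, not a clinical protocol; the test interpretations are just the ones mentioned in the talk.

```python
# Toy classifier for the sensory / neural / sensorineural terminology:
# outer hair cell status comes from otoacoustic emissions, neural status
# from the rest of the battery. A simplification for illustration only.
def classify(outer_hair_cells_intact: bool, neural_pathway_intact: bool) -> str:
    if not outer_hair_cells_intact and neural_pathway_intact:
        return "sensory hearing loss (outer hair cells only)"
    if outer_hair_cells_intact and not neural_pathway_intact:
        return "auditory neuropathy (neural loss, outer hair cells present)"
    if not outer_hair_cells_intact and not neural_pathway_intact:
        return "sensorineural hearing loss (both components involved)"
    return "no loss indicated by this toy battery"

print(classify(outer_hair_cells_intact=False, neural_pathway_intact=True))
print(classify(outer_hair_cells_intact=True, neural_pathway_intact=False))
```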