Derek Houston, PhD, a cognitive psychologist focusing on speech perception and early language development in infants with and without hearing loss, talks about how researchers are using interactions between parents and their children with cochlear implants to better understand how these children learn language.
Interview conducted by Ivanhoe Broadcast News in April 2017.
What is it that you and your team are looking to study?
Houston: This project is looking at the interaction between parents and their children, and how that effectively leads to language development, specifically novel word learning.
Tell us a little bit about how you’re studying this. What steps are you taking in the lab to look at that interaction?
Houston: It’s kind of taking a very classic methodology and putting a very modern, high-tech twist on it. What we’re doing is giving parents, well, one of the parents, three objects at a time to play with their child. And we just say, play with your child the way you normally would at home with these objects. This one is called a zeebee, this one is called a dotie, this one is called a blicket; they just have these made-up names. Then we give the parents a cheat sheet so that while they’re playing with their kid, if they forget the names we’ve given them, they can look at the cheat sheet. But we actually don’t instruct them at all to teach the kids the words. We want them to be as naturalistic as possible, interacting with their child. And then we see which objects the child, infant or toddler, learns the novel labels for. We test the child afterwards to see which ones they learned and which ones they didn’t, then we go back and look at the interactions for the objects they learned the names for compared to the ones they didn’t.

Then the really high-tech part of it is the way we look at the interactions. The child and the parent are both equipped with head-mounted cameras, so we can see moment by moment where they’re looking from their own perspective. This is already a real advance over the more classic way of doing this kind of observational research, which is just having one or two bird’s-eye or stationary cameras looking at what’s going on during the scene. We’re actually getting it from the first-person perspective, and we’re getting a very precise look at what they’re looking at by having an eye tracker also mounted on the headgear that carries the head-mounted camera. So we see exactly where they’re looking within their visual scene.
You mentioned that you give the toys totally made-up names. Would you explain why you do that in the study?
Houston: We give them names like blicket, zeebee, and dotie. These are words that don’t already have any meaning associated with them; the objects are ones they’ve never seen before, and the names are ones we’ve made up. It’s our way of seeing if they can learn novel labels for novel objects.
You also mentioned the importance of the head-mounted camera and the eye tracking. Could you tell me again why it’s important to have the eye tracking, to really have a sense of where the eyes are going for both people?
Houston: We want to know when the parent and the child are jointly engaged with the same objects or with each other. We’re really looking at the dynamics of their interaction. Does the child follow the parent’s gaze to the object? Does the parent follow the child’s gaze to the object and label the object after the child looks at it? Do they look back and forth at each other? And how do these interactions relate to whether or not the child learns the name of a particular object?
How important are these interactions when it comes to learning language?
Houston: That’s the context in which we learn language. Language isn’t explicitly taught to children; it emerges from these interactions, these social, dynamic, multimodal interactions.
When one of the senses is impacted, for example in children who are deaf or have cochlear implants, what kind of difference are you finding in the way they learn language?
Houston: That’s really what we want to understand, and we’re just at the beginning of understanding, or even investigating, it. One of the things that has been noted in previous literature and previous studies is that parents tend, on average, to be more directive with children who have hearing loss. So really one of the primary things we’re looking at is whether this is true, and the extent to which it is true; we’ll be able to capture that with our methodology. And also how that impacts novel word learning.
What are the implications for this kind of study?
Houston: You could imagine that if we learn that certain kinds of interactions are more successful for novel word learning than others, then this is information that we can give to parents to empower them to be able to help the development of language in their children.
Do you have any early indication that there are certain kinds of interactions that may be beneficial for parents and kids when they’re trying to teach language?
Houston: Well, we know from our colleagues at Indiana University who developed this methodology, Chen Yu and his colleagues, that following the child’s attention and labeling objects very soon after the child looks at and/or touches an object is highly effective for learning novel words, and that it’s much less effective to draw the child’s attention to an object and then label it. But that’s with typically developing children with normal hearing. We don’t know yet if that kind of strategy will also be effective for children with hearing loss, or if other kinds of strategies will be more effective.
So instead of the parents taking the lead, let the child take the lead as you’re trying to teach and instruct?
Houston: Yeah.
How long do you anticipate this will run? When do you think you will have findings, and where do you go from there?
Houston: Well, we expect to have preliminary findings within a few months. We’ve tested about fifteen children with hearing loss so far. So we’re going through the analysis right now, and we expect to have some findings coming out in the next few months. The more extensive study we’re proposing is a five-year, long-term study of parent-child interaction and how it relates to both attention and novel word learning.
You said a five-year study. How many children would you enroll in that, and would it be just at this center or would it be multicenter?
Houston: That would be just at this center. And we would probably shoot for about thirty.
What’s the ultimate goal?
Houston: The ultimate goal is to be able to have evidence-based education and therapy for children with hearing loss that maximizes their ability or potential to be able to learn spoken language successfully.
How long were the sessions when you were in the lab, and would you go through new objects over time?
Houston: We use new objects each time they come in. A session lasts for about ten to fifteen minutes, not including filling out paperwork.
How many sessions?
Houston: That’s still to be determined.
Age range, did it matter?
Houston: From about one until about three and a half.
Is there anything else you would like people to know?
Houston: One thing that we didn’t talk about is that there’s more than just a head-mounted camera. What’s important about this is that we’re capturing the interaction in a multimodal way. We are seeing what the child and the parent are looking at throughout the session. We also code which objects the parent and child are touching at every moment throughout the experiment. Also, our colleagues at Indiana University have developed automatic object recognition software, which allows us to track moment by moment where the objects are in space so that we can evaluate their relative saliency. For example, if one object is closer to the child than another object, then it is visually more salient, because it’s occupying more of the visual space. All of that information, plus what the parent and child are saying throughout the session, gets collected. Then our Indiana University colleagues have developed sophisticated algorithms to be able to mine these data and look for patterns of interaction. That’s where the real innovation is: taking all this multimodal, dynamic data and looking for patterns of interaction that are more likely to lead to novel word learning.
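[Editor’s note: to make the kind of pattern mining Houston describes concrete, here is a minimal sketch, not the Indiana University team’s actual software. It assumes the multimodal streams have already been aligned into a chronological list of gaze, touch, and labeling events; the field names, event labels, and the three-second window are illustrative assumptions only. It counts how often a parent labels an object shortly after the child has looked at or touched it, the "follow the child’s attention" pattern mentioned above.]

```python
# Illustrative sketch only; not the analysis software described in the interview.
# Event fields, labels, and the 3-second window are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class Event:
    time: float   # seconds from the start of the session
    kind: str     # "child_gaze", "child_touch", or "parent_label"
    obj: str      # which novel object ("zeebee", "dotie", "blicket")

def follow_in_labels(events, window=3.0):
    """Count parent labels that occur within `window` seconds after the child
    looked at or touched the same object (events assumed in time order)."""
    counts = {}
    for i, ev in enumerate(events):
        if ev.kind != "parent_label":
            continue
        # Was the child already attending to this object just before the label?
        followed = any(
            prev.kind in ("child_gaze", "child_touch")
            and prev.obj == ev.obj
            and 0.0 <= ev.time - prev.time <= window
            for prev in events[:i]
        )
        if followed:
            counts[ev.obj] = counts.get(ev.obj, 0) + 1
    return counts

# Example: one follow-in label for "zeebee"; the "dotie" label was parent-led.
session = [
    Event(12.0, "child_gaze", "zeebee"),
    Event(13.5, "parent_label", "zeebee"),
    Event(40.0, "parent_label", "dotie"),
]
print(follow_in_labels(session))  # {'zeebee': 1}
```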
You were talking about the automatic tracking software. Is it almost like GPS? Or is it a fixed point?
Houston: How it works is that we have the parent and the child wear white clothing over their regular clothing, and all of our background is white also, but the objects are different colors. The contrast of the colored objects against the white background allows the software to automatically track the objects. And it’s really about seeing, from the child’s point of view using the child’s head-mounted camera, how much of the visual scene each object is taking up at each point in time.
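[Editor’s note: as a rough illustration of the color-against-white idea, here is a minimal sketch assuming OpenCV and hand-picked HSV color ranges; it is not the actual tracking software described above. It estimates what fraction of a single head-camera frame each colored object occupies, which is the saliency measure Houston mentions: a nearer object covers more of the frame.]

```python
# Rough illustration of color-based object tracking against a white background.
# Not the software described in the interview; the HSV ranges, object names,
# and video filename are made-up assumptions.
import cv2
import numpy as np

# Assumed HSV ranges for each brightly colored object.
OBJECT_COLORS = {
    "zeebee":  ((100, 80, 80), (130, 255, 255)),  # blue-ish
    "dotie":   ((40, 80, 80),  (80, 255, 255)),   # green-ish
    "blicket": ((0, 80, 80),   (10, 255, 255)),   # red-ish
}

def object_saliency(frame_bgr):
    """Return the fraction of the frame each object occupies.

    Because the clothing and background are white (low saturation), simple
    color thresholding isolates each object; an object closer to the child
    covers more pixels and therefore gets a higher saliency score."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    total_pixels = hsv.shape[0] * hsv.shape[1]
    saliency = {}
    for name, (lo, hi) in OBJECT_COLORS.items():
        mask = cv2.inRange(hsv, np.array(lo, np.uint8), np.array(hi, np.uint8))
        saliency[name] = np.count_nonzero(mask) / total_pixels
    return saliency

# Example usage on one frame of a hypothetical head-camera recording:
# cap = cv2.VideoCapture("child_headcam.mp4")
# ok, frame = cap.read()
# if ok:
#     print(object_saliency(frame))
```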
In terms of the senses and cochlear implants, is there an age at which it’s too early for kids to have the implants? What is the normal range?
Houston: The FDA approves cochlear implants down to twelve months of age in the U.S. There are surgeons in the U.S. who provide implants at younger ages, down to about six months. In some other countries, even younger; in Germany, Italy, and Australia there have been several patients who were four months of age, and even one two-month-old. One constraint is that as you go younger there is increased risk associated with anesthesia. Now, very young infants are put under general anesthesia when there’s a severe, life-threatening issue that needs to be dealt with, so it does happen. But with cochlear implants there’s a sort of balancing act, and that’s where we’re at right now: how young do we think we should go, relative to the increased risk of anesthesia at younger ages? It’s still not a very big risk, even to go down to two months of age, but the risk does increase a bit.
In terms of language formation and skills, at about what age do they begin to develop?
Houston: If you count basic speech perception, auditory skills, as part of language development, which I do, then you start before birth. So even before birth the fetus is starting to encode the rhythmic properties of his or her mother’s speech.
END OF INTERVIEW
This information is intended for additional research purposes only. It is not to be used as a prescription or advice from Ivanhoe Broadcast News, Inc. or any medical professional interviewed. Ivanhoe Broadcast News, Inc. assumes no responsibility for the depth or accuracy of physician statements. Procedures or medicines apply to different people and medical factors; always consult your physician on medical matters.
If you would like more information, please contact:
Derek Houston