What is your research about?
Most of my work focuses on how children grasp the meaning of words. We know that children rely on certain strategies and mechanisms, depending on the specific features of a particular language. It is important to study these processes across different languages so that our theories are not based on just one language, culture, or society. So far, our theories are largely based on observations in so-called WEIRD societies: Western, Educated, Industrialized, Rich, and Democratic. But when we look at societies where these categories do not apply, we get a different picture.
Could you give an example to illustrate this?
If you open a textbook on child development to the chapter on how children learn new things, you will most likely find a photo of parents playing with their child. In many parts of the world, adults do not do this. In many societies, children are simply present, playing with other children, surrounded by a large community. The adults may be nomadic or spend a lot of time working in the fields, and the children are simply involved in whatever the adult community is doing, not the other way around. That’s the socio-cultural aspect. Then the language itself comes into play. We specifically investigate how children learn verbs that they have never heard before.
Why verbs?
Verbs are often more difficult to learn. I can point to objects, but activities are usually dynamic and harder to capture. Nevertheless, young children learn such verbs at the age of one or two. We believe that children take advantage of the sentence structure in which the verb is used, an idea that goes back to the linguist Lila Gleitman, one of the luminaries in this field. For example, if I want to know what “throw” means, it helps that “ball” is often mentioned right after it – suggesting it is an action you can do with a ball. Paired meanings also help decipher a verb: when I give something away, for example, this simultaneously means that someone else receives it. We believe that children pay special attention to the other words used around a verb to narrow down its meaning.
How exactly can you test this?
We conduct experiments in which we show children two videos: one showing someone doing something with an object, the other showing two people doing something together. For very young children, who are not yet able or willing to point at something specific, we use eye tracking – basically a further development of the preferential looking method. And when they hear a sentence like “The boy pets the dog,” many toddlers actually point or look at the correct screen. There are numerous such experiments with children who are learning English as their first language. We are currently comparing this with Korean-speaking and German-speaking children.
Why these two languages?
In English, the usual sentence structure is subject – verb – object, whereas in Korean it is subject – object – verb. Our previous studies indicate that it is more difficult for children to understand verbs in Korean than in English, where the action word is embedded in the middle of the sentence. In Korean, children have to relate the verb to the two nouns retroactively, so to speak. This is quite a challenge for the linguistic capacities of two-year-olds. We are conducting this experiment in Potsdam with a language in which different sentence structures are possible and commonly used. German is very well suited for this: here, the verb is sometimes at the beginning, sometimes in the middle, and sometimes at the end of the sentence.
Which setting do you use for these experiments?
There is a really great facility here: the BabyLAB. Parents who live in the region bring their children for half an hour or an hour to watch a video together. The children receive a small token for their participation. Parents often not only feel good about contributing to science but also learn something about their child and his or her language acquisition at this age.
Do you encounter any myths regarding children's language development in your research?
There are sometimes fairly rigid ideas about when children should start speaking. In reality, this varies greatly. But one of the saddest misunderstandings I encounter concerns multilingualism, which some parents, and sometimes even doctors, think would overwhelm or confuse a child’s language development. This is a fear you primarily find in monolingual societies; the majority of the world’s population is multilingual. There are similar reservations about sign language. According to a common misconception, using sign language will prevent a hearing-impaired child from learning a spoken language later on. On the contrary: if you communicate with a deaf toddler using only spoken language, which they cannot access, you deprive them of the opportunity to acquire any language at all. Learning a first language later in life becomes extremely difficult, and for some, almost impossible.
Does it make a difference for first language acquisition whether the first language is grammatically complex or “difficult to learn”?
That’s a bit controversial. In linguistics, “difficulty” is not a criterion; we cannot objectively characterize a language as more or less difficult. When a language is complex in one respect, it is usually simpler in others. This has to do with the way we communicate: if a language has a complex structure with many rules, this leaves little room for ambiguity. Although it might be more difficult for speakers to learn, the content of what is said becomes all the easier for the addressee to understand. The bottom line is that all languages are similarly complex, just in different ways.
Among other things, you research and publish on ‘non-interactive’ learning. What exactly does this mean?
If the hypothesis is correct that children pay particular attention to sentence structure to learn verbs, then this should essentially also work without a social situation. So if a child is doing something alone, for example coloring pictures, and hears matching sentences from a loudspeaker, they should still show a learning effect. We found that this is indeed the case, even for children with autism. When I started working on this, I didn’t know much about autism, but through this research we now at least have a hypothesis about how children with autism learn a language in their own unique way.
In other words, is this an all-clear for parents who don’t spend as much time with their children? After all, language input also comes from various devices.
In our experiments, the content is very limited and what the children are supposed to learn is extremely simplified. The children don’t even have to memorize it and repeat it later or anything like that. However, real learning requires more than just hearing a sentence. Television is particularly problematic in this context because there is no interaction at all that activates the child in any way. My own biography is a good example: I grew up hearing Tamil all the time at home and understand it quite well. But I can hardly speak it because I have never needed it or practiced it. Speaking is essential for the acquisition of a language, so television alone is not enough.
Sudha Arunachalam is a Professor of Communicative Sciences and Disorders at New York University (USA). Together with researchers across the globe – including Prof. Dr. Barbara Höhle from the University of Potsdam – she investigates how children learn languages other than English and how adult-child interaction can promote learning behavior.
For parents of young children, visiting the BabyLAB and participating in these playful experiments is an ideal opportunity to learn something from developmental psychologists and linguists about their children’s abilities.
To sign up, go to www.uni-potsdam.de/de/babylab/index
Instagram: @babylab_unipotsdam