When we acquire new knowledge, it helps to receive the information through more than one channel.
For example, when learning about a dog breed, we can not only see the color and size of a dog of that breed, but also listen to its bark and touch it with our hands.
Visual, auditory, somatosensory, and motor information are integrated to form rich memory traces of the dog.
When we learn a language, sensory and motor experiences play an important role too.
For instance, visual and motor information can be combined with speech sounds to enhance the memory trace of the speech.
In fact, researchers have found that such integration can help children learn their native language.
In a study recently published in the Journal of Experimental Child Psychology, researchers examined whether motion aligned with speech could help 2-year-old children learn words.
If you are a parent, you may have noticed that a typical 2-year-old expands his or her vocabulary remarkably fast.
One explanation for this efficient learning is that children pay attention to audiovisual cues to word meaning in natural learning settings.
Speech with aligned motion is one important source of audiovisual cues.
In the study, researchers recruited 48 Dutch-learning toddlers and taught them names for novel creatures.
The learning materials were videos in which two cartoon creatures moved backward and forward or rotated.
During pre-exposure, children saw each creature on its own while hearing its name.
During exposure, children saw both creatures moving side by side while hearing an utterance about one of them; only one creature's motion was aligned with the speech.
In the experimental condition, the pre-exposure and exposure phases cued the same label-referent mappings, whereas in the control condition, the two phases cued different label-referent mappings.
After the exposure, children were shown pictures of the two creatures and asked to look at one or the other.
The results showed that children looked longer at the consistently cued creature, suggesting that they had learned the association between the name and the creature.
The researchers suggest that the alignment between utterance and motion provides toddlers with cues to infer the referent of a name, and that children can use multisensory information to learn their native language.
Citation: Jesse, A., & Johnson, E. K. (2016). Audiovisual alignment of co-speech gestures to speech supports word learning in 2-year-olds. Journal of Experimental Child Psychology, 145, 1-10. doi:10.1016/j.jecp.2015.12.002