A sign language is a language that uses hand shapes and the orientation and movement of the hands, arms, or body to convey meaning.
It is used primarily by deaf people and by hearing people who cannot speak.
Sign languages involve extensive body movement, but they are genuine languages with linguistic properties: each sign language, for example, has its own word order and grammar rules.
When a hearing person learns a sign language, they must learn to produce the language with the body.
Since they can also produce language with the mouth, there is usually some multisensory integration during language production: hearing speech sounds, performing the sign movements, and seeing those movements.
Does this multisensory integration affect their spoken language processing?
In a study published in Brain Research, researchers addressed this question.
They recruited 17 hearing learners of American Sign Language (ASL) and 17 hearing non-signers to perform a language task.
The task required participants to classify spoken English words such as band, born, bank, bend, and belt. All words were produced by an English speaker whose face was visible in videos.
The ASL learners performed the task twice: once during the first week of their ASL classes and once during the last week.
The results showed that the ASL learners’ task performance improved over time compared with the non-signers’.
In addition, a brain area involved in speech perception showed stronger activity in the ASL learners, and this activity correlated with their use of co-signing speech and lip-reading.
The researchers suggest that during sign language learning, learners gain experience with multisensory integration, which can improve speech processing in their native language.