Language processing in cochlear implant users using fNIRS
Document Type: PhD thesis
Access Status: Open Access
© 2018 Dr. Xin Zhou
Cochlear implant (CI) users differ in their ability to understand auditory speech. This variability is due partly to differences in deafness history and pathology, and partly to functional brain changes that are likely to occur during deafness and after implantation. Measuring cortical activity in CI users may reveal a relation between functional changes in language-associated brain regions and speech understanding. However, commonly used neuroimaging techniques have limitations when applied to CI users: EEG and fMRI may suffer from electrical or magnetic artefacts, and PET imaging is invasive for participants. The studies described in this thesis used a non-invasive technique, functional near-infrared spectroscopy (fNIRS), to investigate cortical activity in CI users related to speech understanding and the integration of audio-visual speech cues. Compared with fMRI, fNIRS has the further advantages of being quiet (free of loud scanner noise), and therefore suitable for auditory tasks, and of being more tolerant of body movement. The first study determined whether fNIRS measures of cortical activity in post-lingually deafened CI users while listening to or watching speech correlate with their auditory speech understanding. The fNIRS results showed speech-evoked cortical activity in CI users that was not only different from that of normal-hearing listeners but also negatively correlated with speech understanding ability. That is, CI users with poorer auditory speech understanding showed higher fNIRS activation in certain brain regions of interest when listening to or watching speech. These increased responses may reflect functional brain changes that occurred during deafness and after implantation, such as recruitment of these regions for visual speech processing, or the greater listening effort and neural engagement required by CI users to process auditory speech.
The second study determined whether audio-visual (AV) integration of speech cues in post-lingually deafened CI users differs from that in similarly aged normal-hearing adults. Participants' reaction times, response accuracy, and cortical activity were measured while they performed different speech identification tasks. A novel method was proposed that combined a probability model and a cue integration model to quantify the amount of AV integration from response accuracy measures. Behavioural results using both response accuracy and reaction time measures consistently showed no better AV integration in CI users than in normal-hearing listeners. In addition, fNIRS measures of cortical activity did not show AV integration in either CI users or normal-hearing adults.

The third study determined whether aging affects AV integration in normal-hearing people responding to speech, using the same behavioural and fNIRS measures as the second study. Again, fNIRS results did not show AV integration in either younger or older participants, and behavioural results found no significant difference in AV integration between the older and younger participants on either reaction time or response accuracy measures.

This thesis integrates knowledge from multisensory neuroscience and psychophysics and uses a novel brain imaging technique to measure cortical activity related to language processing in CI users. The results show that fNIRS can be used to examine the variance in auditory speech understanding among CI users. The thesis also advances the behavioural measurement of multisensory abilities by combining models of optimal and minimum integration. No significant difference was found between CI users and normal-hearing adults in the integration of audio-visual speech cues, nor was there a significant effect of aging on AV integration.
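The abstract does not give the formulas behind the combined probability and cue integration models, but the general approach of bounding observed audio-visual performance between a "minimum" (best single cue) prediction and an independence-based probability-summation prediction can be sketched as follows. This is an illustrative sketch under those assumptions, not the thesis's exact method; all function names and the example accuracies are hypothetical.

```python
# Illustrative sketch (NOT the thesis's exact model): compare observed
# audio-visual (AV) identification accuracy against two reference
# predictions derived from the unisensory accuracies.

def probability_summation(p_a: float, p_v: float) -> float:
    """Independence prediction: a trial is correct unless BOTH the
    auditory and the visual process fail."""
    return 1.0 - (1.0 - p_a) * (1.0 - p_v)

def minimum_integration(p_a: float, p_v: float) -> float:
    """Minimum-integration prediction: rely on the better single cue."""
    return max(p_a, p_v)

def integration_index(p_av_observed: float, p_a: float, p_v: float) -> float:
    """Where observed AV accuracy falls between the minimum prediction
    (index 0) and the independence prediction (index 1)."""
    lo = minimum_integration(p_a, p_v)
    hi = probability_summation(p_a, p_v)
    if hi == lo:
        return 0.0
    return (p_av_observed - lo) / (hi - lo)

# Hypothetical example: auditory-alone 60% correct, visual-alone 40% correct.
p_a, p_v = 0.60, 0.40
print(probability_summation(p_a, p_v))  # 0.76
print(minimum_integration(p_a, p_v))    # 0.6
print(integration_index(0.70, p_a, p_v))
```

An observed AV accuracy near the minimum prediction would indicate little or no integration, which is the pattern the behavioural results above report for both CI users and normal-hearing listeners.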
Keywords: cochlear implant; language processing; fNIRS; audio-visual integration