Medical Bionics - Research Publications


Search Results

Now showing 1 - 5 of 5
  • Item
    Cortical Processing Related to Intensity of a Modulated Noise Stimulus-a Functional Near-Infrared Study
    Weder, S ; Zhou, X ; Shoushtarian, M ; Innes-Brown, H ; McKay, C (SPRINGER, 2018-06)
    Sound intensity is a key feature of auditory signals. A profound understanding of cortical processing of this feature is therefore highly desirable. This study investigates whether cortical functional near-infrared spectroscopy (fNIRS) signals reflect sound intensity changes and where on the cortex maximal intensity-dependent activations are located. The fNIRS technique is particularly suitable for this kind of hearing study, as it runs silently. Twenty-three normal-hearing subjects were included and actively participated in a counterbalanced block-design task. Four intensity levels of a modulated noise stimulus with long-term spectrum and modulation characteristics similar to speech were applied, evenly spaced from 15 to 90 dB SPL. Signals from auditory processing cortical fields were derived from a montage of 16 optodes on each side of the head. Results showed that fNIRS responses originating from auditory processing areas are highly dependent on sound intensity level: higher stimulation levels led to higher concentration changes. Caudal and rostral channels showed different waveform morphologies, reflecting specific cortical signal processing of the stimulus. Channels overlying the supramarginal and caudal superior temporal gyrus evoked a phasic response, whereas channels over Broca's area showed a broad tonic pattern. This dataset can serve as a foundation for future auditory fNIRS research to develop the technique as a hearing assessment tool in normal-hearing and hearing-impaired populations.
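    For readers who want to experiment with this kind of block-design fNIRS analysis, the sketch below epoch-averages an HbO channel around block onsets for each intensity level. It is a minimal illustration, not the authors' pipeline: the sample rate, onset times, and the synthetic signal are all assumed placeholders.

```python
# Minimal sketch (not the published pipeline): epoch-averaging HbO concentration
# changes around block onsets, separately for each stimulus intensity level.
# Sample rate, onset times, and the synthetic data are illustrative assumptions.
import numpy as np

fs = 10.0                                        # assumed fNIRS sample rate (Hz)
n_samples = 6000                                 # 10 minutes of one channel
hbo = np.random.randn(n_samples) * 0.1           # placeholder HbO signal (uM)
onsets_s = {15: [30, 150], 40: [60, 180],        # block onsets (s) per level (dB SPL)
            65: [90, 210], 90: [120, 240]}

def block_average(signal, onsets, fs, pre_s=5, post_s=20):
    """Average epochs around onsets, baseline-corrected to the pre-onset window."""
    pre, post = int(pre_s * fs), int(post_s * fs)
    epochs = []
    for t in onsets:
        i = int(t * fs)
        if i - pre < 0 or i + post > len(signal):
            continue
        ep = signal[i - pre:i + post]
        epochs.append(ep - ep[:pre].mean())      # subtract pre-stimulus baseline
    return np.mean(epochs, axis=0)

responses = {level: block_average(hbo, t, fs) for level, t in onsets_s.items()}
# Higher intensities would be expected to yield larger peak concentration changes.
peaks = {level: resp.max() for level, resp in responses.items()}
print(peaks)
```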
  • Item
    Assessing hearing by measuring heartbeat: The effect of sound level
    Shoushtarian, M ; Weder, S ; Innes-Brown, H ; McKay, CM ; Yasin, I (PUBLIC LIBRARY SCIENCE, 2019-02-28)
    Functional near-infrared spectroscopy (fNIRS) is a non-invasive brain imaging technique that measures changes in oxygenated and de-oxygenated hemoglobin concentration and can provide a measure of brain activity. In addition to neural activity, fNIRS signals contain components that can be used to extract physiological information such as cardiac measures. Previous studies have shown changes in cardiac activity in response to different sounds. This study investigated whether cardiac responses collected using fNIRS differ for sounds of different loudness. fNIRS data were collected from 28 normal-hearing participants. Cardiac response measures evoked by broadband, amplitude-modulated sounds were extracted for four sound intensities ranging from near-threshold to comfortably loud levels (15, 40, 65 and 90 dB Sound Pressure Level (SPL)). Following onset of the noise stimulus, heart rate initially decreased for sounds of 15 and 40 dB SPL, reaching a significantly lower rate at 15 dB SPL. For sounds at 65 and 90 dB SPL, increases in heart rate were seen. To quantify the timing of significant changes, inter-beat intervals were assessed. For sounds at 40 dB SPL, an immediate significant change in the first two inter-beat intervals following sound onset was found. At other levels, the most significant change appeared later (beats 3 to 5 following sound onset). In conclusion, changes in heart rate were associated with sound level, with a clear difference between responses to near-threshold sounds and responses to comfortably loud sounds. These findings may be used alone or in conjunction with other measures, such as fNIRS brain activity, for evaluation of hearing ability.
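    A minimal sketch of how a cardiac measure such as the inter-beat interval might be pulled out of an fNIRS channel, assuming a simple band-pass-and-peak-pick approach; the filter band, sample rate, and synthetic signal are illustrative assumptions, not the study's actual analysis.

```python
# Minimal sketch (assumed approach): estimate heart rate and inter-beat intervals
# from the cardiac component of one fNIRS channel. Filter band, sample rate, and
# the synthetic signal are placeholders.
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

fs = 50.0                                        # assumed fNIRS sample rate (Hz)
t = np.arange(0, 60, 1 / fs)
raw = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.random.randn(t.size)  # fake signal, ~72 bpm pulse

def cardiac_ibis(signal, fs, band=(0.7, 2.5)):
    """Band-pass to the cardiac band, detect pulse peaks, return inter-beat intervals (s)."""
    b, a = butter(3, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    pulse = filtfilt(b, a, signal)
    peaks, _ = find_peaks(pulse, distance=int(0.4 * fs))   # refractory period ~0.4 s
    return np.diff(peaks) / fs

ibis = cardiac_ibis(raw, fs)
print("mean IBI: %.3f s, heart rate: %.1f bpm" % (ibis.mean(), 60 / ibis.mean()))
# Changes in the first few IBIs after sound onset could then be compared across levels.
```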
  • Item
    Temporal Coding of Voice Pitch Contours in Mandarin Tones
    Peng, F ; Innes-Brown, H ; McKay, CM ; Fallon, JB ; Zhou, Y ; Wang, X ; Hu, N ; Hou, W (FRONTIERS MEDIA SA, 2018-07-24)
    Accurate perception of time-variant pitch is important for speech recognition, particularly for tonal languages such as Mandarin, in which different lexical tones convey different semantic information. Previous studies reported that the auditory nerve and cochlear nucleus can encode different pitches through phase-locked neural activity. However, little is known about how the inferior colliculus (IC) encodes the time-variant periodicity pitch of natural speech. In this study, the Mandarin syllable /ba/ pronounced with four lexical tones (flat, rising, falling-then-rising, and falling) was used as the stimulus set. Local field potentials (LFPs) and single-neuron activity were simultaneously recorded from 90 sites within the contralateral IC of six urethane-anesthetized and decerebrate guinea pigs in response to the four stimuli. Analysis of the temporal information in the LFPs showed that 93% of the LFPs exhibited robust encoding of periodicity pitch. Pitch strength of the LFPs, derived from the autocorrelogram, was significantly (p < 0.001) stronger for rising tones than for flat and falling tones, and also increased significantly (p < 0.05) with characteristic frequency (CF). On the other hand, only 47% (42 of 90) of single-neuron activities were significantly synchronized to the fundamental frequency of the stimulus, suggesting that the temporal spiking pattern of a single IC neuron could not robustly encode the time-variant periodicity pitch of speech. The difference between the number of LFPs and single neurons that encode the time-variant F0 voice pitch supports the notion of a transition, at the level of the IC, from direct temporal coding in the spike trains of individual neurons to other forms of neural representation.
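    As an illustration of the autocorrelogram-based measure mentioned above, the sketch below computes a pitch strength value as the height of the normalized autocorrelation peak within a candidate F0 period range. The sample rate, F0 range, and synthetic LFP are assumptions; an analysis of time-variant pitch would apply this in short running windows rather than over a whole recording.

```python
# Minimal sketch (assumed formulation): pitch strength of an LFP taken as the
# height of the normalized autocorrelation peak near the stimulus F0 period.
# Sample rate, F0 range, and the synthetic LFP are placeholders.
import numpy as np

fs = 2000.0                                   # assumed LFP sample rate (Hz)
t = np.arange(0, 1.0, 1 / fs)
lfp = np.sin(2 * np.pi * 120 * t) + 0.5 * np.random.randn(t.size)  # fake LFP locked to a 120 Hz F0

def pitch_strength(x, fs, f0_range=(80, 300)):
    """Return (peak of normalized autocorrelation within the F0 period range, estimated F0)."""
    x = x - x.mean()
    ac = np.correlate(x, x, mode="full")[x.size - 1:]
    ac = ac / ac[0]                            # normalize so lag 0 equals 1
    lag_min = int(fs / f0_range[1])            # shortest period considered
    lag_max = int(fs / f0_range[0])            # longest period considered
    lag = lag_min + np.argmax(ac[lag_min:lag_max])
    return ac[lag], fs / lag

strength, f0_hat = pitch_strength(lfp, fs)
print("pitch strength %.2f at F0 ~ %.1f Hz" % (strength, f0_hat))
```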
  • Item
    Auditory Brainstem Representation of the Voice Pitch Contours in the Resolved and Unresolved Components of Mandarin Tones
    Peng, F ; McKay, CM ; Mao, D ; Hou, W ; Innes-Brown, H (FRONTIERS MEDIA SA, 2018-11-16)
    Accurate perception of voice pitch plays a vital role in speech understanding, especially for tonal languages such as Mandarin. Lexical tones are primarily distinguished by the fundamental frequency (F0) contour of the acoustic waveform. It has been shown that the auditory system can extract the F0 from both resolved and unresolved harmonics, and that tone identification performance is better with resolved than with unresolved harmonics. To evaluate the neural response to the resolved and unresolved components of Mandarin tones in quiet and in speech-shaped noise, we recorded frequency-following responses (FFRs). Four types of stimuli were used: speech containing only resolved harmonics or only unresolved harmonics, presented either in quiet or in speech-shaped noise. FFRs were recorded to alternating-polarity stimuli and were added or subtracted to enhance the neural response to the envelope (FFRENV) or to the temporal fine structure (FFRTFS), respectively. The neural representation of F0 strength reflected by the FFRENV was evaluated by the peak autocorrelation value in the temporal domain and by the peak phase-locking value (PLV) at F0 in the spectral domain. Both measures showed that the FFRENV F0 strength in quiet was significantly stronger than in noise for speech containing unresolved harmonics, but not for speech containing resolved harmonics. The neural representation of the temporal fine structure reflected by the FFRTFS was assessed by the PLV at the harmonic nearest F1 (the 4th harmonic of F0), which was significantly larger for resolved than for unresolved harmonics. Spearman's correlation showed that the FFRENV F0 strength for unresolved harmonics was correlated with tone identification performance in noise (0 dB SNR). These results show that the FFRENV F0 strength for speech with resolved harmonics was not affected by noise, whereas the response to speech with unresolved harmonics was significantly smaller in noise than in quiet. Our results suggest that coding of the resolved harmonics was more important than envelope coding for tone identification performance in noise.
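    The sketch below illustrates, with assumed parameters and synthetic single-trial data, the add/subtract logic for separating FFRENV from FFRTFS and a phase-locking value (PLV) computed from single-trial FFT phases; it is not the authors' code, and the sample rate, F0, and trial counts are placeholders.

```python
# Minimal sketch (assumed implementation): separate FFR envelope and fine-structure
# components by adding/subtracting responses to alternating-polarity stimuli,
# then compute a phase-locking value (PLV) at a target frequency across trials.
import numpy as np

fs, f0, n_trials, n_samp = 8000.0, 120.0, 100, 2048
t = np.arange(n_samp) / fs
rng = np.random.default_rng(0)
env_comp = np.sin(2 * np.pi * f0 * t)          # envelope-locked part (does not invert with polarity)
tfs_comp = np.sin(2 * np.pi * 4 * f0 * t)      # fine-structure part (inverts with polarity)
pos = env_comp + tfs_comp + rng.normal(0, 1, (n_trials, n_samp))   # fake trials, positive polarity
neg = env_comp - tfs_comp + rng.normal(0, 1, (n_trials, n_samp))   # fake trials, negative polarity

ffr_env = (pos + neg) / 2                      # addition enhances the envelope response (FFR_ENV)
ffr_tfs = (pos - neg) / 2                      # subtraction enhances the fine structure (FFR_TFS)

def plv_at(trials, fs, freq):
    """PLV at a target frequency: coherence of single-trial FFT phases (1 = perfect locking)."""
    spectra = np.fft.rfft(trials, axis=1)
    freqs = np.fft.rfftfreq(trials.shape[1], 1 / fs)
    k = np.argmin(np.abs(freqs - freq))        # bin nearest the target frequency
    phases = np.angle(spectra[:, k])
    return np.abs(np.mean(np.exp(1j * phases)))

print("PLV of FFR_ENV at F0:", round(plv_at(ffr_env, fs, f0), 2))
print("PLV of FFR_TFS at 4*F0:", round(plv_at(ffr_tfs, fs, 4 * f0), 2))
```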
  • Item
    Cortical Speech Processing in Postlingually Deaf Adult Cochlear Implant Users, as Revealed by Functional Near-Infrared Spectroscopy
    Zhou, X ; Seghouane, A-K ; Shah, A ; Innes-Brown, H ; Cross, W ; Litovsky, R ; McKay, CM (SAGE PUBLICATIONS INC, 2018-07-19)
    An experiment was conducted to investigate the feasibility of using functional near-infrared spectroscopy (fNIRS) to image cortical activity in the language areas of cochlear implant (CI) users and to explore the association between the activity and their speech understanding ability. Using fNIRS, 15 experienced CI users and 14 normal-hearing participants were imaged while presented with either visual speech or auditory speech. Brain activation was measured from the prefrontal, temporal, and parietal lobe in both hemispheres, including the language-associated regions. In response to visual speech, the activation levels of CI users in an a priori region of interest (ROI)-the left superior temporal gyrus or sulcus-were negatively correlated with auditory speech understanding. This result suggests that increased cross-modal activity in the auditory cortex is predictive of poor auditory speech understanding. In another two ROIs, in which CI users showed significantly different mean activation levels in response to auditory speech compared with normal-hearing listeners, activation levels were significantly negatively correlated with CI users' auditory speech understanding. These ROIs were located in the right anterior temporal lobe (including a portion of prefrontal lobe) and the left middle superior temporal lobe. In conclusion, fNIRS successfully revealed activation patterns in CI users associated with their auditory speech understanding.