Medical Bionics - Research Publications


Search Results

  • Item
    Human cortical processing of interaural coherence
    Luke, R ; Innes-Brown, H ; Undurraga, JA ; McAlpine, D (CELL PRESS, 2022-05-20)
Sounds reach the ears as a mixture of energy generated by different sources. Listeners extract cues that distinguish different sources from one another, including how similar sounds arrive at the two ears, the interaural coherence (IAC). Here, we find listeners cannot reliably distinguish two completely interaurally coherent sounds from a single sound with reduced IAC. Pairs of sounds heard toward the front were readily confused with single sounds with high IAC, whereas those heard to the sides were confused with single sounds with low IAC. Sounds that contain supra-ethological spatial cues are perceived as more diffuse than their IAC alone predicts, an effect captured by a computational model comprising a restricted, and sound-frequency-dependent, distribution of auditory-spatial detectors. We observed elevated cortical hemodynamic responses for sounds with low IAC, suggesting that the ambiguity elicited by sounds with low interaural similarity imposes elevated cortical load.
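Interaural coherence itself is commonly quantified as the peak of the normalized cross-correlation between the two ear signals over physiologically plausible interaural delays. A minimal sketch of that standard computation (the function name and the ±1 ms lag window are illustrative assumptions, not the paper's exact method):

```python
import numpy as np

def interaural_coherence(left, right, fs, max_lag_ms=1.0):
    """IAC as the maximum of the normalized cross-correlation between the
    ear signals, searched over plausible interaural delays (about +/-1 ms)."""
    # Zero-mean the signals so the correlation measures waveform similarity.
    left = left - left.mean()
    right = right - right.mean()
    xcorr = np.correlate(left, right, mode="full")
    denom = np.sqrt(np.sum(left ** 2) * np.sum(right ** 2))
    center = len(left) - 1                        # zero-lag index in 'full' output
    max_lag = int(round(max_lag_ms * 1e-3 * fs))  # restrict to plausible ITDs
    window = xcorr[center - max_lag : center + max_lag + 1]
    return float(np.max(window) / denom)
```

Identical signals at the two ears give an IAC of 1, while independent noise at each ear gives a value near 0, matching the diffuseness continuum the abstract describes.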
  • Item
    Analysis methods for measuring passive auditory fNIRS responses generated by a block-design paradigm
    Luke, R ; Larson, E ; Shader, MJ ; Innes-Brown, H ; Van Yper, L ; Lee, AKC ; Sowman, PF ; McAlpine, D (SPIE-SOC PHOTO-OPTICAL INSTRUMENTATION ENGINEERS, 2021-04)
    Significance: Functional near-infrared spectroscopy (fNIRS) is an increasingly popular tool in auditory research, but the range of analysis procedures employed across studies may complicate the interpretation of data. Aim: We aim to assess the impact of different analysis procedures on the morphology, detection, and lateralization of auditory responses in fNIRS. Specifically, we determine whether averaging or generalized linear model (GLM)-based analysis generates different experimental conclusions when applied to a block-protocol design. The impact of parameter selection of GLMs on detecting auditory-evoked responses was also quantified. Approach: 17 listeners were exposed to three commonly employed auditory stimuli: noise, speech, and silence. A block design, comprising sounds of 5 s duration and 10 to 20 s silent intervals, was employed. Results: Both analysis procedures generated similar response morphologies and amplitude estimates, and both indicated that responses to speech were significantly greater than to noise or silence. Neither approach indicated a significant effect of brain hemisphere on responses to speech. Methods to correct for systemic hemodynamic responses using short channels improved detection at the individual level. Conclusions: Consistent with theoretical considerations, simulations, and other experimental domains, GLM and averaging analyses generate the same group-level experimental conclusions. We release this dataset publicly for use in future development and optimization of algorithms.
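GLM analysis of a block design like the one above amounts to regressing each fNIRS channel onto a predicted response: a boxcar spanning the stimulus blocks, convolved with a canonical haemodynamic response function. A minimal single-condition sketch (the double-gamma parameters and function names are illustrative assumptions, not the study's exact pipeline):

```python
import numpy as np

def double_gamma_hrf(fs, duration=30.0):
    """Canonical double-gamma haemodynamic response function
    (SPM-style peak at ~5 s, undershoot at ~15 s; an assumption)."""
    t = np.arange(0, duration, 1.0 / fs)
    peak = t ** 5 * np.exp(-t)            # gamma peaking near 5 s
    under = t ** 15 * np.exp(-t)          # undershoot peaking near 15 s
    hrf = peak / peak.max() - (1.0 / 6.0) * under / under.max()
    return hrf / hrf.sum()

def glm_betas(signal, onsets, block_dur, fs):
    """Fit a single-condition GLM: a boxcar at the stimulus onsets,
    convolved with the HRF, plus an intercept column."""
    n = len(signal)
    boxcar = np.zeros(n)
    for onset in onsets:
        i0 = int(onset * fs)
        boxcar[i0 : i0 + int(block_dur * fs)] = 1.0
    regressor = np.convolve(boxcar, double_gamma_hrf(fs))[:n]
    X = np.column_stack([regressor, np.ones(n)])
    beta, *_ = np.linalg.lstsq(X, signal, rcond=None)
    return beta  # beta[0] = evoked amplitude, beta[1] = baseline
```

Averaging across blocks and GLM fitting converge on the same amplitude estimate when the model matches the data, which is consistent with the paper's conclusion that both analyses yield the same group-level result.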
  • Item
    Effects of Hearing Aid Noise Reduction on Early and Late Cortical Representations of Competing Talkers in Noise.
    Alickovic, E ; Ng, EHN ; Fiedler, L ; Santurette, S ; Innes-Brown, H ; Graversen, C (Frontiers Media SA, 2021)
    OBJECTIVES: Previous research using non-invasive (magnetoencephalography, MEG) and invasive (electrocorticography, ECoG) neural recordings has demonstrated the progressive and hierarchical representation and processing of complex multi-talker auditory scenes in the auditory cortex. Early responses (<85 ms) in primary-like areas appear to represent the individual talkers with almost equal fidelity and are independent of attention in normal-hearing (NH) listeners. However, late responses (>85 ms) in higher-order non-primary areas selectively represent the attended talker with significantly higher fidelity than unattended talkers in NH and hearing-impaired (HI) listeners. Motivated by these findings, the objective of this study was to investigate the effect of a noise reduction scheme (NR) in a commercial hearing aid (HA) on the representation of complex multi-talker auditory scenes in distinct hierarchical stages of the auditory cortex by using high-density electroencephalography (EEG). DESIGN: We addressed this issue by investigating early (<85 ms) and late (>85 ms) EEG responses recorded in 34 HI subjects fitted with HAs. The HA noise reduction (NR) was either on or off while the participants listened to a complex auditory scene. Participants were instructed to attend to one of two simultaneous talkers in the foreground while multi-talker babble noise played in the background (+3 dB SNR). After each trial, a two-choice question about the content of the attended speech was presented. RESULTS: Using a stimulus reconstruction approach, our results suggest that the attention-related enhancement of neural representations of target and masker talkers located in the foreground, as well as suppression of the background noise in distinct hierarchical stages is significantly affected by the NR scheme. 
We found that the NR scheme contributed to the enhancement of the foreground and of the entire acoustic scene in the early responses, an enhancement driven by better representation of the target speech. In the late responses, the target talker was selectively represented in HI listeners, and use of the NR scheme enhanced the representations of the target and masker speech in the foreground while suppressing the representation of the background noise. The EEG time window also had a significant effect on the strength of the cortical representations of the target and masker. CONCLUSION: Together, our analyses of the early and late responses obtained from HI listeners support the existing view of hierarchical processing in the auditory cortex. Our findings demonstrate the benefits of a NR scheme on the representation of complex multi-talker auditory scenes in different areas of the auditory cortex in HI listeners.
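The stimulus-reconstruction approach used in studies of this kind is typically a backward model: ridge regression from time-lagged EEG channels to the speech envelope, with accuracy scored as the correlation between the decoded and actual envelopes. A minimal sketch under those assumptions (function names and the regularization scheme are illustrative, not the paper's exact implementation):

```python
import numpy as np

def lagged_design(eeg, max_lag):
    """Stack time-lagged copies of each EEG channel (lags 0..max_lag
    samples into the future, since the brain responds after the sound)."""
    n_samples, n_chan = eeg.shape
    cols = [np.roll(eeg[:, c], -lag)
            for lag in range(max_lag + 1) for c in range(n_chan)]
    X = np.stack(cols, axis=1)
    if max_lag > 0:
        X[-max_lag:] = 0.0  # zero the wrapped-around rows
    return X

def train_decoder(eeg, envelope, max_lag, lam=1.0):
    """Ridge regression from lagged EEG to the speech envelope."""
    X = lagged_design(eeg, max_lag)
    XtX = X.T @ X + lam * np.eye(X.shape[1])
    return np.linalg.solve(XtX, X.T @ envelope)

def reconstruction_accuracy(eeg, envelope, w, max_lag):
    """Pearson correlation between reconstructed and actual envelopes."""
    pred = lagged_design(eeg, max_lag) @ w
    return float(np.corrcoef(pred, envelope)[0, 1])
```

Comparing reconstruction accuracies for the attended talker, the masker, and the background across conditions is what allows statements like "the NR scheme enhanced the foreground representation" to be quantified.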
  • Item
    The Impact of Spatial Incongruence on an Auditory-Visual Illusion
    Innes-Brown, H ; Crewther, D ; Louis, M (PUBLIC LIBRARY SCIENCE, 2009-07-31)
BACKGROUND: The sound-induced flash illusion is an auditory-visual illusion: when a single flash is presented along with two or more beeps, observers report seeing two or more flashes. Previous research has shown that the illusion gradually disappears as the temporal delay between auditory and visual stimuli increases, suggesting that the illusion is consistent with existing temporal rules of neural activation in the superior colliculus to multisensory stimuli. However, little is known about the effect of spatial incongruence, and whether the illusion follows the corresponding spatial rule. If the illusion occurs less strongly when auditory and visual stimuli are separated, then integrative processes supporting the illusion must be strongly dependent on spatial congruence. In this case, the illusion would be consistent with both the spatial and temporal rules describing response properties of multisensory neurons in the superior colliculus. METHODOLOGY/PRINCIPAL FINDINGS: The main aim of this study was to investigate the importance of spatial congruence in the flash-beep illusion. Selected combinations of one to four short flashes and zero to four short 3.5 kHz tones were presented. Observers were asked to count the number of flashes they saw. After replication of the basic illusion using centrally-presented stimuli, the auditory and visual components of the illusion stimuli were presented either both 10 degrees to the left or right of fixation (spatially congruent) or on opposite (spatially incongruent) sides, for a total separation of 20 degrees. CONCLUSIONS/SIGNIFICANCE: The sound-induced flash fission illusion was successfully replicated. However, when the sources of the auditory and visual stimuli were spatially separated, perception of the illusion was unaffected, suggesting that the "spatial rule" does not extend to describing behavioural responses in this illusion.
We also find no evidence for an associated "fusion" illusion reportedly occurring when multiple flashes are accompanied by a single beep.
  • Item
    Auditory Stream Segregation and Selective Attention for Cochlear Implant Listeners: Evidence From Behavioral Measures and Event-Related Potentials
    Paredes-Gallardo, A ; Innes-Brown, H ; Madsen, SMK ; Dau, T ; Marozeau, J (FRONTIERS MEDIA SA, 2018-08-21)
    The role of the spatial separation between the stimulating electrodes (electrode separation) in sequential stream segregation was explored in cochlear implant (CI) listeners using a deviant detection task. Twelve CI listeners were instructed to attend to a series of target sounds in the presence of interleaved distractor sounds. A deviant was randomly introduced in the target stream either at the beginning, middle or end of each trial. The listeners were asked to detect sequences that contained a deviant and to report its location within the trial. The perceptual segregation of the streams should, therefore, improve deviant detection performance. The electrode range for the distractor sounds was varied, resulting in different amounts of overlap between the target and the distractor streams. For the largest electrode separation condition, event-related potentials (ERPs) were recorded under active and passive listening conditions. The listeners were asked to perform the behavioral task for the active listening condition and encouraged to watch a muted movie for the passive listening condition. Deviant detection performance improved with increasing electrode separation between the streams, suggesting that larger electrode differences facilitate the segregation of the streams. Deviant detection performance was best for deviants happening late in the sequence, indicating that a segregated percept builds up over time. The analysis of the ERP waveforms revealed that auditory selective attention modulates the ERP responses in CI listeners. Specifically, the responses to the target stream were, overall, larger in the active relative to the passive listening condition. Conversely, the ERP responses to the distractor stream were not affected by selective attention. However, no significant correlation was observed between the behavioral performance and the amount of attentional modulation. 
Overall, the findings from the present study suggest that CI listeners can use electrode separation to perceptually group sequential sounds. Moreover, selective attention can be deployed on the resulting auditory objects, as reflected by the attentional modulation of the ERPs at the group level.
  • Item
    Neural Responses in Parietal and Occipital Areas in Response to Visual Events Are Modulated by Prior Multisensory Stimuli
    Innes-Brown, H ; Barutchu, A ; Crewther, DP ; Hamed, SB (PUBLIC LIBRARY SCIENCE, 2013-12-31)
The effect of multi-modal vs uni-modal prior stimuli on the subsequent processing of a simple flash stimulus was studied in the context of the audio-visual 'flash-beep' illusion, in which the number of flashes a person sees is influenced by accompanying beep stimuli. EEG recordings were made while combinations of simple visual and audio-visual stimuli were presented. The experiments found that the electric field strength related to a flash stimulus was stronger when it was preceded by a multi-modal flash/beep stimulus, compared to when it was preceded by another uni-modal flash stimulus. This difference was found to be significant in two distinct timeframes: an early timeframe, from 130-160 ms, and a late timeframe, from 300-320 ms. Source localisation analysis found that the increased activity in the early interval was localised to an area centred on the inferior and superior parietal lobes, whereas the later increase was associated with stronger activity in an area centred on primary and secondary visual cortex, in the occipital lobe. The results suggest that processing of a visual stimulus can be affected by the presence of an immediately prior multisensory event. Relatively long-lasting interactions generated by the initial auditory and visual stimuli altered the processing of a subsequent visual stimulus.
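Electric field strength in EEG analyses of this kind is commonly indexed by global field power: the spatial standard deviation across electrodes at each time point, a reference-free summary of the whole scalp field. A minimal sketch assuming this standard definition (the paper's exact measure may differ):

```python
import numpy as np

def global_field_power(epoch):
    """Global field power for an epoch of shape (n_channels, n_times):
    the standard deviation across electrodes at each time point."""
    return epoch.std(axis=0, ddof=0)
```

Comparing GFP time courses between conditions, sample by sample, is one standard way to locate intervals (such as 130-160 ms and 300-320 ms) where overall field strength differs.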
  • Item
    Evidence for Enhanced Multisensory Facilitation with Stimulus Relevance: An Electrophysiological Investigation
    Barutchu, A ; Freestone, DR ; Innes-Brown, H ; Crewther, DP ; Crewther, SG ; Ben Hamed, S (PUBLIC LIBRARY SCIENCE, 2013-01-23)
Debate currently exists regarding the interplay between multisensory processes and bottom-up and top-down influences. However, few studies have looked at neural responses to newly paired audiovisual stimuli that differ in their prescribed relevance. For such newly associated audiovisual stimuli, optimal facilitation of motor actions was observed only when both components of the audiovisual stimuli were targets. Relevant auditory stimuli were found to significantly increase the amplitudes of the event-related potentials at the occipital pole during the first 100 ms post-stimulus onset, though this early integration was not predictive of multisensory facilitation. Activity related to multisensory behavioral facilitation was observed approximately 166 ms post-stimulus, at left central and occipital sites. Furthermore, optimal multisensory facilitation was found to be associated with a latency shift of induced oscillations in the beta range (14-30 Hz) at right hemisphere parietal scalp regions. These findings demonstrate the importance of stimulus relevance to multisensory processing by providing the first evidence that the neural processes underlying multisensory integration are modulated by the relevance of the stimuli being combined. We also provide evidence that such facilitation may be mediated by changes in neural synchronization in occipital and centro-parietal neural populations at early and late stages of neural processing that coincided with stimulus selection, and the preparation and initiation of motor action.
  • Item
    The Effect of Visual Cues on Difficulty Ratings for Segregation of Musical Streams in Listeners with Impaired Hearing
    Innes-Brown, H ; Marozeau, J ; Blamey, P ; Goldreich, D (PUBLIC LIBRARY SCIENCE, 2011-12-15)
    BACKGROUND: Enjoyment of music is an important part of life that may be degraded for people with hearing impairments, especially those using cochlear implants. The ability to follow separate lines of melody is an important factor in music appreciation. This ability relies on effective auditory streaming, which is much reduced in people with hearing impairment, contributing to difficulties in music appreciation. The aim of this study was to assess whether visual cues could reduce the subjective difficulty of segregating a melody from interleaved background notes in normally hearing listeners, those using hearing aids, and those using cochlear implants. METHODOLOGY/PRINCIPAL FINDINGS: Normally hearing listeners (N = 20), hearing aid users (N = 10), and cochlear implant users (N = 11) were asked to rate the difficulty of segregating a repeating four-note melody from random interleaved distracter notes. The pitch of the background notes was gradually increased or decreased throughout blocks, providing a range of difficulty from easy (with a large pitch separation between melody and distracter) to impossible (with the melody and distracter completely overlapping). Visual cues were provided on half the blocks, and difficulty ratings for blocks with and without visual cues were compared between groups. Visual cues reduced the subjective difficulty of extracting the melody from the distracter notes for normally hearing listeners and cochlear implant users, but not hearing aid users. CONCLUSION/SIGNIFICANCE: Simple visual cues may improve the ability of cochlear implant users to segregate lines of music, thus potentially increasing their enjoyment of music. More research is needed to determine what type of acoustic cues to encode visually in order to optimise the benefits they may provide.
  • Item
    Neurophysiological correlates of configural face processing in schizotypy
    Batty, RA ; Francis, AJP ; Innes-Brown, H ; Joshua, NR ; Rossell, SL (FRONTIERS MEDIA SA, 2014-08-12)
    BACKGROUND: Face processing impairment in schizophrenia appears to be underpinned by poor configural (as opposed to feature-based) processing; however, few studies have sought to characterize this impairment electrophysiologically. Given the sensitivity of event-related potentials to antipsychotic medications, and the potential for neurophysiological abnormalities to serve as vulnerability markers for schizophrenia, a handful of studies have investigated early visual P100 and face-selective N170 in "at risk" populations. However, this is the first known neurophysiological investigation of configural face processing in a non-clinical schizotypal sample. METHODS: Using stimuli designed to engage configural processing in face perception (upright and inverted Mooney and photographic faces), P100 and N170 components were recorded in healthy individuals characterized by high (N = 14) and low (N = 14) schizotypal traits according to the Oxford-Liverpool Inventory of Feelings and Experiences. RESULTS: High schizotypes showed significantly reduced N170 amplitudes to inverted photographic faces. Typical N170 latency and amplitude inversion effects (delayed and enhanced N170 to inverted relative to upright photographic faces, and enhanced amplitude to upright versus inverted Mooney faces), were demonstrated by low, but not high, schizotypes. No group differences were shown for P100 analyses. CONCLUSIONS: The findings suggest that neurophysiological deficits in processing facial configurations (N170) are apparent in schizotypy, while the early sensory processing (P100) of faces appears intact. This work adds to the mounting evidence for analogous neural processing anomalies at the healthy end of the psychosis continuum.
  • Item
    Dichotic Listening Can Improve Perceived Clarity of Music in Cochlear Implant Users
    Vannson, N ; Innes-Brown, H ; Marozeau, J (SAGE PUBLICATIONS INC, 2015-08-26)
    Musical enjoyment for cochlear implant (CI) recipients is often reported to be unsatisfactory. Our goal was to determine whether the musical experience of postlingually deafened adult CI recipients could be enriched by presenting the bass and treble clef parts of short polyphonic piano pieces separately to each ear (dichotic). Dichotic presentation should artificially enhance the lateralization cues of each part and help the listeners to better segregate them and thus provide greater clarity. We also hypothesized that perception of the intended emotion of the pieces and their overall enjoyment would be enhanced in the dichotic mode compared with the monophonic (both parts in the same ear) and the diotic mode (both parts in both ears). Twenty-eight piano pieces specifically composed to induce sad or happy emotions were selected. The tempo of the pieces, which ranged from lento to presto covaried with the intended emotion (from sad to happy). Thirty participants (11 normal-hearing listeners, 11 bimodal CI and hearing-aid users, and 8 bilaterally implanted CI users) participated in this study. Participants were asked to rate the perceived clarity, the intended emotion, and their preference of each piece in different listening modes. Results indicated that dichotic presentation produced small significant improvements in subjective ratings based on perceived clarity and preference. We also found that preference and clarity ratings were significantly higher for pieces with fast tempi compared with slow tempi. However, no significant differences between diotic and dichotic presentation were found for the participants' preference ratings, or their judgments of intended emotion.