Graeme Clark Collection


Now showing 1 - 3 of 3
  • Item
    Signal processing in quiet and noise
    Dowell, R. C.; Patrick, J. F.; Blamey, P. J.; Seligman, P. M.; Money, D. K.; Clark, Graeme M. (1987)
    It has been shown that many profoundly deaf patients using multichannel cochlear implants can understand significant amounts of conversational speech through the prosthesis without the aid of lipreading. These results are usually obtained under ideal acoustic conditions, but the environments in which the prostheses are most often used are rarely perfect. Some form of competing signal is almost always present in the urban setting, from other conversations, radio and television, appliances, traffic noise and so on. As might be expected, implant users in general find background noise to be the largest detrimental factor in their understanding of speech, both with and without the aid of lipreading. Recently, some assessment of implant patient performance with competing noise has been attempted at the University of Iowa using a four-alternative forced-choice spondee test (1). Similar testing has been carried out at the University of Melbourne with a group of patients using the Nucleus multichannel cochlear prosthesis. This study formed part of an assessment of a two-formant (F0/F1/F2) speech coding strategy (2). Results suggested that the new scheme provided improved speech recognition both in quiet and with competing noise. This paper reports more detailed investigations into the effects of background noise on speech recognition for multichannel cochlear implant users.
  • Item
    A formant-estimating speech processor for cochlear implant patients
    Blamey, P. J.; Dowell, R. C.; Brown, A. M.; Seligman, P. M.; Clark, Graeme M. (Speech Science and Technology Conference, 1986)
    A simple formant-estimating speech processor has been developed to make use of the "hearing" produced by electrical stimulation of the auditory nerve with a multiple-channel cochlear implant. Thirteen implant patients were trained and evaluated with a processor that presented the second formant frequency, fundamental frequency, and amplitude envelope of the speech. Nine patients were trained and evaluated with a processor that presented the first formant frequency and amplitude as well. The second group performed significantly better in discrimination tasks and in word and sentence recognition through hearing alone. The second group also showed a significantly greater improvement when hearing plus lipreading was compared with lipreading alone in a speech tracking task.
  • Item
    A model of auditory-visual speech perception
    Blamey, P. J.; Clark, Graeme M. (Speech Science and Technology Conference, 1986)
    A mathematical model relating the probabilities of correctly recognizing speech features, phonemes, and words was tested using data from the clinical trial of a multiple-channel cochlear implant. A monosyllabic word test was presented to the patients in three conditions: hearing alone (H), lipreading alone (L), and hearing plus lipreading (HL). The model described the data quite well in each condition. The model was extended to predict the HL scores from the feature recognition probabilities in the H and L conditions. The model may be useful for the evaluation of automatic speech recognition devices as well as of hearing-impaired people.
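    The abstract does not reproduce the model's equations. As a rough illustration of how unimodal scores might be combined to predict bimodal (HL) performance, the sketch below uses a common independent-errors formulation, and chains a per-phoneme probability into a word score by assuming phonemes are recognized independently. Both assumptions and both function names are hypothetical; the paper's actual model may differ.

    ```python
    def combine_independent(p_h: float, p_l: float) -> float:
        """Predicted recognition probability with hearing plus lipreading (HL),
        assuming errors in the two modalities occur independently.
        NOTE: an illustrative assumption, not necessarily the paper's model."""
        return 1.0 - (1.0 - p_h) * (1.0 - p_l)

    def word_prob(p_phoneme: float, n_phonemes: int) -> float:
        """Predicted whole-word recognition probability, assuming each of the
        word's phonemes must be recognized and recognitions are independent."""
        return p_phoneme ** n_phonemes
    ```

    Under these assumptions, a patient scoring 50% by hearing alone and 50% by lipreading alone would be predicted to score 75% with both, and a 90% per-phoneme score would predict roughly 73% on three-phoneme monosyllables.
    
    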