Graeme Clark Collection

  • Item
    Pitch and vowel perception in cochlear implant users
Blamey, Peter J.; Parisi, Elvira S. (1994)
Two methods of determining the pitch or timbre of electrical stimuli in comparison with acoustic stimuli are described. In the first experiment, the pitches of pure tones and electrical stimuli were compared directly by implant users who had residual hearing in the non-implanted ear. This resulted in a relationship between frequency in the non-implanted ear and position of the best-matched electrode in the implanted ear. In the second experiment, one- and two-formant synthetic vowels, with formant frequencies covering the range from 200 to 4000 Hz, were presented to the same implant users through their implant or through their hearing aid. The listeners categorised each stimulus according to the closest vowel from a set of eleven possibilities, and a vowel centre was calculated for each response category for each ear. Assuming that stimuli at the vowel centres in each ear sound alike, a second relationship between frequency and electrode position was derived. Both experiments showed that electrically-evoked pitch is much lower than that produced by pure tones at the corresponding cochlear location in normally-hearing listeners. This helps to explain why cochlear implants with electrode arrays that rarely extend beyond the basal turn of the cochlea have achieved high levels of speech recognition in postlinguistically deafened adults without major retraining or adaptation by the users. The techniques described also have potential for optimising speech recognition for individual implant users.
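The place-to-frequency comparison in the first experiment can be set against the standard acoustic map for normal hearing. As an illustration only (the Greenwood frequency-position function and its human constants come from the wider literature, not from this abstract), the acoustic frequency expected at a given cochlear position is:

```python
def greenwood_frequency(x, A=165.4, a=2.1, k=0.88):
    """Characteristic frequency (Hz) at position x along the human
    basilar membrane, where x is the proportional distance from the
    apex (0.0) to the base (1.0). Constants are Greenwood's published
    fit for the human cochlea."""
    return A * (10 ** (a * x) - k)

# Frequencies at the apex, midpoint, and base of the cochlea.
for x in (0.0, 0.5, 1.0):
    print(f"x = {x:.1f}: {greenwood_frequency(x):9.1f} Hz")
```

The abstract's key finding is that electrically evoked pitch falls well below the frequency this place map would predict for the electrode's cochlear position.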
  • Item
    Using an automatic word-tagger to analyse the spoken language of children with impaired hearing
Blamey, P. J.; Grogan, M. L.; Shields, M. B. (1994)
    The grammatical analysis and description of spoken language of children with impaired hearing is time-consuming, but has important implications for their habilitation and educational management. Word-tagging programs have achieved high levels of accuracy with text and adult spoken language. This paper investigates the accuracy of one automatic word tagger (AUTASYS 3.0 developed for the International Corpus of English project, ICE) on a small corpus of spoken language samples from children using a cochlear implant. The accuracy of the tagging and the usefulness of the results in comparison with more conventional analyses are discussed.
  • Item
    Signal processing in quiet and noise
Dowell, R. C.; Patrick, J. F.; Blamey, P. J.; Seligman, P. M.; Money, D. K.; Clark, Graeme M. (1987)
It has been shown that many profoundly deaf patients using multichannel cochlear implants are able to understand significant amounts of conversational speech using the prosthesis without the aid of lipreading. These results are usually obtained under ideal acoustic conditions but, unfortunately, the environments in which the prostheses are most often used are rarely perfect. Some form of competing signal is always present in the urban setting, from other conversations, radio and television, appliances, traffic noise and so on. As might be expected, implant users in general find background noise to be the largest detrimental factor in their understanding of speech, both with and without the aid of lipreading. Recently, some assessment of implant patient performance with competing noise has been attempted using a four-alternative forced-choice spondee test (1) at Iowa University. Similar testing has been carried out at the University of Melbourne with a group of patients using the Nucleus multichannel cochlear prosthesis. This study formed part of an assessment of a two-formant (F0/F1/F2) speech coding strategy (2). Results suggested that the new scheme provided improved speech recognition both in quiet and with competing noise. This paper reports on some more detailed investigations into the effects of background noise on speech recognition for multichannel cochlear implant users.
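Speech-in-noise testing of the kind described rests on mixing speech and a competing signal at a controlled signal-to-noise ratio. A minimal sketch of the usual mixing step (the sine-tone "speech", noise source, and 16 kHz sampling rate are placeholders, not materials from the study):

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so the speech-to-noise power ratio equals
    `snr_db` (in dB), then return the mixture. Arrays must be the
    same length."""
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    gain = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10)))
    return speech + gain * noise

rng = np.random.default_rng(0)
fs = 16000
speech = np.sin(2 * np.pi * 440 * np.arange(fs) / fs)  # stand-in for a speech token
noise = rng.standard_normal(fs)                        # stand-in for competing noise
mixed = mix_at_snr(speech, noise, snr_db=10)
```

Sweeping `snr_db` downward then gives the progressively harder listening conditions under which recognition scores are measured.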
  • Item
    A formant-estimating speech processor for cochlear implant patients
Blamey, P. J.; Dowell, R. C.; Brown, A. M.; Seligman, P. M.; Clark, Graeme M. (Speech Science and Technology Conference, 1986)
A simple formant-estimating speech processor has been developed to make use of the "hearing" produced by electrical stimulation of the auditory nerve with a multiple-channel cochlear implant. Thirteen implant patients were trained and evaluated with a processor that presented the second formant frequency, fundamental frequency, and amplitude envelope of the speech. Nine patients were trained and evaluated with a processor that presented the first formant frequency and amplitude as well. The second group performed significantly better in discrimination tasks and word and sentence recognition through hearing alone. The second group also showed a significantly greater improvement when hearing plus lipreading was compared with lipreading alone in a speech tracking task.
  • Item
    A model of auditory-visual speech perception
Blamey, P. J.; Clark, Graeme M. (Speech Science and Technology Conference, 1986)
A mathematical model relating the probabilities of correctly recognizing speech features, phonemes, and words was tested using data from the clinical trial of a multiple-channel cochlear implant. A monosyllabic word test was presented to the patients in the conditions hearing alone (H), lipreading alone (L), and hearing plus lipreading (HL). The model described the data quite well in each condition. The model was extended to predict the HL scores from the feature recognition probabilities in the H and L conditions. The model may be useful for the evaluation of automatic speech recognition devices as well as hearing-impaired people.
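The abstract does not give the model's equations. A common way to combine auditory and visual cues in models of this kind, shown here purely as an illustrative independence assumption and not as the authors' formulation, is to suppose an item is missed only when both channels miss it:

```python
def combined_probability(p_h, p_l):
    """Predicted probability of correct recognition with hearing plus
    lipreading (HL), assuming the two channels contribute statistically
    independent cues: a miss requires both channels to miss."""
    return 1 - (1 - p_h) * (1 - p_l)

# Example: 40% correct by hearing alone, 30% by lipreading alone.
p_hl = combined_probability(0.40, 0.30)
```

Under this assumption the HL score always lies at or above the better single-channel score, which is the qualitative pattern such clinical-trial data typically show.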