Graeme Clark Collection

  • Item
Habilitation issues in the management of children using the Cochlear multiple-channel cochlear prosthesis
Galvin, Karyn L. ; Dawson, Pam W. ; Hollow, Rod ; Dowell, Richard C. ; Pyman, B. ; Clark, Graeme M. ; Cowan, Robert S. C. ; Barker, Elizabeth J. ; Dettman, Shani J. ; Blamey, Peter J. ; Rance, Gary ; Zarant, Julia Z. (1993)
Since 1985, a significant proportion of patients seen in the Melbourne cochlear implant clinic have been children. The children represent a diverse population, with both congenital and acquired hearing impairment, a wide range of hearing levels pre-implant, and an age range from 2 years to 18 years. The habilitation programme developed for the overall group must be flexible enough to be tailored to the individual needs of each child, and to adapt to the changing needs of children as they progress. Long-term data show that children are continuing to improve after 5-7 years of device use, particularly in their perception of open-set words and sentences. Habilitation programmes must therefore be geared to the long-term needs of children and their families. Both speech perception and speech production need to be addressed in the specific content of the habilitation programme for any individual child. In addition, for young children, the benefits of improved speech perception should have an impact on the development of speech and language, and the focus of the programme for this age group will reflect this difference in emphasis. Specific materials and approaches will vary for very young children, school-age children and teenagers. In addition, the educational setting will have a bearing on the integration of listening and device use into the classroom environment.
  • Item
    Signal processing in quiet and noise
    Dowell, R. C. ; Patrick, J. F. ; Blamey, P. J. ; Seligman, P. M. ; Money, D. K. ; Clark, Graeme M. ( 1987)
It has been shown that many profoundly deaf patients using multichannel cochlear implants are able to understand significant amounts of conversational speech using the prosthesis without the aid of lipreading. These results are usually obtained under ideal acoustic conditions but, unfortunately, the environments in which the prostheses are most often used are rarely perfect. Some form of competing signal is always present in the urban setting, from other conversations, radio and television, appliances, traffic noise and so on. As might be expected, implant users in general find background noise to be the largest detrimental factor in their understanding of speech, both with and without the aid of lipreading. Recently, some assessment of implant patient performance with competing noise has been attempted using a four-alternative forced-choice spondee test (1) at the University of Iowa. Similar testing has been carried out at the University of Melbourne with a group of patients using the Nucleus multichannel cochlear prosthesis. This study formed part of an assessment of a two-formant (F0/F1/F2) speech coding strategy (2). Results suggested that the new scheme provided improved speech recognition both in quiet and with competing noise. This paper reports more detailed investigations into the effects of background noise on speech recognition for multichannel cochlear implant users.
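The noise testing described in this abstract amounts to presenting speech with a competing masker at a controlled speech-to-noise ratio. As a rough illustration only (this is not the authors' test setup; the function name and NumPy setting are assumptions), mixing a masker into a speech signal at a target SNR can be sketched as:

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale the competing noise so that the speech-to-noise power
    ratio of the mixture equals snr_db, then add it to the speech."""
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    # Gain that brings the noise power to p_speech / 10^(snr_db/10).
    gain = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10)))
    return speech + gain * noise
```

A test battery of the kind described would then present spondee or sentence lists processed this way at several fixed SNRs.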
  • Item
    A formant-estimating speech processor for cochlear implant patients
    Blamey, P. J. ; Dowell, R. C. ; Brown, A. M. ; Seligman, P. M. ; Clark, Graeme M. (Speech Science and Technology Conference, 1986)
A simple formant-estimating speech processor has been developed to make use of the "hearing" produced by electrical stimulation of the auditory nerve with a multiple-channel cochlear implant. Thirteen implant patients were trained and evaluated with a processor that presented the second formant frequency, fundamental frequency, and amplitude envelope of the speech. Nine patients were trained and evaluated with a processor that presented the first formant frequency and amplitude as well. The second group performed significantly better in discrimination tasks and in word and sentence recognition through hearing alone. The second group also showed a significantly greater improvement when hearing combined with lipreading was compared with lipreading alone in a speech tracking task.