Graeme Clark Collection

Search Results

Now showing 1 - 10 of 11
  • Item
    Speech perception and spoken language in children with impaired hearing
    Clark, Graeme M. ; Wright, M. ; Tooher, T. ; Psarron, C. ; Godwin, G. ; Rennie, M. ; Meskin, T. ; Blamey, P. ; Sarant, J. ; Serry, T. ; Wales, R. ; James, C. ; Barry, J. (1998)
    Fifty-seven children with impaired hearing aged 4-12 years were evaluated with speech perception and language measures as the first stage of a longitudinal study. The Clinical Evaluation of Language Fundamentals (CELF) and Peabody Picture Vocabulary Test (PPVT) were used to evaluate the children's spoken language. Regression analyses indicated that scores on both tests were significantly correlated with chronological age, but delayed relative to children with normal hearing. Performance increased at 45% of the rate expected for children with normal hearing for the CELF, and 62% for the PPVT. Perception scores were not significantly correlated with chronological age, but were highly correlated with results on the PPVT and CELF. The data suggest a complex relationship whereby hearing impairment reduces speech perception, which slows language development, which has a further adverse effect on speech perception.
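    As a rough illustration of the regression described above, the sketch below (in Python, with hypothetical scores that are not data from the study) fits language-equivalent age against chronological age; a slope of about 0.45 or 0.62 would correspond to the reported rates of growth relative to children with normal hearing.
```python
# Illustrative only: hypothetical scores, not data from the study above.
# A slope near 0.45 (CELF) or 0.62 (PPVT) would mean language grows at that
# fraction of the rate expected for children with normal hearing.
import numpy as np

chronological_age = np.array([4.2, 5.1, 6.3, 7.0, 8.4, 9.2, 10.5, 11.8])      # years
language_equivalent_age = np.array([2.0, 2.4, 3.0, 3.3, 4.0, 4.3, 4.9, 5.5])  # years (hypothetical)

# Ordinary least-squares fit: equivalent_age = slope * chronological_age + intercept
slope, intercept = np.polyfit(chronological_age, language_equivalent_age, 1)
r = np.corrcoef(chronological_age, language_equivalent_age)[0, 1]

print(f"slope = {slope:.2f} (fraction of the normal-hearing rate of growth)")
print(f"r = {r:.2f} (correlation with chronological age)")
```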
  • Item
    Rehabilitation strategies for adult cochlear implant users
    Dowell, R. C. ; Blamey, P. J. ; Clark, Graeme M. (Monduzzi Editore, 1997)
    This paper summarizes open-set speech perception results using audition alone for a large group of adult Nucleus cochlear implant users in Melbourne. The results show wide variation in performance but significant improvement over the years from 1982 to 1995. Analysis of these results shows that speech processor developments have made the major contribution to this improvement over this time. Recent results for patients using the SPECTRA-SPEAK processor show that most subjects obtain good speech perception within six months of implantation and the need for intensive auditory training is minimal for many of these patients. Postoperative care should encourage consistent device use by providing opportunities for success and providing long-term technical support for implant users. In some cases, including elderly patients, those with long-term profound deafness, and those with special needs, there will still be a need for additional rehabilitation and auditory training support.
  • Item
    Factors affecting outcomes in children with cochlear implants
    Dowell, R. C. ; Blamey, P. J. ; Clark, Graeme M. (Monduzzi Editore, 1997)
    Open-set speech perception tests were completed for a group of 52 children and adolescents who were long-term users of the Nucleus multiple channel cochlear prosthesis. Results showed mean scores for the group of 32.4% for open-set BKB sentences and 48.1% for phonemes in open-set monosyllabic words. Over 80% of the group performed significantly on these tasks. Age at implantation was identified as a significant factor affecting speech perception performance with improved scores for children implanted early. This factor was evident in the results at least down to the age of three years. Duration of profound hearing loss, progressive hearing loss, educational program and preoperative residual hearing were also identified as significant factors that may affect speech perception performance.
  • Item
    The effect of language knowledge on speech perception in children with impaired hearing
    Sarant, J. Z. ; Blamey, P. J. ; Clark, Graeme M. (1996)
    Open-set words and sentences were used to assess auditory speech perception of three hearing-impaired children aged 9 to 15 years using the Nucleus 22-channel cochlear implant. Vocabulary and syntax used in the tests were assessed following the initial perception tests. Remediation was given in specific vocabulary and syntactic areas, chosen separately for each child, and the children were reassessed. Two children showed a significant post-remediation improvement in their overall scores on the syntactic test and both perception measures. The third child, who was older and had the best language knowledge but the lowest auditory speech perception scores, showed no significant change on any of the measures. Language remediation in specific areas of weakness may be the quickest way to enhance speech perception for some children with impaired hearing in this age range.
  • Item
    Speech perception, production and language results in a group of children using the 22-electrode cochlear implant
    Busby, P. A. ; Brown, A. M. ; Dowell, Richard ; Rickards, Field W. ; Dawson, Pam W. ; Blamey, Peter J. ; Rowland, L.C. ; Dettman, Shani J. ; Altidis, P. M. ; Clark, Graeme M. (1989)
    Paper presented at the 118th Meeting of the Acoustical Society of America
  • Item
    Speech processing strategies in an electrotactile aid for hearing-impaired adults and children
    Cowan, Robert S. C. ; Blamey, Peter J. ; Sarant, Julia Z. ; Galvin, Karyn L. ; Clark, Graeme M. (Australian Speech Science and Technology Association, 1990)
    An electrotactile speech processor (Tickle Talker) for hearing-impaired children and adults has been developed and tested. Estimates of second formant frequency, fundamental frequency and speech amplitude are extracted from the speech input, electrically encoded and presented to the user through eight electrodes located over the digital nerve bundles on the fingers of the non-dominant hand. Clinical results with children and adults confirm that tactually-encoded speech features can be recognized, and combined with input from vision or residual audition to improve recognition of words in isolation or in sentences. Psychophysical testing suggests that alternative encoding strategies using multiple-electrode stimuli are feasible. Preliminary results comparing encoding of consonant voiced/voiceless contrasts with new encoding schemes are discussed.
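    The abstract above states that second formant frequency, fundamental frequency and amplitude are encoded onto eight finger electrodes, but not the exact mapping. The sketch below assumes one plausible scheme (F2 to electrode place, F0 to pulse rate, amplitude to stimulus level) with illustrative boundary values; it is not the device's actual coding.
```python
# A minimal sketch of one plausible frame-by-frame encoding, assuming
# F2 -> electrode place, F0 -> pulse rate, amplitude -> stimulus level.
# The boundaries and ranges below are illustrative, not the device's values.
import numpy as np

F2_BOUNDARIES_HZ = np.array([900, 1200, 1500, 1800, 2100, 2500, 3000])  # 7 boundaries -> 8 electrodes

def encode_frame(f2_hz: float, f0_hz: float, amplitude: float) -> dict:
    """Map one analysis frame to an (electrode, pulse rate, level) triple."""
    electrode = int(np.searchsorted(F2_BOUNDARIES_HZ, f2_hz)) + 1  # electrode 1..8 by F2 band
    pulse_rate_hz = float(np.clip(f0_hz, 80.0, 400.0))             # stimulation rate follows voicing pitch
    level = float(np.clip(amplitude, 0.0, 1.0))                    # normalised stimulus level
    return {"electrode": electrode, "pulse_rate_hz": pulse_rate_hz, "level": level}

print(encode_frame(f2_hz=1600.0, f0_hz=120.0, amplitude=0.7))
# -> {'electrode': 4, 'pulse_rate_hz': 120.0, 'level': 0.7}
```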
  • Item
    Combining tactile, auditory and visual information for speech perception
    Blamey, P. J. ; Clark, Graeme M. (1988)
    Four normally hearing subjects were trained and tested with all combinations of a highly degraded auditory input, a visual input via lipreading, and a tactile input using a multichannel electrotactile speech processor. When the visual input was added to any combination of other inputs, a significant improvement occurred for every test. Similarly, the auditory input produced a significant improvement for all tests except closed-set vowel recognition. The tactile input produced scores that were significantly greater than chance in isolation, but combined less effectively with the other modalities. The less effective combination might be due to lack of training with the tactile input, or to more fundamental limitations in the processing of multimodal stimuli.
  • Item
    Signal processing in quiet and noise
    Dowell, R. C. ; Patrick, J. F. ; Blamey, P. J. ; Seligman, P. M. ; Money, D. K. ; Clark, Graeme M. (1987)
    It has been shown that many profoundly deaf patients using multichannel cochlear implants are able to understand significant amounts of conversational speech using the prosthesis without the aid of lipreading. These results are usually obtained under ideal acoustic conditions but, unfortunately, the environments in which the prostheses are most often used are rarely perfect. Some form of competing signal is always present in the urban setting, from other conversations, radio and television, appliances, traffic noise and so on. As might be expected, implant users in general find background noise to be the largest detrimental factor in their understanding of speech, both with and without the aid of lipreading. Recently, some assessment of implant patient performance with competing noise has been attempted using a four-alternative forced-choice spondee test (1) at the University of Iowa. Similar testing has been carried out at the University of Melbourne with a group of patients using the Nucleus multichannel cochlear prosthesis. This study formed part of an assessment of a two-formant (F0/F1/F2) speech coding strategy (2). Results suggested that the new scheme provided improved speech recognition both in quiet and with competing noise. This paper reports on some more detailed investigations into the effects of background noise on speech recognition for multichannel cochlear implant users.
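    Testing speech recognition with competing noise, as described above, typically involves mixing speech and noise at a controlled signal-to-noise ratio. The sketch below shows a generic way to do this mixing; the synthetic signals and the 10 dB target are placeholders, not the materials or conditions used in the study.
```python
# A small sketch of mixing a "speech" signal with competing noise at a target
# SNR, as is commonly done when testing perception in noise. The signals here
# are synthetic placeholders, not the study's test materials.
import numpy as np

def mix_at_snr(speech: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Scale `noise` so the speech-to-noise power ratio equals `snr_db`, then add."""
    speech_power = np.mean(speech ** 2)
    noise_power = np.mean(noise ** 2)
    target_noise_power = speech_power / (10 ** (snr_db / 10.0))
    scaled_noise = noise * np.sqrt(target_noise_power / noise_power)
    return speech + scaled_noise

rng = np.random.default_rng(0)
fs = 16000
t = np.arange(fs) / fs
speech = 0.5 * np.sin(2 * np.pi * 220 * t)  # placeholder "speech"
noise = rng.standard_normal(fs)             # placeholder competing noise
mixed = mix_at_snr(speech, noise, snr_db=10.0)

achieved = 10 * np.log10(np.mean(speech ** 2) / np.mean((mixed - speech) ** 2))
print(f"achieved SNR = {achieved:.1f} dB")
```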
  • Item
    A formant-estimating speech processor for cochlear implant patients
    Blamey, P. J. ; Dowell, R. C. ; Brown, A. M. ; Seligman, P. M. ; Clark, Graeme M. (Speech Science and Technology Conference, 1986)
    A simple formant-estimating speech processor has been developed to make use of the "hearing" produced by electrical stimulation of the auditory nerve with a multiple-channel cochlear implant. Thirteen implant patients were trained and evaluated with a processor that presented the second formant frequency, fundamental frequency, and amplitude envelope of the speech. Nine patients were trained and evaluated with a processor that presented the first formant frequency and amplitude as well. The second group performed significantly better in discrimination tasks and word and sentence recognition through hearing alone. The second group also showed a significantly greater improvement when hearing plus lipreading was compared with lipreading alone in a speech tracking task.
  • Item
    A model of auditory-visual speech perception
    Blamey, P. J. ; Clark, Graeme M. (Speech Science and Technology Conference, 1986)
    A mathematical model relating the probabilities of correctly recognizing speech features, phonemes, and words was tested using data from the clinical trial of a multiple-channel cochlear implant. A monosyllabic word test was presented to the patients in the conditions hearing alone (H), lipreading alone (L), and hearing plus lipreading (HL). The model described the data quite well in each condition. The model was extended to predict the HL scores from the feature recognition probabilities in the H and L conditions. The model may be useful for the evaluation of automatic speech recognition devices as well as hearing-impaired people.
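    The abstract above does not give the model's equations, so the sketch below shows one plausible formalisation of this kind of prediction: feature errors in the H and L conditions are assumed independent, a phoneme is scored correct when all of its features are correct, and a word when all of its phonemes are correct. The feature probabilities and the three-phoneme word length are hypothetical.
```python
# One plausible formalisation (an assumption, not necessarily the authors' exact model):
# independent feature errors across modalities, phoneme correct only if every feature
# is correct, word correct only if every phoneme is correct.
def combine_independent(p_h: float, p_l: float) -> float:
    """Probability a feature is received in HL if H and L errors are independent."""
    return 1.0 - (1.0 - p_h) * (1.0 - p_l)

def phoneme_probability(feature_probs: list[float]) -> float:
    """Phoneme correct only if every feature is correct (independence assumed)."""
    p = 1.0
    for f in feature_probs:
        p *= f
    return p

def word_probability(phoneme_prob: float, phonemes_per_word: int = 3) -> float:
    """Monosyllabic word correct only if all of its phonemes are correct."""
    return phoneme_prob ** phonemes_per_word

# Hypothetical feature-recognition probabilities for H alone and L alone:
features_h = [0.6, 0.7, 0.5]
features_l = [0.4, 0.9, 0.8]
features_hl = [combine_independent(h, l) for h, l in zip(features_h, features_l)]

p_phoneme_hl = phoneme_probability(features_hl)
print(f"predicted HL phoneme score: {p_phoneme_hl:.2f}")
print(f"predicted HL word score:    {word_probability(p_phoneme_hl):.2f}")
```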