Prediction of aided and unaided audiograms using sound-field auditory steady-state evoked responses

Rafi Shemesh, Joseph Attias, Hassan Magdoub, Ben I. Nageris

Research output: Contribution to journal › Article › peer-review

Abstract

Objective: To assess sound-field auditory thresholds of hearing-impaired adults by using auditory steady-state evoked responses (ASSRs). Design: ASSRs were recorded to carrier frequencies of 500, 1000, 2000, and 4000 Hz, each uniquely modulated at a single frequency of 80–100 Hz. ASSR thresholds were compared to behavioral auditory thresholds. Study sample: Twenty adults (11 male, mean age 35.6 years) with moderate-severe sensorineural hearing loss who had used hearing aids, and 10 normal-hearing subjects (mean age 22.4 years). Results: For most frequencies, behavioral sound-field thresholds were slightly lower than ASSR thresholds in both aided and unaided conditions, with a significant correlation between them. Differences between ASSR and behavioral thresholds ranged between 5–16 dB in the unaided and between 5–16 dB in the aided condition. The ASSR amplitude growth function to 2000 Hz was steeper in both the aided and unaided conditions than in the normal-hearing group. Conclusions: Sound-field ASSRs can predict behavioral auditory thresholds in both the unaided and aided conditions, as well as behavioral functional gains. The ASSR growth function for 2000 Hz is suggested to reflect an underlying mechanism of intensity encoding common to the abnormal loudness perception frequently reported in cases of cochlear hearing loss.

Original language: English
Pages (from-to): 746-753
Number of pages: 8
Journal: International Journal of Audiology
Volume: 51
Issue number: 10
State: Published - Oct 2012

Keywords

  • Amplitude growth function
  • Auditory steady state evoked response
  • Hearing aids
  • Objective fitting
  • Sound field ASSR

ASJC Scopus subject areas

  • Language and Linguistics
  • Linguistics and Language
  • Speech and Hearing
