Auditory brainstem response

From LIMSWiki
Revision as of 22:29, 29 February 2016 by Admin (Transcluded, per John)

Graph showing a typical Auditory Brainstem Response

The auditory brainstem response (ABR), also called brainstem evoked response audiometry (BERA), brainstem auditory evoked potentials (BAEPs), or brainstem auditory evoked responses (BAERs),[1][2] is an auditory evoked potential extracted from ongoing electrical activity in the brain and recorded via electrodes placed on the scalp. The measured recording is a series of six to seven vertex-positive waves, of which waves I through V are evaluated. These waves, labeled with Roman numerals in the Jewett and Williston convention, occur in the first 10 milliseconds after onset of an auditory stimulus. The ABR is considered an exogenous response because it is dependent upon external factors.[3][4][5]

The auditory structures that generate the auditory brainstem response are believed to be as follows:[4][6]

  • Wave I – peripheral portion of cranial nerve VIII (auditory nerve)
  • Wave II – central (proximal) portion of cranial nerve VIII
  • Wave III – cochlear nucleus
  • Wave IV – superior olivary complex
  • Wave V – lateral lemniscus / inferior colliculus

History of research

In 1967, Sohmer and Feinmesser were the first to publish ABRs recorded with surface electrodes in humans, showing that cochlear potentials could be obtained non-invasively. In 1971, Jewett and Williston gave a clear description of the human ABR and correctly interpreted the later waves as arising from the brainstem. In 1974, Hecox and Galambos showed that the ABR could be used for threshold estimation in adults and infants. In 1975, Starr and Achor were the first to report the effects of brainstem CNS pathology on the ABR. In 1977, Selters and Brackmann published landmark findings on prolonged inter-peak latencies in tumor cases (greater than 1 cm).[4]

Long and Allen were the first to report abnormal brainstem auditory evoked potentials (BAEPs) in an alcoholic woman who recovered from acquired central hypoventilation syndrome. These investigators hypothesized that their patient's brainstem was poisoned, but not destroyed, by her chronic alcoholism.[8]

Measurement techniques

Recording parameters

  • Electrode montage: most recordings use a vertical montage (high forehead [active or positive], earlobes or mastoids [reference, right and left, or negative], low forehead [ground])
  • Impedance: 5 kΩ or less (and balanced between electrodes)
  • Filter settings: 30–1500 Hz bandwidth
  • Time window: 10 ms (minimum)
  • Sampling rate: usually high, ca. 20 kHz
  • Intensity: usually starting at 70 dB nHL
  • Stimulus type: click (100 µs long), chirp, or tone burst
  • Transducer type: insert earphones, bone vibrator, sound field, or headphones
  • Stimulation or repetition rate: e.g., 21.1/s
  • Amplification: 100–150K (i.e., ×100,000–150,000)
  • n (number of averages/sweeps): 1000 minimum (1500 recommended)
  • Polarity: rarefaction or alternating recommended
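The averaging parameters above (a minimum of 1000 sweeps, heavy amplification) exist because the ABR is a microvolt-scale signal buried in much larger ongoing EEG activity. A minimal Python sketch of synchronous averaging, using entirely synthetic numbers, illustrates why so many sweeps are needed: averaging N stimulus-locked sweeps reduces uncorrelated noise by roughly √N.

```python
import numpy as np

def average_sweeps(sweeps):
    """Synchronously average stimulus-locked sweeps.

    Averaging N sweeps reduces uncorrelated background EEG noise
    by a factor of ~sqrt(N), which is why ABR protocols call for
    1000+ sweeps.
    """
    return np.mean(np.asarray(sweeps), axis=0)

# Synthetic demonstration (all values hypothetical, not clinical data):
rng = np.random.default_rng(0)
fs = 20_000                            # 20 kHz sampling rate
t = np.arange(int(0.010 * fs)) / fs    # 10 ms time window
# Toy "response": a decaying 500 Hz wiggle, ~0.5 microvolts
abr = 0.5e-6 * np.sin(2 * np.pi * 500 * t) * np.exp(-t / 0.004)

n_sweeps = 1500
# Background EEG noise (10 microvolts RMS) dwarfs the response in any one sweep
sweeps = abr + 10e-6 * rng.standard_normal((n_sweeps, t.size))

avg = average_sweeps(sweeps)
single_noise = np.std(sweeps[0] - abr)      # noise in one raw sweep
residual_noise = np.std(avg - abr)          # noise left after averaging
print(f"noise reduced ~{single_noise / residual_noise:.0f}x")  # ≈ sqrt(1500) ≈ 39
```

The response is invisible in any single sweep but emerges cleanly from the average; the same logic underlies every evoked-potential protocol in this article.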

Use

The ABR is used for newborn hearing screening, auditory threshold estimation, intraoperative monitoring, determining the type and degree of hearing loss, detecting auditory nerve and brainstem lesions, and in the development of cochlear implants.

Advanced techniques

Stacked ABR

History

One use of the traditional ABR is site-of-lesion testing, and it has been shown to be sensitive to large acoustic tumors. However, it has poor sensitivity to tumors smaller than 1 centimeter in diameter. In the 1990s, several studies concluded that the use of ABRs to detect acoustic tumors should be abandoned, and as a result many practitioners now use only MRI for this purpose.[9]

The reason the ABR does not identify small tumors is that ABRs rely on latency changes of peak V. Peak V is primarily influenced by high-frequency fibers, so tumors are missed if those fibers are not affected. Although the click stimulates a wide frequency region of the cochlea, phase cancellation of the lower-frequency responses occurs as a result of time delays along the basilar membrane.[10] If a tumor is small, those fibers may not be sufficiently affected for the traditional ABR measure to detect it.

The primary reasons it is not practical to simply send every patient for an MRI are the high cost of an MRI, its impact on patient comfort, and its limited availability in rural areas and developing countries. In 1997, Dr. Manuel Don and colleagues published on the Stacked ABR as a way to enhance the sensitivity of the ABR in detecting smaller tumors. Their hypothesis was that the new stacked derived-band ABR amplitude could detect small acoustic tumors missed by standard ABR measures.[11] In 2005, Don stated that it would be clinically valuable to have an ABR test available to screen for small tumors.[9] In a 2005 interview with Audiology Online, Dr. Don of the House Ear Institute defined the Stacked ABR as "...an attempt to record the sum of the neural activity across the entire frequency region of the cochlea in response to a click stimuli."[6]

Stacked ABR defined

The stacked ABR is the sum of the synchronous neural activity generated from five frequency regions across the cochlea in response to click stimulation and high-pass pink noise masking.[9] The development of this technique was based on the 8th cranial nerve compound action potential work done by Teas, Eldredge, and Davis in 1962.[12]

Methodology

The stacked ABR is a composite of activity from all frequency regions of the cochlea – not just the high frequencies.[6]

  • Step 1: obtain ABR responses to clicks presented with high-pass pink masking noise (ipsilateral masking)
  • Step 2: derive the derived-band ABRs (DBRs)
  • Step 3: shift the derived-band waveforms to align their wave V peaks – thus "stacking" the waveforms with wave V lined up
  • Step 4: add the aligned waveforms together
  • Step 5: compare the amplitude of the Stacked ABR with that of the click-evoked ABR from the same ear

Because the derived waveforms represent activity from progressively more apical regions along the basilar membrane, their wave V latencies are prolonged by the nature of the traveling wave. To compensate for these latency shifts, the wave V component of each derived waveform is stacked (aligned), the waveforms are added together, and the resulting amplitude is measured.[10] In 2005, Don explained that in a normal ear, the Stacked ABR has the same amplitude as the click-evoked ABR, but the presence of even a small tumor reduces the amplitude of the Stacked ABR relative to the click-evoked ABR.
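The align-and-sum steps above can be sketched in a few lines of Python. This is a toy illustration, not clinical analysis software: the derived-band waveforms are synthetic Gaussians, and wave V is naively taken as each waveform's largest peak.

```python
import numpy as np

def stack_abr(derived_bands):
    """Align wave V across derived-band ABRs, then sum them.

    Wave V is taken here as each band's largest positive peak
    (a simplification); real analysis identifies wave V by
    latency and morphology.
    """
    bands = [np.asarray(b, dtype=float) for b in derived_bands]
    peaks = [int(np.argmax(b)) for b in bands]
    ref = peaks[0]                           # align everything to the first band's peak
    aligned = [np.roll(b, ref - p) for b, p in zip(bands, peaks)]
    return np.sum(aligned, axis=0)

# Toy derived-band responses: wave V is progressively delayed in the
# more apical bands due to basilar-membrane travel time.
t = np.linspace(0, 10, 200)                  # ms
def band(latency_ms, amp):
    return amp * np.exp(-((t - latency_ms) ** 2) / 0.5)

bands = [band(5.5 + 0.6 * i, 0.2) for i in range(5)]
stacked = stack_abr(bands)
print(round(stacked.max(), 2))   # ≈ 1.0: the five 0.2-amplitude peaks add coherently
```

Without the alignment step, the staggered peaks would partially cancel and the summed amplitude would understate the total neural activity — which is exactly the problem the stacking procedure addresses.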

Application and effectiveness

With the intent of screening for and detecting small (less than or equal to 1 cm) acoustic tumors, the Stacked ABR achieved:[11]

  • 95% sensitivity
  • 83% specificity

(Note: 100% sensitivity was obtained at 50% specificity.)
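Sensitivity and specificity are simple proportions over a test population. The sketch below uses a hypothetical cohort of 200 ears chosen to reproduce the figures above; the counts are illustrative, not data from the cited study.

```python
def sensitivity(tp, fn):
    """Proportion of actual tumor cases the test flags (true-positive rate)."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Proportion of tumor-free cases the test clears (true-negative rate)."""
    return tn / (tn + fp)

# Hypothetical cohort: 100 small-tumor ears, 100 tumor-free ears.
print(sensitivity(tp=95, fn=5))    # 0.95 → 95% sensitivity
print(specificity(tn=83, fp=17))   # 0.83 → 83% specificity
```

The trade-off noted above (100% sensitivity at 50% specificity) reflects moving the decision criterion: flagging more ears catches every tumor but misclassifies half of the healthy ears.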

In a 2007 comparative study of ABR abnormalities in acoustic tumor patients, Montaguti and colleagues mention the promise of and great scientific interest in the Stacked ABR. The article suggests that the Stacked ABR could make it possible to identify small acoustic neuromas missed by traditional ABRs.[13]

The Stacked ABR is a valuable screening tool for the detection of small acoustic tumors because it is sensitive, specific, widely available, comfortable, and cost-effective.

Tone-burst ABR

Tone-burst ABR is used to obtain thresholds for children who are too young to respond reliably to behavioral testing with frequency-specific sound stimuli. The most commonly tested frequencies are 500, 1000, 2000, and 4000 Hz, as these frequencies are generally considered necessary for hearing aid programming.

Auditory steady-state response (ASSR)

ASSR defined

The auditory steady-state response is an auditory evoked potential, elicited with modulated tones, that can be used to predict hearing sensitivity in patients of all ages. It is an electrophysiologic response to rapid auditory stimuli that yields a statistically valid estimated audiogram (an evoked potential used to predict hearing thresholds for both normal-hearing individuals and those with hearing loss). The ASSR uses statistical measures to determine whether a threshold is present and serves as a "cross-check" for verification purposes prior to arriving at a differential diagnosis.

History

In 1981, Galambos and colleagues reported on the "40 Hz auditory potential", a continuous 400 Hz tone sinusoidally amplitude-modulated at 40 Hz and presented at 70 dB SPL. This produced a very frequency-specific response, but one that was highly susceptible to the state of arousal. In 1991, Cohen and colleagues found that presenting at a stimulation rate higher than 40 Hz (>70 Hz) produced a smaller response that was less affected by sleep. In 1994, Rickards and colleagues showed that it was possible to obtain responses in newborns. In 1995, Lins and Picton found that simultaneous stimuli presented at rates in the 80 to 100 Hz range made it possible to obtain auditory thresholds.[3]

Methodology

ASSR recordings use the same electrode montages as traditional ABR recordings, or similar ones. Two active electrodes are placed at or near the vertex and at the ipsilateral earlobe/mastoid, with the ground at the low forehead. If collecting from both ears simultaneously, a two-channel pre-amplifier is used. When a single-channel recording system is used to detect activity from a binaural presentation, a common reference electrode may be located at the nape of the neck. Transducers can be insert earphones, headphones, a bone oscillator, or a sound field, and it is preferable for the patient to be asleep. Unlike ABR settings, the high-pass filter might be approximately 40 to 90 Hz and the low-pass filter might be between 320 and 720 Hz, with typical filter slopes of 6 dB per octave. Gain settings of 10,000 are common, artifact rejection is left "on", and it is considered advantageous to have a manual override so the clinician can make decisions during the test and apply corrections as needed.[14]

ABR vs. ASSR

Similarities:

  • Both record bioelectric activity from electrodes arranged in similar recording arrays.
  • Both are auditory evoked potentials.
  • Both use acoustic stimuli delivered through inserts (preferably).
  • Both can be used to estimate threshold for patients who cannot or will not participate in traditional behavioral measures.

Differences:

  • ASSR looks at amplitude and phases in the spectral (frequency) domain rather than at amplitude and latency.
  • ASSR depends on peak detection across a spectrum rather than across a time vs. amplitude waveform.
  • ASSR is evoked using repeated sound stimuli presented at a high rep rate rather than an abrupt sound at a relatively low rep rate.
  • ABR typically uses click or tone-burst stimuli in one ear at a time, but ASSR can be used binaurally while evaluating broad bands or four frequencies (500, 1k, 2k, & 4k) simultaneously.
  • ABR estimates thresholds mainly from 1–4 kHz in typical mild, moderate, and severe hearing losses. ASSR can also estimate thresholds in the same range, but offers more frequency-specific information more quickly and can estimate hearing in the severe-to-profound range.
  • ABR depends highly upon a subjective analysis of the amplitude/latency function. The ASSR uses a statistical analysis of the probability of a response (usually at a 95% confidence interval).
  • ABR is measured in microvolts (millionths of a volt) and the ASSR is measured in nanovolts (billionths of a volt).[14]

Analysis is mathematically based and depends on the fact that related bioelectric events coincide with the stimulus repetition rate. The specific method of analysis is based on the manufacturer's statistical detection algorithm. It occurs in the spectral domain and examines specific frequency components that are harmonics of the stimulus repetition rate. Early ASSR systems considered only the first harmonic, but newer systems also incorporate higher harmonics in their detection algorithms.[14] Most equipment provides correction tables for converting ASSR thresholds to estimated HL audiograms, which are generally found to be within 10 to 15 dB of audiometric thresholds, although there is variance across studies. Correction data depend on variables such as the equipment used, frequencies collected, collection time, subject age, subject sleep state, and stimulus parameters.[15]
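The manufacturers' detection algorithms are proprietary, but the general idea — test whether spectral energy at the stimulus repetition rate rises above the neighboring noise floor — can be sketched with a plain FFT. All parameters below are illustrative assumptions, not values from any actual ASSR system, and real devices use validated statistical tests (often pooling higher harmonics as noted above).

```python
import numpy as np

def assr_detect(signal, fs, mod_rate, n_noise_bins=60, threshold=4.0):
    """Toy spectral detection: is there a response at the modulation rate?

    Compares power in the FFT bin nearest `mod_rate` against the mean
    power of neighboring bins (an F-ratio-like statistic).
    """
    spec = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    k = int(np.argmin(np.abs(freqs - mod_rate)))
    lo, hi = max(k - n_noise_bins // 2, 1), k + n_noise_bins // 2
    noise = np.r_[spec[lo:k], spec[k + 1:hi + 1]]   # flanking "noise" bins
    ratio = spec[k] / noise.mean()
    return ratio > threshold, ratio

# Synthetic example: a 50 nV response at an 80 Hz modulation rate,
# buried in 100 nV RMS background noise, recorded for 4 seconds.
rng = np.random.default_rng(1)
fs, dur = 1000, 4.0
t = np.arange(int(fs * dur)) / fs
eeg = 50e-9 * np.sin(2 * np.pi * 80 * t) + 100e-9 * rng.standard_normal(t.size)

present, ratio = assr_detect(eeg, fs, mod_rate=80)
print(present)   # True: the 80 Hz component stands out of the noise floor
```

Because the response is periodic at a known rate, a long recording concentrates its energy into a single frequency bin while the noise stays spread out — the spectral-domain counterpart of the time-domain averaging used for the ABR.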

Hearing aid fittings

In certain cases where behavioral thresholds cannot be obtained, ABR thresholds can be used for hearing aid fittings. Newer fitting formulas such as DSL v5.0 allow the user to base hearing aid settings on ABR thresholds. Correction factors for converting ABR thresholds to behavioral thresholds exist, but vary greatly; for example, one set of correction factors involves lowering ABR thresholds from 1000 to 4000 Hz by 10 dB and lowering the ABR threshold at 500 Hz by 15 to 20 dB.[16] Previously, brainstem audiometry was used for hearing aid selection by using normal and pathological intensity-amplitude functions to determine appropriate amplification.[17] The principal idea behind selecting and fitting the hearing instrument was that the amplitudes of the brainstem potentials are directly related to loudness perception; under this assumption, the amplitudes of brainstem potentials evoked through the hearing devices should exhibit close-to-normal values. ABR thresholds do not necessarily improve in the aided condition.[18] ABR can be an inaccurate indicator of hearing aid benefit because hearing aids may not reproduce the transient stimuli used to evoke a response with sufficient fidelity. Bone-conduction ABR thresholds can be used if other limitations are present, but they are not as accurate as ABR thresholds recorded through air conduction.[19]
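The example correction factors above can be expressed as a small lookup. This sketches only that one published set; the 500 Hz correction is given as a range (15–20 dB), so the midpoint used here is an arbitrary assumption, and real fittings follow the audiologist's chosen protocol.

```python
def estimate_behavioral_threshold(freq_hz, abr_threshold_db):
    """Apply the example correction factors from the text:
    lower ABR thresholds by 10 dB at 1000-4000 Hz and by 15-20 dB
    at 500 Hz. The 500 Hz midpoint (17.5 dB) is an arbitrary choice
    here -- the source gives a range, not a single value.
    """
    if freq_hz == 500:
        return abr_threshold_db - 17.5
    if 1000 <= freq_hz <= 4000:
        return abr_threshold_db - 10
    raise ValueError("no correction factor given for this frequency")

print(estimate_behavioral_threshold(2000, 45))  # 35
print(estimate_behavioral_threshold(500, 50))   # 32.5
```

The larger low-frequency correction reflects the poorer agreement between ABR and behavioral thresholds at 500 Hz noted in the threshold-estimation literature.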

Advantages of hearing aid selection by brainstem audiometry include the following applications:

  • evaluation of loudness perception in the dynamic range of hearing (recruitment)
  • determination of basic hearing aid properties (gain, compression factor, compression onset level)
  • cases with middle ear impairment (contrary to acoustic reflex methods)
  • non-cooperative subjects even in sleep
  • sedation or anesthesia without influence of age and vigilance (contrary to cortical evoked responses).

Disadvantages of hearing aid selection by brainstem audiometry include the following limitations:

  • in cases of severe hearing impairment, little or no information about loudness perception is obtained
  • no control of the compression setting
  • no frequency-specific compensation of the hearing impairment

Cochlear implantation and central auditory development

About 188,000 people around the world have received cochlear implants; in the United States alone, there are about 30,000 adult and over 30,000 child recipients.[20] This number continues to grow as cochlear implantation becomes more widely accepted. In 1961, Dr. William House, an otologist and the founder of the House Ear Institute in Los Angeles, California, began work on the predecessor of today's cochlear implant. This groundbreaking device, manufactured by the 3M Company, was approved by the FDA in 1984.[21] Although it was a single-channel device, it paved the way for future multichannel cochlear implants. As of 2007, the three cochlear implant devices approved for use in the U.S. are manufactured by Cochlear, Med-El, and Advanced Bionics. A cochlear implant works as follows: sound is received by the implant's microphone, and the external sound processor determines from that input how the electrodes will receive the signal. The transmitting coil, also an external component, sends the information from the sound processor through the skin using frequency-modulated radio waves; unlike with a hearing aid, the signal is never turned back into an acoustic stimulus. The information is then received by the implant's internal components: the receiver-stimulator delivers the appropriate electrical stimulation to the electrodes on the array to represent the detected sound signal, and the electrode array stimulates the remaining auditory nerve fibers in the cochlea, which carry the signal to the brain, where it is processed.

One way to measure the developmental status and limits of plasticity of the auditory cortical pathways is to study the latency of cortical auditory evoked potentials (CAEP). In particular, the latency of the first positive peak (P1) of the CAEP is of interest to researchers. P1 in children is considered a marker for maturation of the auditory cortical areas (Eggermont & Ponton, 2003; Sharma & Dorman, 2006; Sharma, Gilley, Dorman, & Baldwin, 2007).[22][23][24] The P1 is a robust positive wave occurring at around 100 to 300 ms in children. P1 latency represents the synaptic delays throughout the peripheral and central auditory pathways (Eggermont, Ponton, Don, Waring, & Kwong, 1997).[25]

P1 latency changes as a function of age and is considered an index of cortical auditory maturation (Ceponiene, Cheour, & Naatanen, 1998).[26] P1 latency and age are strongly negatively correlated: P1 latency decreases with increasing age, most likely due to more efficient synaptic transmission over time. The P1 waveform also becomes broader with age. The P1 neural generators are thought to originate in the thalamo-cortical portion of the auditory cortex. Researchers believe that P1 may reflect the first recurrent activity in the auditory cortex (Kral & Eggermont, 2007).[27] The negative component following P1 is called N1; N1 is not consistently seen in children until 12 years of age.

In 2006, Sharma and Dorman measured the P1 response in deaf children who received cochlear implants at different ages to examine the limits of plasticity in the central auditory system.[23] Children who received cochlear implant stimulation early in childhood (younger than 3.5 years) had normal P1 latencies, while children who received stimulation late in childhood (older than seven years) had abnormal cortical response latencies. Children who received cochlear implant stimulation between the ages of 3.5 and 7 years showed variable P1 latencies. Sharma also studied the waveform morphology of the P1 response in 2005[28] and 2007.[24] She found that in early-implanted children the P1 waveform morphology was normal, whereas in late-implanted children the P1 waveforms were abnormal and had lower amplitudes than normal waveform morphology. In 2008, Gilley and colleagues used source reconstruction and dipole source analysis derived from high-density EEG recordings to estimate the generators of the P1 in three groups of children: normal-hearing children, children who received a cochlear implant before the age of four, and children who received a cochlear implant after the age of seven. They concluded that the waveform morphology of normal-hearing and early-implanted children was very similar.[29]

See also

References

  1. ^ "Auditory Brainstem Response (ABR) Evaluation". www.hopkinsmedicine.org. 2022-05-27. Retrieved 2024-02-16.
  2. ^ Young, Allen; Cornejo, Jennifer; Spinner, Alycia (2024), "Auditory Brainstem Response", StatPearls, Treasure Island (FL): StatPearls Publishing, PMID 33231991, retrieved 2024-02-16
  3. ^ a b Eggermont, Jos J.; Burkard, Robert F.; Manuel Don (2007). Auditory evoked potentials: basic principles and clinical application. Hagerstwon, MD: Lippincott Williams & Wilkins. ISBN 978-0-7817-5756-0. OCLC 70051359.
  4. ^ a b c Hall, James W. (2007). New handbook of auditory evoked responses. Boston: Pearson. ISBN 978-0-205-36104-5. OCLC 71369649.
  5. ^ Moore, Ernest J (1983). Bases of auditory brain stem evoked responses. New York: Grune & Stratton. ISBN 978-0-8089-1465-5. OCLC 8451561.
  6. ^ a b c DeBonis, David A.; Donohue, Constance L. (2007). Survey of Audiology: Fundamentals for Audiologists and Health Professionals (2nd ed.). Boston, Mass: Allyn & Bacon. ISBN 978-0-205-53195-0. OCLC 123962954.
  7. ^ Møller, Aage R.; Jannetta, Peter J.; Møller, Margareta B. (November 1981). "Neural Generators of Brainstem Evoked Potentials: Results from Human Intracranial Recordings". Annals of Otology, Rhinology & Laryngology. 90 (6): 591–596. doi:10.1177/000348948109000616. ISSN 0003-4894. PMID 7316383. S2CID 11652964.
  8. ^ Long, K.J.; Allen, N. (October 1984). "Abnormal brain-stem auditory evoked potentials following Ondine's curse". Arch. Neurol. 41 (10): 1109–10. doi:10.1001/archneur.1984.04050210111028. PMID 6477223.
  9. ^ a b c Don M, Kwong B, Tanaka C, Brackmann D, Nelson R (2005). "The stacked ABR: a sensitive and specific screening tool for detecting small acoustic tumors". Audiol. Neurootol. 10 (5): 274–90. doi:10.1159/000086001. PMID 15925862. S2CID 43009634.
  10. ^ a b Prout, T (2007). "Asymmetrical low frequency hearing loss and acoustic neuroma". Audiologyonline.
  11. ^ a b Don M, Masuda A, Nelson R, Brackmann D (September 1997). "Successful detection of small acoustic tumors using the stacked derived-band auditory brain stem response amplitude". Am J Otol. 18 (5): 608–21, discussion 682–5. PMID 9303158.
  12. ^ Teas, Donald C. (1962). "Cochlear Responses to Acoustic Transients: An Interpretation of Whole-Nerve Action Potentials". The Journal of the Acoustical Society of America. 34 (9B): 1438–1489. Bibcode:1962ASAJ...34.1438T. doi:10.1121/1.1918366. ISSN 0001-4966.
  13. ^ Montaguti M, Bergonzoni C, Zanetti MA, Rinaldi Ceroni A (April 2007). "Comparative evaluation of ABR abnormalities in patients with and without neurinoma of VIII cranial nerve". Acta Otorhinolaryngol Ital. 27 (2): 68–72. PMC 2640003. PMID 17608133.
  14. ^ a b c Beck, DL; Speidel, DP; and Petrak, M. (2007) Auditory Steady-State Response (ASSR): A Beginner's Guide. The Hearing Review. 2007; 14(12):34-37.
  15. ^ Picton TW, Dimitrijevic A, Perez-Abalo MC, Van Roon P (March 2005). "Estimating audiometric thresholds using auditory steady-state responses". Journal of the American Academy of Audiology. 16 (3): 140–56. doi:10.3766/jaaa.16.3.3. PMID 15844740.
  16. ^ Hall JW, Swanepoel DW (2010). Objective Assessment of Hearing. San Diego: Plural Publishing Inc.
  17. ^ Kiebling J (1982). "Hearing Aid Selection by Brainstem Audiometry". Scandinavian Audiology. 11 (4): 269–275. doi:10.3109/01050398209087478. PMID 7163771.
  18. ^ Billings CJ, Tremblay K, Souza PE, Binns MA (2007). "Stimulus Intensity and Amplification Effects on Cortical Evoked Potentials". Audiol Neurotol. 12 (4): 234–246. doi:10.1159/000101331. PMID 17389790. S2CID 2120101.
  19. ^ Rahne T, Ehelebe T, Rasinski C, Gotze G (2010). "Auditory Brainstem and Cortical Potentials Following Bone-Anchored Hearing Aid Stimulation". Journal of Neuroscience Methods. 193 (2): 300–306. doi:10.1016/j.jneumeth.2010.09.013. PMID 20875458. S2CID 42869487.
  20. ^ Jennifer Davis (2009-10-29), Peoria Journal Star, According to the U.S. Food and Drug Administration, about 188,000 people worldwide have received implants as of April 2009.
  21. ^ W.F. House (2009), Annals of Otology, Rhinology, and Laryngology, vol. 85, pp. 1–93, Cochlear implants
  22. ^ Eggermont, J. J.; Ponton, C. W. (2003). "Auditory-evoked potential studies of cortical maturation in normal hearing and implanted children: Correlations with changes in structure and speech perception". Acta Oto-Laryngologica. 123: 249–252.
  23. ^ a b Sharma, A.; Dorman, M. F. (2006). "Central auditory development in children with cochlear implants: Clinical implications". Advances in Oto-Laryngologica.
  24. ^ a b Sharma, A.; Gilley, P. M.; Dorman, M. F.; Baldwin, R. (2007). "Deprivation-induced cortical reorganization in children with cochlear implants". International Journal of Audiology. 46: 494–499.
  25. ^ Eggermont, J. J.; Ponton, C. W.; Don, M.; Waring, M. D.; Kwong, B. (1997). "Deprivation-induced cortical reorganization in children with cochlear implants". Acta Oto-Laryngologica. 117: 161–163.
  26. ^ Ceponiene, R.; Cheour, M.; Naatanen, R. (1998). "Interstimulus interval and auditory event-related potentials in children: Evidence for multiple generators". Electroencephalography and Clinical Neurophysiology. 108: 345–354.
  27. ^ Kral, A.; Eggermont, J. J. (2007). "What's to lose and what's to learn: development under auditory deprivation, cochlear implants and limits of cortical plasticity". Brain Res. Rev. 56: 259–269.
  28. ^ Sharma, A. (2005). "P1 latency as a biomarker for central auditory development in children with hearing impairment". Journal of the American Academy of Audiology. 16 (8): 564–573. doi:10.3766/jaaa.16.8.5. PMID 16295243.
  29. ^ Gilley, P. M., Sharma, A., & Dorman, M. F. (2008). Cortical reorganization in children with cochlear implants. Brain Research.

Further reading

Notes

This article is a direct transclusion of the Wikipedia article "Auditory brainstem response" and therefore may not meet the same editing standards as LIMSwiki.