Current Research

The Cochlear Implant Laboratory is funded by two large grants from the National Institutes of Health (NIH) and by smaller grants from cochlear implant manufacturers.

Cochlear Implants: Our experience with cochlear implants dates to the early 1980s, when Professor Michael Dorman joined Luke Smith at Symbion, Inc. in Salt Lake City, Utah to explore the performance of the first multichannel cochlear implant (CI) manufactured in the United States – the 4-channel Ineraid cochlear implant. That project evolved and moved, first to the University of Utah School of Medicine and then to Arizona State University. Currently, in one of our NIH-funded research efforts we

• assess the relative efficacy of unilateral, bimodal, and bilateral stimulation with a CI with the goal of providing a tool to clinicians that will enable them to determine whether the ear opposite the implant is best fit with a hearing aid or a second CI,

• assess the benefits of hearing preservation surgery, i.e., surgery which preserves low-frequency hearing in the implanted ear, and

• assess CI performance in realistic listening environments, e.g., environments in which noise – as in a restaurant or at a cocktail party – surrounds the listener.

In our other NIH-supported research project, a subcontract from Professor Philip Loizou at the University of Texas at Dallas, we

• use a wearable, personal digital assistant (PDA) as the signal processor for a cochlear implant and study issues involving both bimodal stimulation and bilateral stimulation.

Collaboration with industry: We provide beta testing of new signal-processing schemes for Advanced Bionics.

A Little History:

     Our early work focused on the performance of patients fit with the Ineraid cochlear implant – a 4-channel, analog device manufactured by Symbion, Inc. in Salt Lake City. The device was unique because (i) it was attached to the patient by a percutaneous plug, (ii) it contained no internal electronics, and (iii) it employed simultaneous, analog stimulation of four of six electrodes in the scala tympani. For more than 10 years Professor Dorman commuted between Phoenix, Arizona and Salt Lake City, Utah to conduct research, first at Symbion, Inc., then in the Department of Otolaryngology, Head and Neck Surgery at the University of Utah School of Medicine. The head of that program was James Parkin, M.D. – one of the pioneers of cochlear implant surgery and research. Some of the early experiments on multichannel electrical stimulation (e.g., Eddington et al., 1978) and the efficacy of multichannel cochlear implants (Eddington, 1980) were conducted with Parkin’s patients. Eddington’s 1978 paper is an extraordinary one and should be required reading for students of cochlear implants.
     In the middle-to-late 1980s the principal questions driving our research were (i) what level of performance could be achieved with a few (4) channels of simultaneous, analog stimulation and (ii) how high levels of performance, e.g., a 73% CNC word score, could be achieved with such a sparse input. Our early work on outcomes and the mechanisms underlying those outcomes can be found in Dorman et al. (1988, 1989). The performance of our highest-functioning patients may have influenced one implant company to offer a simultaneous analog stimulation (SAS) option for its implant (Osberger and Fisher, 1999).
     In 1990 we began a long-running collaboration with Blake Wilson and colleagues at Research Triangle Institute (RTI), North Carolina by sending our Ineraid patients to RTI. Wilson had been developing new processing strategies for cochlear implants since the early 1980s under contracts from the Neural Prosthesis Branch of the NIH. Because our Ineraid patients had a percutaneous plug instead of an implanted receiver, Wilson could test new signal-processing strategies without the limitations imposed by the electronics in an implanted receiver. Several of our Ineraid patients participated in the development and initial tests of the continuous interleaved sampling (CIS) processor (Wilson et al., 1991). Following one of our many visits to Wilson’s laboratory to test his novel signal processors, we published the first (non-Progress Report) description of the improvement in frequency resolution that could be obtained with an extra channel produced by a ‘current steering’ or ‘virtual channel’ processor (Dorman et al., 1996; Wilson et al., 1992). This type of processing strategy is now implemented in the Advanced Bionics HiRes 120 device.
     A major change in the work of the laboratory was introduced by Dr. Philip Loizou, a postdoctoral fellow in 1995-1996. Motivated by the work of Dr. Robert Shannon with a noise-band simulation of a cochlear implant, Loizou developed a sine-wave simulation of a cochlear implant, i.e., one that outputs a sine wave at the center frequency of each input filter instead of a noise band centered at that frequency. Our motivation for this processor was simple: in our experience no implant patient had ever reported that his/her implant sounded like a hissy, gravelly, noise-band simulation. Instead, patients usually reported a ‘thin’, tonal (sometimes ‘mechanical’ or ‘electronic’) percept. Loizou’s sine-wave simulation captured that sound quality to some degree and allowed us to conduct many experiments (e.g., Dorman et al., 1997a,b; Dorman and Loizou, 1997). Simulations, both noise-band and sine-wave, have become a very useful tool for researchers because factors that normally vary across patients, e.g., depth of insertion, number of active electrodes, the length of auditory deprivation, can be held constant in a simulation, and the effect of an independent variable can be seen without uncontrolled interactions.
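The basic signal flow of a sine-wave vocoder of this kind can be sketched in a few lines: band-pass the input into channels, follow each channel's amplitude envelope, and use that envelope to modulate a sine at the channel's center frequency. The sketch below is purely illustrative – the function name `sine_vocoder`, the band edges, the resonator used as a stand-in band-pass filter, and the 50 Hz envelope smoother are all our assumptions, not the filter bank Loizou actually used.

```python
import math

def sine_vocoder(signal, fs, band_edges):
    """Illustrative sine-wave vocoder (hypothetical parameters).

    Each channel: band-pass filter -> envelope follower -> envelope
    modulates a sine at the channel's center frequency.  Channel
    outputs are summed.  A research-grade CI simulation would use a
    proper filter bank and calibrated gains."""
    out = [0.0] * len(signal)
    for lo, hi in zip(band_edges[:-1], band_edges[1:]):
        fc = math.sqrt(lo * hi)                        # geometric center frequency
        # Two-pole resonator standing in for a band-pass filter.
        r = math.exp(-math.pi * (hi - lo) / fs)        # pole radius set by bandwidth
        a1 = -2.0 * r * math.cos(2.0 * math.pi * fc / fs)
        a2 = r * r
        gain = 1.0 - r                                 # rough amplitude normalization
        alpha = math.exp(-2.0 * math.pi * 50.0 / fs)   # ~50 Hz envelope smoother
        y1 = y2 = env = 0.0
        for n, x in enumerate(signal):
            y = gain * x - a1 * y1 - a2 * y2           # band-pass sample
            y2, y1 = y1, y
            env = alpha * env + (1.0 - alpha) * abs(y) # rectify + smooth = envelope
            out[n] += env * math.sin(2.0 * math.pi * fc * n / fs)
    return out
```

With band edges such as [300, 800, 1500, 3000] Hz, an input tone near 490 Hz excites mainly the lowest channel and re-emerges as an amplitude-modulated sine at that channel's center frequency; a noise-band vocoder would differ only in replacing the carrier sine with band-limited noise.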
     In 2003, Dr. Tony Spahr initiated a series of experiments that set the stage for our recent work. Following years of claims that all widely used multichannel cochlear-implant systems allowed similar levels of speech understanding (in spite of different signal processing), we decided to put the claim to the test. All of the manufacturers were concerned that, no matter what sentence material we used, the patients of the “other” manufacturer would have heard the sentences more often than their own patients and that this familiarity would bias the results of the experiment. To get around this problem, we developed a new set of sentence materials – the AzBio sentences.
     In 2005, prompted by the first reports of the success of hearing preservation surgery in Frankfurt, Warsaw, and at the University of Iowa (v. Ilberg et al., 1999; Skarzynski et al., 2004; Gantz and Turner, 2004), Drs. Tony Spahr and Rene Gifford were dispatched to the International Center for Hearing and Speech in Warsaw to obtain estimates of nonlinear cochlear processing in the region of preserved hearing in hearing preservation patients. In hearing preservation surgery, the surgeon attempts to preserve low-frequency hearing apical to the tip of an electrode array that is inserted 10 to 20 mm into the cochlea. The patients in Warsaw had 20-mm insertions. At issue were the reports of nearly perfect conservation of residual hearing in several of the patients. We were able to confirm those reports (Gifford et al., 2008a).
     Today, our work focuses on the relative merits of bilateral CIs, bimodal CIs, and hearing preservation surgery – the latter project with Dr. Rene Gifford at Vanderbilt University. Our most recent innovation is to test patients in a realistic listening environment – the patient sits in the center of an 8-loudspeaker array surrounded by restaurant noise or the noise from a cocktail party. At issue is whether performance in this environment differs from that in a standard booth with a single loudspeaker.