Increased Input vs. Decreased Input — Why Can Both Resolve Hyperacusis?

Aaron91

Over the last couple of days, it occurred to me that two seemingly polar-opposite processes can resolve hyperacusis, so I thought I'd start an open discussion to see if we can find out anything more about the underlying pathology. I'll start by sharing some of my own thoughts.

There have been some anecdotal reports of people with hyperacusis seeing their symptoms improve after receiving a cochlear implant, the deduction being that increased input must somehow reverse the maladaptive plasticity that has occurred. Conversely, I've seen several cases on this forum and others where hyperacusis has resolved after sufferers have experienced (further) hearing loss, the deduction being that decreased input has also reversed this same maladaptive plasticity. Why is this?

As many of us already know, research has suggested that the sensitisation of type II afferents - through ATP leakage and/or an increase in synapses on the OHCs (in conjunction with a loss of synapses on IHCs) - is what causes the maladaptive plasticity. But if leaking OHCs are indeed the issue here, how would the increased input arising from a cochlear implant help with what is seemingly a molecular/biological process? Surely the OHCs are compromised, CI or no CI. Equally, if hearing loss occurs, one would expect further ATP leakage from newly compromised OHCs, as well as a further increase in the number of OHC synapses, leading to further sensitisation - unless of course this hearing loss is the result of already compromised OHCs dying off completely, thereby reducing ATP leakage altogether.

I can understand how a drug like FX-322 might work for hyperacusis, because new, structurally sound OHCs would replenish the dead/damaged ones and thereby decrease ATP leakage. A cochlear implant, on the other hand, simply boosts overall input to the existing pool of OHCs and IHCs, which makes me wonder: if we know type I afferents are the ones responsible for transmitting and processing sound meaningfully, and given that it is the IHCs that synapse predominantly onto type I afferents, is it possible that something else is going on somewhere along the auditory pathway?

Conversely, I also recall the knock-out mouse study, where they genetically engineered mice to be deaf but were still able to induce hyperacusis in them - effectively suggesting that pain can be experienced even in the absence of input - although I wouldn't infer from this that restoring input couldn't alleviate the hyperacusis.

I suppose I am still none the wiser after writing all this lol, but I welcome anyone else's thoughts on this topic.
 
Good thread and well timed - my thinking has been heading the same way recently.

As we know, the research on extra OHC ribbons means that an OHC has to be present for this sensitization to even happen. I'm just starting to wonder whether, once a type II is sensitized and its intact OHC has extra ribbons, the OHC is now acting as a sensory input booster for pain.

The decreased vs. increased input argument you put forward is, I think, quite easy to explain, particularly with the condition you attach to each. I don't know much about cochlear implants, but what I do understand is that they damage what's left of the inner ear cells as the electrode is inserted. Apparently there are shorter ones now that preserve the lower frequencies (because they aren't inserted as far), but as far as I'm aware, whatever the electrode passes on its way in gets largely destroyed. Hearing loss is the same thing happening naturally, and I think the end result in both cases is no OHCs and possibly the destruction of the type IIs as well. So the mechanism for pain is being wiped out either way, and deafness is being induced either way, even though in the case of cochlear implants the hearing is then replaced artificially.

In the case of natural hearing loss, even if a sensitized type II does remain, is the fact that there is no OHC anymore enough to effectively terminate it?

In the case of cochlear implants, which have now wiped out the hair cells and maybe the sensitized type IIs as well, there is perhaps now a pain-free pathway for the artificially generated signal.

It makes me wonder more and more if the extra ribbons in OHCs, in conjunction with the new afferent endings, are what's responsible for pain. For example, whether or not we assume that sensitized type IIs remain after natural or forced OHC death, the various anecdotes from hearing loss sufferers and cochlear implant recipients suggest that the pain is relieved to varying degrees, whether they end up deafer or with artificial hearing, and the common factor seems to be OHC death or destruction. Is the relief down to a reduction in OHCs along with their extra ribbons?

Again, I'm only starting to understand cochlear implants, but I don't think they communicate with existing OHCs. The recent story suggested that the sufferer had lost mid-range hearing that was somewhat restored with the implant (there were presumably no OHCs present). I think implants communicate with nerve structures rather than with whatever hair cells might be left behind. And I actually think that any remnants of pain could well be down to any leftover OHCs, particularly if they have extra ribbons, as they would still be receiving noise signals and transmitting on the original pathway.

As for the knock-out mouse study you're referring to: didn't they say they were able to sensitize type IIs in deaf mice? (I can't remember the whole thing without digging it out.) I still don't think they can actually quantify that pain is being felt, though. I'm a bit unsure about this, but that's what I took away from it. What would be interesting is to regenerate the OHCs in the mice, with and without extra ribbons, and then see if pain was being felt in each case. (Poor things - it's easy to forget what they go through in all of this research.)

It's now apparent there are at least two routes at play with regard to the sensitization of type IIs: either via ATP or via extra ribbons. I believe that while both may need to apply to a type II, only one of them is actually key to what causes pain, and after seeing how a decrease in OHCs seems to be the common factor for pain relief in your examples, I'm starting to think it's down to the extra-ribbons route. Above all else, raw noise is what causes the pain, and that is an OHC thing.
 
I don't know if my experience helps or just confuses things. I suffered severe high-frequency hearing loss in one ear while using headphones (it wasn't even loud!). I soon developed hyperacusis in the ear with the hearing loss, and within about a month, after more noise exposure, I started developing hyperacusis in my other ear, which has perfect hearing. So I have both: an ear with hearing loss above 6 kHz and hyperacusis, and an ear with no measurable loss and hyperacusis. I'm guessing that if I had an extended audiogram, my loss in the very high frequencies would be tragic.

I'm convinced that my hearing loss and hyperacusis are the result of noise, despite doctors trying to blame them on a virus. Prior to my hearing loss event, I frequently used headphones (although never really loud), attended many concerts, and blasted music in my car.
 
Good thread and well timed - my thinking has been heading the same way recently.

As we know, the research on extra OHC ribbons means that an OHC has to be present for this sensitization to even happen. I'm just starting to wonder whether, once a type II is sensitized and its intact OHC has extra ribbons, the OHC is now acting as a sensory input booster for pain.
It's an interesting thought. What I want to know is what is happening here on a molecular level. Do these extra ribbons translate to more ATP being produced? My understanding was that the ATP comes from the cell itself. This is probably a silly analogy, but if you imagine the OHC as a bucket and the ATP as water, I simply see these ribbons as extra "holes" that allow the ATP to drain "faster" onto the type II afferent - the amount of water (ATP) doesn't actually increase, but maybe I'm wrong.
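To make my bucket analogy a bit more concrete, here's a toy sketch (purely illustrative - the numbers and the simple "drain" rule are my own made-up assumptions, not actual cochlear physiology). The point is just that extra "holes" (ribbons) speed up how quickly a fixed ATP pool spills onto the type II afferent, without increasing the total amount spilled:

Code:
# Toy "leaky bucket" sketch of the analogy above (illustrative only).
# Assumption (mine): a fixed ATP pool per OHC, and each ribbon ("hole")
# releases a fixed fraction of whatever is left at each time step.

def drain(atp_pool, n_ribbons, leak_per_ribbon=0.05, steps=100):
    """Return the amount released onto the type II afferent at each step."""
    released = []
    for _ in range(steps):
        release = min(atp_pool, atp_pool * leak_per_ribbon * n_ribbons)
        atp_pool -= release
        released.append(release)
    return released

normal = drain(atp_pool=100.0, n_ribbons=2)  # baseline OHC
extra = drain(atp_pool=100.0, n_ribbons=6)   # OHC with extra ribbons

print(round(max(normal), 1), round(max(extra), 1))  # 10.0 vs 30.0: faster initial release
print(round(sum(normal), 1), round(sum(extra), 1))  # ~100.0 vs ~100.0: same total ATP

In other words, in this toy picture the extra ribbons change the rate of delivery rather than the amount - which, if the analogy holds at all, might be the kind of difference a type II afferent would care about.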
As far as I'm aware, whatever the electrode passes on its way in gets largely destroyed. Hearing loss is the same thing happening naturally, and I think the end result in both cases is no OHCs and possibly the destruction of the type IIs as well. So the mechanism for pain is being wiped out either way, and deafness is being induced either way, even though in the case of cochlear implants the hearing is then replaced artificially.
How sure are you about this? My understanding was that a cochlear implant boosts what natural hearing the ear has left in a very crude way - that's why a lot of people still complain about a lack of clarity even after they've received the implant. If someone had zero functional hair cells left, I'm not sure a cochlear implant would work at all. I'm aware CIs do cause damage when they are implanted, but I would be surprised if it was to the extent of somehow destroying all the OHCs. What was interesting for me to read earlier today was that CIs cause a huge amount of inflammation when they are inserted, which of course raises a lot of questions about whether inflammation is a cause of hyperacusis, because there are clearly cases of people who receive CIs, have hyperacusis, and see their hyperacusis improve in spite of that additional inflammation.
In the case of natural hearing loss, even if a sensitized type II does remain, is the fact that there is no OHC anymore enough to effectively terminate it?
If the SGN type II afferent only responds to input and there is no input, I can't see how that SGN could transmit a pain signal, unless the ear is capable of some kind of phantom limb pain, which wouldn't be out of the question.
In the case of cochlear implants, which have now wiped out the hair cells and maybe the sensitized type IIs as well, there is perhaps now a pain-free pathway for the artificially generated signal.
For this to make sense, you would need the OHCs with extra ribbons specifically to die off while the ones without extra ribbons stay alive. I suppose, by definition though, OHCs with extra ribbons have them in the first place because of exposure to unhealthy levels of noise, so they would probably be the first to die off.
It makes me wonder more and more if the extra ribbons in OHCs, in conjunction with the new afferent endings, are what's responsible for pain. For example, whether or not we assume that sensitized type IIs remain after natural or forced OHC death, the various anecdotes from hearing loss sufferers and cochlear implant recipients suggest that the pain is relieved to varying degrees, whether they end up deafer or with artificial hearing, and the common factor seems to be OHC death or destruction. Is the relief down to a reduction in OHCs along with their extra ribbons?
This is a really powerful point and I think you may have hit the nail on the head. My only reservation is how much damage a cochlear implant actually does to our OHCs - would it be widespread enough to bring about the destruction required to stop the sensitisation?

I suppose if you are correct, one is left with the rather sobering thought that increased input in a non-destructive form, such as from FX-322, wouldn't resolve the problem. How would the progenitor cells help with this extra-synapse problem? I do recall Frequency Therapeutics saying that their drug acts on synapses insofar as, if a synapse is gone, FX-322 will grow a new one, but only in those places - meaning the synapses that have grown as a result of acoustic trauma, and which are responsible for hyperacusis, will not disappear in the presence of a drug such as FX-322. I've only just dropped this bomb on myself and I hope I'm wrong.

I also recall Frequency Therapeutics saying their drug will replace damaged cells. Does this mean that the synapses would be replaced as well? Perhaps a question for the next time Frequency Therapeutics is on the Tinnitus Talk Podcast.
Again, I'm only starting to understand cochlear implants, but I don't think they communicate with existing OHCs. The recent story suggested that the sufferer had lost mid-range hearing that was somewhat restored with the implant (there were presumably no OHCs present).
As I mentioned above, I don't think it's possible for a CI to be effective if there are no OHCs, but maybe I'm wrong. Isn't this why many people who are born deaf never get a CI - because they are not candidates for one?
I think implants communicate with nerve structures rather than with whatever hair cells might be left behind. And I actually think that any remnants of pain could well be down to any leftover OHCs, particularly if they have extra ribbons, as they would still be receiving noise signals and transmitting on the original pathway.
Yes, I think you are right here. My understanding is that the cochlear implant bypasses the peripheral part of the auditory system but boosts whatever signal the periphery is trying to send to the brain.
As for the knock-out mouse study you're referring to: didn't they say they were able to sensitize type IIs in deaf mice? (I can't remember the whole thing without digging it out.) I still don't think they can actually quantify that pain is being felt, though. I'm a bit unsure about this, but that's what I took away from it. What would be interesting is to regenerate the OHCs in the mice, with and without extra ribbons, and then see if pain was being felt in each case. (Poor things - it's easy to forget what they go through in all of this research.)
I would need to find the study, as I had a similar question, but what I remember reading is that mice exhibit some very obvious behaviours when they are in pain, and that labs rely on those behaviours whenever they need to tell whether a mouse is in pain from whatever is being done to it.
It's now apparent there are at least two routes at play with regard to the sensitization of type IIs: either via ATP or via extra ribbons. I believe that while both may need to apply to a type II, only one of them is actually key to what causes pain, and after seeing how a decrease in OHCs seems to be the common factor for pain relief in your examples, I'm starting to think it's down to the extra-ribbons route. Above all else, raw noise is what causes the pain, and that is an OHC thing.
I wonder if there is a third route, or at least a mechanism that explains the second route (extra ribbons) more thoroughly. I keep coming back to this zebrafish study I read a few weeks ago that I just can't let go of. It made some very interesting observations, one of them being that in the mammalian auditory system, ribbon size is correlated with differences in afferent activity. The second point is that larger ribbons within inner hair cells are innervated by afferent fibers with higher thresholds of activation and lower rates of spontaneous activity. Why is this important? Well, Liberman's work showed that IHCs in the high-frequency region of the mouse cochlea have enlarged ribbons immediately after noise, followed by synapse loss. Let's now simplify all of this into what it might mean. The below is just me speculating based on what I've shared above (with a toy sketch of the threshold point after the list):
  • Noise exposure -----> larger ribbons in IHCs
  • Larger ribbons in IHCs -----> innervation by fibres with higher thresholds of activation
  • Higher thresholds of activation -----> less input to type I afferents (sound information), compounded by eventual loss of IHC synapses
  • Less input to type I afferents -----> multiplication of ribbons in OHCs in response to lower input from IHCs
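And here is the toy sketch I mentioned, just to spell out the middle two bullets numerically (the rate rule and every number in it are my own made-up assumptions, not anything taken from the studies):

Code:
# Toy threshold-linear sketch of the "higher threshold -> less type I drive" idea.
# Assumption (mine): a fiber fires at max(0, sound_level - threshold), and a
# larger ribbon is paired with a fiber that has a higher activation threshold.

def firing_rate(sound_level, threshold):
    """Crude rate model: nothing below threshold, linear above it."""
    return max(0.0, sound_level - threshold)

sound = 60.0  # same arbitrary input level for both fibers

small_ribbon_fiber = firing_rate(sound, threshold=20.0)  # low-threshold fiber
large_ribbon_fiber = firing_rate(sound, threshold=45.0)  # higher-threshold fiber

print(small_ribbon_fiber)  # 40.0 -> more sound information reaching the brain
print(large_ribbon_fiber)  # 15.0 -> less type I drive at the same sound level

So, if the zebrafish observation carries over, the same sound would deliver noticeably less type I drive once the IHC ribbons have enlarged - which is the drop in input that I'm speculating the OHCs then respond to with extra ribbons of their own.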
 
