This is maybe a golden find @Aaron91. It helps me join two potentially major dots in my own theory. As you know, I've recently been working on something and spent a long time focusing purely on the mechanism of the tip-link MET channels. In normal hair cell stereocilia there's consistent, stable regulation of K+ flow through the MET channels depending on whether the cell is hyperpolarized (channels closed, ~-60 mV), at rest (partially open, I believe, ~-40 mV), or depolarized (fully open, ~-20 mV, which as I understand it triggers the influx of Ca2+). When focusing on damaged hair cells, however, I wondered how different levels and types of damage might interfere with that regulation of K+ through the MET channels. i.e. if tip links are permanently broken in extreme cases (they are capable of repairing themselves under certain/normal circumstances), do the MET channels remain permanently closed or severely dysfunctional? And also the reverse scenario, where a bent stereocilium with an intact tip link might force a MET channel permanently open - this is all highly speculative btw.
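Purely for intuition, here's a minimal Python sketch of the standard two-state (Boltzmann) gating model for MET channels, where open probability follows hair-bundle displacement (tip-link tension) and the membrane potentials above are the downstream result. Every number in it (the half-activation point, slope, and displacements) is an illustrative assumption, not a measured value:

import math

def met_open_probability(x_nm, x0_nm=20.0, s_nm=10.0):
    # Two-state Boltzmann model: probability that a MET channel is open
    # at a given hair-bundle displacement x (nm). x0_nm is the
    # half-activation point and s_nm the slope; both are illustrative
    # assumptions, not measured values.
    return 1.0 / (1.0 + math.exp(-(x_nm - x0_nm) / s_nm))

# At rest a fraction of channels sit open (the resting K+ current);
# positive deflection opens more, negative deflection closes them.
# A permanently broken tip link would pin p_open near 0, while a bent
# stereocilium holding tension on an intact link could pin it near 1 -
# the two failure modes speculated about above.
for x in (-40, 0, 20, 60):
    print(f"displacement {x:+4d} nm -> p_open = {met_open_probability(x):.2f}")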
I need to think about this because I'm still a bit vague on the details, but in a way it doesn't matter: if an enlarged ribbon is due to reduced Ca2+ influx, whether because the MET channels are not opening (so the cell never depolarizes) or, for some perverse reason, because they are stuck permanently open during hyperpolarization and rest, then that indicates OHC damage, and if OHCs are damaged then they are more likely to respond to FX-322 PCA. A huge worry I have had (probably my biggest) is that extra-ribbon OHCs are not damaged, or at least are not damaged enough to respond to FX-322 PCA, and would require some type of targeted ablation prior to FX-322, a study concept I wouldn't even want to speculate is happening yet.
@Aaron91 please can you link the zebrafish study again, the particular one you always talk about. Thx.
Very interesting @100Hz, I need to read more. In the meantime, here's the zebrafish study:
Synaptic mitochondria regulate hair-cell synapse size and function
And here were my initial comments prior to the ones I posted above:
"Great question
@serendipity1996 and I've been having exactly the same thoughts since you shared that study. I have since read
this recent study by Wong et al., which looks at synaptic ribbon regulation in zebrafish. It has some interesting parallels with the Paul Fuchs study and raises some interesting questions. Here are just a few quotes below from that study and my thoughts on them:
"in the mammalian auditory system, ribbon size is correlated with differences in afferent activity".
"Compared to smaller ribbons, larger ribbons within inner hair cells are innervated by afferent fibers with higher thresholds of activation and lower rates of spontaneous activity"
"Functionally, compared to controls, hair cells with enlarged ribbons were associated with afferent neurons with lower spontaneous activity"
So my first obvious takeaway here is that there is a relationship between ribbon size and afferent activity. My second takeaway is that the larger the ribbon, the more likely there is to be an innervation between the inner hair cell and the type I afferent fibres, although I can't be sure of this and would like to have it confirmed. My third takeaway, assuming my second is correct, is that once you have that innervation because of the enlarged ribbons, there is less spontaneous activity. Now, I really emphasise the word spontaneous, because as we have all read before, the argument goes that it is spontaneous "activation" of neurons in the type II afferents that causes hyperacusis. This would confirm my thoughts, and I believe what @serendipity1996 was also trying to get at: that there is an inverse relationship between the number and activity of the type I and type II afferent fibres respectively.
So this is all really enlightening, and I'm sure some of you are now wondering what we can do to affect ribbon sizes. This is where the news isn't so great:
"After a 1 hr treatment with 100 µM NAD+, we found that the ribbons in developing hair cells were significantly larger compared to controls. In contrast, after a 1 hr treatment with 5 mM NADH, ribbons were significantly smaller compared to controls. Neither exogenous NAD+ nor NADH were able to alter ribbon size in mature hair cells. These concentrations of NAD+ and NADH altered neither the number of synapses per hair cell nor postsynapse size in developing or mature hair cells. These results suggest that in developing hair cells, NAD+ promotes while NADH inhibits Ribeye-Ribeye interactions or Ribeye localization to the ribbon. Overall these results support the idea that during development, the levels of NAD+ and NADH can directly regulate ribbon size in vivo"
The long and short of it here is that exogenous NAD+/NADH do seem to affect ribbon sizes, but only in developing hair cells, not mature ones. It goes without saying this absolutely sucks, because it means not even an NAD+ or NADH supplement could help us as adults. It is also very unclear to me which of the two we would want, because they seem to have opposite effects. A larger ribbon size induced by NAD+ would help with IHC innervation to the type I afferents, but does this mean it would also help innervate OHCs to the type II afferents, which presumably is something we don't want? Conversely, a smaller ribbon size would reduce the chance of OHC innervation to the type II afferents, but then also do the same for IHCs and type I afferents. This sucks because the relationship here, as I said above, is an inverse one. If one goes up, the other must come down.
Finally, I feel that these two studies have perhaps answered a question I've had for a long time: why do some children with seemingly perfectly healthy cochleas, who have never been exposed to noise damage, have hyperacusis?
Well, here's a quote from the Paul Fuchs study:
"The number of ribbons in OHCs declines soon after birth, with that change essentially complete in the first postnatal week"
And here's a quote from the 2019 Wong study:
"Interestingly, in mice differences in ribbon size can be distinguished just after the onset of hearing. This timing suggests that similar to our data, activity during development may help determine ribbon size"
And here's one more quote from a study quoted by @100Hz:
"Of interest in this context is the previous report that sensitivity to ATP is reduced in type II afferents after the onset of hearing"
This for me says a lot. The first quote says that the number of ribbons in OHCs declines soon after birth, and the last one says that sensitivity to ATP is reduced after the onset of hearing, implying that there is a direct correlation between upregulation of pain receptors due to excess ATP and the number of ribbons. I would therefore guess that kids who are born with hyperacusis don't 'shed' their excess OHC ribbons, leaving the type II afferents prone to sensitivity. The question then is: how can we shed our excess OHC ribbons, which we seem to have gained following noise exposure?"