Frequency Therapeutics — Hearing Loss Regeneration

Keep your shares, Jack. There's no way of knowing how this will all turn out, and although I may come across like a negative Nancy on here, I am completely neutral and only motivated by what can be proven.

If Phase 2a had shown good/promising results, I would have been cartwheeling around my living room along with everyone else.

I think the blame for any sabotage has to lie with Frequency Therapeutics. They must have known it was a possibility to fake the results if that is indeed what happened.
I agree. At first I was a Frequency Therapeutics shill and told myself that they got screwed. After some thought, it shouldn't have been that hard to get WR stability between >=6 month and screening.

Where I would feel bad for them is if we found out that patients got into the trial with stability, but then tanked their baselines in order to prove that the drug worked. I doubt it happened, but there are also people who have ingested bleach so... the human race is uniquely creative at being stupid.
 
This is interesting - regarding the relationship between IHCs and LGR5+ cells, does this mean that each IHC has more than 1 corresponding LGR5+ progenitor cell? I'm just going over the Tinnitus Talk Podcast transcript with Carl LeBel again and he states that "Whatever is missing [either OHCs or IHCs], that's how we believe it's sort of programmed, because it tends to be a 1:1 relationship, every hair cell has their sort of partner progenitor cell." But we know that IHCs have more LGR5+ cells surrounding them. I may be completely misinterpreting this.
Every single schematic I have ever seen for this shows more LGR5+ cells around IHCs. But there aren't any hair cells without at least one neighbor. So maybe that's what he was getting at? I'm not sure what he meant by that quote.

Even the one used in McLean's paper shows this:

Diagram-of-Cochlear-Lgr5-Cell-Culture.png

 
I agree. At first I was a Frequency Therapeutics shill and told myself that they got screwed. After some thought, it shouldn't have been that hard to get WR stability between >=6 month and screening.

Where I would feel bad for them is if we found out that patients got into the trial with stability, but then tanked their baselines in order to prove that the drug worked. I doubt it happened, but there are also people who have ingested bleach so... the human race is uniquely creative at being stupid.
Word scores weren't used to determine stability, though. They didn't have to be consistent for the trial (the screeners just blindly follow metrics too to be impartial). In fact if they were consistent, they would not have seen the "discrepancy" they reported.
 
Word scores weren't used to determine stability, though. They didn't have to be consistent for the trial (the screeners just blindly follow metrics too to be impartial). In fact if they were consistent, they would not have seen the "discrepancy" they reported.
That's totally unforgivable for a drug that is primarily marketed for hidden hearing loss. Why use the Thornton Raffin CI approach for the trial, but not for the stability analysis (which it is equally intended for)? I'm not impressed.

Actually, it would be that much better if they did. Because if a bear beat up the Binomial assumption, they would really have to explain why it would be a problem to consistently use the same method pre-trial and during the trial.
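For anyone curious what a binomial-style consistency check on word-recognition scores looks like, here's a minimal sketch. It uses a generic variance-stabilizing arcsine interval, not the actual published Thornton-Raffin critical-difference tables, and the 30/50 score is a made-up example:

```python
import math

def arcsine_ci(correct: int, total: int, z: float = 1.96) -> tuple[float, float]:
    """Approximate 95% CI for a word-recognition score using the
    variance-stabilizing arcsine transform (phi = 2*asin(sqrt(p)),
    Var(phi_hat) ~ 1/n). A rough stand-in, not the published tables."""
    p = correct / total
    phi = 2 * math.asin(math.sqrt(p))
    half_width = z / math.sqrt(total)
    lo = math.sin(max(phi - half_width, 0.0) / 2) ** 2
    hi = math.sin(min(phi + half_width, math.pi) / 2) ** 2
    return lo, hi

lo, hi = arcsine_ci(30, 50)  # e.g. 30/50 words correct at baseline
print(f"a retest outside [{lo:.0%}, {hi:.0%}] would suggest a real change")
```

Two scores whose intervals don't overlap are unlikely to reflect the same underlying ability, which is the same intuition the Thornton-Raffin approach formalizes.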
 
The example you provide compares two different pieces of equipment (it is quite obvious from the readout that the testing was done in different places). This is a separate case.

Even if there is a margin of error of ±10 dB, testing done using the same equipment will still be capable of demonstrating an improvement within the margin of error (provided enough data points are collected). Suppose a person gets a treatment such as FX-322 and experiences a threshold shift – that improvement will be picked up by audiometry as a set of data points distributed around a new average:

View attachment 44299

The chance of not demonstrating an improvement (when in fact there was one) is indicated by the four red "X"-marks. The more data points you have (from the combined number of patients in a trial), the more unlikely it becomes that everyone would end up scoring red "X"-marks in a situation where the treatment worked. It's similar to flipping a coin and expecting heads (vs. tails). In a one-toss scenario, the probability of heads is 1/2. Expecting 4 consecutive heads is (1/2)^4 = 1/16 = 6.25%. The chance of 10 consecutive heads is (1/2)^10 ≈ 0.1%. The probability of landing a specific scenario when there are many data points collected quickly moves towards zero.
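The coin-toss arithmetic above can be sketched in a couple of lines (the "miss" probability here is illustrative, not an estimate from the trial):

```python
# If a single patient's real improvement has probability p_miss of being
# hidden by measurement noise, the chance that *every* patient's improvement
# is hidden shrinks geometrically with the number of patients.

def prob_all_missed(p_miss: float, n_patients: int) -> float:
    """Probability that all n independent patients land in the 'missed' tail."""
    return p_miss ** n_patients

# One coin toss: P(heads) = 0.5; four in a row = 6.25%; ten in a row ~ 0.1%
print(prob_all_missed(0.5, 4))   # 0.0625
print(prob_all_missed(0.5, 10))  # 0.0009765625
```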

I'll leave my own two before/after diagrams that actually were done using the same testing equipment and environment 2 months apart (the improvement of 25 dB at 8 kHz was not a coincidence...):

View attachment 44300
My tests were done using the same equipment (with a calibration date) and in a booth. They should be comparable. I have many others that are stored electronically, and they are all different (taken at the same place). Audiograms just aren't very accurate and that's the problem. This is also the opinion of my mother-in-law who is a senior audiologist.

The pre-determined frequencies that an audiogram targets miss significant auditory micro-structure. It's been observed that there can be a difference of as much as 15 dB HL in just 1/10 of an octave. When hearing damage occurs, the neighbouring cells are known to take up some of the slack, and if they aren't being measured, then it muddies the waters somewhat. Until there's a better and more accurate way of testing, the audiogram is all we have, and it's a rather rudimentary and inaccurate way to go about it.

Frequency Therapeutics acknowledged that the 10 dB gain was not clinically significant. No matter how you spin it, audiograms are antiquated.
 
That's totally unforgivable for a drug that is primarily marketed for hidden hearing loss. Why use the Thornton Raffin CI approach for the trial, but not for the stability analysis (which it is equally intended for)? I'm not impressed.

Actually, it would be that much better if they did. Because if a bear beat up the Binomial assumption, they would really have to explain why it would be a problem to consistently use the same method pre-trial and during the trial.
I agree. A huge part of the blame for the muddied results lies in not accounting for this.
 
I still don't understand how WR scores can increase in all groups but it "didn't work".
The reason that the interim analysis showed the study failed (even though WR scores improved) is that when WR scores (also) improved in the placebo group (which serves as a reference point), it makes it that much harder for the drug arm of the trial (i.e. those getting FX-322) to demonstrate an improvement. Kind of like trying to score a goal with moving goalposts (if that makes sense)...
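A minimal sketch of the moving-goalposts point, with made-up numbers: what gets tested is the drug arm's improvement over and above the placebo arm's improvement, so a rising placebo arm shrinks the measured effect:

```python
# Illustrative numbers only, not trial data.

def _mean(xs):
    return sum(xs) / len(xs)

def treatment_effect(drug_deltas, placebo_deltas):
    """Drug improvement over and above the placebo reference point."""
    return _mean(drug_deltas) - _mean(placebo_deltas)

# The same 12-point WR gain in the drug arm looks very different depending
# on what the placebo arm did:
print(treatment_effect([12, 12, 12], [0, 0, 0]))    # 12.0 -> clear signal
print(treatment_effect([12, 12, 12], [10, 11, 9]))  # 2.0  -> signal vanishes
```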
 
The pre-determined frequencies that an audiogram targets miss significant auditory micro-structure. It's been observed that there can be a difference of as much as 15 dB HL in just 1/10 of an octave. When hearing damage occurs, the neighbouring cells are known to take up some of the slack, and if they aren't being measured, then it muddies the waters somewhat. Until there's a better and more accurate way of testing, the audiogram is all we have, and it's a rather rudimentary and inaccurate way to go about it.
It doesn't matter that those deficits are present – they will be omnipresent and will show up in both the before and after scenario thereby allowing you to "subtract" two population averages from each other.

The way you phrase it seems to suggest that a certain deficit will only show up in one average but not the other. Selectivity does not apply to a large set of data points.

I am not sure you understand it, however...
 
It doesn't matter that those deficits are present – they will be omnipresent and will show up in both the before and after scenario thereby allowing you to "subtract" two population averages from each other.

The way you phrase it seems to suggest that a certain deficit will only show up in one average but not the other. Selectivity does not apply to a large set of data points.

I am not sure you understand it, however...
You two are making different valid points. @Ed209 is making the point that the actual measuring device is outdated for measuring hearing loss. If I understand correctly, the point you are making is that even with perfect equipment and i.i.d. data, there is simply a probability distribution associated with the data entirely due to randomness. This cannot be avoided under any test. Both are correct.
 
It doesn't matter that those deficits are present – they will be omnipresent and will show up in both the before and after scenario thereby allowing you to "subtract" two population averages from each other.

The way you phrase it seems to suggest that a certain deficit will only show up in one average but not the other. Selectivity does not apply to a large set of data points.

I am not sure you understand it, however...
I am saying that 10 dB is not significant enough. This is a well established fact in the field of audiology.

Are you suggesting that the 4 people who gained 10 dB at 8 kHz in Phase I/II showed a clinical improvement?

I'm not sure what point you are trying to prove here.
 
How come there was never any discussion (as far as I'm aware) about the concentration of the FX-322 formula used in any of the trials? Otonomy for ex. tried different milligram amounts of their compound in their trials.
 
I'll leave my own two before/after diagrams that actually were done using the same testing equipment and environment 2 months apart (the improvement of 25 dB at 8 kHz was not a coincidence...):

What was it then? If you are hinting at the fact that you know how to regenerate hair cells, fix synapses or repair nerve damage, then I think you need to inform Frequency Therapeutics as they clearly haven't cracked this yet.

You previously claimed that it was Dr Wilden and his LLLT treatment that caused the improvement, but I am highly dubious of this claim. There is no evidence it works, and he took a lot of money from people.
 
How come there was never any discussion (as far as I'm aware) about the concentration of the FX-322 formula used in any of the trials? Otonomy for ex. tried different milligram amounts of their compound in their trials.
Maybe this will help. From Phase 1/2:

upload_2021-3-26_12-16-14.png


If I understand correctly, they can't alter the concentrations within the gel, as that would alter the formulations. What they can (and did) do is give low and high volumes of the same formulations. They did this in Phase 1/2, and my understanding is that it wasn't noteworthy.
 
How come there was never any discussion (as far as I'm aware) about the concentration of the FX-322 formula used in any of the trials? Otonomy for ex. tried different milligram amounts of their compound in their trials.
There was, in the first trial. There was a low dose and a high dose. Both were deemed safe, and they have stuck with the high dose since then.

https://clinicaltrials.gov/ct2/show/NCT03616223
 
I am saying that 10 dB is not significant enough. This is a well established fact in the field of audiology.
Hmmmm... wrong... both the FDA and the EMA say that a 10 dB improvement is clinically significant:

"With regard to Sonsuvi, the FDA and EMA have indicated that a 10 dB improvement in hearing thresholds is clinically significant, in line with clinical practice. However, no product has been approved for marketing based upon such guidance and we cannot be certain that Sonsuvi will be approved even if it were to demonstrate such result in further Phase 3 trials"

Link: https://ir.aurismedical.com/static-files/8d74fd2e-b09c-4bc5-b2aa-82cc2b4bb1f1
 
Regarding the ±10 dB margin of error, would it not depend on the average across both cohorts though? On a case-by-case basis the margin of error of ±10 dB might be meaningless, but if the general trend was that the drug-treated cohort shifted to the + side of the margin of error post trial while leaving the placebo cohort as it was (assuming there was no bluffing by anyone to get into the trial), that potential +10 dB across the group could be very indicative of an overall improvement.
 
Regarding the ±10 dB margin of error, would it not depend on the average across both cohorts though? On a case-by-case basis the margin of error of ±10 dB might be meaningless, but if the general trend was that the drug-treated cohort shifted to the + side of the margin of error post trial while leaving the placebo cohort as it was (assuming there was no bluffing by anyone to get into the trial), that potential +10 dB across the group could be very indicative of an overall improvement.
I think the point being made is that if the stability was reliable, the FDA considers >=10 dB PTA to be clinically meaningful. We know this was not demonstrated at the group average levels, but it could be of interest at the individual level to look for super responders. It would be especially interesting if none of the placebos were >=10 dB, but some of the treaters were.

Does this mean anything for getting the drug through? Not really. Just provides scientific information.

Personally, 10 dB doesn't rock my world. If it was 15+ and it wasn't observed in the placebo group at all, I am interested. Some of this was shown to exist in the Phase 1/2 trial, but there were only 8 placebos and double the amount of treaters in the study. It would be far more interesting if in the Phase 2a and 1b remaining trials, all placebos were consistently stagnant, but with enough superstar responders to raise some eyebrows.
 
Okay, I have caught up. I think everyone needs to slow way down.
  1. Margin of error is a thing for a reason. We can't say that if someone improved by 10 dB, it must be the drug, but if they were in the placebo arm, it was probably the equipment. PTA is a total wash for this trial. Just give up on it. Bias on PTA wasn't a thing like WR (possibly) and they saw no groupwide improvements (even at EHF!) with plenty large enough sample sizes.
  2. Let's say the "lying" was a mix of low screen, normal baseline and low screen, low baseline. Cases of the former are less concerning because the people at least did the trial right. All it means is that the filter was a little out of whack, but the data is accurate. In other words, in this case, people would start closer to the ceiling so there would be less space for separation between groups. But, assuming there are also cases of the latter, this would be offset by "fake" super responders, which we can only assume is proportional in placebo and treatment groups.
  3. We need to slow waaaay down on the lawn theory. Even if McLean showed 12 days was required in the lab, it's totally different in vivo. I have made the point that "7 days" is a stupid scientific number, but we still don't know the degree to which this impacted everything.

    Just days ago, we were completely sold on their theory that multiple doses would push the 8 kHz barrier. Now we are acting like the lawn theory must be true. There are other things to consider.

    For example, there's an apples to oranges time comparison problem; let me explain. So in all 4 cohorts, the last injection of anything (lawn theory relevant) is at day 28. If we measure 90 days from this date (day 118), then the groups obviously have different amounts of time with FX-322 in them. If we measure 90 days from the date of the last FX-322 shot (so different absolute days for different groups), then we are comparing groups with different "lawn effect" levels because they were all injected with something on the same days. This problem is not nearly as straightforward to make sense of.

    As an aside, does anyone know which of the two they actually did? I know "day 90" doesn't mean absolute day 90, but I'm not sure otherwise. It's still a problem.
  4. It's not sexy to say this, but the remaining trials are much more valuable than the anecdotes. Even then, all they will really do is help us on an emotional level. They will have to redo Phase 2. I would be blown away otherwise.
  5. If they get unblinded and they see individually disproportionate baseline discrepancies in the placebo group by chance, the performances in the treatment groups would have to be damn strong to override that. It doesn't appear like that's the case.
tl;dr: There will be another Phase 2. Sorry.
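To make point 3's timing mismatch concrete, here's a toy calculation. The weekly schedule (days 7, 14, 21, 28) and the assumption that each cohort's FX-322 doses come before its placebo top-ups are my guesses for illustration, not confirmed trial design:

```python
INJECTION_DAYS = [7, 14, 21, 28]  # assumed weekly schedule, last shot day 28
READOUT_DAY = 118                 # "day 90" measured from the last injection of anything

def days_of_drug_exposure(n_fx_doses: int) -> int:
    """Days between a cohort's final FX-322 shot and the common readout,
    assuming each cohort's FX-322 doses precede its placebo doses."""
    last_fx = INJECTION_DAYS[n_fx_doses - 1]
    return READOUT_DAY - last_fx

for n in (1, 2, 4):
    print(f"{n}-dose cohort: {days_of_drug_exposure(n)} days since last FX-322 shot")
# -> 111, 104, 90: the arms are not time-aligned relative to their last FX-322 dose
```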
I've been thinking about this a bit more and have since asked myself: what would the bare minimum readout have had to be for Phase 2 to have had grounds to go to Phase 3?

Consider this: if Phase 2 only showed statistically significant results in the moderately severe group and for only one dose, would that have been enough to go to Phase 3?

Almost certainly yes. The reality is, we don't actually need ALL dosage groups to produce statistically significant results to get to Phase 3. Equally, we don't need ALL hearing loss groups to produce statistically significant results either to get to Phase 3. The only consequences of such a readout, as far as getting the drug to market is concerned, would have been commercial in nature only (market cap).

I then took this a step further and asked myself: how easy would it have been to skew the results, say for the moderately severe group (the one @FGG theorises to be the most responsive) and the placebo group, given what Frequency Therapeutics say they suspect has happened? What kind of distribution of unreliable patients would one need? Again, @Zugzug has crunched a lot of numbers here. He doesn't think it's that many. The thing is, if you accept what I've written above - that you only need one group to respond - I'm not sure it even has to be many, but certain assumptions may have to be made. Keep in mind, I'm not talking about overall efficacy of the drug for all patients. I'm talking about what it would take to get to Phase 3.

BEWARE - MATHS AHEAD

This next bit is going to take some mental gymnastics so read slowly and enjoy the ride. The point of this exercise is to actually help myself better visualise @Zugzug's probability argument and see what holes can be punched into that under certain assumptions. It may help others too.

Let's say, for the sake of argument, that 25% of all 96 patients who enter the trial - 24 in total - tanked their word scores during screening. What are the chances that this tanking behaviour would have been equally distributed across all types of patients - 8 patients in mild, 8 patients in moderate and 8 patients in moderately severe? Chance would dictate that this is the most likely scenario, but this to me would appear to be a very generous assumption, because much to FGG's point, if your hearing is already pretty terrible, it's not as if you're going to be consciously trying to tank your score. Your score is already tanked for you, lol. Let's assume though for the sake of being faithful to probability, while making no assumptions such as the one I've just described, that the split was still equal and tanking was equal across all groups.

Mild group: 32 patients, 24 of whom with consistent historical records and screening/baseline tests, 8 with better than mild hearing and inconsistent records.

Moderate group: 32 patients, 24 of whom with consistent historical records and screening/baseline tests, 8 with better than moderate hearing and inconsistent records.

Moderately severe: 32 patients, 24 of whom with consistent historical records and screening/baseline tests, 8 with better than moderately severe hearing and inconsistent records.

Let's now assume that these patients are equally distributed across the dosage groups. Remember, all we care about are the moderately severe patients and how many of them fall into the treatment group and how many of them fall into the placebo group.

Chance would dictate that, based on the assumptions above, 25% of patients in the placebo group - 6 out of 24 (2 from each category) - do not have a genuine baseline, regardless of what category they fall into. If we assume that placebo has no "true" effect on a patient in this group, any gains in the placebo group can only come from the 6/24 patients who have artificially low baselines, regardless of what severity category they fall into during screening.

The question then becomes: how big would the changes have to be to lift the entire group? Let's say the average improvement of these 6 patients was 60% because that is, on average, how much each patient tanked their true baseline during screening. I have picked this number somewhat arbitrarily, but for demonstration purposes it's also the average % improvement of the 6 responders from the original Phase 1/2 trial who we believe "truly" responded to FX-322. If you also assume that the average improvement of the remaining 18/24 placebo patients is 0% (some improve, some get worse, some stay the same), the average improvement of all patients across the entire placebo group is 15%. In other words, it would only take 6 patients from the placebo group demonstrating an improvement equal to that of the FX-322 Phase 1 super-responders - the moderately severe group - to achieve an average improvement that is above Frequency Therapeutics' own definition of a responder: 10% improvement.

This is kind of a tangential question, but what are the consequences here regarding trial design? If Frequency Therapeutics have already determined a responder to be a patient that improves by 10%, it would seem to me that they are in a mess given they are seeing patients "respond" in the placebo group. The only thing they could do is look at the average improvement of the other groups, individually or collectively, and see:

1) How much bigger their improvement is
2) Is that difference statistically significant?

I imagine this is exactly what happened and the maths checked out as not statistically significant, i.e. the improvement % was very close.
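For what it's worth, the tanker arithmetic above is easy to re-run (all numbers are the post's assumptions, not trial data):

```python
# 24 placebo patients, 6 "tankers" whose artificially low baselines rebound
# by 60%, the other 18 genuine patients averaging 0% improvement.

def group_average(improvements):
    return sum(improvements) / len(improvements)

placebo = [60] * 6 + [0] * 18
print(group_average(placebo))  # 15.0 -> above the 10% "responder" bar
```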

Using these same assumptions, let's now come to what may be happening in the moderately severe patients who received FX-322. Recall that there are 32 patients defined as being moderately severe. 8 patients will, by trial design, end up in the placebo group. 2 of those 8 will, on the balance of probability, have artificially deflated scores. That leaves us with 24 patients who received FX-322, 6 of whom also have artificially deflated scores. Consider this: FX-322 massively underperforms because of the multi-dosing issues - when you reseed the lawn, you have to stay off the grass. We know from Frequency Therapeutics that there were no discernible differences even between treatment groups, so dosage considerations are not even worth mentioning here. Let's assume the 6 fakers that are still in this group improve by 60%, as we also assumed of those same types of patients in the placebo group. What would the average improvement of the remaining 18 patients (who are truly moderately severe) have to be to end up with an average improvement of just 15% for the whole group and match the placebo average? Zero. You would need 18 patients in the moderately severe FX-322 group, regardless of dose, improving an average of 0%.

In other words, multi-dosing would have to be SERIOUSLY detrimental to the treated patients - effectively equal to placebo. This raises a serious question in that not only would one have to believe that multi-dosing has a dampening effect, but that it effectively negates all effect FX-322 has in the first place - even in the case of 4 consecutive FX-322 doses.
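The reverse solve in the paragraph above can be written out the same way (same assumed numbers; the 15% target is the placebo average computed earlier):

```python
# With 6 fakers at 60% among 24 treated moderately severe patients, what must
# the 18 genuine patients average for the whole arm to match the placebo arm?

def required_genuine_avg(group_avg, n_total, n_fakers, faker_avg):
    """Solve (n_fakers*faker_avg + n_genuine*x) / n_total = group_avg for x."""
    n_genuine = n_total - n_fakers
    return (group_avg * n_total - n_fakers * faker_avg) / n_genuine

print(required_genuine_avg(15.0, 24, 6, 60.0))  # 0.0 -> genuine patients contribute nothing
```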

Now that we've visualised the probability thesis, let's try to punch some holes in it. There are two obvious ones. The first one is that, as I mentioned and as @FGG keeps saying, if your hearing is already pretty terrible, it's not as if you're going to be consciously trying to tank your score. That work is done for you, by you. It would be fair to suggest then that the moderately severe group I described in the example above has more patients who are true to baseline. This has two knock-on effects. Firstly, given that the ratio of mild:moderate:severe tankers is now skewed towards mild/moderate, the average improvement of the placebo group will be much higher - if, of course, you assume that the less severe your condition is, the more you will tank your score to make sure you get into the trial. Perhaps this is false logic, but it would seem somewhat logical to me if you assume that the moderately severe patient is less likely to tank their score, or if they do tank it, they would tank it less (because there's less headroom to tank). But it also means that the moderately severe FX-322 treatment group now also has more headroom to improve (and is therefore more likely to reach statistical significance) because it is made up of more genuine responders. Except it doesn't. The argument, if you wish to believe it, is that multi-dosing dampens the effect of the drug. So what you end up with is an over-inflated placebo response due to the distribution of tankers and an underwhelming treatment response.

Why is any of this important, why do we care and why did you just come down this rabbit hole with me? The point is that if Frequency Therapeutics, following accumulation of the individual data, find that there's a certain number of outliers in the placebo group - even just 6 - and they can verify this through historical records, they may well have grounds to throw these 6 patients out of the analysis and work with the other 18 patients. Assuming those 18 patients had a true placebo response of 0%, you should then be able to make some really fair comparisons. The only question that then remains: has the multi-dosing dampened the effect so much that they did not even reach an average of 10%? The communication we've seen from Frequency Therapeutics so far would suggest that there was some kind of response. It just wasn't enough above placebo.

Perhaps this is all wishful thinking, but I do think there is a scenario, especially if Frequency Therapeutics can get hold of individual records, where Frequency Therapeutics could have grounds to go to the FDA and ask for certain patients to be excluded from their analysis. Given that such a scenario would have arisen from the very fact that Frequency Therapeutics chose to blind themselves in good faith, I can't see how this would not at least be considered.
I am almost positive this is the same patient I have been in communication with. The thing that bothers me is that they told me that they have no more follow-ups with the clinic. This is a little confusing to me. Wouldn't they be in contact with the patient going forward before the end of the readout? I agree this person does have a documented history of being on an online forum related to hearing loss going back a couple of years. They told me in a private message that the tinnitus had faded unprompted.
I believe it's a different patient. I recall someone posting a month or two ago (presumably you @Gb3) about said patient and I recall the photo of the screenshotted conversation - this is a different person. In fact, I believe that patient is the third patient I've managed to track down.
 
FX-322 is the cute girl who sent ambivalent texts after the first date and we are all trying to figure out if there will be a second.
 
On the IHC Theory:

What I believe had been discussed earlier this week was the observation, in the developing cochleas of human fetuses and other mammals, that the IHCs begin to develop slightly sooner than the OHCs. There are images of both IHCs and OHCs developing simultaneously. The process has been observed to start with a "flat" cochlear epithelium, and then the next observation is the stereocilia emerging from the surface. (Almost like a lawn growing /sarcasm). It is also believed that human IHCs/OHCs in utero can begin detecting some sounds before the cells are fully formed. This indicates that the cells are synapsed to the nerve really early on in utero, but as growth of the stereocilia takes place, the cells become more sensitive.

"Developmentally, cholinergic efferents synapse onto both IHCs and OHCs prior to the onset of hearing (Simmons 2002). Efferent synapses develop first on IHCs, where they are initially excitatory due to an absence of Ca2+-activated K+ channels (SK channels); efferent synapses appear several days later (P6–P8) on OHCs" (Roux et al. 2011).​

Since human IHC/OHC progenitors are only active one time in utero, we can only hypothesize that the environment and process that the biology must undertake with the FX-322/PCA approach follows the same course as it does in utero.

We know from both Will McLean and Carl LeBel that progenitors from donated cochleae have been observed in vitro (in a lab) creating both IHCs and OHCs. They have said it on separate occasions, multiple times, years apart. One example was in an email exchange between a Tinnitus Talk member and Will McLean.

Screen Shot 2020-04-28 at 8.13.00 AM.png


What is not well understood is whether there are biological triggers that create an order of operations for the generation of IHCs/OHCs in utero (or at least I cannot find clear evidence), as follows:
  • Is the starting growth of an IHC a predecessor to starting an adjacent OHC?

  • Perhaps the IHC needs to synapse first, before signaling that the OHC can start?
If the above is understood, perhaps it can shed a little light on the theory that when FX-322 is applied, the body prefers to regrow IHCs first, since it is trying to replicate a process that is only ever done one way, in utero.

A little more on the IHCs:

Since there is obviously little human research on the performance of regenerated cochlear hair cells (aside from FX-322), the best I can do is reverse what is observed when hair cells are damaged or die, as it relates to hearing performance. The outcomes of IHC death align very well with what might be considered the reverse of regeneration.

What is agreed upon by my own independent IHC research:

Data indicate that survival of only 20% of IHCs is sufficient for maintaining auditory sensitivity under quiet conditions. However, the IHC-loss appeared to affect listening in more challenging, noisy environments. (Lobarinas et al., 2016).

The data indicate that a small number of IHCs are sufficient for maintaining the auditory sensitivity at least in the quiet environment, which may be due in part to central compensation of reduced peripheral input. Therefore, an ear with a certain number of IHCs missing but with all other cells being intact may show a normal pure tone audiogram.

"The cochleas of eutherian mammals comprise one row of primary sensory hair cells (inner hair cells, IHCs) and three rows of modulatory hair cells with little or no afferent function (outer hair cells, OHCs). Each inner hair cell receives afferent synapses from 10 to 15 primary afferent nerve fibers, which amounts to 90–95% of the primary afferent fibers."

Sensory Hair Cells: An Introduction to Structure and Physiology

On IHC Damage:

"Moreover, chinchillas with large IHC lesions have surprisingly normal thresholds in quiet until IHC losses exceeded 80%, suggesting that only a few IHC are needed to detect sounds in quiet. However, behavioral thresholds in broadband noise are elevated significantly and tone-in-narrow band noise masking patterns exhibit greater remote masking. These results suggest the auditory system is able to compensate for considerable loss of IHC/type I neurons in quiet but not in difficult listening conditions."

Inner Hair Cell Loss Disrupts Hearing and Cochlear Function Leading to Sensory Deprivation and Enhanced Central Auditory Gain

What I am reading here is that 90-95% of hearing "data" to the brain comes from those IHCs. One interpretation of these results is that the pure tone audiogram is very poor at detecting small to moderately sized IHC damage and that thresholds in quiet only begin to rise after the vast majority of IHCs have been destroyed.

What is not clear:
  • Can a human get by with 20% of IHC?

  • Does IHC performance at specific frequencies correlate with their location in the cochlea?

  • Is it possible that IHCs are super redundant in terms of frequency coverage since so many can be lost before they show up on an audiogram?
What is clear:
  • A VAST majority of IHC destruction will start to show up on the audiogram.
What's this have to do with Regeneration & FX-322?

So, if we reverse this process to assume regeneration of IHC "undoes" some hearing loss, a VAST majority of regeneration will be needed to show up on the audiogram. As we have discussed already, IHC are crucial to understanding word score, so perhaps it takes less IHC regenerated as a percent of those remaining to start picking up words. When considering that FX-322 only really hits about 10-15% of the depth of the cochlea. It might be reasonable that only IHC regrown in that outer region is enough to increase word score, but ONLY IHC regrown doesn't do a thing for the audiogram.
 
I don't mean to disrupt an incredibly interesting thread (wonder if FREQ people are reading for their education and/or amusement!), but on the topic of 'stepping on the lawn', wouldn't the last dose that the FX-322 4-shot cohort received be free of this effect? After that last shot, whatever FX-322 was in there should've been free to do its stuff unhindered?

Obviously that was not the case, but wondering why...
 
FX-322 is the cute girl who sent ambivalent texts after the first date and we are all trying to figure out if there will be a second.
At least the sex was insane while it lasted. If they get their shit together, they will tempt us again, while we are all married to SPI-1005.
 
I don't mean to disrupt an incredibly interesting thread (wonder if FREQ people are reading for their education and/or amusement!), but on the topic of 'stepping on the lawn', wouldn't the last dose that the FX-322 4-shot cohort received be free of this effect? After that last shot, whatever FX-322 was in there should've been free to do its stuff unhindered?

Obviously that was not the case, but wondering why...
The idea, I believe, is that the pressure or a similar fluid effect was possibly inflammatory or inhibitory in some way, and that is not necessarily something that would have resolved with the last injection.
 
On the IHC Theory:

What I believe was discussed earlier this week is that, in developing cochleas in human fetuses and in other mammals, the IHCs have been observed to begin developing slightly sooner than the OHCs. There are images of both IHCs and OHCs developing simultaneously. The process has been observed to start with a "flat" cochlear epithelium, with the next observation being stereocilia emerging from the surface. (Almost like a lawn growing /sarcasm). It is also believed that human IHCs/OHCs in utero can begin detecting some sounds before the cells are fully formed. This indicates that the cells are synapsed to the nerve really early on in utero, but as growth of the stereocilia takes place, the cells become more sensitive.

"Developmentally, cholinergic efferents synapse onto both IHCs and OHCs prior to the onset of hearing (Simmons 2002). Efferent synapses develop first on IHCs, where they are initially excitatory due to an absence of Ca2+-activated K+ channels (SK channels); efferent synapses appear several days later (P6–P8) on OHCs" (Roux et al. 2011).​

Since human IHC/OHC progenitors are only active one time in utero, we can only hypothesize that the environment and process the biology must undertake with the FX-322/PCA approach follow the same course as in utero.

We know from both Will McLean and Carl LeBel that progenitors from donated cochleae have been observed in vitro (in a lab) creating both IHCs and OHCs. They have said it separately, multiple times, years apart. One example was in an email exchange between a Tinnitus Talk member and Will McLean.

[Attachment 44305: email exchange with Will McLean]

What is not well understood (or I cannot find clear evidence of) is whether there are biological triggers that create an order of operations for the generation of IHCs/OHCs in utero, as follows:
  • Is the starting growth of an IHC a predecessor to starting an adjacent OHC?

  • Perhaps the IHC needs to synapse first, before signaling that the OHC can start?
If the above is understood, perhaps it can shed a little light on the theory that when FX-322 is applied, the body prefers to regrow IHCs first, since it is trying to replicate a process that is only done one way, in utero.

A little more on the IHCs:

Since there is obviously little human research on the performance of regenerated cochlear hair cells (aside from FX-322), the best I can do is reverse what is observed when hair cells are damaged or die. When it comes to hair cell death as it relates to hearing performance, the outcomes of IHC death align very well with what might be considered the reverse of regeneration.

What is agreed upon by my own independent IHC research:

Data indicate that survival of only 20% of IHCs is sufficient for maintaining auditory sensitivity under quiet conditions. However, the IHC-loss appeared to affect listening in more challenging, noisy environments. (Lobarinas et al., 2016).

The data indicate that a small number of IHCs are sufficient for maintaining the auditory sensitivity at least in the quiet environment, which may be due in part to central compensation of reduced peripheral input. Therefore, an ear with a certain number of IHCs missing but with all other cells being intact may show a normal pure tone audiogram.

"The cochleas of eutherian mammals comprise one row of primary sensory hair cells (inner hair cells, IHCs) and three rows of modulatory hair cells with little or no afferent function (outer hair cells, OHCs). Each inner hair cell receives afferent synapses from 10 to 15 primary afferent nerve fibers, which amounts to 90–95% of the primary afferent fibers."

Sensory Hair Cells: An Introduction to Structure and Physiology

On IHC Damage:

"Moreover, chinchillas with large IHC lesions have surprisingly normal thresholds in quiet until IHC losses exceeded 80%, suggesting that only a few IHC are needed to detect sounds in quiet. However, behavioral thresholds in broadband noise are elevated significantly and tone-in-narrow band noise masking patterns exhibit greater remote masking. These results suggest the auditory system is able to compensate for considerable loss of IHC/type I neurons in quiet but not in difficult listening conditions."

Inner Hair Cell Loss Disrupts Hearing and Cochlear Function Leading to Sensory Deprivation and Enhanced Central Auditory Gain

What I am reading here is that 90-95% of hearing "data" to the brain comes from those IHCs. One interpretation of these results is that the pure tone audiogram is very poor at detecting small to moderate IHC damage and that thresholds in quiet only begin to rise after the vast majority of IHCs have been destroyed.

What is not clear:
  • Can a human get by with only 20% of their IHCs?

  • Does IHC performance at specific frequencies correlate with their location in the cochlea?

  • Is it possible that IHCs are super redundant in terms of frequency coverage since so many can be lost before they show up on an audiogram?
What is clear:
  • Only when a VAST majority of IHCs have been destroyed does the loss start to show up on the audiogram.
What's this have to do with Regeneration & FX-322?

So, if we reverse this process to assume regeneration of IHCs "undoes" some hearing loss, a VAST majority of regeneration will be needed to show up on the audiogram. As we have discussed already, IHCs are crucial to word scores, so perhaps it takes fewer regenerated IHCs, as a percent of those remaining, to start picking up words. Considering that FX-322 only really reaches about 10-15% of the depth of the cochlea, it might be reasonable that IHCs regrown in that outer region alone are enough to increase word scores, but ONLY regrowing IHCs doesn't do a thing for the audiogram.
I'm so glad you are back. I tried to say a lot of this but you are so much more articulate than me.
 
On the Phase 2A, and future outlook.

On Frequency Therapeutics:

This is 100% on them, and they need to own fixing it. While I'm disappointed that fakers got in, Frequency Therapeutics is FULL of veterans in clinical trial practice, from Carl LeBel on down. Take a look at LinkedIn. This trial cost multiple millions of dollars. They own the responsibility to keep a lid on filtering criteria that shouldn't be publicly known, and the controls to keep people out. Now they need to make it right for wasting investors' money and the precious time of all stakeholders.

The Phase 1/2 and the other open-label Phase 1b had legitimate recruiting/participants. No fakers, no liars. Frequency Therapeutics clearly went back and checked, or they wouldn't have re-presented that data in the March deck. That's why they happened to reiterate those trials in conjunction with sharing the bad Phase 2A news: to show that something is still legitimate. There is no conspiracy, no smoking gun.

Post Phase 2A:

I checked with a former associate who worked in clinical trials at a local university about what can be done with the Phase 2A trial data. They suggested that the safety data is still legitimate as a primary outcome. And since the goal of the Phase 2A was to confirm multi-dosing, and it didn't work, the takeaway is that the drug is fit as a single dose (they didn't know Frequency Therapeutics had said that; they just offered it up). We then discussed that they had an issue with recruiting. It's completely possible that they can exclude the "fakers" from the placebo/1-dose groups to salvage some results.

Expectations are that they show the "good" data from the trial, but plan to throw it away. Except safety.

They need to reveal more data on the open-label trial.

Phase 1B for age-related hearing loss may give them the opportunity to regain some trust. It will be interesting to see how much it differs from the other Phase 1Bs.

Phase 1B for severe hearing loss is important for this drug to have any chance. I would expect this to be packed with responders, and it's pretty hard to fake having garbage hearing, especially since they should have a good history with it.

I seriously want them to do a "Phase 2B" that is a single-dose "catch all" for all the different Phase 1B designations, i.e., mild-severe SNHL/NIHL and mild-moderately severe ARHL. Recruit a few hundred patients, one dose, follow-up over 6 months. There needs to be larger efficacy data for each class of patient over a longer period. If this is the route, it may not take long to fill recruitment.

Not a popular opinion, but it is the best path to proving the drug is effective, identifying a population, proving safety, and modeling a pivotal trial.
 
I seriously want them to do a "Phase 2B" that is a single-dose "catch all" for all the different Phase 1B designations, i.e., mild-severe SNHL/NIHL and mild-moderately severe ARHL. Recruit a few hundred patients, one dose, follow-up over 6 months. There needs to be larger efficacy data for each class of patient over a longer period. If this is the route, it may not take long to fill recruitment.

I think it's a good way to remove the bias.
Do you draw lots if there is a flood of applicants?
 
BEWARE - MATHS AHEAD

This next bit is going to take some mental gymnastics so read slowly and enjoy the ride. The point of this exercise is to actually help myself better visualise @Zugzug's probability argument and see what holes can be punched into that under certain assumptions. It may help others too.

Let's say, for the sake of argument, that 25% of all 96 patients who entered the trial - 24 in total - tanked their word scores during screening. What are the chances that this tanking behaviour was equally distributed across all types of patients - 8 patients in mild, 8 in moderate and 8 in moderately severe? Chance would dictate that this is the most likely single scenario, but it also looks like a very generous assumption because, much to FGG's point, if your hearing is already pretty terrible, it's not as if you're going to be consciously trying to tank your score. Your score is already tanked for you, lol. Let's assume, though, for the sake of being faithful to probability, that the split was still equal and tanking was equal across all groups.

Mild group: 32 patients, 24 of whom with consistent historical records and screening/baseline tests, 8 with better than mild hearing and inconsistent records.

Moderate group: 32 patients, 24 of whom with consistent historical records and screening/baseline tests, 8 with better than moderate hearing and inconsistent records.

Moderately severe: 32 patients, 24 of whom with consistent historical records and screening/baseline tests, 8 with better than moderately severe hearing and inconsistent records.

Let's now assume that these patients are equally distributed across the dosage groups. Remember, all we care about are the moderately severe patients and how many of them fall into the treatment group and how many of them fall into the placebo group.

Chance would dictate that, based on the assumptions above, 25% of the placebo group - 6 out of 24 (2 from each category) - do not have a genuine baseline. If we assume that placebo has no "true" effect, any gains in the placebo group can only come from the 6/24 patients who have artificially low baselines, regardless of which severity category they fell into during screening. The question then becomes: how big would the changes have to be to lift the entire group?

Let's say the average improvement of these 6 patients was 60%, because that is, on average, how much each patient tanked their true baseline during screening. I have picked this number somewhat arbitrarily, but for demonstration purposes it's also the average % improvement of the 6 responders from the original Phase 1/2 trial who we believe "truly" responded to FX-322. If you also assume that the average improvement of the remaining 18/24 placebo patients is 0% (some improve, some get worse, some stay the same), the average improvement across the entire placebo group is 15%. In other words, it would only take 6 placebo patients demonstrating an improvement equal to the FX-322 Phase 1 super-responders - the moderately severe group - to push the group average above Frequency Therapeutics' own definition of a responder: 10% improvement.

This is kind of a tangential question, but what are the consequences here for trial design? If Frequency Therapeutics have already determined a responder to be a patient who improves by 10%, it would seem to me that they are in a mess, given they are seeing patients "respond" in the placebo group. The only thing they could do is look at the average improvement of the other groups, individually or collectively, and see:

1) How much bigger their improvement is
2) Is that difference statistically significant?

I imagine this is exactly what happened and the maths checked out as not statistically significant, i.e., the improvement percentages were very close.
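The blended-average arithmetic above is easy to sanity-check. A minimal sketch, restating the post's assumed numbers (6 fakers "reverting" by 60%, 18 genuine placebo patients averaging 0%) - these are illustrative assumptions, not trial data:

```python
# Hypothetical numbers from the argument above, not actual trial data.
fakers = 6            # placebo patients with artificially low baselines
genuine = 18          # placebo patients with true baselines
faker_gain = 60.0     # assumed average word-score "improvement" (%) as fakers revert
genuine_gain = 0.0    # assumed average improvement of genuine placebo patients

# Weighted average across the whole 24-patient placebo group.
group_avg = (fakers * faker_gain + genuine * genuine_gain) / (fakers + genuine)
print(group_avg)  # 15.0 -- clears a 10% "responder" threshold on fakers alone
```

Tweaking `faker_gain` or the faker count shows how sensitive the group average is to a handful of inflated baselines.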

Using these same assumptions, let's now come to what may be happening in the moderately severe patients who received FX-322. Recall that there are 32 patients defined as moderately severe. 8 of them will, by trial design, end up in the placebo group, and 2 of those 8 will, on the balance of probability, have artificially deflated scores. That leaves us with 24 patients who received FX-322, 6 of whom also have artificially deflated scores. Consider this: FX-322 massively underperforms because of the multi-dosing issues - when you reseed the lawn, you have to stay off the grass. We know from Frequency Therapeutics that there were no discernible differences even between treatment groups, so dosage considerations are not even worth mentioning here. Let's assume the 6 fakers still in this group improve by 60%, as we also assumed of those same types of patients in the placebo group. What would the average improvement of the remaining 18 patients (who are truly moderately severe) have to be for the whole group to end up averaging just 15% and match the placebo average? Zero. You would need 18 patients in the moderately severe FX-322 group, regardless of dose, improving an average of 0%.

In other words, multi-dosing would have to be SERIOUSLY detrimental to the treated patients - effectively equal to placebo. This raises a serious question in that not only would one have to believe that multi-dosing has a dampening effect, but that it effectively negates all effect FX-322 has in the first place - even in the case of 4 consecutive FX-322 doses.
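The same assumptions can be run backwards for the treated moderately severe group, solving for the average improvement the 18 genuine patients would need just to match the blended placebo average. Again, every number here is one of the post's hypotheticals, not trial data:

```python
# Hypothetical figures from the argument above, not trial data.
treated = 24          # moderately severe patients on FX-322
fakers = 6            # of whom have artificially deflated baselines
faker_gain = 60.0     # assumed average "improvement" (%) of the fakers
placebo_avg = 15.0    # blended placebo average under the same assumptions

# Average improvement the 18 genuine patients would need for the whole
# treated group to merely match the placebo average:
needed = (treated * placebo_avg - fakers * faker_gain) / (treated - fakers)
print(needed)  # 0.0 -- the genuine responders would have to improve not at all
```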

Now that we've visualised the probability thesis, let's try to punch some holes in it. There are two obvious ones.

The first is that, as I mentioned and as @FGG keeps saying, if your hearing is already pretty terrible, it's not as if you're going to be consciously trying to tank your score. That work is done for you, by you. It would be fair to suggest then that the moderately severe group I described above has more patients who are true to baseline. This has two knock-on effects.

Firstly, given that the ratio of mild:moderate:severe tankers is now skewed towards mild/moderate, the average improvement of the placebo group will be much higher - if, of course, you assume that the less severe your condition, the more you will tank your score to make sure you get into the trial. Perhaps this is false logic, but it seems somewhat logical if you assume that the moderately severe patient is less likely to tank their score, or if they do, they tank it less (because there's less headroom to tank).

But it also means that the moderately severe FX-322 treatment group now has more headroom to improve (and is therefore more likely to reach statistical significance) because it is made up of more genuine responders. Except it doesn't. The argument, if you wish to believe it, is that multi-dosing dampens the effect of the drug. So what you end up with is an over-inflated placebo response due to the distribution of tankers, and an underwhelming treatment response.

Why is any of this important, why do we care and why did you just come down this rabbit hole with me? The point is that, if Frequency Therapeutics, following accumulation of the individual data, find that there are a certain number of outliers in the placebo group - even just 6 - and they can verify this through historical records, they may well have grounds to throw those 6 patients out of the analysis and work with the other 18. Assuming those 18 patients had a true placebo response of 0%, you should then be able to make some really fair comparisons. The only question that remains: has the multi-dosing dampened the effect so much that the treated groups did not even reach an average of 10%? The communication we've seen from Frequency Therapeutics so far would suggest that there was some kind of response. It just wasn't enough above placebo.

Perhaps this is all wishful thinking, but I do think there is a scenario, especially if Frequency Therapeutics can get hold of individual records, where Frequency Therapeutics could have grounds to go to the FDA and ask for certain patients to be excluded from their analysis. Given that such a scenario would have arisen from the very fact that Frequency Therapeutics chose to blind themselves in good faith, I can't see how this would not at least be considered.
Okay, my bear arms are swiping down so fast and so hard that I'm borderline psychopathic (all in fun <3). Let me start with the first half, to give you an idea of what the calculations would involve:

Firstly, before we even get to the math, there's absolutely no way it's 25% fakers. That number is almost conspiratorial to me. I think 10% is actually generous. What is far more likely to me is that there were a few fakers, and by chance, the placebo group performance happened to be closer to the top of the 95% confidence interval. Then the fakers blew it over. In other words, without the fakers, it's a believable, strong performing placebo group. With a faker or two, it became glaring. We'll see.

Also, there's no way the distribution of fakers (assume 25%) is close to split evenly between mild, moderate, and severe. Also, it's highly doubtful that their n=96 consisted of close to an even split across mild, moderate, and severe. You have alluded to this, but it's a really important part of this.

Okay. Regarding the mathematical approach, there are a couple of big problems. Let's use the same numbers you used for illustration. While the expected value of the distributions you calculated is correct, really you would want to put some confidence intervals around those numbers. Conceptually, we would want to consider the sum of a bunch of probabilities, conditioned on distributions close to 8, 8, 8 (mild, moderate, moderately severe) - for example, 7, 7, 10 or 7, 8, 9, etc. - and all permutations of those numbers (i.e., there's one way to have 8, 8, 8, but three ways 7, 7, 10 can be arranged, etc.). It's a long and tedious process, but all of these probabilities can be calculated.

Then, for each one individually, you would multiply by the probabilities involved in all of your calculations, conditioned on the group distributions. Even then, you really should condition again on distributions entering the 4 cohorts respectively. Trust me, this is really, really difficult to the point where you won't get the satisfaction that you're looking for. Even if you did roll through all of the calculations, there would still be a ton of speculation in the assumptions.
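For what it's worth, the tedious enumeration described above can be mechanised. A sketch under the post's assumptions (24 fakers among 96 patients, three equally sized severity groups of 32, membership treated as random), modelling the faker split across groups as multivariate hypergeometric:

```python
from math import comb

# Sketch of the enumeration hinted at above: if 24 "fakers" sit among 96
# patients split evenly into mild/moderate/moderately-severe groups of 32,
# the split of fakers across groups follows a multivariate hypergeometric
# distribution. All numbers are the post's assumptions, not trial data.
def split_prob(n_mild, n_mod, n_sev, group=32, fakers=24):
    """P(exactly this faker split across the three severity groups)."""
    assert n_mild + n_mod + n_sev == fakers
    ways = comb(group, n_mild) * comb(group, n_mod) * comb(group, n_sev)
    return ways / comb(3 * group, fakers)

p_even = split_prob(8, 8, 8)  # the single most likely split
# Summing over every permutation of an uneven split such as (7, 7, 10):
p_7_7_10 = sum(split_prob(*perm) for perm in {(7, 7, 10), (7, 10, 7), (10, 7, 7)})
print(p_even, p_7_7_10)
```

Looping `split_prob` over all compositions of 24 into three parts would give the full distribution, and conditioning on dose-cohort assignment could be layered on the same way, which is exactly why the calculation balloons.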

Okay, next order of business: statistical significance of the placebo group. This is nontrivial because it requires an MMRM (Mixed Model for Repeated Measures). There are various choices for this approach, and for the life of me, I'm not sure what they did. From Phase 1/2, the only statistical inference I couldn't make sense of from the paper is the group-wide percentage (ratio) improvements across time periods. The reason this is nontrivial is that there's a correlation between time periods in all of the calculations. I would have to see their details, and I'm not privy to those details, unfortunately.

Anyways, you're sort of right that you're comparing some average to 10% (assuming that's the number), but the comparison is far more complicated than just seeing if it clears 10%. It has to clear some upper threshold U > 10%, where I don't know how they calculated U.

Another point is that I don't even like the 10% number (assuming you mean 10% absolute WR improvement). I think the Thornton-Raffin confidence intervals (harder to clear) are much more conservative (and correctly so). This is why I was so bullish on the 3 big responders in Phase 1/2: they skyrocketed over these confidence intervals, which aren't super easy to clear. Clearing them by a lot is not realistic for just a placebo effect.
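For readers unfamiliar with the idea behind Thornton-Raffin: a word-recognition score is treated as a binomial proportion, so a retest only "counts" as a real change when it falls outside the baseline score's confidence limits. A rough sketch of that idea using a Wilson 95% interval on a 50-word list - the published tables are built from exact binomial limits, so treat this as an approximation of the concept, not their actual values:

```python
from math import sqrt

# Treat a word-recognition score as a binomial proportion and compute an
# approximate 95% confidence band for it (Wilson interval). A retest score
# inside this band is plausibly just test-retest noise.
def wilson_interval(correct, n_words, z=1.96):
    p = correct / n_words
    denom = 1 + z**2 / n_words
    center = (p + z**2 / (2 * n_words)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n_words + z**2 / (4 * n_words**2))
    return center - half, center + half

lo, hi = wilson_interval(25, 50)  # a baseline score of 50% on a 50-word list
print(f"baseline 50%: roughly {lo:.0%} to {hi:.0%}")
```

The band is wide on a 50-word list, which is exactly why modest word-score "improvements" are easy to dismiss and why the Phase 1/2 super-responders, who cleared the limits by a lot, stood out.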

Re: seeding - I think this is mostly PR currently. I say currently because it could end up being something, but it has to be proven. Didn't they say (I can't listen to the webcast) that the improvements overlapped? In other words, all treatment groups sort of looked similar? I guess the thought here is that maybe the FX-322 over 4 doses was balanced out by the lawn effect. There is so much conjecture here. Occam's Razor: multiple dosing does nothing, delivery sucks, and maybe there's some lawn effect too. This is just positive spin.

There is no way this goes to Phase 3 from the Phase 2 results. Think of it this way. If you are right that the numbers are 25% fakers, there's a downside to this. As I've outlined, it's embarrassing for the company. They didn't even calculate confidence intervals comparing WR >=6 month to screening (as far as we know). Are they really going to tell the FDA that they are so incompetent that their whole trial is messed up? But magically, this same company did everything else perfectly?

It's a horrible look to mess up something so simple. Maybe if the Phase 1b trials are successful, they save some face with the actual drug. They are going to have to repeat Phase 2.

I'm sorry if I seem harsh here, I'm actually trying to be kind by killing false hope. If I'm wrong, I will change my avatar to the words "Aaron91 is a god." lol.
 
Or the Great Pumpkin or Q waiting for signs of the re-inauguration of Trump. The parallels are endless.
Um. No. Girls and FX-322 are both real things.

It only seems like numerology if you just aren't interested in why Word Scores could dramatically increase without corresponding audiogram changes.

If your understanding is still that audiograms are the full measure of hearing then I can see why this seems like that to you but the information cited in these discussions is from the company themselves and journal articles, not conspiracy sites.
 
