Frequency Therapeutics — Hearing Loss Regeneration

Was this a scam, by the way? I mean the whole electricity-through-the-tongue thing while the headphones output random noises? I emailed Neuromod once asking for some proof, and now they're just bombarding me with emails in German, which I can't even understand.
The Germans know technology... Maybe they found a cure?
 
I haven't posted on this thread in a long time. Just thought I'd drop in to say that they didn't drop the 210-day readout today, even though it's 30 June, the end of Q2. Given we already had a disappointing 90-day readout and they've already announced another Phase 2 trial, I expect the data is probably so bad they want to bury it later tonight after the market closes. It will be interesting, though, if it turns out to be some kind of unexpected delay.
 
I haven't posted on this thread in a long time. Just thought I'd drop in to say that they didn't drop the 210-day readout today, even though it's 30 June, the end of Q2. Given we already had a disappointing 90-day readout and they've already announced another Phase 2 trial, I expect the data is probably so bad they want to bury it later tonight after the market closes. It will be interesting, though, if it turns out to be some kind of unexpected delay.
The Phase 2a was a failure. We all know this. It really doesn't matter when they release the full readout. At this point, we are concentrating on the single-dose severe hearing loss trial, with results due either in August or early September. I think they also plan another trial this year with only mild/moderate hearing loss sufferers.
 
I haven't posted on this thread in a long time. Just thought I'd drop in to say that they didn't drop the 210-day readout today, even though it's 30 June, the end of Q2. Given we already had a disappointing 90-day readout and they've already announced another Phase 2 trial, I expect the data is probably so bad they want to bury it later tonight after the market closes. It will be interesting, though, if it turns out to be some kind of unexpected delay.
This could be true, though there always tends to be a lag between when a trial ends and when the results are published.
 
This could be true, though there always tends to be a lag between when a trial ends and when the results are published.
I think you're confusing academic publication with biotech press releases on data readouts. If a biotech company commits to a readout date (e.g., end of Q2), it will almost always stick to that date unless it has a really good reason not to; otherwise it risks losing investor/shareholder confidence. I haven't seen any unusual trading activity like we did before the 90-day readout, so I'm expecting a press release later tonight confirming that no signal was found after 210 days.
 
I haven't posted on this thread in a long time. Just thought I'd drop in to say that they didn't drop the 210-day readout today, even though it's 30 June, the end of Q2. Given we already had a disappointing 90-day readout and they've already announced another Phase 2 trial, I expect the data is probably so bad they want to bury it later tonight after the market closes. It will be interesting, though, if it turns out to be some kind of unexpected delay.
According to @Diesel's memo, wasn't the announcement postponed to the second half of this year?

Will it be announced in August / September?
Notes from the Goldman Q&A today:

Overview:
  • Over 200 patients have received FX-322 (200+ data points)
  • Largest known database of patients receiving a treatment for hearing loss

Phase 2A details:
  • Plan to disclose in 2H/2021

Phase 1B studies (ARHL/severe):
  • Exploratory focus to understand patient populations for future Phase 2/3 design
  • All have lead-in measures to eliminate bias seen in Phase 2A

Specific patient etiologies that appear to be the best candidates to respond to the drug with a single dose:
  • Moderate -> Moderately Severe hearing loss (Severe TBD)
  • Permanent NIHL or SSNHL
  • Association with a noise trauma
  • Analysis of 200+ patient database is ongoing to better define makeup of responders for inclusion in future trials

What to expect from future Phase 2/3 design:
  • Lead-in period with multiple baseline measures (see the sketch after this list)
  • Focus on single dose
  • Speech perception
  • Using other long-term measures not currently used in trials (didn't disclose what those are)
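
On the "multiple baseline measures" point: a quick sketch of why it matters, with hypothetical scores of my own. With several lead-in tests, one sandbagged or noisy test can no longer define the baseline the way a single screening score could in Phase 2a.

```python
from statistics import median

# Hypothetical lead-in WR scores (percent correct) for one participant:
lead_in_scores = [42, 44, 20, 43]  # one sandbagged/noisy test in the mix

single_baseline = min(lead_in_scores)     # what a single gamed test can produce
robust_baseline = median(lead_in_scores)  # what a multi-measure lead-in yields

print(single_baseline, robust_baseline)   # 20 vs. 42.5
```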

On expanding the Hearing Program and team:
  • Continued support from Langer + Board + Clinical advisory panel to proceed with FX-322 single dose trials
  • Continued support from external experts and consultants in the field
  • Two studies that show a signal are enough to proceed - "No doubt they should expand clinical trial."
  • Continuing to expand Frequency team focusing on Drug Delivery + Audiology

ENT/audiologist/community awareness:
  • Many are following work closely
  • Shift to usage in the clinical setting is a focus of the firm
  • They are still working on creating inroads with the audiology field
  • Voice of patient is becoming front-and-center

Other hearing-related PCA-approach efforts:
  • Other efforts in-discovery in the hearing program (TBD: 2H/2021)
  • Focusing on other pathways + drug delivery

R&D Day is 2H/2021? Assuming this will be like Tesla Battery Day?

Capital to fund the firm:
  • Have 2 years of cash
  • Can still receive milestone cash payments from Astellas
  • May leverage dilution to get additional funding if needed
 
I think you're confusing academic publication with biotech press releases on data readouts. If a biotech company commits to a readout date (e.g., end of Q2), it will almost always stick to that date unless it has a really good reason not to; otherwise it risks losing investor/shareholder confidence. I haven't seen any unusual trading activity like we did before the 90-day readout, so I'm expecting a press release later tonight confirming that no signal was found after 210 days.
Enjoy:

https://investors.frequencytx.com/static-files/a26c03b8-d699-48a5-9bcd-ad980554065b
 
I'm not going to bore everyone with numbers; I'll just say this:

The fact that they didn't have a lead-in baseline is unthinkable, and not just in hindsight. I wasn't sure exactly how they did things at screening, but I thought the issue was that they weren't strict enough in comparing WR scores from >6 months ago to those at screen.

They seriously did the following: "Hey everyone, we're focused on clarity. We want to see people like the super responders from Phase 1/2. Show me your audiogram from >6 months ago. Oh, thanks. Now take this WR test for baseline. If your score is too high, you won't get in."

It's soul-crushingly incompetent, to the point where it never crossed my mind that they would do things this way.

Now, how much of the overall problem was outright liars (e.g., heard "cat" clear as day and said "not sure"), as opposed to people who unethically stretched the truth in the form of very, very low effort, to the point of extreme dishonesty (e.g., heard "cat" but legitimately wasn't sure if it was "cat" or "cap," so said "not sure")?

Either way, I pin this on the company. It's a clinical trial for a revolutionary drug and they failed on an extremely elementary trial design aspect.

Regarding the performance itself in Phase 2a: very poor. Even taking into consideration that they injected four shots into the cochlea in too quick a succession, the drug did not succeed. The responder data looks pretty random. Even with this pitiful trial design error, surely some of these "low effort" participants landed in the treatment group as well. It failed the trial.

I very much hope that the super responders from Phase 1/2 were legit. If so, I think there's some hope for the drug being "good enough" in the next Phase 2, consisting of only single injections.
 
You perceive an analogy because you see exaggerated optimism in both threads
There are more parallels than that. There was also data being spun in both threads.

Neuromod was not without their own charts and data, and then the Tinnitus Talk staff crunched their own extensive report out of the user experiences. Then we all argued over who should be deemed an "improver" or not. Optimists counted anyone who had anything even remotely positive to say in their reports as an improver, no matter how wishy-washy or transitory.

And here, the closest parallel is the emphasis on (admittedly flawed) word recognition scores over audiograms and all this OHC vs. IHC apologia.

You are right that the two treatments are quite different, but the hope/hype cycle on display here is following a similar pattern: some people spin the data to see what they want to see and disregard what they don't want to see, and then serve that up as if we should all treat it as objective reality. So I wouldn't necessarily lean too heavily on "but this time it's different."™️
 
I'm not going to bore everyone with numbers; I'll just say this:

The fact that they didn't have a lead-in baseline is unthinkable, and not just in hindsight. I wasn't sure exactly how they did things at screening, but I thought the issue was that they weren't strict enough in comparing WR scores from >6 months ago to those at screen.

They seriously did the following: "Hey everyone, we're focused on clarity. We want to see people like the super responders from Phase 1/2. Show me your audiogram from >6 months ago. Oh, thanks. Now take this WR test for baseline. If your score is too high, you won't get in."

It's soul-crushingly incompetent, to the point where it never crossed my mind that they would do things this way.

Now, how much of the overall problem was outright liars (e.g., heard "cat" clear as day and said "not sure"), as opposed to people who unethically stretched the truth in the form of very, very low effort, to the point of extreme dishonesty (e.g., heard "cat" but legitimately wasn't sure if it was "cat" or "cap," so said "not sure")?

Either way, I pin this on the company. It's a clinical trial for a revolutionary drug and they failed on an extremely elementary trial design aspect.

Regarding the performance itself in Phase 2a: very poor. Even taking into consideration that they injected four shots into the cochlea in too quick a succession, the drug did not succeed. The responder data looks pretty random. Even with this pitiful trial design error, surely some of these "low effort" participants landed in the treatment group as well. It failed the trial.

I very much hope that the super responders from Phase 1/2 were legit. If so, I think there's some hope for the drug being "good enough" in the next Phase 2, consisting of only single injections.
Good point. There is a signal, meaning it works, yet it is still unclear what we are missing with FX-322. I guess one could say that the cochlea is the 'final frontier' of medicine. It's a difficult journey to the answer, but a step forward has been made. There is hope now, at the very least.
 
I'm not going to bore everyone with numbers; I'll just say this:

The fact that they didn't have a lead-in baseline is unthinkable, and not just in hindsight. I wasn't sure exactly how they did things at screening, but I thought the issue was that they weren't strict enough in comparing WR scores from >6 months ago to those at screen.

They seriously did the following: "Hey everyone, we're focused on clarity. We want to see people like the super responders from Phase 1/2. Show me your audiogram from >6 months ago. Oh, thanks. Now take this WR test for baseline. If your score is too high, you won't get in."

It's soul-crushingly incompetent, to the point where it never crossed my mind that they would do things this way.

Now, how much of the overall problem was outright liars (e.g., heard "cat" clear as day and said "not sure"), as opposed to people who unethically stretched the truth in the form of very, very low effort, to the point of extreme dishonesty (e.g., heard "cat" but legitimately wasn't sure if it was "cat" or "cap," so said "not sure")?

Either way, I pin this on the company. It's a clinical trial for a revolutionary drug and they failed on an extremely elementary trial design aspect.

Regarding the performance itself in Phase 2a: very poor. Even taking into consideration that they injected four shots into the cochlea in too quick a succession, the drug did not succeed. The responder data looks pretty random. Even with this pitiful trial design error, surely some of these "low effort" participants landed in the treatment group as well. It failed the trial.

I very much hope that the super responders from Phase 1/2 were legit. If so, I think there's some hope for the drug being "good enough" in the next Phase 2, consisting of only single injections.
I agree it's bizarre they did no validation of baseline data. They reported only the responder rate. I wish they had put out some more metrics on the extent of improvement as well. Maybe there was nothing worth reporting there; otherwise they might have put it out.

In Table 2 there seems to be an improving responder-rate trend in the treatment arms (especially 2X and 4X), but the placebo arm seems quite consistent. Not sure how much to read into it. Could it be the lawn effect diminishing with time? The baseline issue probably shows up very early in the data.
 
I have nothing to say about their scientists, but Frequency Therapeutics is run by incompetent people; these people are parasites in the world of biotechnology.

I hope they will have the intelligence to learn from their mistakes, but with their leadership, that's far from guaranteed.

Thinking a little less about the money and a little more about the science makes sense for a business at this clinical stage.
 
I guess no one can really say with confidence the drug "works" or "doesn't work" after that data was released.
I hope that the detailed disclosure of Phase 2a in the second half of this year will provide information showing that the drug is working.

This 210-day readout contains too little information.

I would like to see data on the magnitude of improvement in WR and dB at the individual level.
 
@Diesel

"The FX-322-111 study was not placebo controlled, though the study analysis of untreated ears also showed zero percent (0%) exceeding 95% CI from the baseline to day 90 for the WR test"

Can you please advise, for study FX-322-111: were the candidates treated in one ear only? And did the candidates know which ear was receiving the drug?

I just want to ensure there was no bias in the study.

If the candidates didn't know which ear the drug was injected into, and the results showed 0% improvement in the untreated ear and 10% or greater improvement in the treated ear, then we are safe to say the study was 100% accurate.

Can you please clarify?
 
@Diesel

"The FX-322-111 study was not placebo controlled, though the study analysis of untreated ears also showed zero percent (0%) exceeding 95% CI from the baseline to day 90 for the WR test"

Can you please advise, for study FX-322-111: were the candidates treated in one ear only? And did the candidates know which ear was receiving the drug?

I just want to ensure there was no bias in the study.

If the candidates didn't know which ear the drug was injected into, and the results showed 0% improvement in the untreated ear and 10% or greater improvement in the treated ear, then we are safe to say the study was 100% accurate.

Can you please clarify?
Yes, the candidates were treated in one ear only. I am not aware of any information indicating whether the patients did or did not know which ear was treated.
 
@Diesel

"The FX-322-111 study was not placebo controlled, though the study analysis of untreated ears also showed zero percent (0%) exceeding 95% CI from the baseline to day 90 for the WR test"

Can you please advise, for study FX-322-111: were the candidates treated in one ear only? And did the candidates know which ear was receiving the drug?

I just want to ensure there was no bias in the study.

If the candidates didn't know which ear the drug was injected into, and the results showed 0% improvement in the untreated ear and 10% or greater improvement in the treated ear, then we are safe to say the study was 100% accurate.

Can you please clarify?
It means whether they injected placebo into the candidates' untreated ears.

By the way, in placebo-controlled clinical trials, I think the only way to eliminate the bias is to assign candidates to arms by lottery.
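
"By lottery" here just means randomized assignment. A minimal sketch of what that looks like, with hypothetical patient IDs (real trials use audited randomization systems, not a script like this):

```python
import random

def randomize_arms(patient_ids, n_treatment, seed=2021):
    """Randomly split patients into treatment and placebo arms.
    Illustrative only; the fixed seed makes the allocation reproducible."""
    rng = random.Random(seed)
    shuffled = list(patient_ids)
    rng.shuffle(shuffled)
    return shuffled[:n_treatment], shuffled[n_treatment:]

# Hypothetical example: 24 candidates, half to each arm.
treatment, placebo = randomize_arms([f"P{i:02d}" for i in range(1, 25)], 12)
print(treatment, placebo)
```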
 
It should have been done in both ears (drug in one and placebo in the other) in order to cross-check and verify whether the candidate was lying…

In the open-label 111 study, the candidate knows exactly which ear is being treated and which is not… so how is this going to be validated? You hope that the candidate tells the truth? How can you trust this trial?
 
It should have been done in both ears (drug in one and placebo in the other) in order to cross-check and verify whether the candidate was lying…

In the open-label 111 study, the candidate knows exactly which ear is being treated and which is not… so how is this going to be validated? You hope that the candidate tells the truth? How can you trust this trial?
Probably not. They treated one ear. The claim is that they improved their patient lead-in measures for this trial, so the baselines were more reliable.
 
@Diesel

"The FX-322-111 study was not placebo controlled, though the study analysis of untreated ears also showed zero percent (0%) exceeding 95% CI from the baseline to day 90 for the WR test"

Can you please advise, for study FX-322-111: were the candidates treated in one ear only? And did the candidates know which ear was receiving the drug?

I just want to ensure there was no bias in the study.

If the candidates didn't know which ear the drug was injected into, and the results showed 0% improvement in the untreated ear and 10% or greater improvement in the treated ear, then we are safe to say the study was 100% accurate.

Can you please clarify?
tl;dr: The company realized that saving the science was more important than saving the egos of management, so their whole approach is to show investors that they fucked up the trial design in Phase 2a by encouraging people to have low screening scores (which were then used as baselines; wtf!), but that the science and the reliability of future single-dose studies are still something to be optimistic about. Whether you agree with that or not is up to you.
------------------------------------------------------------------------------------
Detailed explanation:

I will try to help you out with the last paragraph. As @Diesel alluded to, for open-label trial 111 there is no available information (that I'm aware of either) on whether or not the patients knew which ear was injected. My guess is that they did, and that it was the worse ear, but that's just a guess.

Anyways, what do they mean by this picture (and the remark about trial 111)?

View attachment 45393


When people do the WR test, they have a baseline and then a later data point (in this case, day 90). Call the baseline score X and the day-90 score Y. In theory, if someone received a placebo (ignoring many other factors and assuming the person is essentially a robot taking the same test twice), then X and Y should be close. Given X, a 95% confidence interval is a range of numbers that should encapsulate Y about 95% of the time. Though it's an abuse of mathematical language, roughly speaking, 95% of the time Y should land in this interval. Anyways, there's a process for calculating these 95% confidence intervals for Y, given each possible baseline score X.
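
To make that concrete, here is a minimal sketch of how such an interval could be computed, under the simplifying assumption that a WR score is k words correct out of n and that retest scores are binomial around the baseline proportion (my own illustration; not necessarily the method the company actually used):

```python
from scipy.stats import binom

def retest_interval(baseline_correct: int, n_words: int = 50, level: float = 0.95):
    """Plausible range for a retest WR score Y, given baseline X, assuming
    retests are Binomial(n_words, p) with p taken from the baseline.
    An illustrative model only."""
    p = baseline_correct / n_words
    lo, hi = binom.interval(level, n_words, p)  # central 95% range for Y
    return int(lo), int(hi)

# Example: baseline X = 20/50 words correct (40%). A day-90 score outside
# this range "exceeds the 95% CI" in the press release's sense.
print(retest_interval(20))  # -> (13, 27), i.e., 26%..54%
```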

Okay, so what are they pointing out here? A 95% confidence interval is constructed so that if L is the minimum and U is the maximum of the interval, then approximately 2.5% of the time, by pure chance, Y should exceed U and 2.5% of the time, Y should be less than L.

What ended up happening in the Phase 2a, and why they are calling red flags on themselves (total mismanagement), is that we "should" (in quotes because the placebo sample size was only n=21, so it's only a rough approximation; it requires math to understand) see about 2.5% of the placebo patients exceed the upper bound U of their respective 95% confidence intervals (the intervals differ because the baseline scores X differ across placebo patients). Instead, they saw 16% of the patients exceed U. In other words, it's unbelievable that, by chance, the placebos just happened to improve that much. This is their whole point: the study design was very poor.
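
For anyone who wants the math spelled out: if each of the n=21 placebo patients independently has a 2.5% chance of exceeding their upper bound U by luck, the chance of seeing as many exceedances as Phase 2a reported (16% of 21 is roughly 3-4 patients) is tiny. A quick back-of-the-envelope check:

```python
from scipy.stats import binom

n, p = 21, 0.025  # placebo arm size; per-patient chance of exceeding U by luck

for k in range(1, 5):
    # P(at least k placebo patients exceed U purely by chance)
    tail = 1 - binom.cdf(k - 1, n, p)
    print(f"P(>= {k} exceedances) = {tail:.4f}")
```

This prints roughly 0.41 for at least one exceedance, 0.10 for at least two, 0.015 for at least three, and 0.001 for at least four. That's the sense in which a 16% exceedance rate shouldn't happen by chance.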

In essence, they are trying to say "the science is/might be okay; we just really fucked up the trial." This is a smart move, given the braindead decision-making to not have a lead-in from the start.

Finally, how are they comparing to trial 111, where there were no true placebos, but the untreated ear was being treated as a placebo data point? They are saying, basically, for reference (strengthening their argument that the trial was fucked), 0% of the untreated (placebo-like) ears in trial 111 saw improvements that exceeded the upper bound U of the 95% confidence interval.

If you look at the May 13 press release (see below), they point out that in the trials where they correctly utilized a lead-in (i.e., participants had to take a screening test to verify that their score at screen was similar to another score from >6 months earlier, confirming that the person really had stable word scores; the baseline was then assessed later, when the incentive was gone), shit did not hit the fan.
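
Here is what such a lead-in check could look like in code, reusing the binomial-interval idea from the sketch above (again, an illustrative rule of my own, not the published protocol):

```python
from scipy.stats import binom

def passes_lead_in(historical_correct: int, screen_correct: int, n_words: int = 50) -> bool:
    """Accept a participant only if the screening WR score is statistically
    consistent with the >6-month-old historical score (illustrative rule)."""
    p = historical_correct / n_words
    lo, hi = binom.interval(0.95, n_words, p)  # plausible retest range
    return lo <= screen_correct <= hi

# A patient with a historical 20/50 who suddenly scores 10/50 at screening
# gets flagged instead of enrolled with the deflated score as the baseline.
print(passes_lead_in(20, 10))  # -> False
```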

View attachment 45395
 

This makes my eyes roll. "Other long-term measures" like, maybe, um... an audiogram? You know, the go-to metric audiologists use to determine how well you hear??? Is that too much to ask?

View attachment 45294

This avoidance of audiograms as a metric is my #1 red-flag for this company.

They're beating around the bush because they know that audiograms will show little to no hearing improvement (extended audiogram or not).
A lot of tinnitus sufferers have decent audiograms. As with static visual acuity tests in vision, they only tell part of the story.
 
tl;dr: The company realized that saving the science was more important than saving the egos of management, so their whole approach is to show investors that they fucked up the trial design in Phase 2a by encouraging people to have low screening scores (which were then used as baselines; wtf!), but that the science and the reliability of future single-dose studies are still something to be optimistic about. Whether you agree with that or not is up to you.
------------------------------------------------------------------------------------
Detailed explanation:

I will try to help you out with the last paragraph. As @Diesel alluded to, for open-label trial 111 there is no available information (that I'm aware of either) on whether or not the patients knew which ear was injected. My guess is that they did, and that it was the worse ear, but that's just a guess.

Anyways, what do they mean by this picture (and the remark about trial 111)?

View attachment 45393

When people do the WR test, they have a baseline and then a later data point (in this case, day 90). Call the baseline score X and the day-90 score Y. In theory, if someone received a placebo (ignoring many other factors and assuming the person is essentially a robot taking the same test twice), then X and Y should be close. Given X, a 95% confidence interval is a range of numbers that should encapsulate Y about 95% of the time. Though it's an abuse of mathematical language, roughly speaking, 95% of the time Y should land in this interval. Anyways, there's a process for calculating these 95% confidence intervals for Y, given each possible baseline score X.

Okay, so what are they pointing out here? A 95% confidence interval is constructed so that if L is the minimum and U is the maximum of the interval, then approximately 2.5% of the time, by pure chance, Y should exceed U and 2.5% of the time, Y should be less than L.

What ended up happening in the Phase 2a, and why they are calling red flags on themselves (total mismanagement), is that we "should" (in quotes because the placebo sample size was only n=21, so it's only a rough approximation; it requires math to understand) see about 2.5% of the placebo patients exceed the upper bound U of their respective 95% confidence intervals (the intervals differ because the baseline scores X differ across placebo patients). Instead, they saw 16% of the patients exceed U. In other words, it's unbelievable that, by chance, the placebos just happened to improve that much. This is their whole point: the study design was very poor.

In essence, they are trying to say "the science is/might be okay; we just really fucked up the trial." This is a smart move, given the braindead decision-making to not have a lead-in from the start.

Finally, how are they comparing to trial 111, where there were no true placebos, but the untreated ear was being treated as a placebo data point? They are saying, basically, for reference (strengthening their argument that the trial was fucked), 0% of the untreated (placebo-like) ears in trial 111 saw improvements that exceeded the upper bound U of the 95% confidence interval.

If you look at the May 13 press release (see below), they point out that in the trials where they correctly utilized a lead-in (i.e., participants had to take a screening test to verify that their score at screen was similar to another score from >6 months earlier, confirming that the person really had stable word scores; the baseline was then assessed later, when the incentive was gone), shit did not hit the fan.

View attachment 45395
Does that mean FX-322 isn't dead yet?

You still have hope, right?

If so, I will be happy and cry.
 
Does that mean FX-322 isn't dead yet?

You still have hope, right?

If so, I will be happy and cry.
I'm pretty agnostic on the drug in its current form and delivery. The underlying science is impressive, so I'm very much pro "don't give up on the drug."

I will say this: the trial was definitely messed up. Now, of course, that doesn't mean "Phase 2a would have been a success if they had a lead-in WR screen." Honestly, just looking at the data, I don't think it would have succeeded even with multi-dosing. Though true, blaming the trial design is somewhat of an out.

Will it work in the next Phase 2 with single injections? Definitely not on audiograms, in my opinion. With clarity and speech, they do have two successful trials (Phase 1/2 and open-label) and one (properly run) failed trial (age-related hearing loss).

The open-label study counts as about half a trial to me because it wasn't placebo-controlled, and the Phase 1/2 trial had small sample sizes as well as imbalanced data that greatly favored the treatment group (i.e., as a group, the n=15 treated patients had less of a ceiling effect than the n=8 placebos).

My big question in deciding whether I believe in the current formulation of the drug:

They clearly didn't have a lead-in WR test for the Phase 1/2 trial either (it was a safety trial with exploratory efficacy standards, so if they missed it in Phase 2a, they certainly missed it in Phase 1/2). Hence, there's a chance that at least some of the "super responders" were also people who deflated word scores to help get in.

In one regard, the motivation was weaker because there was less pressure to have a low WR score to get into Phase 1/2. There also wasn't the same social media presence and coaching about what the company wanted from the ideal participant. In another regard, they selected only n=23 total participants, so human nature would motivate people to want to look hard of hearing.

Putting this together, I guess I wouldn't be surprised if at least 1 of the 3 super responders was due to an inaccurate baseline score. Assuming it wasn't all 3, I do have hope that the current formulation can improve IHC function enough.

Due to a lack of precedent, the treatment group just has to beat the placebos in WR in the next Phase 2 to likely move on to Phase 3. I'm cautiously hopeful that this can happen, but I'm not sold.
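
To put a rough number on "beat the placebos" (with entirely hypothetical responder rates of my own, not company figures): a standard two-proportion power calculation shows what arm sizes that comparison would take.

```python
from math import sqrt, ceil
from scipy.stats import norm

def n_per_arm(p_treat: float, p_placebo: float,
              alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate patients per arm for a two-sided two-proportion z-test.
    Illustrative only; the rates passed in below are assumptions."""
    z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)
    p_bar = (p_treat + p_placebo) / 2
    num = (z_a * sqrt(2 * p_bar * (1 - p_bar))
           + z_b * sqrt(p_treat * (1 - p_treat) + p_placebo * (1 - p_placebo))) ** 2
    return ceil(num / (p_treat - p_placebo) ** 2)

# E.g., hypothetical 35% treatment responders vs. 15% placebo responders:
print(n_per_arm(0.35, 0.15))  # ~73 patients per arm for 80% power
```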
 
