Frequency Therapeutics — Hearing Loss Regeneration

With the current formulation and delivery method, no audiogram improvement has been recorded to date.


 
I feel like not everyone understands what's happening, so here's an example:

Michael and Laura are desperate to get FX-322, so they both apply for the Phase 2a trial:

View attachment 45477

In reality, Michael can hear 30 words out of 50, but because he is desperate, he fakes his initial screening to get into the trial by telling the doctor he can only hear 15 out of 50.

In reality, Laura can hear 35 words out of 50, but she tells the doctor she can only hear 20 out of 50:

View attachment 45478

The doctor is happy he found good candidates for the trial. He gives Michael the placebo, and Laura gets the drug (FX-322):

View attachment 45479

After the trial is completed, the doctor gets the results:

View attachment 45480

The doctor sees that Laura improved! She used to hear 20 words out of 50, and now, after getting the drug, she can hear 35 out of 50! The doctor is happy!

Michael was never given the drug, yet he also improved? Michael now hears 30 words out of 50.

So what does the doctor do now?

He basically just writes Michael off as bias... and he is happy that Laura improved! :)

Are you guys getting the point?

Michael and Laura never improved; their real scores stayed the same the whole time.
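
To make the arithmetic above concrete, here is a tiny Python sketch of the same scenario (the numbers are just the ones from the example, not real trial data):

```python
# Toy numbers from the Michael/Laura example above -- nothing here comes
# from the actual FX-322 trial data.

def apparent_gain(true_score, reported_baseline):
    """'Improvement' the doctor records when the follow-up test is honest
    but the baseline screening score was sandbagged."""
    honest_followup = true_score  # real ability never changed
    return honest_followup - reported_baseline

patients = {
    "Michael (placebo)": {"true": 30, "reported_baseline": 15},
    "Laura (FX-322)":    {"true": 35, "reported_baseline": 20},
}

for name, p in patients.items():
    gain = apparent_gain(p["true"], p["reported_baseline"])
    print(f"{name}: baseline {p['reported_baseline']}/50 -> "
          f"day 90 {p['true']}/50, apparent gain {gain} words")

# Both arms show a +15 word "gain" even though nobody actually improved,
# so the drug-vs-placebo difference washes out to zero.
```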
Here's where I agree and disagree with you:

I disagree with you that the current evidence (two successful single-dose studies, a failed ARHL trial, and a failed multi-dose trial) shows that the drug does absolutely nothing for clarity. Obviously, I'm critical of the open-label study because it's not placebo-controlled; I'm critical of the Phase 1/2 study because the treatment group had a huge advantage by chance, being far less prone to the ceiling effect, and because there were no lead-in baseline WR scores, although the incentive to sandbag was far lower in that trial. Nonetheless, the evidence of the drug helping clarity at the single-dose level is unclear, but it's certainly not nothing.
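
To illustrate the ceiling-effect point with made-up numbers: a patient who already scores near the top of a 50-word list has no room to show a gain, so a group that happens to start lower can look more "responsive" by construction. A minimal sketch (all numbers invented):

```python
# Illustrative only: the ceiling effect on a 50-word test.
# A patient already near 50/50 cannot show a large gain even if helped.

MAX_WORDS = 50

def possible_gain(baseline_score, true_benefit_words):
    """Observed gain is capped by the test ceiling."""
    return min(baseline_score + true_benefit_words, MAX_WORDS) - baseline_score

print(possible_gain(baseline_score=48, true_benefit_words=10))  # 2  (ceiling hides most of the benefit)
print(possible_gain(baseline_score=25, true_benefit_words=10))  # 10 (full gain visible)
```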

Where I agree with you is that the company is, in some ways, using the terrible Phase 2a trial design to gloss over the fact that multi-dosing truly failed epically. Granted, they do admit that multi-dosing failed and that four injections at weekly intervals created a damaging environment. They do admit that it just didn't work.

However, from my perspective, there is sort of a vibe of like:

"Multi-dosing kind of failed, but we really messed up the trial. Can you believe that a patient in the placebo group said 'not sure' 22 times at the baseline and only 3 times at day 90?! How horrible! But ultimately it's our fault due to trial design and not accounting for this bias. Ah, it's all so confusing. Multi-dosing kind of failed, there were cheaters, but it's our fault."

The multi-dosing simply failed. Actually, it's sort of fortunate that they made that mistake in the trial where the multi-dosing would have failed anyway, because: (a) they can learn better for next time and (b) they get to muddy the waters a bit.

With all of this being said, I definitely think there were real super responders in Phase 1/2 at the single dose level. This drug isn't dead -- we shouldn't give up on it. It's okay to admit that the drug needs a lot of work. But something is going on in vivo for some patients, which is a wonderful thing.
 
@ThomasRobert, bruh you need to chill out. Have you ever done a word recognition test? You can't just say you didn't hear the word. I think it would be very easy to tell if someone was making the words up. There would be a lot of hesitation.
 
Here's where I agree and disagree with you: [...] With all of this being said, I definitely think there were real super responders in Phase 1/2 at the single dose level. This drug isn't dead -- we shouldn't give up on it. [...]
Sorry, but the Phase 1/2 trial was done before the bias glitch was fixed, so we can consider that study biased as well...

For me, the severe trial will give us a FINAL conclusion about this whole thing...
 
Sorry to break it to you, but a 10 dB improvement is not an accurate measurement. 10 dB is not a tangible gap.

30 dB and above can be considered an improvement, bearing in mind how these tests are measured...
Source?
 
@ThomasRobert, bruh you need to chill out. Have you ever done a word recognition test? You can't just say you didn't hear the word. I think it would be very easy to tell if someone was making the words up. There would be a lot of hesitation.
It's easy to fake a word recognition test.

And isn't that the basis for why many here think the trial failed? Because people faked the test to get in?

The part I find confusing is: if you believe audiogram improvements are not necessary to demonstrate improvement in hearing, and WR tests can be faked, I'm not sure what measurement can be used to demonstrate drug effectiveness. Which means the trial will always either fail (because of audiograms) or be 'fake-able' (because of WR).
 
@ThomasRobert, bruh you need to chill out. Have you ever done a word recognition test? You can't just say you didn't hear the word. I think it would be very easy to tell if someone was making the words up. There would be a lot of hesitation.
I wish this was the case (as it's obviously a no-brainer), but sadly, Frequency Therapeutics was dumb enough to let people refuse to guess. It's almost a joke how much they mismanaged the trial design.

Evidence:

[attached screenshot]
 
The part I find confusing is: if you believe audiogram improvements are not necessary to demonstrate improvement in hearing, and WR tests can be faked, I'm not sure what measurement can be used to demonstrate drug effectiveness. Which means the trial will always either fail (because of audiograms) or be 'fake-able' (because of WR).
Sadly, you are right that because there is no objective test for clarity, we have to resort to these subjective WR tests where we have to trust that the person is answering correctly. However, with this understood, a well-run trial will lean on the side of the "lesser of two evils." Let me explain.

If you have a lead-in WR screen, they are basically taking your (documented in medical records) score from > 6 months ago, comparing it to your screening score, and confirming that they are pretty close. In other words, the person seems legit.

Technically speaking, the person could still provide a dishonest screening score (intentionally getting a few words wrong so that they score about the same as they did > 6 months ago). However, this is not so bad, because there is no incentive to alter the actual baseline data point; patients don't get kicked out once they're in. Worst case, Frequency Therapeutics would think they are recruiting someone with around a 25/50 WR score whose true score is really more like 30. That person takes the actual baseline test and scores 30. It's not the ideal patient they wanted, but the data point is accurate. Then at follow-ups, as the data keep being tracked, the person continues to have no incentive to take the tests dishonestly. It's the lesser of two evils.

The alternative is what they did (much worse), which is to incentivize people to have low WR to get in and then use that same score as the baseline score. In other words, a legit data point was tainted by incentive.

If, in the next trial, they prioritize consistency between the scores from > 6 months ago and the screening scores, they should at the very least get accurate data, even if here and there they lose out on the ideal patient.
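
Just to spell out what that kind of lead-in consistency check could look like, here is a minimal sketch; the tolerance and the function names are my own assumptions, not anything the company has published:

```python
# Hypothetical lead-in check (my own sketch, not Frequency Therapeutics'
# actual protocol): compare a documented WR score from > 6 months ago with
# the screening score and only enroll if they roughly agree.
# The 5-word tolerance is an invented number.

TOLERANCE_WORDS = 5

def passes_lead_in(historical_score, screening_score, tolerance=TOLERANCE_WORDS):
    """True if the screening score is consistent with the old medical record."""
    return abs(historical_score - screening_score) <= tolerance

# Sandbagging the screening test no longer helps anyone get in:
print(passes_lead_in(historical_score=30, screening_score=28))  # True  -> enroll
print(passes_lead_in(historical_score=30, screening_score=15))  # False -> flag/exclude
```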

With regards to outer hair cells and audiograms, they hired Jeffery Lichtenhan in order to use better audiometric tests. One of these is the otoacoustic emissions (OAE) test, an objective measure of outer hair cell activity.

They have plenty of information to put together a well-run Phase 2 repeat, but we may have to be prepared to be patient as they try to get this right.
 
Kind of a weak source, my dude.
"Manual pure tone audiometry is considered to be the gold standard for the assessment of hearing thresholds and has been in consistent use for a long period of time. An increased legislative requirement to monitor and screen workers, and an increasing amount of legislation relating to hearing loss is putting greater reliance on this as a tool. There are a number of questions regarding the degree of accuracy of pure tone audiometry when undertaken in field conditions, particularly relating to the difference in conditions between laboratory calibration and clinical or industrial screening use."

"having a maximum deviation of around ±10 dB ... that there is a significant margin of error in audiometric screening."

https://www.noiseandhealth.org/arti...16;issue=72;spage=299;epage=305;aulast=Barlow
 
"Manual pure tone audiometry is considered to be the gold standard for the assessment of hearing thresholds and has been in consistent use for a long period of time. An increased legislative requirement to monitor and screen workers, and an increasing amount of legislation relating to hearing loss is putting greater reliance on this as a tool. There are a number of questions regarding the degree of accuracy of pure tone audiometry when undertaken in field conditions, particularly relating to the difference in conditions between laboratory calibration and clinical or industrial screening use."

"having a maximum deviation of around ±10 dB ... that there is a significant margin of error in audiometric screening."

https://www.noiseandhealth.org/arti...16;issue=72;spage=299;epage=305;aulast=Barlow
Now you're getting the hang of it.

I'd like to quote your quote here.

"There are a number of questions regarding the degree of accuracy of pure tone audiometry when undertaken in field conditions, particularly relating to the difference in conditions between laboratory calibration and clinical or industrial screening use."​

Why again are we fixated on audiograms here? It seems like audiograms suck just as much as word scores. Maybe the WIN (words-in-noise) test is better only because it seems to test both IHC and OHC performance?
 
Now you're getting the hang of it. [...] Why again are we fixated on audiograms here? It seems like audiograms suck just as much as word scores. [...]
If you jump to the conclusion section, it will be scarier:

"Even the median variation in sound pressure at the ear could contribute an error of 4 dB in hearing threshold values, which is sufficient to cause misdiagnosis on an audiogram. Where the degree of variation is at its highest, there is a potential error of 20 dB, which even in a single frequency band could lead to the misdiagnosis of a patient due to its contribution to the values used to categorize hearing loss."​

My point earlier was that the 10 dB improvement is not an accurate measurement. That's all.
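
To put a rough number on that, here is a purely illustrative simulation: if each threshold reading can be off by up to ~10 dB (a crude uniform-error model I'm assuming, not something taken from the paper), a nominal 10 dB "improvement" shows up fairly often in ears that haven't changed at all:

```python
# Illustrative only: test-retest noise on a pure tone audiogram.
# Assumes each threshold measurement can deviate by up to 10 dB (uniform),
# per the quoted paper's maximum deviation figure; the noise model is mine.

import random

random.seed(0)
TRUE_THRESHOLD = 60          # dB HL, unchanged between visits
ERROR = 10                   # max measurement deviation in dB
TRIALS = 100_000

false_improvements = 0
for _ in range(TRIALS):
    baseline = TRUE_THRESHOLD + random.uniform(-ERROR, ERROR)
    followup = TRUE_THRESHOLD + random.uniform(-ERROR, ERROR)
    if baseline - followup >= 10:        # looks like a 10 dB improvement
        false_improvements += 1

print(f"{false_improvements / TRIALS:.1%} of unchanged ears show a >=10 dB 'gain'")
# With this crude model, roughly 1 in 8 measurements does.
```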
 
It's easy to fake a word recognition test. And isn't that the basis for why many here think the trial failed? Because people faked the test to get in? [...]
While you're in the booth, they purposely do not reveal what word score you need to enter the trial. You don't know if the cutoff is 1 or 50. How are you faking that to get in?
I wish this was the case (as it's obviously a no-brainer), but sadly, Frequency Therapeutics was dumb enough to let people refuse to guess. [...]
Patients were supposedly not aware which ear was going to be treated: I'm deaf in one ear, so which ear do I think I'm going to get the shot in? This is all rubbish.
 
If you jump to the conclusion section, it will be scarier: [...] My point earlier was that the 10 dB improvement is not an accurate measurement. That's all.
10 dB might not be much, but it could be the difference in reducing tinnitus volume. We also have to remember that the highest frequency tested was 8 kHz, and only 4 of 15 patients showed improvement at that frequency in Phase 1.

If they had tested between 8 and 20 kHz, I can imagine that the other 11 patients would have shown audiogram improvements as well.
 
While you're in the booth, they purposely do not reveal what word score you need to enter the trial. You don't know if the cutoff is 1 or 50. How are you faking that to get in?
You mean by increasing the WR score? No, obviously you can't fake that.

But you can definitely decrease it. It's a subjective test.

IF there were people who faked their WR scores - and I'm not convinced that there were - they wouldn't have been criminal masterminds. Just desperate people who made their score slightly worse to increase (not guarantee!) their chances of getting in.

It would have been no more sinister or clever than that.
 
You mean by increasing the WR score? No, obviously you can't fake that. But you can definitely decrease it. It's a subjective test. [...]
OK. I was in their trial booth at two different locations for two different trials. The fewer words you said, the lower your chance of getting in was. It's official: nobody knows what's going on unless they took the drug, and since there are no reports that it improves tinnitus, it's a complete waste for people who have that problem as their main goal. I'll check back in when there are reports of tinnitus improving.
 
Great, so my takeaway from this thread based on what's been said is that both audiograms AND WR scores are meaningless. So we can just decide whether FX-322 works based on our own internal biases! Yay!
Dude, are you trolling? I'll write pretty nuanced takes on all of this stuff and you'll quote like 5 words and completely misrepresent the spirit of what I said...
 
The fewer words you said, the lower your chance of getting in was.
I'm sorry, I don't get it. It's probably me.

You must have been expected at some point to get some words wrong. If you're describing a "sweet spot" where you have to get enough right to show you have good enough hearing to participate, then everyone already knows that.

I don't know how a comment like "WR tests can be faked because they're subjective" is so complicated and controversial. I don't even think people actually did it. But saying they couldn't have is pointless, because clearly they could.

Anyway, I'll tap out.
 
Great, so my takeaway from this thread based on what's been said is that both audiograms AND WR scores are meaningless.
This was actually the point I was making earlier: if we don't accept audiogram results and we accept the company's hint that WR scores were faked, how do we test whether it works or not?

@Zugzug provided a couple of different ways they were going to test it for the next trial, which makes sense.

So my conclusion: either write it off now, because no improvements were found, or believe the trial was flawed and wait for the next trial, where they are using different measurements.

Probably no point in thinking about it any more than that.
 
10 dB might not be much, but it could be the difference in reducing tinnitus volume. We also have to remember that the highest frequency tested was 8 kHz, and only 4 of 15 patients showed improvement at that frequency in Phase 1.

If they had tested between 8 and 20 kHz, I can imagine that the other 11 patients would have shown audiogram improvements as well.
Is the idea here that if it improves hearing around 8 kHz, then there is a possibility it decreases tinnitus around that same frequency?
 
