@Zugzug I decided to go back just now and listen (with pain and difficulty) to the webcast. I don't think this has been mentioned yet, or at least it has not been appreciated, but apparently the actual improvement in WR scores in the treatment groups was nowhere near what they had seen in the original Phase 1/2 study or the recent off-label study. This is why, Carl says, they make reference to this "dampening" effect with repeated doses that they don't quite understand on a molecular level.
Carl then goes on to say that the placebo group improved in parallel, but when they compared patients' historical records to their baselines at trial entry, there were inconsistencies. He makes it clear - and I somehow missed this yesterday - that these inconsistencies were found across the WHOLE study. In other words, patients in BOTH the placebo and FX-322 groups had inconsistencies in their historical records compared to their baseline values. It sounds like a catalogue of errors with an absolutely disastrous outcome. I'm starting to believe this might not be quite over.
Edit: When asked about whether they would change WR score entry requirements, Carl said they are looking forward to the opportunity to better define what a responder will look like, but he also said they felt OK with where they drew the line for WR scores. In other words, it sounds like they plan to dig through the individual data after the 210-day readout and figure out who the idiots (non-responders/fakers) were.
Keep in mind that I can't listen to the webcast, which creates an unfortunate lack of information for me.
Here's what I don't understand about the faker defense. Exactly as you say, it doesn't surprise me at all that they found inconsistencies across the whole study. Even just mathematically: say 10% of the 96 participants are fakers, i.e., 9 people. There are 24 placebos and 72 treaters, so under random assignment we would expect about 7 fakers to go into the treater group (9 × 72/96 = 6.75) and about 2 into the placebo group (9 × 24/96 = 2.25). That gives the treaters an advantage.
Now consider an extreme distribution of the fakers: instead of a nice 7/2 split, suppose it's wildly disproportionate, with 5 or more of the 9 fakers landing in the placebo group purely by chance. What is the probability of this?
It ends up being

[ (24 choose 5)(72 choose 4)
+ (24 choose 6)(72 choose 3)
+ (24 choose 7)(72 choose 2)
+ (24 choose 8)(72 choose 1)
+ (24 choose 9)(72 choose 0) ] / (96 choose 9),

which equals approximately 0.0406, or 4.06%.
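This is just the tail of a hypergeometric distribution. For anyone who wants to sanity-check the arithmetic, here's a minimal Python sketch (the arm sizes and the 9-faker assumption are the numbers from above):

```python
from math import comb

N_PLACEBO, N_TREAT = 24, 72    # arm sizes from the trial
N_TOTAL = N_PLACEBO + N_TREAT  # 96 participants
N_FAKERS = 9                   # the assumed ~10% fakers

# Expected number of fakers landing in each arm under random assignment
print(N_FAKERS * N_PLACEBO / N_TOTAL)  # 2.25 in placebo
print(N_FAKERS * N_TREAT / N_TOTAL)    # 6.75 in treatment

# P(5 or more of the 9 fakers end up in the 24-person placebo arm):
# sum over k = 5..9 of C(24, k) * C(72, 9 - k), divided by C(96, 9)
tail = sum(
    comb(N_PLACEBO, k) * comb(N_TREAT, N_FAKERS - k)
    for k in range(5, N_FAKERS + 1)
) / comb(N_TOTAL, N_FAKERS)
print(tail)  # ~0.0406
```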
In other words, in all likelihood, the "liar" factor should have been proportionately dispersed between groups.
Now, regarding the lying itself, here's what I can't understand. The theory is that lying got them in, but then their baselines were their real (higher) scores. In a sense, how does it matter how they got in if they still performed the study correctly?
If I understand what LeBel is really suggesting, it's that the treatment group was actually held back because the baselines started closer to the ceiling than anticipated, disproportionately compared to the placebos.
In other words, I don't think the argument is that the placebo group got a bunch of liars. I think it's that across the board, their filter (which the bull thesis largely rested on) was nullified. Again, apologies if I am inferring incorrectly since I can't listen to it. So basically, the big mistake was that, across the board, there weren't enough truly severe cases to escape the ceiling effect.
This is fair, as the separation between placebos and treaters should be much more significant with less ceiling effect (see Phase 1).
It does seem like they are on to something with the "don't step on the lawn" analogy. However, if the remaining Phase 1b trials are not strong, this theory is gone.
Then what remains is really a defense of "nobody had enough hearing loss," which is fair, but it's still not great that we don't see the treaters dominating the placebos.