New University of Michigan Tinnitus Discovery — Signal Timing

Dr. Shore really didn't care about hyperacusis and noxacusis, did she lol.
Do you know if anyone associated with Dr. Shore, University of Michigan, etc. is researching hyperacusis?
Okay, so what happened to the minority of patients that did report hyperacusis? What were the results? Did they worsen?
I was hoping to find this out as well.
 
Correct me if I'm wrong but isn't she dodging the question here? She's basically saying that she did not analyze the results of the second study because the tinnitus volume of the first group (the group that received the actual treatment in the first six weeks instead of the audio only) did not return to baseline after 12 weeks.

That still does not explain why the tinnitus volume of the people that received the active treatment after the first 12 weeks of audio only treatment actually seemed to increase instead of decrease?

[Attachment 55525: graph of results by group]
Reading your post made me concerned about that and I want the answer too, but I see no indication that she dodged the question.
Question said:
...I interpret the below graph to mean that the group that experienced the treatment in weeks 1-6 did much better than the group that experienced the treatment in weeks 13-18...
The problem with the highlighted part is that it is not a question but a statement. Not only that, it is one in a long list of statements ending in a non-specific question about "the carry-over effect" and "focusing on the results of the first group". The actual question doesn't mention the group that got the audio-only treatment first, only a vague "the first group". Of course she's going to answer for "the first group", i.e. the part of the group that they actually used for the study. I'm as frustrated as you are, but in my opinion the question should have been more specific, such as "Can you explain why the audio-only-first group showed no improvement?".

I know it's frustrating not to get the answer we wished for. But in my experience with these academic, matter-of-fact types, they don't pick up on implied questions, and they expect questions to actually come one at a time. They parse sentences quickly to identify the actual question in order to answer it effectively; it's not their job to spend extra time deciphering what the asker actually wants to hear, especially when they have to spend a lot of valuable time answering a whole series of questions. She's busy helping us, and I honestly don't think she'd be dishonest about it or deliberately omit facts.
 
Reading your post made me concerned about that and I want the answer too, but I see no indication that she dodged the question.

I honestly don't think she'd be dishonest about it or deliberately omit facts.
Maybe I phrased that a bit poorly; I did not really mean to imply that she deliberately gave a vague answer, only that she skipped over an important part of the question.
I'm as frustrated as you are, but in my opinion the question should have been more specific, such as "Can you explain why the audio-only-first group showed no improvement?".
I'm not really frustrated by any means; I just think that it's a bit of a missed opportunity that one of the most important issues with the study was not properly addressed.

I agree that the questioner should have been clearer. Whether or not Dr. Shore intentionally skipped over this specific issue, I'll leave open.
 
She's basically saying that she did not analyze the results of the second study because the tinnitus volume of the first group (the group that received the actual treatment in the first six weeks instead of the audio only) did not return to baseline after 12 weeks.
Yes, that's what she is saying.

Clinical trial designs can be broadly classified into parallel and cross-over designs. In the parallel design, one group of people gets treatment A and one gets treatment B (one of which may be a placebo), and the impact on the patients is compared between groups at the end of the trial. One problem with this type of design is that the people in the groups are different, and those differences, rather than the treatment, may be the cause of the observed effect. The cross-over design mitigates this by treating the first group with A, then B, and the second group with B, then A, and comparing the results for each group at the end of each treatment period. But in this kind of design we are assuming that treating a group with A first, for example, does not have an impact on the B treatment. To help ensure that it doesn't, there is usually a wash-out period between the treatments.
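For anyone who wants to see that concretely, here's a toy simulation in Python. The numbers are entirely made up and have nothing to do with the study's data; it just illustrates why letting each subject act as their own control removes between-person baseline differences, provided treatment A leaves no trace before B starts:

```python
# Toy comparison of parallel vs cross-over designs. All numbers invented.
import numpy as np

rng = np.random.default_rng(0)
n = 30  # hypothetical participants per group

# Parallel design: two different groups of people, one per treatment.
baseline_g1 = rng.normal(50, 15, n)               # group 1's own TFI baselines
baseline_g2 = rng.normal(50, 15, n)               # group 2's, drawn independently
after_A = baseline_g1 - 10 + rng.normal(0, 5, n)  # assumed true effect of A: -10
after_B = baseline_g2 + rng.normal(0, 5, n)       # B assumed inert
print("parallel estimate of A vs B:", after_A.mean() - after_B.mean())
# Chance differences between the two groups leak into this estimate.

# Cross-over design: the same people get both treatments, so each subject
# serves as their own control and we compare within-subject changes.
change_A = after_A - baseline_g1
after_B_same = baseline_g1 + rng.normal(0, 5, n)  # the same group 1 people on B
change_B = after_B_same - baseline_g1
print("cross-over estimate of A vs B:", change_A.mean() - change_B.mean())
# This only works if A has worn off before B starts, hence the wash-out period.
```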

More details here:

Understanding Controlled Trials: Crossover Trials

Now, if the patients don't return to baseline after the first wash-out period, then the cross-over doesn't work, because we need treatment B to begin as if A hadn't happened, which is not the case since we start at a lower TFI/SL. The linked article states what to do in this case:
If carry over is detected convention suggests this may be dealt with in the analysis in one of two ways. The usual approach is to treat the study as though it were a parallel group trial and confine analysis to the first period alone.
And this is exactly what Dr. Shore did*. Conveniently this also knocks out the less than flattering group 2 results.
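A toy version of that convention, just to show the mechanics (simulated data only; I don't know the exact statistical model used in the actual analysis): throw away period 2 and compare the two arms' period-1 changes from baseline as if it had been a parallel-group trial all along.

```python
# First-period-only analysis after carry-over is detected. Data is invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 30  # hypothetical arm sizes

# Period-1 change from baseline for each arm (made-up numbers).
active_first_p1 = rng.normal(-12, 8, n)  # arm that got the active treatment first
sham_first_p1 = rng.normal(-7, 8, n)     # arm that got the sham first

# Period 2 is ignored entirely; the two arms' period-1 results are compared
# across groups, exactly as in a parallel-group trial.
t, p = stats.ttest_ind(active_first_p1, sham_first_p1)
diff = active_first_p1.mean() - sham_first_p1.mean()
print(f"period-1-only effect estimate: {diff:.1f} TFI points, p = {p:.3f}")
```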

I also think it is a bit strange that adding on two weeks to the treatment has changed the results so much from the first trial. Now we have a significant and enduring TFI/SL reduction in group 1 and little impact on group 2 whereas before we had reductions in TFI/SL in both groups which quickly dissipated after the end of treatment. Could it be that the extra two weeks has caused this change? I reckon the reason is the small sample size. Wouldn't it be great if tinnitus research were well funded and we had researchers all over the world doing large-scale trials to see if it does indeed work!

* Although it looks to me as though she has reported the TFI drop at 6 weeks from baseline (about -12 for the ITT) rather than the difference between the treatment TFI and the control TFI (about -5).
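To spell out the footnote's arithmetic: the control-adjusted effect is the active arm's change from its own baseline minus the control arm's change from its baseline. Given the two figures cited above, a control-arm change of about -7 is implied (my inference, not a number taken from the paper):

```python
# Toy arithmetic for the footnote. -12 is the cited active drop from baseline
# (ITT) and -5 the cited active-vs-control difference; the ~-7 sham change is
# simply what those two figures imply, not a number from the paper.
active_change = -12.0     # active arm, change from its own baseline at 6 weeks
sham_change = -7.0        # implied control-arm change from baseline
effect_vs_control = active_change - sham_change
print(effect_vs_control)  # -5.0: the smaller, control-adjusted effect
```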
 
Correct me if I'm wrong but isn't she dodging the question here? She's basically saying that she did not analyze the results of the second study because the tinnitus volume of the first group (the group that received the actual treatment in the first six weeks instead of the audio only) did not return to baseline after 12 weeks.

That still does not explain why the tinnitus volume of the people that received the active treatment after the first 12 weeks of audio only treatment actually seemed to increase instead of decrease?

[Attachment 55525: graph of results by group]
Yes, that's what she is saying.

Clinical trial designs can be broadly classified into parallel and cross-over designs. In the parallel design, one group of people gets treatment A and one gets treatment B (one of which may be a placebo), and the impact on the patients is compared between groups at the end of the trial. One problem with this type of design is that the people in the groups are different, and those differences, rather than the treatment, may be the cause of the observed effect. The cross-over design mitigates this by treating the first group with A, then B, and the second group with B, then A, and comparing the results for each group at the end of each treatment period. But in this kind of design we are assuming that treating a group with A first, for example, does not have an impact on the B treatment. To help ensure that it doesn't, there is usually a wash-out period between the treatments.

More details here:

Understanding Controlled Trials: Crossover Trials

Now, if the patients don't return to baseline after the first wash-out period, then the cross-over doesn't work, because we need treatment B to begin as if A hadn't happened, which is not the case since we start at a lower TFI/SL. The linked article states what to do in this case:

And this is exactly what Dr. Shore did*. Conveniently this also knocks out the less than flattering group 2 results.

I also think it is a bit strange that adding on two weeks to the treatment has changed the results so much from the first trial. Now we have a significant and enduring TFI/SL reduction in group 1 and little impact on group 2 whereas before we had reductions in TFI/SL in both groups which quickly dissipated after the end of treatment. Could it be that the extra two weeks has caused this change? I reckon the reason is the small sample size. Wouldn't it be great if tinnitus research were well funded and we had researchers all over the world doing large-scale trials to see if it does indeed work!

* Although it looks to me as though she has reported the TFI drop at 6 weeks from baseline (about -12 for the ITT) rather than the difference between the treatment TFI and the control TFI (about -5).
I guess that's my question too. I totally understand why they did not report weeks 13-18 based on study parameters/bias. However, inquiring minds would want to know why the treatment really didn't seem too effective for the second group in weeks 13-18. Makes me wonder if they needed a longer washout after the "sound only" phase.

In any event, I look forward to the release of the device and will be trying it out because, as with many of us, I'm desperate for any relief. My hope is in this device and Xenon Pharmaceuticals.
 
we need treatment B to begin as if A hadn't happened
I asked Dr. Shore specifically about that point and how the trial eligibility criteria seemed to contrast with or even contradict the length of the washout period. I've been quite disappointed she didn't respond; however, my thoughts on this remain:

Eligibility criteria required no previous tinnitus treatment for at least 12 weeks prior. I take this to mean that the team considers a person who has not undergone some form of tinnitus treatment for at least 12 weeks prior to a new treatment to be a "clean candidate" (for want of a better way of putting it).

If a washout period is required to take participants into treatment B as if treatment A "hadn't happened" (in other words, a reset of the eligibility criteria), then I have to wonder whether or not UMich, by making the washout only half the length of the treatment-free period required to get into the trial, violated their own rules. For consistency, shouldn't the washout period have been 12 weeks?

To be fair, whatever the issue is regarding this cross-over effect, if we scrub treatment 2 altogether, treatment 1 still looks promising. Having said that though (and I'm no expert on statistics or clinical trial design etc), I don't think it takes a wizard to see that with a reduced sample size, despite Dr. Shore's best efforts over the years, the final results have raised some pertinent questions.

Going forward I don't believe we're going to get any of the answers we want until these units are out on general release and the independent user-reviews start coming in. How long we'll need to wait for that to happen, however, is anybody's guess right now.
 
Yes, that's what she is saying.
* Although it looks to me as though she has reported the TFI drop at 6 weeks from baseline (about -12 for the ITT) rather than the difference between the treatment TFI and the control TFI (about -5).
There was a question about that:
Question said:
When determining the efficacy of a treatment, the paper compares the TFI, THI, loudness statistics with the baseline rather than the control. Wouldn't it make more sense to compare to the control? Maybe that wasn't statistically significant?
Dr. Shore said:
It is important to compare a subject's tinnitus with their own baseline as we want to see if their tinnitus got better. We compared both the active and sham treatments with the subject's baseline at the beginning of the treatment.
 
I'm not worried about the second group so much as for the 65% success rate. In my mind, if the criterion was somatosensory modulation, the success should have been a bit higher. It's not a bad percentage considering most people have a somatosensory component, but I expected the reduction to be in at least 75% of patients.
 
I'm not worried about the second group so much as for the 65% success rate. In my mind, if the criterion was somatosensory modulation, the success should have been a bit higher. It's not a bad percentage considering most people have a somatosensory component, but I expected the reduction to be in at least 75% of patients.
I've always thought that when it comes to medicine and/or treatments, some people may require more time for it to be effective. Everyone is different and everyone's tinnitus is different so some could potentially require more than 6 weeks of daily usage to see better results (or any at all if they didn't).

One thing I know is that I'm definitely using it for as long as I can until I get it to either go away completely or to where it was when it first came around (very very quiet). (y)
 
I'm not worried about the second group so much as for the 65% success rate. In my mind, if the criterion was somatosensory modulation, the success should have been a bit higher. It's not a bad percentage considering most people have a somatosensory component, but I expected the reduction to be in at least 75% of patients.
Maybe some need to use the device longer? It's also hard to control some variables - i.e., did they truly follow through with the 30 minutes consistently, or did they miss a few days each week?

I'm a little worried, only because I don't think my tinnitus is somatic (or if it is, it is very minimal/hard to change). I'm hopeful because of her Q&A, where she thinks this could work on non-somatic tinnitus as well, just not tested yet, due to the physiological mechanisms.
 
She's basically saying that she did not analyze the results of the second study because the tinnitus volume of the first group (the group that received the actual treatment in the first six weeks instead of the audio only) did not return to baseline after 12 weeks.

That still does not explain why the tinnitus volume of the people that received the active treatment after the first 12 weeks of audio only treatment actually seemed to increase instead of decrease?
I also think it is a bit strange that adding on two weeks to the treatment has changed the results so much from the first trial. Now we have a significant and enduring TFI/SL reduction in group 1 and little impact on group 2 whereas before we had reductions in TFI/SL in both groups which quickly dissipated after the end of treatment. Could it be that the extra two weeks has caused this change? I reckon the reason is the small sample size.
Here's a further response by Dr. Shore from today:
Dr. Shore said:
The extended effect with 6 weeks is definitely because of the extra treatment weeks and not because of sample size. This is demonstrated by timecourse - showing that the effect was greater and outlasted the effect from 4 weeks in the first study.

The sample size was sufficient to reach statistical significance even without period 2.

Think of the second period as reflecting the combined effect of the actual treatment and the sham. This is because there was no recovery from the active treatment during washout, so the treatment after the crossover (sham in this case), would be adding to (or subtracting from) the ongoing active treatment effects. Then you can't say what you are measuring (ie the response to active or sham or both). So it is not valid to analyze. The second period should thus be ignored - and not attempted to interpret or analyze. It should definitely not be interpreted as a less good result than the first period.

If there had been a 6 month washout there may have been recovery (but we don't know for sure). In any case a 6 month washout would probably result in a lot of dropouts because people don't want to be in a study for that long. When you run a clinical trial you also have to take into account such factors.

People should understand that each subject got one treatment (active or sham) in period one and then 'crossed over' to get the other treatment in the second period after the washout.

I hope this helps.
 
Dr. Shore said:
People should understand that each subject got one treatment (active or sham) in period one and then 'crossed over' to get the other treatment in the second period after the washout.

I hope this helps.
It does help. Perfectly clear now.
 
Dr. Shore said:
Think of the second period as reflecting the combined effect of the actual treatment and the sham. This is because there was no recovery from the active treatment during washout, so the treatment after the crossover (sham in this case), would be adding to (or subtracting from) the ongoing active treatment effects. Then you can't say what you are measuring (ie the response to active or sham or both). So it is not valid to analyze. The second period should thus be ignored - and not attempted to interpret or analyze. It should definitely not be interpreted as a less good result than the first period.
I'm confused. This is an explanation of why the second period of Group 1 (treatment then sham) shouldn't be analyzed. It doesn't appear to explain in any way why Group 2 (sham then treatment) didn't show improvement in its treatment period. So it doesn't seem to answer the question of:
...why the tinnitus volume of the people that received the active treatment after the first 12 weeks of audio only treatment actually seemed to increase instead of decrease?
 
Here's a further response by Dr. Shore from today:
Correct me if I'm wrong but isn't she basically saying exactly the same thing as before here? That we shouldn't try to analyze or interpret the second period because there was no recovery from the (active) treatment during washout?

The key issue here is that the active treatment was not effective 6 weeks after the control treatment. I mean, yes, the control group showed about a 5 dB improvement which was sustained during the washout period; however, you would expect a drop of another 5 dB in the tinnitus volume of that group in the second period during active treatment, right?

I mean, I understand that you cannot use the second treatment for statistical analysis or whatever, but that still doesn't make the results any less confusing, now does it?
 
How come no one is focusing on the two subjects who had their tinnitus go away?!
Yeah baby! At this point I believe that sustained use will eventually reset fusiform cells in the DCN to "homeostasis" and cure tinnitus altogether. It will just take longer depending on the overactivity that each one of us has.

I'm thinking silent Christmas 2025. I have always dreamed of a silent Christmas. Dr. Shore = Santa for adults with tinnitus.
 
How come no one is focusing on the two subjects who had their tinnitus go away?!
Maybe because it was two people? I don't know why we would focus on that. It's obviously a good thing for them.
It does help. Perfectly clear now.
Did she answer the question about the tinnitus increasing in the subjects getting the active treatment after the sham?
 
The key issue here is that the active treatment was not effective 6 weeks after the control treatment.
This is why I keep harping on about the contradiction in the trial eligibility criteria: no tinnitus treatment in the 12 weeks prior to entering phase 1, but only a 6-week washout prior to phase 2. By their own criteria, subjects going into the phase 2 active treatment would have actually received a prior tinnitus treatment, albeit audio only, within the last 12 weeks; 6 weeks prior, to be precise. It doesn't make sense.
 
Question about error bars:

My understanding is that the below point values (diamonds/squares) represent the average loudness reduction vs the individual's baseline (averaged across ITT participants for the given arm) at each lab measurement.

With that in mind, can someone with statistical/scientific expertise explain in a specific manner what exactly the error bars in this figure represent, and why the actual values (the diamonds or squares) always take the most limited value (i.e. the value that provides the least evidence of Active being successful vs Control)? Is this just because it's better to be cautious in claims of efficacy? Or am I misunderstanding the entire concept?

If anyone answers, please keep it very simple as my understanding of statistics is poor.

[Attached: figure showing mean loudness reduction (diamonds/squares) with error bars]
 
Question about error bars:

My understanding is that the below point values (diamonds/squares) represent the average loudness reduction vs the individual's baseline (averaged across ITT participants for the given arm) at each lab measurement.

With that in mind, can someone with statistical/scientific expertise explain in a specific manner what exactly the error bars in this figure represent, and why the actual values (the diamonds or squares) always take the most limited value (i.e. the value that provides the least evidence of Active being successful vs Control)? Is this just because it's better to be cautious in claims of efficacy? Or am I misunderstanding the entire concept?

If anyone answers, please keep it very simple as my understanding of statistics is poor.

[Attachment 55529: figure showing mean loudness reduction (diamonds/squares) with error bars]
Error bars can be calculated in a few ways, but I'll try to explain the most common.

The diamond / square represents the data point mean. The error bar represents that data point's standard deviation. A smaller error bar means more precise data, i.e. the data values are all centred around the same value (in this case dB reduction); large error bars mean a greater data spread (i.e. some people had a large reduction and others a small reduction).

In the above graph, error bars aren't massively helpful as we're not comparing two data sets together; the error bars are really just there to show the precision of the data, in my opinion.

Group 1 shows a somewhat smaller spread in the active treatment than Group 2, leading me to believe there weren't any 'super responders' in Group 1 and that this is the most reliable data set. There is a larger active treatment spread in Group 2 (e.g. some people might have had no response to the treatment due to incorrect usage), but the trend still isn't great for Group 2.

In a purely analytical sense, precision is one of the most important factors. That is, how repeatable is my result? Looking at precision allows you to identify if your results are skewed by super responders or non-responder outliers. In this case, Group 1 has more consistent data for the active treatment than Group 2.

They have chosen to only show the error bars in a specific direction for sham and active.
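For concreteness, here's a minimal sketch of how mean-and-standard-deviation error bars like the ones described above are typically computed and drawn. The data is invented, and some papers plot the standard error or a confidence interval instead of the SD; I don't know which convention this particular figure follows.

```python
# Sketch of mean +/- SD error bars, as described above. Data is invented.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(2)
weeks = np.array([2, 4, 6])

# Hypothetical per-participant loudness changes (dB) at three lab visits.
active = rng.normal(-8, 4, size=(20, 3))
sham = rng.normal(-3, 4, size=(20, 3))

for data, label in [(active, "active"), (sham, "sham")]:
    mean = data.mean(axis=0)       # the diamond/square: mean across subjects
    sd = data.std(axis=0, ddof=1)  # the error bar: sample standard deviation
    plt.errorbar(weeks, mean, yerr=sd, marker="D", capsize=4, label=label)

plt.xlabel("week")
plt.ylabel("loudness change vs baseline (dB)")
plt.legend()
plt.show()
```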
 
The diamond / square represents the data point mean. The error bar represents that data point's standard deviation. A smaller error bar means more precise data, i.e. the data values are all centred around the same value (in this case dB reduction); large error bars mean a greater data spread (i.e. some people had a large reduction and others a small reduction).
Thank you for the explanation, which the remainder of this post will assume is correct.
They have chosen to only show the error bars in a specific direction for sham and active.
Indeed, they are plotted exclusively in the direction that slants towards the Active treatment working (i.e. the outer edges of the error bars show greater efficacy for Active vs the given mean value, and the outer edges of the error bars show lesser efficacy for Control vs the given mean value).

This seems odd at face value. Although, I readily admit that I know neither the specific statistical methodology used in this case, nor how scientific papers generally plot error bars at large.
 
