For anyone who's interested, this was Dirk De Ridder's take on what we're discussing now (from the Tinnitus Talk Podcast transcription):
So from a practical point of view, what we have to do is bring all that information together, meaning the brain networks, the genes, and the environmental influences. These create patterns that we as humans cannot unravel, and that is why we will ultimately need artificial intelligence to help us with this problem. This is what we are currently trying to do: see whether artificial intelligence can help unravel the patterns, the links between genes and epigenetics (which is ultimately changed by the environment) that explain why there is such variety, and the imaging data, the brain networks, that we link to tinnitus. So the model becomes even more complicated, but it is addressable by ultimately using artificial intelligence to find the patterns that we as humans unfortunately cannot find ourselves.
This is where, in the future, we will need large groups of people to collaborate, meaning creating European projects and American-Asian projects that look at data from maybe 1,000-2,000 tinnitus patients who have their complete genome sequenced, their epigenome sequenced, their microbiome sequenced, and their EEG data recorded, and then apply the pattern recognition of artificial intelligence to ultimately tell us: for this patient, for example, these three signalling molecules in the brain are not optimal, so we can supplement them; and for this patient, these connections are not optimal, so we can rebuild or break those connections. And the beauty is that in the last couple of years, new tools have been developed with which we can try to rebuild connections and try to break connections. Before, because our model was wrong, we were just saying, okay, well, this part of the brain, the auditory cortex, is overactive, so we just have to suppress it. That's it. That's a very simple approach, but we did not have the technology to target different parts of the brain at the same time. Now the technology exists, but it is not yet used; it is still in an experimental phase, to see whether we are truly rebuilding connections or truly breaking connections.
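To make the "pattern recognition" idea concrete: one basic form it could take is clustering patients into subtypes from combined measurements, so each subtype can be treated differently. The sketch below is purely illustrative and is not De Ridder's actual method; the patient feature vectors are invented, and a naive k-means is standing in for whatever far richer models such a project would use.

```python
# Illustrative sketch only: grouping "patients" into subtypes with naive
# k-means over synthetic feature vectors (imagine a few genetic, microbiome,
# and EEG-derived measurements per patient). All numbers are made up.
import random

def kmeans(points, k, iters=50, seed=0):
    """Assign each point to one of k clusters; returns a list of cluster ids."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)          # pick k starting centres
    assign = [0] * len(points)
    for _ in range(iters):
        # Step 1: assign each point to its nearest centre (squared distance).
        for i, p in enumerate(points):
            assign[i] = min(
                range(k),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])),
            )
        # Step 2: move each centre to the mean of its assigned points.
        for c in range(k):
            members = [points[i] for i in range(len(points)) if assign[i] == c]
            if members:
                centers[c] = tuple(sum(col) / len(members) for col in zip(*members))
    return assign

# Two clearly separated synthetic "patient profiles" of three features each.
patients = [(0.1, 0.2, 0.1), (0.2, 0.1, 0.2), (0.9, 1.0, 0.8), (1.0, 0.9, 1.0)]
labels = kmeans(patients, k=2)
```

Here the first two patients land in one cluster and the last two in another; in a real project each subtype would then be mapped to a candidate intervention.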
Once we know we can, then it is just a matter of applying the changes that the artificial intelligence tells us are typical for tinnitus, basically the signature of tinnitus. So I do think that through technology, we are actually going to be able to individualize the treatments that we currently use. It will take a little while, five to ten more years, but that is just a matter of scaling up what we are currently doing. The theoretical models are currently being built and the technology is being developed simultaneously, so it is just a matter of bringing those things together. Everybody will have to collaborate, and we will need one or two people, call them team leaders or whatever you want, who also know the clinical component, to organize the collaboration. But the concept of me as a medical doctor, whether I'm a neurosurgeon, psychiatrist, or neurologist, or whether I'm an audiologist or psychologist, treating tinnitus patients by myself cannot work and will not work, and if we do not change, it will stay a problem forever. If we use the same technology and approach as they do in high-tech sports, then there is no reason why we should not be able to conquer this, probably within 5 to 10 years.
There are teams of people already doing this kind of stuff, but it's still in its infancy. As I said, in a decade or two, the way we keep our medical data, and the way doctors interact with it, will be very different. Blockchain technology is well suited to this kind of thing, as it makes data immutable, and the infrastructure is already being built to a high standard.
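The "immutable" property comes from hash chaining: each record stores a hash of the previous record, so changing anything earlier breaks every link after it. A minimal toy sketch of that idea (not any real medical-records system; record contents are invented):

```python
# Toy hash chain illustrating why blockchain-style records are tamper-evident.
# Each record stores the SHA-256 hash of the previous record; editing any
# earlier record invalidates all later hashes. Data below is invented.
import hashlib
import json

def _digest(data, prev_hash):
    payload = json.dumps({"data": data, "prev_hash": prev_hash}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def add_record(chain, data):
    """Append a record whose hash covers both its data and the previous hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    chain.append({"data": data, "prev_hash": prev_hash,
                  "hash": _digest(data, prev_hash)})

def verify(chain):
    """True only if every record still matches its stored hash and back-link."""
    prev_hash = "0" * 64
    for rec in chain:
        if rec["prev_hash"] != prev_hash or rec["hash"] != _digest(rec["data"], prev_hash):
            return False
        prev_hash = rec["hash"]
    return True

chain = []
add_record(chain, {"patient": "A", "note": "baseline EEG recorded"})
add_record(chain, {"patient": "A", "note": "genome sequenced"})
```

After building the chain, `verify(chain)` returns True; silently editing any earlier record (say, the first note) makes it return False, which is the tamper-evidence the post is pointing at. Real systems add distribution and consensus on top of this.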