Deafness Cure in 5 Years

Exactly! Every tinnitus treatment device in clinical trials is 80% effective, every drug treatment is only 5 years away, total tinnitus cure within 10 years. - It's been going on for decades and we are still not as close as some may believe.
erik, I feel your frustration. A lot of BS, money spent, and how many got cured versus how many people got rich?
 
Sometimes we focus too much of our attention on tinnitus and a future cure.

If you want an idea of how exciting the future will be in the next 10 years in general, listen to Ray Kurzweil on YouTube; it's unbelievable what's coming.
He takes the exponential growth of the last 10 years or so and extrapolates that into the future.
They all do that! 'Moore's law is an observation and projection of a historical trend and not a physical or natural law.'
 
Ok point well taken...

Nuclear fusion has been around for 50 years, cold fusion is still a pipe dream.
Regenerative medicine is going down, it's happening, as I write this. This is not blind optimism, it's the 21st century.

We will have hearing regeneration, but the earth may have pandemics or world wars, or global warming will screw everything up, except for the northern hemisphere and of course the rich people who can buy the appropriate technology to survive.

Regenerative medicine is on its way folks, less than five years for sure.

My three cents.
While they work on regenerative medicine they are also working on how to destroy the inner ear for war purposes! Sad world.
 
AI is already being used in finding many cures. Right now we have around half a million humans' DNA sequenced, and in about 3 years it will be around 2 billion people. What AI is great at doing is taking all this genetic information and comparing it to the phenotypes. Example: why do my sister, mother and wife have hearing loss much worse than mine, yet they don't have tinnitus while I do? There could be a genetic component to tinnitus.
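For anyone curious what "comparing genetic information to phenotypes" actually looks like in practice, here is a minimal sketch in Python with made-up counts and SciPy's Fisher's exact test. Treat it purely as an illustration; real genome-wide studies repeat this kind of test across millions of variants with strict multiple-testing corrections.

```python
# Hypothetical counts, purely for illustration: does carrying a given DNA
# variant go together with having tinnitus?
from scipy.stats import fisher_exact

#            has tinnitus | no tinnitus
table = [[120, 380],   # carries the variant
         [ 60, 440]]   # does not carry the variant

odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.2g}")
# A small p-value would only hint at a genetic component worth following up;
# it is nowhere near proof, let alone a cure.
```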

We're reading and writing genetic information faster than ever because of AI, and using CRISPR technology to edit these genes, which is leading to cures.

All tinnitus sufferers should have their DNA sequenced. It's basically free now.
Even if you can identify the tinnitus problem in the DNA, that does not mean an instant cure... don't think you can just change your DNA.
 
I read The Singularity Is Near back when it came out. I am also a fan of Kurzweil and consider myself a transhumanist.

You have no idea how much I wish I was born just 10 years later. We wouldn't have to suffer through this shitnitus.

By 2029, AGI, BCIs and FIVR will be out and we will all likely have moved on from tinnitus; most if not all diseases (including aging) will have been dealt with by then. By the 2030s we will begin transcending biology altogether (posthuman) by merging with Artificial General Intelligence.

But my god, I could have done without it. I think when I can, I'm going to wipe all memory of this condition.
That's funny since most AI scientists don't believe we will have an AGI for at least another 30-40 years, if ever! There actually are AI scientists who claim we won't ever be able to do it.

I'm working as a software developer and I can tell you there is a huge difference between developing an AI that can play a game of chess or count the number of rooftops in a picture from Google Maps, and an AGI. We can make an AI that can recognize the difference between a cat and an elephant in a photo, but if we feed that same AI a picture of a mouse it will still classify it as either a cat or an elephant. It's that stupid!
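To make the cat/elephant point concrete, here is a toy sketch (made-up numbers, not any real model): a classifier whose output layer only knows two classes has to split 100% of its probability between them, so a mouse photo still comes out as "cat" or "elephant".

```python
import numpy as np

CLASSES = ["cat", "elephant"]

def softmax(logits):
    z = np.exp(logits - np.max(logits))
    return z / z.sum()

# Pretend these are the raw scores the network produced for a mouse picture.
mouse_logits = np.array([1.3, 0.4])

for name, p in zip(CLASSES, softmax(mouse_logits)):
    print(f"{name}: {p:.1%}")
# The probabilities always sum to 100% over cat/elephant; "none of the
# above" is simply not representable unless the model is built for it.
```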

Designing an AI to do ONE task that is well defined and has fairly simple rules is one thing. Designing an AI that can learn to do anything is a completely different ballpark and we aren't even scratching the surface yet.

Now I'm a firm believer that we will get to an AGI at some point but 2030 is just really, really unrealistic. But I sure hope to be proven wrong here! ;)
 
They all do that! 'Moore's law is an observation and projection of a historical trend and not a physical or natural law.'

And Moore's law has been broken for the past half decade or so. The development of the 7 nm generation of processors has already taken more than 2 years, which breaks the cadence Moore's law predicts.

The problem is that we can't shrink transistors beyond a certain threshold, and the closer we get the harder it becomes. We aren't there yet, but shrinking transistors to anything less than 4-5 nm will probably be very difficult if not impossible; at that scale quantum tunneling becomes an issue.
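A back-of-the-envelope sketch of why the extrapolation runs out of road (idealized numbers, not a real process roadmap): if transistor density doubled every two years, the linear feature size would shrink by roughly 1/sqrt(2) per step, and a naive extrapolation from a 14 nm node lands in tunneling territory within a few generations.

```python
# Idealized Moore's-law extrapolation: halve the area (shrink the linear
# dimension by 1/sqrt(2)) every two years and see how fast we reach the
# few-nanometre range where quantum tunneling becomes a problem.
node_nm, year = 14.0, 2014
while node_nm > 3.0:
    year += 2
    node_nm /= 2 ** 0.5
    print(f"{year}: ~{node_nm:.1f} nm")
# The trend line says ~5 nm around 2020 and ~2.5 nm by the mid-2020s;
# physics says each of those steps is far harder than the projection implies.
```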
 
While they work on regenerative medicine they are also working on how to destroy the inner ear for war purposes! Sad world.
They already know how to destroy the inner ear and create weather patterns. HAARP. It's audio technology; there are bases all over the world that can send sound waves to affect the ionosphere and atmosphere. This technology has been around; it's old news.
Silvio Sabo is in the business and is very smart. So is Elon Musk, though many things Musk says contradict Silvio's sensible predictions.

I am a sci-fi guy.

The shit is here, it's going down.

It's gonna be really ugly, and war machines are gonna be hard to control when AI really kicks in.

Silvio, I don't know if you are practicing in Sweden now, but the shit that's going down in California, Massachusetts, and Japan is jaw dropping.

I think things grow exponentially...

Mind you, I love sci-fi, and see the world as looking like Blade Runner sooner rather than later.

I will defer to your experience and intelligence on this matter... perhaps in 15 years we can have a drink together, served by a robot and synthesized in a molecular transforming machine. A glass of mead would be nice.

PS. Deafness cure, less than five years... won't be perfect, but will be revolutionary.
Two years to be proven, five to hit the market in full force.
 
That's funny since most AI scientists don't believe we will have an AGI for at least another 30-40 years, if ever! There actually are AI scientists who claim we won't ever be able to do it.

I'm working as a software developer and I can tell you there is a huge difference between developing an AI that can play a game of chess or count the number of rooftops in a picture from Google Maps, and an AGI. We can make an AI that can recognize the difference between a cat and an elephant in a photo, but if we feed that same AI a picture of a mouse it will still classify it as either a cat or an elephant. It's that stupid!

Designing an AI to do ONE task that is well defined and has fairly simple rules is one thing. Designing an AI that can learn to do anything is a completely different ballpark and we aren't even scratching the surface yet.

Now I'm a firm believer that we will get to an AGI at some point but 2030 is just really, really unrealistic. But I sure hope to be proven wrong here! ;)
Do you know Demis Hassabis?
 
They already know how to destroy the inner ear and create weather patterns. HAARP. It's audio technology; there are bases all over the world that can send sound waves to affect the ionosphere and atmosphere. This technology has been around; it's old news.
Silvio Sabo is in the business and is very smart. So is Elon Musk, though many things Musk says contradict Silvio's sensible predictions.

I am a sci-fi guy.

The shit is here, it's going down.

It's gonna be really ugly, and war machines are gonna be hard to control when AI really kicks in.

Silvio, I don't know if you are practicing in Sweden now, but the shit that's going down in California, Massachusetts, and Japan is jaw dropping.

I think things grow exponentially...

Mind you, I love sci-fi, and see the world as looking like Blade Runner sooner rather than later.

I will defer to your experience and intelligence on this matter... perhaps in 15 years we can have a drink together, served by a robot and synthesized in a molecular transforming machine. A glass of mead would be nice.

PS. Deafness cure, less than five years... won't be perfect, but will be revolutionary.
Two years to be proven, five to hit the market in full force.

I'm a sci-fi guy too. I have written in several posts here before that I believe the ultimate solution will not be regenerative medicine but a very advanced cochlear implant that can be inserted into the inner ear to replace a damaged cochlea, not only perfectly mimicking the human cochlea but even enhancing one's hearing.

The technology for such a device is actually not that far away! The biggest problem is probably cost effectiveness.

One benefit would be the possibility to switch such a device on and off at will; another would be that it could never be damaged by loud noise the way a human inner ear clearly can, so one would never again have to worry about getting tinnitus or hearing loss.
 
That's funny since most AI scientists don't believe we will have an AGI for at least another 30-40 years, if ever! There actually are AI scientists who claim we won't ever be able to do it.

I'm working as a software developer and I can tell you there is a huge difference between developing an AI that can play a game of chess or count the number of rooftops in a picture from Google Maps, and an AGI. We can make an AI that can recognize the difference between a cat and an elephant in a photo, but if we feed that same AI a picture of a mouse it will still classify it as either a cat or an elephant. It's that stupid!

Designing an AI to do ONE task that is well defined and has fairly simple rules is one thing. Designing an AI that can learn to do anything is a completely different ballpark and we aren't even scratching the surface yet.

Now I'm a firm believer that we will get to an AGI at some point but 2030 is just really, really unrealistic. But I sure hope to be proven wrong here! ;)

Greg Brockman of OpenAI thinks AGI will be here within 5-7 years. And that's using a bottom-up, in-house approach.

That's also not true; the consensus in the AI community around 2017 was that people in the field think we will have it by 2030-2040. People in the field also thought we wouldn't solve Go until 2030; tell me, how did that prediction turn out? We pretty much solved Go in 2016, and then DOTA2 and Starcraft II a year and a half later. Not only did we solve Go, we also solved closer-to-lifelike video games that are many orders of magnitude more difficult to teach than Go, 12 years ahead of when they thought we would even have an AI that could beat a 'Go professional', not just the world champion. You wanna know what the estimates were in 1990? Over 2100 just to solve Go. Never for AGI. Trust me, Silvio, everyone eventually catches up to Kurzweil, albeit slowly.

The reason the community's estimates for when AGI arrives keep dropping is that they extrapolate from current progress, not exponential progress. DOTA2 and Starcraft II are also not narrow tasks; the reason they are using complex video games with imperfect information is that they want to train the AI in environments that are like real life.

No offence, but your software is highly inferior to what DeepMind, OpenAI and Baidu have. Neural networks can in fact generate video, pictures (along with 360-degree scenes), and stories. It's also irrelevant to developing AGI. Basically it's like saying 'see this Cleverbot we designed? It takes messages from people and then adds them to its database to converse with other people online, but it always chooses inappropriate messages; it's so stupid! AGI is far away because of this.' It's a myopic viewpoint, because most people working in one field don't think about other fields influencing their progress. And progress is exponential. Even on its own, the AI field is making tons of exponential progress.

Now after all that is said and done, I believe that if we used the bottom-up approach, we would get AGI by 2029. But I don't think that will happen first. Brain Computer Interfaces are going to combine the architecture of our brain with neural networks. This is all a part of the law of accelerating returns: technology is progressing faster and faster, and nowadays we are seeing multiple breakthroughs every few days, let alone months or years.

Even if the AI community should fail (they won't), BCIs will get the job done for them.
 
Greg Brockman of OpenAI thinks AGI will be here within 5-7 years. And that's using a bottom-up, in-house approach.

That's also not true; the consensus in the AI community around 2017 was that people in the field think we will have it by 2030-2040. People in the field also thought we wouldn't solve Go until 2030; tell me, how did that prediction turn out? We pretty much solved Go in 2016, and then DOTA2 and Starcraft II a year and a half later. Not only did we solve Go, we also solved closer-to-lifelike video games that are many orders of magnitude more difficult to teach than Go, 12 years ahead of when they thought we would even have an AI that could beat a 'Go professional', not just the world champion. You wanna know what the estimates were in 1990? Over 2100 just to solve Go. Never for AGI. Trust me, Silvio, everyone eventually catches up to Kurzweil, albeit slowly.

The reason the community's estimates for when AGI arrives keep dropping is that they extrapolate from current progress, not exponential progress. DOTA2 and Starcraft II are also not narrow tasks; the reason they are using complex video games with imperfect information is that they want to train the AI in environments that are like real life.

No offence, but your software is highly inferior to what DeepMind, OpenAI and Baidu have. Neural networks can in fact generate video, pictures (along with 360-degree scenes), and stories. It's also irrelevant to developing AGI. Basically it's like saying 'see this Cleverbot we designed? It takes messages from people and then adds them to its database to converse with other people online, but it always chooses inappropriate messages; it's so stupid! AGI is far away because of this.' It's a myopic viewpoint, because most people working in one field don't think about other fields influencing their progress. And progress is exponential. Even on its own, the AI field is making tons of exponential progress.

Now after all that is said and done, I believe that if we used the bottom-up approach, we would get AGI by 2029. But I don't think that will happen first. Brain Computer Interfaces are going to combine the architecture of our brain with neural networks. This is all a part of the law of accelerating returns: technology is progressing faster and faster, and nowadays we are seeing multiple breakthroughs every few days, let alone months or years.

Even if the AI community should fail (they won't), BCIs will get the job done for them.

There is no consensus on the matter among AI researchers. Some believe 2030 and some say never. I never said ALL of them said not by 2030. But then again, in the '60s they also started to think about this matter and feared that computers would surpass humans in a matter of years. And what they worked with back then were computers that can't compare to a $200 phone of today.

All of those you mentioned are still narrow tasks. A computer game might be a rather complex task, but it's nothing compared to the real world. A computer game still has some well-defined and usually basic rules. A marine in Starcraft has one weapon and can only move in a predefined manner. It can't fly! And as with many computer games, there is the problem of reaction time. Starcraft is a strategy game, but it's a game that also requires quick reflexes and reactions. There's actually a term for this in Starcraft: "actions per minute". And a computer has a huge advantage over a human in this regard.

We can make an AI that can beat the world champion in Go. But we can't make one that can beat the world champion in Go AND beat the world champion in Chess AND beat the champion in Starcraft II AND.... you get the point. And that's what an AGI would be.

The problem is also the hardware. A human brain is inherently complex and we are nowhere near building anything close to it. An AGI would probably require hardware that we currently don't have, and the limitations of the silicon-based hardware we currently have could be a hindrance.

The problem with an AGI is also not simply to program an AI that can do everything you throw at it. We still haven't got a clue what we would do with a superintelligent AI even if we made one tomorrow. How would we contain it? Can we contain it? Would it be conscious? Would it want to harm us?

An interesting book on the matter is Life 3.0 by Max Tegmark, a co-founder of the Future of Life Institute, which is geared towards AI safety research. It's a really good read.
 
I'm a sci-fi guy too. I have written in several posts here before that I believe the ultimate solution will not be regenerative medicine but a very advanced cochlear implant that can be inserted into the inner ear to replace a damaged cochlea, not only perfectly mimicking the human cochlea but even enhancing one's hearing.

The technology for such a device is actually not that far away! The biggest problem is probably cost effectiveness.

One benefit would be the possibility to switch such a device on and off at will; another would be that it could never be damaged by loud noise the way a human inner ear clearly can, so one would never again have to worry about getting tinnitus or hearing loss.
I agree my friend, that will be the ultimate bionic, cyber implant.
I know your posts; I'm a fan, Silvio Sabo. I always enjoy and look forward to your insights. As we say in Boston, you kick ass... that's a good thing.

Sincerely, Daniel
 
All of those you mentioned are still narrow tasks. A computer game might be a rather complex task, but it's nothing compared to the real world. A computer game still has some well-defined and usually basic rules.

We can make an AI that can beat the world champion in Go. But we can't make one that can beat the world champion in Go AND beat the world champion in Chess AND beat the champion in Starcraft II AND.... you get the point. And that's what an AGI would be.

The problem with an AGI is not simply to program an AI that can do everything you throw at it. We still haven't got a clue what we would do with a superintelligent AI even if we made one tomorrow. How would we contain it? Can we contain it? Would it be conscious? Would it want to harm us?

An interesting book on the matter is Life 3.0 by Max Tegmark, a co-founder of the Future of Life Institute, which is geared towards AI safety research. It's a really good read.

What you're talking about is transfer learning. This has more or less to do with the raw power of neural networks. Our computation is still going to get a ton better as time goes on, and wider tasks are beginning to be easier to tackle. Reality can be simulated (Second Life), and neural networks that work in their own domain (OpenAI Five, AlphaStar) already multitask. They need to learn where to put wards, they need to know when is the right time to kill Roshan, they need to learn the spatial separation between their own and their opponents' characters and creeps, and they need to know when to pop their skills at the precise time. Honestly, I wouldn't say that's too far off from natural language understanding.
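For readers who haven't met the term, here is roughly what transfer learning looks like in code. This is a generic sketch assuming a recent PyTorch/torchvision install, not how DeepMind or OpenAI actually build their systems: you keep a network trained on one task and retrain only a small new head for another.

```python
import torch
import torch.nn as nn
from torchvision import models

# Reuse an ImageNet-pretrained backbone (assumes torchvision >= 0.13).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pretrained feature extractor...
for param in backbone.parameters():
    param.requires_grad = False

# ...and bolt on a fresh classifier head for, say, 3 new classes.
backbone.fc = nn.Linear(backbone.fc.in_features, 3)

# Only the new head's parameters get updated during fine-tuning.
optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
```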

This is TREMENDOUSLY more difficult than Go was. And people were baffled when that was solved. And it only took a year and a half, as opposed to chess, which took over 50 years to solve (via brute-force computation too, mind you). We are getting closer and closer to the complexity of the real world every day. Once it learns natural language, we will basically have AGI.

DeepMind's AlphaZero has also learned transfer learning to some extent; it learned how to play Go, chess and shogi with no instructions, just left to its own devices. And it was able to beat Stockfish after just days of training; newer versions only took hours of training to beat Stockfish.

The first AGI won't be programmed; that is nigh impossible for any human or team of humans to do. It will teach itself, just like today's neural networks.

You still haven't factored in Brain Computer Interfaces either. We have a perfect model for a general learning algorithm already, it's the human brain. All you have to do is reverse engineer it with BCIs and combine it with more advanced computation and deep learning.
 
What you're talking about is transfer learning. This has more or less to do with the raw power of neural networks. Our computation is still going to get a ton better as time goes on, and wider tasks are beginning to be easier to tackle. Reality can be simulated (Second Life), and neural networks that work in their own domain (OpenAI Five, AlphaStar) already multitask. They need to learn where to put wards, they need to know when is the right time to kill Roshan, they need to learn the spatial separation between their own and their opponents' characters and creeps, and they need to know when to pop their skills at the precise time. Honestly, I wouldn't say that's too far off from natural language understanding.

This is TREMENDOUSLY more difficult than Go was. And people were baffled when that was solved. And it only took a year and a half, as opposed to chess, which took over 50 years to solve (via brute-force computation too, mind you). We are getting closer and closer to the complexity of the real world every day. Once it learns natural language, we will basically have AGI.

DeepMind's AlphaZero has also learned transfer learning to some extent; it learned how to play Go, chess and shogi with no instructions, just left to its own devices. And it was able to beat Stockfish after just days of training; newer versions only took hours of training to beat Stockfish.

The first AGI won't be programmed; that is nigh impossible for any human or team of humans to do. It will teach itself, just like today's neural networks.

You still haven't factored in Brain Computer Interfaces either. We have a perfect model for a general learning algorithm already, it's the human brain. All you have to do is reverse engineer it with BCIs and combine it with future deep learning.

No, the first AGI will probably not be programmed by humans only. It would probably have to be an AI that we create that can improve its own code and iteratively get better over time. And we're just not there yet. We are just scratching the surface.

And STILL, all those you mention have been AIs that were designed to do ONE narrow task. I know you might think it's very complex, but all of those games were created to be played by humans and to be easy to pick up and start playing. They have basic rules for how pieces can be moved, and other limitations. It is also a very simple task, the task being: "beat this opponent given these rules". That is something a toddler can understand. But being able to think and reason is a completely different thing.

It's funny that you mention natural language as I'm currently working on search engines which are becoming better and better at natural language. But we're not there yet! And that still is something that is relatively easy to do compared to an AGI, which is a general artificial intelligence capable of learning ANYTHING.

And in the matter of neural networks. Well, to create a neural network that is as complex as the human brain is just beyond anything we are close to at the moment. And that still doesn't solve the problem of it becoming conscious when we eventually get there.
 
No, the first AGI will probably not be programmed by humans only. It would probably have to be an AI that we create that can improve its own code and iteratively get better over time. And we're just not there yet. We are just scratching the surface.

And STILL, all those you mention have been AIs that were designed to do ONE narrow task. I know you might think it's very complex, but all of those games were created to be played by humans and to be easy to pick up and start playing. They have basic rules for how pieces can be moved, and other limitations. It is also a very simple task, the task being: "beat this opponent given these rules". That is something a toddler can understand. But being able to think and reason is a completely different thing.

It's funny that you mention natural language as I'm currently working on search engines which are becoming better and better at natural language. But we're not there yet! And that still is something that is relatively easy to do compared to an AGI, which is a general artificial intelligence capable of learning ANYTHING.

And in the matter of neural networks. Well, to create a neural network that is as complex as the human brain is just beyond anything we are close to at the moment. And that still doesn't solve the problem of it becoming conscious when we eventually get there.

It won't. The people who made AlphaGo don't even fully understand how it beat Lee Sedol. You can put in the base parameters, but all the learning will be done on its own.

They're getting less and less narrow from what I can see. Also, you don't have to create a world as complex as ours in simulation to teach an AI the fundamentals of success in this world. First it was Tic-Tac-Toe, then chess, then Go, then DOTA2 and then Starcraft II; it will soon be something like Second Life or an RPG. And when it learns how to function in those kinds of spaces, we basically have AGI. As I said, language understanding will be key.

You don't have to create something as complex as the human brain to get AGI. That's like saying we need to mimic how birds fly when we solved flying via a different methodology entirely. Two different systems can achieve the same end goal via different means.

We will understand the brain too when we have BCIs. So that's a moot point.
 
It won't. The people who made AlphaGo don't even fully understand how it beat Lee Sedol. You can put in the base parameters, but all the learning will be done on its own.

They're getting less and less narrow from what I can see. Also, you don't have to create a world as complex as ours in simulation to teach an AI the fundamentals of success in this world. First it was Tic-Tac-Toe, then chess, then Go, then DOTA2 and then Starcraft II; it will soon be something like Second Life or an RPG. And when it learns how to function in those kinds of spaces, we basically have AGI. As I said, language understanding will be key.

You don't have to create something as complex as the human brain to get AGI. That's like saying we need to mimic how birds fly when we solved flying via a different methodology entirely. Two different systems can achieve the same end goal via different means.

We will understand the brain too when we have BCIs. So that's a moot point.

Sure. But will we do all that by 2030? It might sound far away, but 10 years is not all that long a time.

And saying stuff like "if we solve natural language we basically have an AGI" is just not correct in the slightest. Also don't confuse machine learning with AI.

Yes, we might be able to build a chatbot that can tell between slight differences in language and be able to understand what you "mean". But it's still ONE narrow task and not that hard (compared to an AGI); this can pretty much be achieved with scripting. Which is so far away from something able to think, reason and learn, all at the same time!
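To illustrate what "achieved with scripting" means here, a deliberately crude sketch (everything below is made up): a keyword-to-canned-response bot can look like it understands a sentence while doing no learning or reasoning at all.

```python
import re

RULES = [
    (r"\b(ringing|tinnitus)\b", "Tell me more about your tinnitus."),
    (r"\b(cure|treatment)\b",   "There is no proven cure yet, sadly."),
    (r"\b(hello|hi)\b",         "Hi there!"),
]

def reply(message: str) -> str:
    # Walk the script top to bottom and return the first canned answer.
    for pattern, canned_response in RULES:
        if re.search(pattern, message.lower()):
            return canned_response
    return "Sorry, I don't understand."  # anything off-script fails

print(reply("Hi, is there a cure for the ringing in my ears?"))
```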
 
There is no consensus on the matter among AI researchers. Some believe 2030 and some say never. I never said ALL of them said not by 2030. But then again, in the '60s they also started to think about this matter and feared that computers would surpass humans in a matter of years. And what they worked with back then were computers that can't compare to a $200 phone of today.

All of those you mentioned are still narrow tasks. A computer game might be a rather complex task, but it's nothing compared to the real world. A computer game still has some well-defined and usually basic rules. A marine in Starcraft has one weapon and can only move in a predefined manner. It can't fly! And as with many computer games, there is the problem of reaction time. Starcraft is a strategy game, but it's a game that also requires quick reflexes and reactions. There's actually a term for this in Starcraft: "actions per minute". And a computer has a huge advantage over a human in this regard.

We can make an AI that can beat the world champion in Go. But we can't make one that can beat the world champion in Go AND beat the world champion in Chess AND beat the champion in Starcraft II AND.... you get the point. And that's what an AGI would be.

The problem is also the hardware. A human brain is inherently complex and we are nowhere near building anything close to it. An AGI would probably require hardware that we currently don't have, and the limitations of the silicon-based hardware we currently have could be a hindrance.

The problem with an AGI is also not simply to program an AI that can do everything you throw at it. We still haven't got a clue what we would do with a superintelligent AI even if we made one tomorrow. How would we contain it? Can we contain it? Would it be conscious? Would it want to harm us?

An interesting book on the matter is Life 3.0 by Max Tegmark, a co-founder of the Future of Life Institute, which is geared towards AI safety research. It's a really good read.
But they do have an AI that will beat any human in any 2-player game; it's called AlphaZero...
 
PS. Deafness cure, less than five years... won't be perfect, but will be revolutionary.
Two years to be proven, five to hit the market in full force.
There is a kind of cure today.

'A cochlear implant is a surgically implanted neuroprosthetic device that provides a sense of sound to a person with moderate to profound sensorineural hearing loss. Cochlear implants bypass the normal acoustic hearing process, instead replacing it with electric hearing.'
 
Even if you can identify the tinnitus problem in the DNA, that does not mean an instant cure... don't think you can just change your DNA.
Yes, you can edit or change DNA. They have been doing it for a while now. CRISPR was discovered about 7 years ago. Probably the greatest discovery of this decade. It literally scans your entire 3 billion base pairs and performs an edit.
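Strictly as a mental model (the sequences below are invented and hugely simplified): Cas9 is steered by a roughly 20-base guide RNA and cuts where that sequence sits next to an "NGG" PAM motif, so the "scanning" step is essentially a string search over the genome, and the repair that follows is what introduces the actual edit.

```python
import re

genome = "TTACGGATCCGATTGCAGCTAGGCTAACGTTGGCCTAGGAGT"  # toy "genome"
guide  = "GATTGCAGCTAGGCTAACGT"                        # 20-base target

for m in re.finditer(guide, genome):
    pam = genome[m.end():m.end() + 3]      # the 3 bases right after the target
    if re.fullmatch("[ACGT]GG", pam):      # SpCas9 needs an NGG PAM
        # Cas9 cuts ~3 bp upstream of the PAM; cellular repair then makes
        # the change (or pastes in a supplied template).
        print(f"target at {m.start()}, PAM {pam}, cut near position {m.end() - 3}")
```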

Could be a single gene or more. Human trials have just begun this year using CRISPR ex vivo for cancer.
 
Yes, you can edit or change DNA. They have been doing it for a while now. CRISPR was discovered about 7 years ago. Probably the greatest discovery of this decade. It literally scans your entire 3 billion base pairs and performs an edit.

Could be a single gene or more. Human trials have just begun this year using CRISPR ex vivo for cancer.
Wow.
 
Since I'm on the subject of CRISPR, here are some more amazing things CRISPR is doing.

Because of CRISPR we now have the means to gene-drive any gene through any species with a short life cycle. An example would be the mosquito: if we wanted to kill off all the mosquitoes and save half a million people from malaria or another mosquito-borne illness, we can do that now. It would take about 6 or 7 generations to spread the gene through the population in a given area. This can be done with plants as well.
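A rough sketch of why so few generations are needed (heavily idealized: random mating, no fitness cost, near-perfect "homing" in heterozygotes): because the drive copies itself onto the other chromosome, its frequency snowballs instead of staying Mendelian.

```python
def next_freq(p, conversion=1.0):
    """Drive-allele frequency after one generation of an idealized homing drive.

    Heterozygotes pass the drive on with probability (1 + conversion) / 2
    instead of the Mendelian 1/2, which is what makes it spread so fast.
    """
    q = 1.0 - p
    return p * p + 2 * p * q * (0.5 + 0.5 * conversion)

p = 0.05  # release drive-carrying mosquitoes at 5% of the local population
for gen in range(1, 8):
    p = next_freq(p)
    print(f"generation {gen}: drive allele at {p:.0%}")
# Under these toy assumptions the drive is essentially everywhere within
# 6-7 generations, which is the timescale mentioned above.
```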

CRISPR is also helping people who need organ transplants. Millions of people die waiting on organ transplant lists...
Pigs have always been a viable option because their organs are very similar to ours, but the problem was that pigs carried retroviruses that could spread to humans, so the idea was abandoned.
Thanks to CRISPR, all 32 of those retroviruses were removed, making pig organs safe for humans and hopefully cancer-resistant in the near future.
They have been breeding these pigs for a year or more; they are now being tested in animals, and if all goes well they can be used in humans in 1 to 2 years.
 
