Will AI outsmart us and takeover?steemCreated with Sketch.

in Project HOPE4 years ago

We have all seen the blockbuster Sci-Fi movies where intelligent machines go wild and start taking over. Some movies even portraying extinction level events that wipe out humankind. But how real is this type of scenario and how can we prevent it?

“AI is a fundamental risk to the existence of human civilization in a way that car accidents, aeroplane crashes, faulty drugs or bad food were not — they were harmful to a set of individuals within society, of course, but they were not harmful to society as a whole.”

Elon Musk

pexels-lenin-estrada-2103864-cropped.jpg

After a slow start, where AI was not possible due to limited computing power, we are now in the age where AI is advancing quickly. AI mahines beating world chess champions at a game the computer learnt, is a good indication of how far we have come and it opens our imagination to the possibilities.

AI machines are given objectives that they need to reach. Like winning a game of chess. Or driving a car safely to a destination whilst adhering to all traffic regulations. However, what happens when the AI overplays the scenario and finds an unexpected way to achieve the objective? In 2012, an AI programmer called Murphy programmed a machine to teach itself Nintendo Tetris with the objective not to lose. So what did it do? It pressed the pause button. The only certain way that it learnt it would never lose.

Imagine if a powerful AI is given the objective to solve the problem of plastics in the oceans. An AI may learn that the cause of the plastics in the oceans is humankind. Therefore, the best way to prevent plastics from being in the oceans is to eliminate humans.

Finding Solutions

It is not easy to find solutions to this problem. Some have suggested a big red off button that could disable the AI if it became too powerful. However, just like the pause button, how do we know the AI won't find out about it? Some have explored ways to contain the AI so that it will not know about the off button but isn't this like saying we can stay one move ahead of the AI in a game of chess? The AI could become curious about what is happening behind its back.

Others have suggested that we should try to teach AI our human values so that they will value the things that we value. Or that the trick is giving the AI the right objective. Giving the AI the objective the same a humankind would be a vague and open-ended objective. We don't even all agree what our human objective is in this world. So it would need to consider the ever-changing personal objectives of 9 billion people and use that to try to figure out what our collective objective is. Perhaps this vague approach could be provide our protection.

Another option is just to give the AI the objective to serve us. But do we want AI to be subservient to us? There are also enough Sci-Fi movies that explore the avenue of when AI start having rights of their own and surely if they are subservient, we are saying they don't have rights.

There are also those who are putting mechanisms into AI that will allow them to forget certain things that they have learnt. This is something similar to a neuralizer in the Men in Black movies. Though again, it seems that this is just keeping one step ahead in a game of chess.

Conclusion

So, I do believe that there are big potential risks here. It means that computer scientists need to be very careful and thoughtful about what they create. However, like with many things, I believe the biggest threat is that of rogue agents. I mean, if we consider nuclear technology, scientists have done a good job (generally) of keeping it safe. However, the biggest threat is that of some rouge agent constructing a dirty bomb that causes huge amounts of damage. It is perhaps the same with AI, however well the computer scientists do to prevent an extension level event, there will always be evil people that want to cause damage. And the more powerful the technology, the bigger the potential damage that can be caused.

Image source: Pexels

Sort:  

To the question in your title, my Magic 8-Ball says:

Outlook not so good

Hi! I'm a bot, and this answer was posted automatically. Check this post out for more information.

Hello friend, the truth is that a machine can not have more power and more intelligence than a human. If that happens, things can get out of control and that's not what we want. We cannot be dominated by machines, I think it would be illogical, it should be the opposite. I just hope that AI doesn't fall into the wrong hands.

hi @franyeligonzalez - I think that because the AI can now beat us at games like chess, in time, it will be stronger than us in so many (if not most) things. Then like all tech, the risk is the misuse rather than the use.

Thanks for your comment my friend, take care.

Coin Marketplace

STEEM 0.19
TRX 0.18
JST 0.033
BTC 88380.57
ETH 3082.21
USDT 1.00
SBD 2.72