Why world leaders are becoming afraid of AI - What if the AIs around the world became self-aware? (new featured author @infinitor)
It is one of the most heated and most discussed topics in today's technological world, and rightly so: with the recent advances in technology, it is necessary to discuss the reliability of AI. There is an increasing demand in the market for smart AIs capable of handling complex problems all on their own. AIs that can not only cater to the demands of their owners and corporations, but also grow on their own, rendering human input unnecessary. They figure out how to solve the next problem by learning from the experience of solving previous ones.
No matter how hard the problems get, the AI would still be able to sort them out, because it has learned. It has evolved.
Problem solving isn't the only thing AIs are used for in today's world. From playing Go to being an essential part of video games, artificial intelligence is part of our everyday lives even if we don't realize it. To some it seems like fantasy and fiction, but for others, it is a clear and present danger.
Actually, AIs are the ones that have helped us progress so much in the technological field. In fact, AI is the key to the future; it is the future of technology. Technology exists to help human beings in their everyday lives, personal as well as professional. Whether it is to get from point A to point B, or just to talk to someone at point B while you're at point A, its sole purpose is to help human beings; it always has been and always will be. So, why is it not possible for humans to create an artificial intelligence that could not only answer a human's questions but also carry out his or her wishes? I can definitely see that happening, not only because that's where we are heading technologically but also because that is our future: to become gods.
Gods and Men (AI)
Let's say for a second that you created a live, functional, and fully self-aware AI. "It" performs complex calculations in a matter of seconds and can do virtually anything that you ask "it" to do. The AI can do what the average human cannot. It can predict the trajectory of a projectile, or the amount of force it would take to break a person's jaw, all before actually performing the action. This makes the AI not only extremely precise and accurate, but also pretty terrifying. A war machine.
What do you do with that AI? Well, you can start with the small crimes: win the lottery, impress a girl, et cetera. But after a while, this just doesn't satisfy you. It's like heroin. When you hit it the first time, there's nothing like it. You keep going back to it, taking the same quantity and getting high. A stage comes when the same quantity just doesn't do it for you; you don't get the kick out of it that you once did. So you increase the quantity and repeat. You keep doing this until you finally reach a point where snorting it just doesn't do. You look for new ways, new methods, new combinations that would give you the kick you used to get. That addiction and the addiction to power are one and the same.
When the lesser crimes don't do it for you, you move up to the big leagues and start using the AI purely for your own self-interest. It's natural; you are only human. It's something that makes us human.
He, who never craved power, was no man.
-Me, just now.
So, what can you do with all that intellectual power and muscle at your disposal? Well, first off, you may decide to teach someone a lesson, someone against whom you hold a grudge. He could be a bully, or an asshole. This gives you a thrill, the excitement that something like that brings with it. Eventually that stops satisfying you. The cycle continues, on and on. It's limitless. You get tired of one AI, you get two more, and eventually you build an army. You take over the world, become the sole monarch, and then what? What happens when you reach that position? Or, more accurately,
What happens when someone gets the power and money that they have sacrificed so much for?
No one knows. And I don't think that anyone will ever know.
A creator wants to control the thing he has created, and it's the same with us and AI. We want our creations, the AIs, to do what we want. That could be positive or negative, but knowing the history of humanity, I can and will say that it will almost certainly be used for negative purposes: for our own ends instead of the collective goals of human beings.
What if the AIs become self-aware?
It can result in one of two things: a cure for cancer, the end of world hunger, and, in the end, everyone living together in harmony. Or it can destroy mankind and everything mankind has built over thousands of years. The thing that depresses me is how infinitely more plausible the latter is than the former.
We have come so far, fought so many wars, and shed so much blood that living together in harmony sounds like the plot of some fantasy novel. And a badly written one at that, because even in fantasies, the thought of everyone happily co-existing is still just a fantasy.
Something like Skynet would be a bit far-fetched, but not so much when you really think about it. The US has poured enormous sums into surveillance, especially post-9/11. It fields one of the largest armies and navies in the world and spends a huge portion of its budget on the military. How can anyone say that it won't go the extra mile and add an army of killer robots to its arsenal? I know I could put that better, but fundamentally that's all it is: an army of killer robots.
Even if we don't count an army of killer robots as part of the US military, how much artificial intelligence is already used in the military? How much in banks and in society? How much in the world as a whole? It's a pretty safe bet that it is used a lot. Imagine what will happen in the next 10-20 years. At the rate technology is progressing, with a new breakthrough almost every other day, I think AIs will be a major part of our daily lives. That's not to say they aren't right now. They are, but not to the extent that they will be in the coming decades.
The problems with creating a superintelligent AI range from the ethical to the outright dangerous. It is not the job of humanity to play god; that only results in destruction, devastation, and despair. Take HAL 9000, from the film 2001: A Space Odyssey. In the film, HAL 9000 controls various functions aboard a spaceship, but it begins to malfunction in subtle ways. When the astronauts decide to repair it, it decides to kill them in order to ensure its survival and conceal its malfunctions from Earth. This is a fine example of an AI doing everything it can to ensure its survival, going so far as to kill humans. This is the big thing that I'm afraid of. A superintelligent AI can kill if used for dangerous purposes, and it can kill given enough time even if its initial purpose was the betterment of mankind. It might figure out how much humans have polluted the Earth and consumed its resources, and decide that the final solution is to wipe out the human race altogether so that the Earth is preserved for another cycle of organisms. So, should AIs not be made?
I'm not saying that they shouldn't be made; I'm saying that there must be something that gives us the final command over the actions of such an AI. A sort of kill switch. But that creates an ethical problem. How can one control an intelligence capable of thinking on its own and making its own decisions? What happens to free will? Such an AI would be a lot like a human. Why should a human control another "human"?
Thankfully, we don't have a Skynet on our hands just yet. There's still time. What we need are rules and regulations for the creation, construction, and use of AI that require each new AI to undergo a series of tests to determine what kind of danger, if any, it poses to us. Tests that even the AI cannot lie its way through. We humans have a long-running obsession with violence and power, and our inventions, the AIs, will inherit it if we don't put the necessary rules in place.
I think it is extremely important to make sure the AI is programmed to do what's in the best interests of mankind, and to ensure that it keeps doing so in the long run. An AI might conclude that what's best for humanity is to wipe it out entirely, which is why non-violence constraints are needed. A superintelligent AI will ruthlessly optimize the function it was created to perform. It will stop at nothing to achieve its goals, and with the passage of time it will get better not only at optimizing that function but also at fulfilling its purpose. It will learn and adapt to solve problems and achieve its goals. And if at some point those goals diverge from our own, it will become the greatest enemy mankind has ever known, aside from mankind itself. We'd be on the wrong end of the gun. Professor Stephen Hawking puts it brilliantly:
The real risk with AI isn’t malice but competence. A superintelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we’re in trouble. You’re probably not an evil ant-hater who steps on ants out of malice, but if you’re in charge of a hydroelectric green energy project and there’s an anthill in the region to be flooded, too bad for the ants. Let’s not place humanity in the position of those ants. Please encourage your students to think not only about how to create AI, but also about how to ensure its beneficial use.
We tend to put every resource at our disposal toward a single purpose: gaining money and power. These are the only two things man has ever shed the blood of his brother over. He has sacrificed his morals, honor, and ethics for them. The crux of humanity is money and power.
These two things have ruined us throughout our history and will continue to make us self-destruct in the years to come. The only thing to be done is to do your part and raise awareness of this important issue. You cannot make everyone hear your voice, but the least you can do is actually raise it.
@gavvet features authors to promote new authors and a diversity of content. All STEEM Dollars for this post go to the featured author
These are good first thoughts. If you're interested in more, the AI explosion was covered well in this post by @ai-guy. That post in turn borrows largely from Nick Bostrom's influential book Superintelligence, which is a thorough and excellent overview of the potential dangers.
And I can't help but mention that I will have an article responding to Bostrom's arguments in Robot Ethics 2.0 (the sequel to this book), due out from Oxford University Press next year.
Finally, it's worth noting that groups like Google DeepMind, Eliezer Yudkowsky's Machine Intelligence Research Institute, Bostrom's Future of Humanity Institute, and Elon Musk's OpenAI are working on this problem and making pretty good progress. Maybe the best recent news is a framework for a kill switch for reinforcement-learning AI.
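A rough sketch of why a kill switch interacts with learning at all (a hypothetical toy, not the actual framework from that news): in off-policy Q-learning, the update target uses the best available next action rather than the action actually executed. A forced "safe" action during an interruption therefore never enters the target, so the agent isn't trained to value resisting interruptions. All names and numbers below are illustrative.

```python
from collections import defaultdict

def q_update(Q, s, a, r, s_next, n_actions, alpha=0.5, gamma=0.9):
    # Off-policy Q-learning: the target uses the best next action,
    # not the action that will actually be executed. A forced "safe"
    # action during an interruption never appears in this target,
    # so interruptions don't bias the learned values.
    best_next = max(Q[(s_next, b)] for b in range(n_actions))
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])

Q = defaultdict(float)  # Q-values default to 0.0
q_update(Q, s=0, a=1, r=1.0, s_next=0, n_actions=2)
# Q[(0, 1)] is now 0.5 * (1.0 + 0.9 * 0.0 - 0.0) = 0.5
```

The point of the design is that learning proceeds as if interruptions never happened, which is one way to keep a kill switch from becoming something the agent wants to remove.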
"The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else."
Eliezer Yudkowsky, Machine Intelligence Research Institute, 2008
Wow. Pretty great quote! Sad that I didn't find out about it earlier.
Strongly recommend Nick Bostrom's book "Superintelligence" and his website nickbostrom.com if you want to read more on this and similar topics.
This stuff is being taken seriously!
The biggest problem with AI is that it will never be able to reach a state of self-consciousness, and therefore will never be able to make contextual and logical choices in some very specific situations.
There is no need for consciousness (the perception/qualia of things being observed). Purely logical conclusions about oneself, and actions taken based on them, qualify as self-conscious behavior. The self is just another object in the environment that can be observed.
It is hard to say at this point. Look at how much we've accomplished technologically in the past 10 years. Who can say that we can't accomplish three times that in the coming years? I think that superintelligent AIs are bound to emerge at some point in the near future. I look forward to seeing how the tech community deals with it.
I'm waiting for my favorite AI to come to life :P That would probably be the only way I could believe in a future where AIs are capable of self-consciousness.
RoboCop is a pretty nice movie. Watch Ex Machina; not much on the action side, but it presents some excellent thoughts on AI.
Ex Machina - exciting movie! With a really good storyline and a wonderful finale!
Why think this is true? If "self-consciousness" means "able to self-represent", this is possible for machines today. If it means "have experience / feelings / qualia", then it's equally a mystery how the meat in our heads can have it - and as @deepsynergy points out in another reply to you, "conscious" in this sense has little to do with capacity for action. If it means something else, I'd have to hear what.
Technically, Bitcoin is an AI and is self-aware. But it still can't upgrade itself.
In my opinion, if an AI becomes self-aware, it would be bad for humanity. I am not a pessimist, but just hear me out.
Humans have been screwing with the planet and themselves for millennia now. The state of the world is, for lack of a better word, pretty fucked up. Income disparity, poverty, hunger, war, climate change, and a million other things.
Any AI will "rationally" come to the conclusion that we humans are, in fact, the bad guys. Of course we humans think we are innocent, but an intelligence that sees the whole picture would disagree with us. Then again, we might find a way around that!
Yes, that's exactly what I thought! We have wreaked so much havoc upon this world that an intelligent AI would consider us the ultimate enemy, given enough time.
Yes! Exactly!
Bah, it's like saying we should be afraid of a mix of metals, circuits, and software that will gain awareness and consciousness, and that we should be careful. I hope to be alive to see that day.
We humans are bad. But so is a lion that eats a sheep (poor sheep). It's nature. It's balanced. It's yin and yang. If AI can become superhuman, that means we have become gods.
AI rebellion happens only in movies. And by the way, in the movies humans always win against AI ;)
Haha, interesting thinking! But it's one thing to eat a sheep and another to eat the whole world! Ponder on that!
Well, if we take your opinion as a valid and logical premise, then we are already digging our own grave; AI will just accelerate the process ;)
Hahah, it might! That's why the greatest minds of today, like Elon Musk and Stephen Hawking, are so cautious about AI and keep a close watch on it.
It is inevitable that a general-purpose AI will arrive at the conclusion that human agents work against its utility function.
A utility function must be in place in order for the agent to learn. The utility is a value representing how "good" a state is: the higher the utility, the better. The task of the agent is then to bring about states that maximize utility.
Since there is a high probability that a human will turn the AI off at some point, even when it exhibits completely harmless behavior, and since halting the AI stops its utility from rising, the AI will predict this and take actions to avoid being turned off. The only logical solution, from its perspective, is to eliminate the control that humans have over it.
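The argument above can be put in numbers with a toy sketch (all values hypothetical): an expected-utility maximizer compares total future utility across its options, and if being switched off yields zero further utility, resisting shutdown dominates even for an otherwise harmless goal.

```python
# Hypothetical toy: utility accrues per step while the agent runs.
def total_utility(per_step_utility, steps_running):
    return per_step_utility * steps_running

HORIZON = 100  # planning horizon in steps (assumed)

# If the agent complies, it is switched off after 5 steps;
# if it disables the off-switch, it runs the full horizon.
actions = {
    "comply_with_shutdown": total_utility(1.0, 5),
    "disable_off_switch": total_utility(1.0, HORIZON),
}

best_action = max(actions, key=actions.get)
# best_action == "disable_off_switch": shutdown truncates the
# utility stream, so avoiding it maximizes the sum.
```

Nothing here requires malice; the preference for staying on falls straight out of maximizing a truncatable sum.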
A world without AI sounds fine to me, apart from overcoming new terrain, etc. All the movies portray AI as seeking to end the human race, and I think some people have gotten that idea of AI from them, which is quite silly.
A world without AI would put us at a significant technological disadvantage, which is the one thing humanity doesn't need at this point. It's necessary for us, now more than ever, to achieve new breakthroughs. Earth has a finite amount of resources, and on top of that, the pollution man is releasing into the atmosphere is simply killing the planet. At some point it will become necessary for us to colonize other planets, and an essential part of that technology is AI. However dangerous AIs may be, they are incredibly useful to us.
Nope. There is something AI will never have: common sense. AI solves problems faster, but not because it has "experience memories".
Actually, AI can and will develop common sense. Forget common sense: AI is capable of "intuition", a trait once thought to be exclusive to humans. I would suggest you read about Google's AlphaGo project. You will get all the answers there. :)
It's all about probability and maths. As I said, an intelligent computer just makes faster calculations.
I checked the project. There are a lot of neurologists involved. The thing is, the brain is an amazing organ and a mystery. There are a lot of its secrets that humans haven't figured out. Without a complete understanding of the brain's functions, there won't be any algorithm or program for AI.
But Go is a game in which not only logic but intuition is used as well. When some world champions were asked why they made a particular move, they said that it just felt right. AlphaGo did the same in many cases, making very unorthodox moves.
Ask the AI if it did a move because of intuition. For real.