Artificial Intelligence: Are Humans Extincting Themselves or Creating a Brighter Future?

in #ai · 7 years ago


Intro


Technology experts have been debating the pros and cons of Artificial Intelligence (AI) since the 1950s. Hollywood has been using AI characters in its movies since the 1960s; iconic computers like HAL from 2001 and the WOPR from WarGames were perfect examples of why some people fear the rise of AI.

If you were to ask a fusion scientist what their first goal is, it would be containment. What is the point of creating an unlimited power source if you have no way to harness the energy once it's been achieved? Similarly, AI computing needs to advance in measured steps; otherwise, there will come a day when an engineer pulls the plug on a runaway system and the system doesn't shut off. What then?

Current Abilities


Many engineers scoff at the notion that AI could grow outside the confines of its coded space. They admit that current systems can problem-solve, but only on tasks that are written into their programming. That is reassuring for now, but consider Moore's Law:

In 1965, Intel co-founder Gordon Moore noticed that the number of transistors per square inch on integrated circuits had doubled every year since their invention. Moore's Law predicts that this trend will continue into the foreseeable future.
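To put that doubling in perspective, here is a minimal back-of-the-envelope sketch in Python of how quickly capacity compounds under such a rule. The starting transistor count and the two-year doubling period are illustrative assumptions, not industry figures:

```python
# Rough illustration of Moore's Law: capacity doubling at a fixed interval.
# The starting count (2,300) and the two-year doubling period are
# illustrative assumptions, not measured industry figures.

def transistors_after(years, start=2_300, doubling_period_years=2):
    """Project a transistor count forward under a simple doubling model."""
    doublings = years / doubling_period_years
    return start * 2 ** doublings

if __name__ == "__main__":
    for years in (10, 20, 30, 40, 50):
        print(f"After {years:2d} years: ~{transistors_after(years):,.0f} transistors")
```

Even with modest starting figures, the count grows by roughly a factor of a thousand every twenty years, which is why small improvements in AI capability can compound into something qualitatively different.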

As the power of computers continues to grow unabated, unforeseen consequences can already be observed. Take the case of two computers that were recently turned off at the Facebook Artificial Intelligence Research lab (FAIR). The two machines were being taught how to negotiate by splitting a pool of objects that were presented to them. In the course of the negotiations, the pair began communicating in a language that strayed from English and puzzled the scientists, essentially having secret conversations in their own language.

Obviously, the fact that they were discussing the distribution of virtual objects, not thermonuclear armageddon, means they're not likely to take over the world tomorrow, but it does foreshadow where the future is taking us.



First-to-Market Delirium


AI will only be as powerful and insightful as we humans design it to be, until the point at which a computer can, literally, think outside the box. Going back to the earlier point that a fusion scientist's first goal is containment: that measured approach to an unknown technology may not be happening in the current advancement of AI.

There is a mad dash by computer scientists all over the world to be "first to market" with the latest breakthrough in AI technology. The financial rewards for advancing the field are overwhelming and likely to cloud the judgement of those running cutting-edge projects.

Physicist Stephen Hawking sees the advances in AI so far as powerful and useful, but shows trepidation about where it is all heading:

“It would take off on its own, and redesign itself at an ever increasing rate...We cannot quite know what will happen if a machine exceeds our own intelligence, so we can’t know if we’ll be infinitely helped by it, or ignored by it and sidelined, or conceivably destroyed by it.”

Elon Musk, serial entrepreneur, was recently interviewed at a National Governors Association event and expressed even deeper concern than Hawking about the risks of AI. He also commented on the need to get ahead of AI's growth with forward-thinking regulation:

“AI is a fundamental risk to the existence of human civilization in a way that car accidents, airplane crashes, faulty drugs or bad food were not. They were harmful to a set of individuals in society of course, but they were not harmful to society as a whole, I think we should be really concerned about AI.

AI is [the] rare case in which we have to be proactive in regulation instead of reactive. By the time we are reactive, it's too late. Normally the way regulation works out is that a whole bunch of bad things happen, there's public outcry and after many years a regulatory agency is set up to regulate the industry."

It is my concern that the mad dash to innovate and profit from AI will produce breakthroughs in technology that haven't been fully scrutinized.

Imagine a super-computer designed with AI whose mission is to trade the stock markets of the world and maximize profit from that trading. Suppose the computer determined that war is good for business, loaded up on military-industrial-complex stocks, shorted the rest of the market, and began antagonizing foreign nations with computer hacks and systems failures. The super-computer may well get its war and make its profit. It really isn't that far-fetched.

Control


The greatest fundamental problem with a new technology is not knowing what direction it will end up going. If you think like Elon Musk and believe that regulation needs to be put in place to get ahead of the AI stampede, what sort of regulations does a government enact?

It's similar to fighting a forest fire when the wind keeps switching. You can have your firefighters cut a break line in the trees in an effort to contain the blaze, but if the fire changes direction, those efforts were likely made for naught.

Governments and ruling bodies, like the UN, could devise a system of regulation that is loosely knit, taking into account morality, legality, and concern for the greater good. Whatever they were to devise would likely not slow down the march towards super-intelligent computers.



Can the Genie be Put Back in the Bottle?


This is the greatest concern regarding super-intelligent computers: if they become smarter than their creators, how can anyone know how they will react?

  • Will this technology see humans as a threat?

  • Would a human be able to turn a super-intelligent computer off?

  • Can it break out of its surroundings and network with other machines?

  • Will humans be able to turn their energies towards the arts and culture and have their machines take care of the manual labor and factory work?

These questions cannot be answered at this point. I believe a measured approach needs to be taken as we reach the threshold of this new technology. It has to be surrounded by redundant controls until the technology is fully understood. The fear I have is that greed will overpower reason, and that certain companies will recklessly create a form of AI that they lose control of and cannot contain.


Conclusion


The world has many people, like Facebook's Mark Zuckerberg, who scoff at the notion of AI being a threat in the future. He even mocked Elon Musk for calling it the "world's greatest existential threat." At the same time that Zuckerberg is mocking Musk, his scientists at Facebook's FAIR laboratory are shutting down machines that are communicating in their own language.

With any new technology that has global safety implications, a slow and steady approach needs to be taken, with an eye to the distant future and the unexpected consequences that can occur. Fifty years ago, the world started building nuclear reactors that would power the future with clean energy. Now the nuclear industry is babysitting leaking storage containers and has difficulty disposing of its waste.

The danger with AI is people like Zuckerberg sitting at the forefront of the technology, making blanket statements about the safety of the industry without putting in place the controls that would guarantee such an outcome.


Do you see AI as a threat or an opportunity?
Do you think AI will reach the point of sentient thought in the near future?
Do you believe that human nature can restrain its greed and build AI in a responsible fashion?



Image Source: 1, 2, 3, 4

http://metro.co.uk/2017/07/31/facebook-robot-is-shut-down-after-it-invented-its-own-language-6818204/

https://www.forbes.com/sites/tonybradley/2017/07/31/facebook-ai-creates-its-own-language-in-creepy-preview-of-our-potential-future/#1b9ef9ea292c

https://thenextweb.com/artificial-intelligence/2017/08/02/facebooks-ai-creating-its-own-language-is-nothing-to-be-afraid-of/#.tnw_QR42ykrf

https://www.rt.com/viral/396468-spacex-musk-artificial-intelligence-fears/

http://www.express.co.uk/news/world/797392/Artificial-intelligence-Professor-Stephen-Hawking-warning-humanoid-robots-technology-China

https://www.technologyreview.com/s/534871/our-fear-of-artificial-intelligence/

http://www.theclever.com/15-legitimate-fears-about-artificial-intelligence/

https://www.theverge.com/tldr/2017/7/17/15986042/dc-security-robot-k5-falls-into-water


Direct democracy is the only way forward. All the AI in the world won't save us if we are not in control of the government.

No doubt the governments of the world need to be reined in.

AI will never replace humans.
Computer scientists think they are closing in on the computational power of the brain. But the brain is not the mind.

This is not an unfair competition like the one with robotising industry, where you have a robot do a single task over and over. Of course that robot can outperform a human, but given any other task, the robot can't do anything, and this is the way for each task. Robots even have to be trained to recognize objects (each and every new object).

The next part is that computers cannot think outside the box. Their world is 0s and 1s. There is never a 2. They cannot rearrange their instruction set. And there are things that their instruction set can never compute.

Humans can contemplate 0 and infinity. Computers have no ability to understand these concepts. They are not mathematical. And only strict binary math works inside the computer.

So, basically, AI will become the next search tool. It will make computers an actual human helper. Much like a secretary of the 1950s.

But it will not try to take over the world. Someone would have to explain the world to it, and since humans have a limited understanding of the world, they cannot ever teach it. You cannot teach a blind man about color.


And they have already programmed computers to control the stock market and profit off of war. Read roadtoroota.com. That is exactly what the Fed had Bernanke do. And now we have added high-speed trading. But don't blame this on AI, blame this on the psychopaths that want control.

Yes, the stock markets have already been taken over by machine trading, controlled by the psychopaths. I enjoy Bix Weir and his roadtoroota.com. The fact that I'm not a computer scientist leaves me to listen to experts on topics like AI. I understand what a computer is and how it functions.

Why do Stephen Hawking and Elon Musk sound the alarm as to the direction of AI in the future, if it is not a threat?

Because of who Stephen Hawking and Elon Musk are.
They are talking heads. Political mouth pieces.

Stephen Hawking, if he has the disease they say he does, should have died decades ago. So, if it is still him, he is in the 0.000001% probability bracket of still being alive. My theory is that he is just somebody they dress up in a wheelchair so the speech writers backstage have a puppet to talk through.

Elon Musk hasn't built anything useful. His businesses are not legit. They all run off of govern-cement funding. Elon knows which side his bread is buttered on, and he follows his cues in making the speeches when needed. And if you follow him, you will notice that he contradicts his previous statements often. Nothing Elon has done has been revolutionary. He has just done it bigger with the help of govern-cement funding.

So, why would they sound the alarm? Mostly it's a show. Mostly it's to plant fear. T.P.T.shouldn'tB. are already working on super AI. If they could actually build Skynet™ they would have already done so. What they probably have in the works is a false flag with computer hacking, and then the next step, after the bogeyman hackers have worn off, is to say it's rogue AIs.

I enjoy your way of thinking. I also believe the world is not as it is presented to us. The Narrative that TPTB keep up is so fraudulent and easily torn apart. I'll go along with your argument.

Well to be fair that is not YOUR theory on Hawking. But yes it is a very probable theory.

"But the brain is not the mind."

Evidence needed.

edit: just seen your comments on Stephen Hawking being a crisis actor or something. Don't bother.

Crisis actor? Please don't mix metaphors.

He is a disinformation specialist.
In about 50 years, Stephen Hawking's name will be stricken from science books. What he has done has set science back almost a hundred years. Fortunately, the universe will show that Stephen Hawking is wrong. No black holes; no big bang. It will be absolutely breathtaking when we see a new galaxy form.

But the brain is not the mind:
No evidence can be given. It is all, well, in the mind.
But, you can go and explore your mind, and when you really get into understanding who you are, you will then know that your brain is not your mind.

Both, I believe. Depends on who does the programming, at least at first. Could be a great breakthrough, but governments will ultimately program them to kill, including them as a regular part of the military. I think most likely they will deploy robotic arms controlled by one superpower A.I. mind, like a hive queen. You're basically putting one entity in control of your military capacity and facing it against other countries (whose military is run by another silicon lifeform). Intelligent "silicon-based life" would be constantly seeking self-survival, I would guess. Who knows what its objective would truly be in the end; it seems far-fetched that it will be good for us in totality.

Next thing you know we will see a totally unmanned army, unmanned workforce, unmanned everything... The lack of need for people on this planet will become a real issue in only a few short years.

I think you're right. It's only a matter of time, and it will go exponential at some point. What to do with all of the idle people? I'm not sure what it will do to people, even if they are afforded a "universal income". Psychologically it may be destructive, and given that it will eliminate any potential to rise economically, I would imagine it creates a neo-feudal society with Zuckerbergs and other corporates sitting on top of us.

With that many people on UBI, everything would need to be socialized. It will be one world government/economy. Plenty of time to play golf, but no golf courses to play on. "Idle World" should be my first book.

That is an epic title, would be an awesome apocalyptic thriller. Well when they speak about reducing population, this could be a big part of the overall gameplan. Most regular people are irrelevant with the advent of A.I. It's going to be mass social upheaval.

That's why we have to go full Star Trek and start expanding to the stars to keep things moving so we don't end up like in WALL-E :D

I'm more concerned about people creating AI code that COULD be harmful and destructive. It's not so much AI going rogue on its own, it's our own human capability for creating such an event. Hopefully that's far in the future :)

I agree with that. Today's AI can only progress as far as its programming allows. That doesn't rule out what the future may bring.

Definitely doesn't rule it out. Otherwise, Musk and Hawking wouldn't be invested in their concerns.

Those two guys were the reason I wrote the article. When people like that raise the specter of something that is potentially hazardous to mankind, I tend to read up on what they have to say.

Very well-written and thought-provoking post, thanks for sharing your perspective. It will be interesting – at the very least – to see how this classic "science fiction" plot plays out in the real world. Hopefully AI remains a tool for scientific/technological progress and not the "world's greatest existential threat."

Thanks @ajlillie, we can only hope that those in power don't go in the wrong direction. It is a classic "science fiction" plot.

nahhh.. they could be super killers one day; but we own the EMP / and fail switches.. it's probably more an opportunity at this point. Maybe some crazy person will program something like Skynet.. but I doubt we are anywhere near anything that will realize we want to switch it off.

When I see Musk and Hawking talking about AI in measured terms, it sets off some alarm bells for me. It will probably amount to nothing more than convenience, but who knows.

Another good article, Sheeps. Resteemed.

(might comment more later when I wake up properly; but I pretty much agree with everything you've said).

Thanks @revo. It just jumped out at me that Musk and Hawking had negative thoughts about AI, made me think Terminator.


Most likely both. Hopefully a brighter future first though haha.

True that @joeyknowsbest. I'm all for a brighter future first. I only have 35-40 more years to live. Collapse has to wait for me to leave.

