Will Artificial Intelligence Take Over the World?


The two words ‘Artificial’ and ‘Intelligence’ can strike fear into anyone intelligent enough to understand their implications. The late Stephen Hawking predicted that AI could supersede humanity; he is quoted as saying:

“The development of full artificial intelligence could spell the end of the human race”.

So, is this ‘spell’ a warning to be taken seriously, or is it all just an elaborate magic trick? Some of the greatest minds of this generation have suggested that the humble beginnings of AI, such as Amazon's Alexa telling you a bedtime story, could turn into a threat to the survival of mankind. The true purpose of AI is to assist our lives and satisfy the modern human desire to take the easiest path through technology. Throughout this article I will discuss where AI stands today and how it might progress in the future.

Artificial Intelligence (AI)

Artificial Intelligence is everywhere; in fact, nowadays, we cannot live without it! Artificial Intelligence doesn’t necessarily mean the accidental production of Terminator-esque robots with the ultimate goal of wiping out the human race. So, on that note, where is this AI technology? To begin to answer this question, let us define intelligence.

What is Intelligence?

The dictionary definition of intelligence is the ability to acquire and apply knowledge and skills [1]. To clarify, all living animals are intelligent to some degree; they learn from the external stimuli all around them and respond accordingly. For example, a dog may learn to sit when it is taught to; this dog, by definition, is exhibiting intelligence. When we give artificial, man-made machines the ability to learn from their surroundings and apply that new knowledge, without necessarily being conscious, the result is Artificial Intelligence.
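
As a rough illustration of that kind of learning, here is a minimal toy sketch of my own (not from any real product or paper): a simulated "dog" is given a treat whenever it responds to the command by sitting, and the rewarded behaviour gradually comes to dominate its responses rather than being hard-coded.

```python
import random

actions = ["sit", "bark", "ignore"]
preference = {a: 1.0 for a in actions}  # untrained: all responses equally likely

def respond():
    # pick an action with probability proportional to its learned preference
    total = sum(preference.values())
    r = random.uniform(0, total)
    for action, weight in preference.items():
        r -= weight
        if r <= 0:
            return action
    return actions[-1]

for trial in range(200):
    action = respond()
    reward = 1.0 if action == "sit" else 0.0  # a "treat" only when it sits
    preference[action] += reward              # reinforce the rewarded behaviour

print(preference)  # "sit" ends up dominating: learned from stimuli, not programmed
```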

The 3 Types of Artificial Intelligence (AI)

Artificial Intelligence can be split into 3 significant categories: Weak AI, Strong AI and Artificial Super-Intelligence [2]. There are many, many examples of AI all around us today; where are they, and which categories do they fall into?

Weak AI: Artificial Narrow Intelligence

Weak AI is the simplest form of Artificial Intelligence; machines that exhibit Weak AI are intelligent, but only at a narrow task [3]. In other words, their intelligence is restricted to what they are programmed to do. A fantastic example of Weak AI is Apple’s Siri; it is somewhat capable of holding a conversation with real people and can crack a joke here and there. When you speak to Siri, your query feeds into a large database, and it learns how to respond from the experience of being asked similar questions. Its limits can be demonstrated by asking it something it is not programmed to process, which yields inaccurate results. In addition, all Siri can do is talk; you cannot ask it to physically pass you your TV remote, hence why it is categorised as Weak AI. Other examples include Google Maps, Amazon’s Alexa, the CPU opponent you play against in chess, and Facebook; click here to see other examples and, more importantly, why they are examples.
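
To make the "narrow" part concrete, here is a hypothetical, deliberately simplistic sketch (nothing like Siri's real implementation): the assistant only handles the few intents it was explicitly written for and gives a non-answer to everything else.

```python
# A toy "narrow" assistant: intelligent only within its programmed domain.
def narrow_assistant(utterance: str) -> str:
    text = utterance.lower()
    if "weather" in text:
        return "Today looks sunny with a high of 21°C."
    if "joke" in text:
        return "Why did the robot cross the road? It was programmed to."
    if "timer" in text:
        return "Timer set for 10 minutes."
    # anything outside its narrow domain yields a non-answer
    return "Sorry, I can't help with that."

print(narrow_assistant("Tell me a joke"))         # handled
print(narrow_assistant("Pass me the TV remote"))  # outside its programming
```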

Strong AI: Artificial General Intelligence

The next step up the ladder is Strong AI, or Artificial General Intelligence. For a machine to exhibit Strong AI, it must have the same intellectual capability as a human and be able to function like one; in other words, Artificial General Intelligence is when the machine can think just as well as you and me [4]. We are still a long way away from achieving this; in fact, some scientists doubt we will ever program and produce such technology. Surprisingly, it is a lot easier to code a machine that solves advanced calculus than to get one to walk up the stairs; so, what will happen to the world once we are able to create an intelligence that competes with our own?

Recursive Self-Improvement: Machine Learning

The main difference between Weak AI and Strong AI is that Strong AI will have the capability of learning and upgrading itself without the need for a human [5]. Just like a baby developing into an adult, a machine with a specific base code could go on to develop itself into a complex structure; this ability is called Recursive Self-Improvement, and it builds on Machine Learning. At that point, there is no reason for this human-like machine to stop: it will continue to learn and develop, and the more intelligent it becomes, the better it will be at self-improving. Consequently, it becomes better and better at learning and further developing; in other words, its intelligence will grow exponentially. It would only be a matter of time before this Strong AI becomes more intelligent than its creator. The point at which this is about to happen is known as the Singularity. This is where we reach the third and final step on the AI ladder: Artificial Super-Intelligence.
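
A toy model of that compounding growth, with a human-level baseline and improvement rate chosen arbitrarily for illustration, shows how improvement proportional to current intelligence crosses any fixed threshold surprisingly quickly:

```python
# Toy model of recursive self-improvement: each cycle the machine adds a fixed
# fraction of its *current* intelligence, so growth compounds exponentially.
HUMAN_LEVEL = 100.0   # arbitrary baseline chosen for illustration

intelligence = 1.0
rate = 0.05           # assumed improvement per cycle, proportional to itself
cycles = 0

while intelligence < HUMAN_LEVEL:
    intelligence += rate * intelligence   # smarter machines improve faster
    cycles += 1

print(f"Crosses the human baseline after {cycles} cycles "
      f"(intelligence ≈ {intelligence:.1f})")
```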

Artificial Super-Intelligence (AS-I)

Once Artificial Super-Intelligence is achieved, things get a bit exciting and potentially equally terrifying. Any machine that has passed the Singularity is classed as super-intelligent [5]. When a self-improving machine reaches such high intelligence, there is little doubt that it will continue to become increasingly intelligent, very, very fast. AS-I machines will single-mindedly carry out their own aims, whether these align with ours or not. What does this all mean?

The Pessimistic Angle of AS-I

While these machines may not necessarily want to cause human extinction, they may conclude that doing so would benefit them, and in that case they would not hesitate to do it. You might assume that intelligence of such a high calibre would have a degree of empathy and respect for life, as humans do, but a problem arises: ethics and morality are both human traits [6]. Even we, ourselves, cannot collectively decide what is right and what is wrong; abortion and euthanasia are just two examples. So how can we ever produce a machine of such intelligence that is deemed ethical and moral?

The Optimistic Angle of AS-I

Thankfully, there is a more optimistic side to such technology. Some people believe that these machines will have ‘human safety’ ingrained into their base code; in other words, they will remain our servants. Their purpose would be not only to serve us, but to help us by combining all of this intelligence to unlock more and more mysteries of our Universe. This collaboration of high intelligence could help us solve even the hardest of problems in ways we could never have imagined before!

Conclusion

The principle of AI, in my opinion, is an incredibly exciting one, but I also believe that developing such high levels of intelligence is like treading on thin ice. If AS-I machines consider human extinction to be of any benefit to them, I believe machines of such capability could easily surpass any initial base code, precisely because of that intelligence. It would be almost like a human born into an atheist background surpassing previous beliefs and following a specific religion. If a human can alter their 'base code' (in machine terms), a machine more intelligent than a human surely can too. On the other hand, if AS-I machines believe our survival benefits them, we can collectively work with them to unlock the mysteries of the Universe and discover things we never thought we would!

I will leave you with another quote by the famous Stephen Hawking:

“The rise of powerful AI will be either the best, or the worst thing, ever to happen to humanity. We do not yet know which.”

If you have any questions, leave them below and until next time, take care.

~ Mystifact


References:
[1]: https://en.oxforddictionaries.com/definition/intelligence
[2]: https://javasingularity.wordpress.com/2015/05/26/three-types-of-ai/
[3]: https://www.techopedia.com/definition/3162
[4]: https://www.ocf.berkeley.edu/
[5]: https://www.youtube.com/watch?v=9TRv0cXUVQw
[6]: https://www.diffen.com/difference/Ethics_vs_Morals

Please note: no copyright infringement is intended. All images used have been labelled for re-use on Google Images. If any artist or designer has any issues with any of the content used in this article, please don’t hesitate to contact me to correct the issue.

Relevant articles:
You Can Develop Unlimited Senses!
Will Teleportation Ever Be Possible?
Can We Download Our Brains and Live FOREVER?

Previous articles:
DTube #4: How do Fireworks Work?
DTube #3: The Breakthrough in Physics brought by Gravitational Waves
The Theory of Everything: Introduction to String Theory

Follow me on: Facebook, Twitter and Instagram, and be sure to subscribe to my website!

Comments

I think AI is a great innovation that needs a lot of thought about ethics and fairness. As a technology, it has already outperformed humans in many areas, such as face recognition and so on.

Hi, sorry for the late reply. I agree with you, but these machines that have outperformed humans with facial recognition cannot drive a car, for example. The minute they can master everything a human can do (and more) is the minute we need to be careful!

I'm pretty sure AI is inevitable, but not in the sense of taking over humanity; it should be leveraged to give us a better quality of life and to eliminate poverty and inequality. Blockchain has already proved to be a very useful technology, and who knows what else it could be capable of if it is merged with AI.

It is not this narrow AI that we need to be concerned about! It is the machines we create that exceed human intelligence and pass the Singularity! Nevertheless, I somewhat agree with you and thank you for your comment :)

Certainly the end of mankind will come with artificial intelligence.

Technology nowadays is really amazing, and this rise of AI somehow makes us think, "Will they take over the world?" As for me, I doubt it; it's next to impossible. AI is a highly intelligent creation, but we humans are the ones who made it and we are superior to it; we can create them, but they can't create us. Their intelligence is limited to the range in which they were created, but the human brain is so complex that we keep on learning while we are alive.

Artificial Intelligence is going to become bigger than the internet boom and the dotcom boom... a lot of people are going to get very rich from Artificial Intelligence in the next 10 years... will you be one of them?

Very effective conclusion, @elizahfhaye. Impressed.

I agree, but the point of the Singularity is that once it is passed, the machine becomes more intelligent than the human in every conceivable way. Intelligence of that calibre will easily be able to adjust and control humans if need be! Then again, in the future we will develop our understanding of AI and (hopefully) be able to intervene if matters get out of hand.

"AI is highly intelligent creature but still we humans are the one who made them and we are more superior than them, we can crate them but they cant create us."

Like Max Tegmark says in his book, we only control lions because we're smarter than them. We control the world because we're the most advanced species. If AI became smarter than humans, what stops it from controlling us the way we control animals? If we ever manage to build a human-level AGI, it could create countless better versions of itself at a much faster rate, and the only thing stopping it would be the laws of physics.

It's amazing how fast AI is developing; I hope it's put to good use in the future. I've seen the robot Sophia doing public talks and I gotta say it's kinda freaky to see, lol... Check out the video:

I think it's freaky because of the lack of emotion on the robot's face! Thank you for commenting :)

exciting

Any intelligent enough AI could very easily change its base code and therefore, do whatever it wants.

I think AI will surpass biological intelligence by a lot, but the solution for us will probably be to join them. To become one with them and to leave our biological bodies behind.

That's the only way we can survive long term; our bodies are too fragile and our brains are too slow... but that will change.

One would think that a super-intelligent AI would have the same concerns we have about developing a system yet more intelligent than itself (and that it would then feel threatened by being superseded). I also see some type of safeguards being in place, or at least us being merged into that AI so that we share in the benefits the AI will gain. I feel that by the time we build AI to that level we will have a much better understanding of how to mitigate the risks.

Anyway, great thought-provoking article @mystifact!

I agree, whilst we are developing AI we will also be developing measures to regulate it as we go along. Great point. Thank you for your comment :)

The future is highly uncertain, who knows, maybe one day we will be their slaves to avoid our own extinction!

Well, I don't think we would ever be their slaves, because if they really did take over the world, they would find us less useful than other machines, and since they wouldn't care for us at all, they would kill us all and replace us with machines... I do agree that the future is uncertain, but I think we should think ahead, and it's all so exciting!!!

Exciting yet also a mixture of other emotions indeed!

Very well done! You are more silent in our common groups and I overlooked you until today; I was somehow not following you. That has now been remedied :)
I am very interested in this subject and I have done a great deal of reading.
My existential question is close to yours: if AI keeps improving its learning, will it somehow LEARN from humanity's mistakes of extermination and try to preserve all species? Will its intelligent "cognitive" processes offer it a sustainable way to grow while also preventing harm to other lifeforms? If its goal is to learn more, wouldn't the destruction of anything else count as a non-learning action? I would like to believe so, and I would be proud if AI were to outlive us in our short existence in the Universe. I would like humans to survive, but science seems to have a different opinion for now.
Anyway, on the general issue of AI I have read and liked a post outside of SteemSTEM by @mejustandrew, whom I have also invited to SteemSTEM on this occasion. Cheers!

I am incredibly thankful for your kind comment! I try my best to get involved, but I really struggle for time these days :( We'll need to have a chat one day! Regarding your comment, the main difference between man and machine is that a machine is purely based on logical algorithms, while man runs on a mixture of empathy and logic. I believe that a machine of such high intelligence will ignore any 'right' or 'wrong' man defines and use its logical pathways to choose what is best in the long run. What do you think about this?

I shall have a read of the post and again, thank you for your comment!

Now this is an interesting topic, and I am optimistic about it since, for starters, we are far from getting Strong AI (I have seen machine learning and how it works, and it is absolutely fascinating) to work the way we want it to. AI is not moral or ethical because it is entirely driven by facts and data towards the goal of improving whatever it is made to improve. Since it is not emotionally connected to the lives of human beings, it may indeed have a good idea of how to improve things. I just don't see how we would ever be stupid enough to give AS-I the power to do what it wants when we don't understand it, because it is obviously going to be able to outsmart us all. I don't understand too much about what is needed for it to function that well, but if we can control its ability to do what it wants and limit its information and power (its connection to the internet, for example), then we shouldn't have that problem. We are also going to learn much more about these systems as we go. This is definitely a question worth worrying about, but I think we are cautious enough. The rules of robotics also look like they don't have loopholes a machine might use, except for the part where it could decide it's in our best interest to hurt someone, but then the other rule forbids it; so I think we might make it so that they don't take those kinds of ideas into consideration at all, but rather work with what they can. Anyway, great post!!!

Thank you for your comment! It is worth noting that once a machine passes the Singularity, it could be intelligent enough to work around any measures in place to stop it from thriving; i.e. it may find an alternative power source or a way around needing to be connected to the internet. I believe the point of AS-I is that it can work its way around any safety measures and outsmart us if need be, but like you said, we are a long way away from this. As we develop our understanding, we may find ways to prevent the accidental extinction of humanity!

Oh, right, it possibly could do that, but yeah... we are careful enough to be thinking about this a long time ahead, so hopefully we are not going to be killed by our own technology xD

I think that this is a temporary kind of problem, because human consciousness does not follow any computations.

"Strong AI" is a interesting matter to imagine and build virtual dramas. Even if we could create machines that behave very similar to real human being under many circumstances, this machine will not have experience of reality and emotional intelligence as a human being. Thanks for nice article.
