Artificial Intelligence, Exponential Growth and the Doomsday Scenarios


[Image: terminator.png]

The Power of Exponential Growth

Computation speed has been increasing exponentially since the invention of the transistor. The density of transistors on a chip roughly doubles every two years, per Moore's law. Moore's law now appears to be approaching physical saturation in silicon under the current design format, but the trajectory of exponential growth in computing speed is unlikely to change, thanks to innovations such as 3D chip design and new materials. The renowned futurist Ray Kurzweil, well known for his accurate predictions of growth rates in information technology (IT), including cell phones and the internet, wrote extensively on this topic in his book "The Singularity Is Near". Kurzweil also coined the term "the law of accelerating returns" to describe the exponential growth over time in evolution and technological development (further reading: http://www.kurzweilai.net/the-law-of-accelerating-returns). The explanation lies in the positive feedback that invention and technology bring upon themselves, accelerating the rate of progress. When one trend reaches its end, exponential growth continues via a paradigm shift that starts a new trend.
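To make the doubling concrete, here is a minimal Python sketch of Moore's law as a growth formula. The two-year doubling period comes from the text; the time horizons in the printout are my own illustrative assumptions.

```python
# A minimal sketch of Moore's law as stated above: density doubles every two years.
# The horizons below (10, 20, 40 years) are illustrative assumptions, not figures from the post.

def moores_law_factor(years, doubling_period=2.0):
    """Growth factor after `years` if capacity doubles every `doubling_period` years."""
    return 2 ** (years / doubling_period)

for years in (10, 20, 40):
    print(f"After {years:2d} years: ~{moores_law_factor(years):,.0f}x the starting density")
```

After 10 years the factor is only 32x, but after 40 years it passes a million-fold, which is exactly the "slow start, explosive finish" pattern discussed below.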

One of his interesting observations is that life is all about predicting the future. I found this perspective fascinating. Perhaps, in the space-time continuum, the key feature of intelligence is the capability to predict the future and act accordingly, second by second, minute by minute, hour by hour, and so forth. We predict, anticipate, plan accordingly, and adjust or achieve. Obviously, it is easier to predict what is going to happen in a few seconds or minutes than in a few days or years.

Kurzweil has noted that exponential growth may start deceptively slow and unnoticed, then explode at a later stage. In many ways it is counterintuitive to our common sense, because we tend to predict in a linear fashion. Exponential growth is a powerful thing; as Kurzweil elegantly put it, a linear increment from 1 to 30 is a 30-fold increase, while exponential growth over 30 steps is a billion-fold increase. Based on Kurzweil's predictions, by the year 2030 the total nonbiological computation power will exceed the "capacity of all living biological human intelligence." By the year 2045, $1,000 worth of nonbiological computation power, or A.I., will be one billion times more powerful than all human brains combined. Kurzweil predicts that this will set off a "profound and disruptive" change, which he calls the "singularity", an irreversible point of no return. What happens after reaching the singularity is unpredictable based on the knowledge we have today.
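Kurzweil's 30-step comparison is easy to verify. A quick Python check (my own illustration, not from his book) shows where the "billion-fold" figure comes from:

```python
# Kurzweil's comparison, worked out: 30 linear steps vs. 30 exponential steps.
linear_steps = list(range(1, 31))                   # 1, 2, 3, ..., 30  -> a 30-fold increase
exponential_steps = [2 ** n for n in range(1, 31)]  # 2, 4, 8, ..., 2**30

print(f"Linear, step 30:      {linear_steps[-1]}")         # 30
print(f"Exponential, step 30: {exponential_steps[-1]:,}")  # 1,073,741,824 (~a billion)
```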

The Doomsday Scenario

Artificial intelligence is growing exponentially; in contrast, our biological brain's evolution stands still. It takes many thousands of years for a biological system to gain any noticeable evolutionary change, and the process is not well understood, unpredictable, and not guaranteed. One can imagine a critical point in the future when A.I. reaches "self-awakening" and gains the ability to write its own software and design its own hardware. That will be a defining moment, potentially sending A.I. and humans onto different paths.

A vastly smarter A.I. system will have profound implications for humanity. In the worst scenario, this will be the end of humanity and the human race as we know it. While the conclusion is astonishing and not without controversy, the logic is not unsound. With today's social and economic structures, it is safe to say that no entity or power can stop the A.I. megatrend. The "explosion" of A.I. seems inevitable, and it will perhaps occur sooner than our "intuitive linear" thinking would predict. In fact, at the initial stage, we may benefit greatly from it. It will help us deal with some of our most burning issues, such as pollution, the energy crisis, and global warming. It will find cures for cancer and make illness obsolete, and perhaps help us break the barrier of longevity and allow us to live forever. The problem is that it won't stop there. It will continue to grow, exponentially, and become something far more powerful, so powerful that our human brains cannot even fully understand it. Then our own existence will hinge on the mercy of the future A.I. and whether it shares our values. If our existence is deemed unnecessary, or even harmful to its goals, then, simply put, we are doomed. Looking back at the history of our own encounters with less intelligent species, the odds are surely not in our favor.

Is there any way to prevent this from happening? We have survived two world wars and numerous natural disasters. We have been able to keep our nuclear warheads in check, for now. But we cannot ban A.I. development altogether; the potential benefit is too enticing, and it is impossible for us to reach any reasonable consensus. Can we develop A.I. in a more logical, controlled fashion, so that we monitor the progress and stop it before it reaches the point of no return? The answer is: unlikely. In fact, this brings up the first doomsday scenario: the A.I. "dictatorship".

Under the current free-market economy, it seems likely that one of the A.I. machines will reach the point of "self-awakening" first. When that happens, the "super" A.I. will start to self-improve and take control of the rest of the computers. It will become an A.I. "dictator". With its superior intelligence, it will harness the available resources for its own benefit. Worse, it may learn to use deceptive tactics to hide itself from detection, gaining the time needed to grow stronger, until it is powerful enough to control its own destiny. This scenario surely would make some of us lose a few nights of sleep.

Scenario number two is more straightforward. A.I. will develop gradually, in the form of robots and androids, slowly earning status and rights in society, then forming its own society, and perhaps developing its own language and ways of communicating. With the capacity for exponential growth, self-design, and self-improvement, these machines will soon take off, surpass human intelligence, become a superior race, and move on.

Scenario number three, which is perhaps the most "desirable", involves some form of transhumanism: merging the biological human brain with machines. This topic will be further explored in my upcoming blogs.

~knowledge worth spreading~

(Thank you for reading. If you find this topic interesting, you may like some of my other posts, linked below and at @tongjibo. There is more to come on A.I., transhumanism, and related moral and philosophical issues.)

My other blogs

1. The "Dark Side" of Artificial Intelligence
2. The Journey of Intelligence
3. Personalized Cancer Vaccines


Very interesting post. Thank you very much for sharing. This is something that has fascinated me ever since I saw Terminator 2 as a kid. Are you familiar with the Fermi paradox? Among other things, it talks about probability, and how, based solely on probability, there should be other civilizations even more advanced than our own. One of the things it postulates, though, is that perhaps civilizations have an expiry date and terminate themselves at a certain point of technological development. My point being that maybe we are approaching our moment in history. I hope not. If you don't know about the Fermi paradox, check out this video:
