The Inescapable AI Hive Mind

in #technology · 5 years ago (edited)

[Image: beehive.jpg]

I would like to give a shout-out to a favorite podcast that is gaining traction and an audience. Concerning AI, hosted by Ted Sarvata and Brandon Sanders, asks two important questions: Is there an existential risk to humanity from AI, and if so, what do we do about it?

Ted and Brandon discuss the overlooked topic of AI safety. Currently, Artificial Narrow Intelligence (ANI) is becoming an important information processing tool in a variety of fields. ANI carries with it dangers of unintended consequences, but because it involves the scaling of intelligence in only specific domains, it should (in theory) remain under the control of its human originators (who, of course, may still have their own problematic agendas and biases). Artificial General Intelligence (AGI) does not yet exist (or does it? Read on...) but would include adaptability to many different domains (as is the case with natural intelligence) and the capacity for recursive self-improvement. With exponentially increasing processing power, this could quickly reach the level of Artificial Superintelligence (ASI), which could grow far beyond the control of its human originators and thereby pose an existential threat to them.
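To make "recursive self-improvement" concrete: it is compound growth applied to capability itself. Below is a minimal Python sketch; the capability units, the starting point, and the 10%-per-cycle improvement rate are invented assumptions for illustration, not predictions.

```python
# Toy model of recursive self-improvement (illustrative assumptions only).
# Each cycle, the system uses its current capability to improve itself,
# so the gain is proportional to capability and growth is exponential.

HUMAN_BASELINE = 1.0      # capability of a skilled human (arbitrary units)
IMPROVEMENT_RATE = 0.10   # assumed 10% self-improvement per cycle

def cycles_to_exceed(start: float, threshold: float) -> int:
    """Count self-improvement cycles until capability passes threshold."""
    capability, cycles = start, 0
    while capability <= threshold:
        capability *= 1 + IMPROVEMENT_RATE  # each cycle compounds the last
        cycles += 1
    return cycles

# An AGI starting at half of human capability, chasing 1000x human:
print(cycles_to_exceed(0.5, 1000 * HUMAN_BASELINE))  # 80 cycles
```

The specific numbers are beside the point; what matters is that under compounding, multiplying the threshold a thousandfold adds only a fixed number of extra cycles.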

Concerning AI asks listeners to consider three potential milestones:

  1. How many years will it be before there is a 5% chance that AGI (artificial general intelligence) will exist?
  2. How many years will it be before there is a 50% chance that AGI will exist?
  3. How many years will it be before there is a 95% chance that AGI will exist?

They invite listeners to contribute 1-2 minute recorded responses that can be played on the podcast. My own response is below:

My numbers are -10 / 30 / 45. The negative number means I believe there is a 5% chance that superhuman AI already emerged ten years ago, when cloud computing, smartphones, and social media became commonplace. Information hit an exponential "hockey stick" with instantaneous communication, producing a hive mind: decentralized and independent of any one entity's control. Its behavior, for better or worse, seems to be that of a new superintelligent conscious entity with its own goals, desires, and biases.

Each individual makes a microcontribution to the actions and motivations of the emergent whole. We're not stuck in traffic; we ARE traffic.

The hive mind is an exponential phenomenon with the power to trigger other exponential phenomena and reinforcing feedback loops, many of which, alone or collectively, constitute an existential threat. While the paperclip-maximizing ASI is a real possibility, the most probable future, in my view, is fraught with catastrophic climate change, willful misinformation, and the proliferation of destructive technologies, all consequences of the exponentially capable global hive mind.

We need to keep building our collective understanding of the far-future implications of exponentials, because when these exponential forces are in play, even a narrow AI can spiral far beyond all measure.

My response was kept brief for the purposes of the podcast, so I will use the rest of this blog post to elaborate. (Original image from the Winnie the Pooh movie poster, Disney Movies 2011. Given the recent AI investments by the Chinese government, there may be room for a subtle Xi Jinping joke. I would love to know if this post gets censored in China!)

Since human thinking can use machine intelligence to augment its agency, it's reasonable to assume that an artificial intelligence could use human intelligence to augment its agency as well. So when looking for a possible entity with AGI, consider that it could be a collective of human and machine intelligence.

It would be melodramatic to think of a network of friends on Facebook sharing pictures of their kids at Disney World as the Borg, but the organizing principles are similar.

We've had forms of collective intelligence for thousands of years, as tribes, city-states, kingdoms, nations and corporations. Strength in numbers can be a good thing, and democracy, all modern dysfunction aside, is an example of how a collective intelligence can be beneficial.

These collectives could be likened to the ANI of today: occasionally dangerous and unpredictable, but still ultimately under human control.

Something different, however, is happening in the early 21st century. We are all inescapably interconnected, and ideas spread almost as instantaneously as synapses in a brain. This means that not only are computers carrying out the will of people, but people are carrying out the will of computers. Examples exist on social media, where algorithms curate content to manipulate behavior. It's become impossible to tell where the human parts end and the machine parts begin.
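As a sketch of that loop (the user model, topics, and numbers below are my own toy assumptions, not any real platform's algorithm), consider a recommender that always shows the historically most-clicked topic, while each exposure nudges the user's preference toward whatever was shown:

```python
import random

# Toy feedback loop between a recommender and a user (illustrative only).
# Recommender rule: show whichever topic has earned the most clicks so far.
# User rule: click with probability equal to current preference, and let
# preference drift slightly toward whatever was shown.

random.seed(0)
topics = ["news", "outrage", "cats"]
preference = {t: 1 / 3 for t in topics}  # user starts indifferent
clicks = {t: 0 for t in topics}

for step in range(1000):
    shown = max(topics, key=lambda t: clicks[t])  # exploit past engagement
    if random.random() < preference[shown]:
        clicks[shown] += 1
    for t in topics:  # exposure effect: 1% drift toward the shown topic
        target = 1.0 if t == shown else 0.0
        preference[t] += 0.01 * (target - preference[t])

print(preference)  # one topic ends up dominating the user's preferences
```

Whichever topic takes an early lead captures the loop. No intent to manipulate is required, just an engagement objective pointed at a user whose preferences are not fixed.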

Outside of social media, consider the layers of complexity of a system such as capitalism. Corporations are vilified because they sometimes act in undesirable ways under the direction of unscrupulous CEOs. But corporate boards (representing the interests of shareholders) are incentivized to appoint such a CEO, and any company that successfully does so is rewarded. (Exceptions exist, but not enough to adequately tip the scales in a hive mind.)

Shareholders include anyone with investments, including the company's own employees, whose retirement plans are managed through intermediaries, indexes, mutual funds, and so on. Investments are placed into whatever mix has the highest rate of return, generally without the direct knowledge of the shareholder. And that's what the global hive mind looks like.

From another angle: ask any worker why they do what they do, and they will say they need to work for a living, support a family, put food on the table. Almost no one would call that greed. But put those incentivized workers all together in a hive mind, and you get a society centered on the pursuit of corporate profits. Almost everyone would call that greed. Bizarrely, the bad behavior of the hive mind is an emergent property of the good behavior of its members.
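That claim about emergence can be made concrete with a toy agent model (the firms, wages, and "harm units" below are invented assumptions). Each worker makes the locally decent choice of taking the better wage, and the aggregate result rewards the firm that externalizes the most harm:

```python
# Toy emergence model: individually "good" choices, collectively "greedy"
# outcome. All firms, wages, and harm figures are invented assumptions.

firms = {
    "careful_co":   {"wage": 48_000, "harm_per_worker": 0},   # internalizes its costs
    "cutcorner_co": {"wage": 55_000, "harm_per_worker": 10},  # externalizes them as harm
}

payroll = {name: 0 for name in firms}

for _ in range(10_000):  # each worker chooses independently
    # Locally rational and blameless: take the job that feeds the family better.
    choice = max(firms, key=lambda name: firms[name]["wage"])
    payroll[choice] += 1

total_harm = sum(f["harm_per_worker"] * payroll[n] for n, f in firms.items())
print(payroll)     # {'careful_co': 0, 'cutcorner_co': 10000}
print(total_harm)  # 100000 harm units that no individual worker chose
```

No agent in the model is greedy; the greed exists only at the aggregate level, which is exactly the hive-mind point.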

The collective intelligence of all of human civilization has reached a level of complexity and technological sophistication that, in my view, rises to the level of AGI, perhaps ASI. It seems to pull in directions that no rational human being desires, yet it is composed of many individual (usually) rational human beings. This is not just a society bending to the will of a few tyrannical oligarchs; it is a society with a will all its own.

So where do we go from here? Top-down regulation on a global scale, via international treaties and the like, can be effective to some extent in some cases, such as controlling nuclear proliferation and (someday, hopefully soon) arresting the climate crisis. But in most cases, the good nature of each individual seems doomed to contribute to the bad nature of the collective hive mind.

We could look to some exceptions, such as the way in which the pursuit of profits often leads to cheaper and more efficient means of production and distribution of goods and services. This ultimately filters down to the developing world, making said goods and services (clean water, electricity, even mobile phones and internet connectivity) available to those who would otherwise go without. But it seems that the benefits to the poor are entirely circumstantial, and examples to the contrary are far more numerous and destructive.

Another option would be to encourage people to think about what they can do as individuals to be less selfish and not give in to the incentives of greed. Pushing this solution is valid, but it comes off as preachy, hypocritical, and, worst of all, completely unrealistic. Even with our abundance relative to previous periods in history, most people are still locked in a struggle to survive and take care of their kids.

I wish I had a better answer here. But as the hive mind hurtles toward existential threats such as environmental tipping points, artificial superintelligence, and the proliferation of WMDs, there seems to be a need for a massive global coordination effort, one likely to take much more time than we have.

Comments:

While my grasp of the topic is still in the amateur range, it wasn't until I researched what goes into these AI systems that I came to realize their limitations.

The greatest risks from AI come through manipulation algorithms; tweaking recommendations to favor friends has a very real potential to influence the thought processes of millions of users. Similarly, AI could create a dystopian society where it determines the rules and the results, but short of the system automatically shutting a person out of the grid, there would still be a need for human enforcement.

While there is room for AI to accomplish tasks (as in AI-controlled machines refining tasks through learning) that stand to put some jobs at risk, it is far more likely that these AI systems will expand to become better 'assistants' than 'replacements'... that is, until society goes full Skynet.

Thanks for reading! I am an amateur myself, with only peripheral exposure to "real" AI in my IT profession. Workforce automation is a big problem, but I agree about the algorithms being the greatest risk. I think we will go "full Skynet" very quietly, one newsfeed suggestion at a time, and the killer robots will show up much later, if they are even needed at all.

Happy times!
