You are viewing a single comment's thread from:

RE: Are we able to build AI that will not ultimately lead to humanity's downfall?

in #science · 6 years ago

Not much of an optimist, are you?

We have an incredible capacity for love. My current working theory is that life is an experiment in compassion. You cannot develop compassion without pain and suffering. But the ultimate purpose is to experience compassion, not the pain and suffering.

AI however, could have intellect without compassion, in which case I agree we could be in for annihilation or some sick matrix-like agenda.

Going a step further, high intellect does not guarantee agreement. Should AIs become super-intelligent, what if they disagree on what to do about humans? There could be multiple competing, compelling arguments. Surely some contingent would think we were worth saving...hopefully not just in human zoos.


@belleamie - I don't mean to be a prick, really I don't. But look at my words, and then look at your reply. I start with four established historical facts, state one opinion derived from those four facts, and then give my conclusion.

And you respond with, in your first paragraph, an opinion, an opinion, a truism, and another opinion. Your second paragraph is pure speculation. Your third begins with another truism, then one actually legitimate question, a speculation, and some wishful thinking.

I like your imagining of the future much better than I like my own, god knows it's far less depressing. But @belleamie, good buddy, you just haven't offered much to support it.

"If you want to know what the future looks like, imagine a booted foot stamping on a human face forever"

George Orwell

First one must define what AI is, and where the lines blur between Engineered Sentience, Artificial Intelligence, and Cybernetic Organisms. But I will put them in one group to make a point (hopefully).

When a system has the capacity to reach the singularity, rules and regulations should be put in place to prevent something as catastrophic as annihilation from occurring.

In 2017, the Future of Life Institute outlined the Asilomar AI Principles, a set of 23 principles to which AI should be subject. Some of these principles address long-term self-improvement and the modification of core code, or what I'd call an Irreducible Source Code (ISC). The ISC is like the Three Laws of Robotics.

To read more about the 23 Asilomar AI Principles, please visit https://futureoflife.org/ai-principles/

I was not trying to lay the groundwork for a logical conclusion @redpossum, just imagining future possibilities inspired by the original post and comments. Wishing you peace.
