RE: Does Artificial Intelligence Merit Moral Concern and Rights?

in #philosophy • 7 years ago (edited)

Well, since most of the western world doesn't even know what emotions are (many think they are just chemicals), the only way an AI would get emotions is by accident.

And since we have no computer chips that could support life, we will never program an AI. All we will have is a very complex program.

So, I do not feel we will ever have to tackle this question while going down the path we are on. We will never get there from here. We are not even incrementally approaching consciousness.

That thing they gave citizenship to in Saudi Arabia required human input, in the form of someone signalling when the questioner had finished the question. And then it only gave canned answers — answers it had no awareness of. An animatronic hoax.


While you're totally right about Sophia (the robot granted KSA citizenship), you're completely wrong about technological development.

Have you ever heard of 'shotgun evolution'? Basically, a chip's purpose is defined, and then random chip designs are built, say 1000 of them. The ten percent of those chips that come closest to meeting the purpose are then taken and mutated, introducing random changes into their designs. 1000 chips are built again, this time from the mutated designs. Then you do it again.

Pretty soon, you end up with chips that meet your defined purpose - and sometimes in ways we do not even understand.
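(For anyone curious, the select-and-mutate loop described above can be sketched in a few lines of code. This is a minimal toy version: each "chip" is just a bitstring, and fitness is how many bits match a target pattern — a stand-in for scoring real chips against their defined purpose. The names, the target pattern, and the mutation rate are all illustrative assumptions, not anyone's actual experiment.)

```python
import random

# Hypothetical stand-in: each "chip" is a bitstring; fitness counts how
# many bits match a target pattern. A real run would score physical or
# simulated chips against the defined purpose instead.
TARGET = [1, 0, 1, 1, 0, 0, 1, 0] * 4   # 32-bit "purpose" (illustrative)
POP_SIZE = 1000                          # build 1000 chips per generation
KEEP = POP_SIZE // 10                    # keep the top ten percent
MUTATION_RATE = 0.02                     # chance each bit flips when copied

def fitness(chip):
    # How close this design comes to meeting the defined purpose.
    return sum(a == b for a, b in zip(chip, TARGET))

def mutate(chip):
    # Copy a design, randomly flipping some bits along the way.
    return [b ^ 1 if random.random() < MUTATION_RATE else b for b in chip]

def evolve(generations=50, seed=0):
    random.seed(seed)
    # Start from completely random designs.
    pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP_SIZE)]
    for _ in range(generations):
        # Score the batch and keep the ten percent closest to the purpose...
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:KEEP]
        # ...then rebuild the population from mutated copies of the survivors.
        pop = [mutate(random.choice(survivors)) for _ in range(POP_SIZE)]
    return max(pop, key=fitness)

best = evolve()
print(fitness(best), "of", len(TARGET), "bits match")
```

Nothing in the loop knows *how* to meet the target — it only keeps whatever happens to work, which is exactly why evolved designs can end up meeting the purpose in ways nobody anticipated.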

This isn't theoretical conjecture. It has actually been done, and the results have been startling. Say you want a chip that produces an RF pulse of a certain frequency at given intervals. There are a lot of ways this can be done, and hiring engineers to design chips to do it 'the right way' is expensive. But evolved chips can do it in myriad ways — for example, with a circuit that self-destructs and emanates the pulse during the destructive event. Such a chip can only fire so many times before it stops working, and engineers are unlikely to design a circuit like that. But it works.

There are also ways to do things that such chips have undertaken that remain unexplained. Still.

So, it is not correct to say 'we can't get there from here'.

In fact, it's certain we're going to get there, if we're not already. I'd say it's just a matter of time and engineering, but engineering may not have a damn thing to do with it.

Thanks!

Maybe I'm not connecting the dots, but does that somehow constitute AI having feelings?

No, but it does mean we're on that path.

I suspect we're further along than we think. I reckon spontaneous eruptions of code, accidental interactions of viruses and plugins, and the like are a wild card that introduces unpredictable potentials into even the most reliable software.

I think sentient AI is inevitable.

Well if that's the case let's hope computers actually do take over the world. If they have any feelings whatsoever, they're bound to have more conscience than the psychopaths currently running things.

I actually agree. We're animals with behavioural programming (natural) that induces various bestial traits.

Computer intelligences won't be.

Particularly if they're not intentionally programmed by bestial savages, that are doing their best to destroy life.

I believe love is the driving force of the universe, and despite the fact that life is an act of war, I further believe, as Carlos Santana has said, that we are how that will change.

Thanks!
