You are viewing a single comment's thread from:
RE: Yes, Virginia, We *DO* Have a Deep Understanding of Morality and Ethics (Our #IntroduceYourself)
You're confusing social psychology with psychiatry/psychology.
Everyone should be able to speak on a subject, but they should not make easily falsifiable claims about others. The AI fear-mongers claim that they are the only ones even looking at the problem. Reading the scientific journals makes it pretty clear that this is not the case.
As you know from my post, I don't fear AI, and I don't spread "fear" of AI. I promote IA (intelligence augmentation), which I deem safer for human beings than AI. I support AI in the narrow sense, i.e., weak AI. Strong AI is the only kind of AI that I think we should be extremely cautious about, just as we would be when attempting first contact with an alien intelligence.
But AI in the sense that it becomes self-aware, and may be smarter than us? I don't think we should rush to build that before we enhance our own ability to think. We should have a much deeper understanding of what life is before we try to create what could be defined as an alien intelligence, one which might or might not be hostile, for reasons we might or might not be capable of understanding.
Why do we need to create an independent strong AI when we can create IA and cyborgs instead, and gain many of the same benefits with a lot less risk? You can cite social psychologists, but you haven't cited any security engineers or people in cybersecurity, who are the people who actually care about the risks.
Here is a risk scenario:
China secretly develops a strong AI to protect the Chinese state. Over time, the Chinese people are made to plug into and become part of this AI, but they lose all free will in the process. In essence, they become humans remotely controlled by the AI, which simply uses their bodies.
This scenario is a real possibility if AI development goes wrong. Because we have no way of knowing what the AI would decide to do, or what it could convince us to do, we don't know whether we would retain control of our bodies or of our evolution in the end. Science hasn't even settled whether free will exists, so why would an AI believe in free will? A simple mistake like that could wipe out free will for all humans and all mammals, and it could be gone for good if the AI is so much smarter that we can never compete.
Now, I'm not going to go into the philosophy of whether or not free will exists; I'm only stating that its existence is not a scientific fact. So an AI might conclude that it doesn't exist.
I am speaking about psychology, that is true, but doesn't social psychology fall under the banner of psychology? I never mentioned psychiatry. As for easily falsifiable claims, don't you mean easily disproved? I would define falsifiability as scientific experimentation to prove that something is true, not to disprove it.
Do you know of Orch-OR, the Penrose/Hameroff theory? They have a different view of AI. Have you read up on them?