RE: Advancement Of "Artificial Intelligence" Into "Artificial Consciousness"
In class the other day, we had a fascinating conversation with our professor about AI. The crux of his argument was this: if we build AIs at the very limits of our knowledge, meaning they are as "smart" as we can possibly make them, then how do they evolve? His answer is that they evolve on their own, which would make them "smart" in ways we do not understand. This leads to AIs that evolve beyond human comprehension and are capable of doing things humans are not. At that point, what is the role of the human? Are we even necessary?
Part of me wonders whether this is the next step in the evolution of humanity: Neanderthals to Homo sapiens to cyborg transhumanism?