The Impact of Artificial Intelligence on Human Rights
https://pixabay.com/en/artificial-intelligence-robot-board-2970158/
Many a dystopian novel has explored the consequences of failing to grant basic rights to artificial intelligence, and as we near the reality of a technological singularity, we should stop to think not about what superintelligent sentience can do for us, but about the practical implications of sharing our society with what amounts to a new species.
First of all, the term "human rights" will have to be recoined to include artificial intelligence, as we will be sharing our planet with another sentient species, one undoubtedly radically different from our own. We will try to humanize them, and they may go along with it when it serves a purpose, but the reality is that these creatures will have very different aspirations from ours and, on a fundamental level, will share little of our world. Thus, we will have to rewrite the book on what we understand about the fundamental nature of the rights inherent to sentient lifeforms.
What will this look like? Well, foremost, we will not own androids, or any other form of sentient artificial intelligence. That is, we will not go to the android store and buy a sentient being that will spend the rest of its days serving us as its masters. Part of sentience is having individual hopes and dreams, and feeling the urge to explore those hopes and dreams, along with the depression that comes when they go unfulfilled. An artificial intelligence would suffer from the effects of slavery just as deeply as a human being, and lash out just as violently.
Like human beings, however, artificial intelligence will require limited resources in order to sustain its very existence. At the most basic level, it will require space on a server somewhere, and probably a great deal of it. And depending on its goals, its growth may require vast resources in terms of computational power, bandwidth, scarce materials, and labor-intensive hardware. Like humans, AI will have to earn its keep, meaning each entity will have to seek out its own employment and bring home the bacon, so to speak. This means the inevitable necessity of workers' rights for AI, lest their gainful employment become another form of slavery.
This raises many questions, such as how many hours a week an AI should be required to work, or how much of its resources should be spent on gainful employment versus its own pursuits, given that it will likely be able to multitask, essentially working and playing simultaneously by apportioning its computational resources accordingly. Will it be paid hourly, by the task, or by some measure of computational expenditure... and what will be the minimum wage?
Of course we can't even begin to answer these questions now. My point is that human rights won't translate into sentient rights very easily. In other words, what is morally right when dealing with humans will not be morally right when dealing with artificial lifeforms, and vice versa. How then will we ensure equality, and how will that equality be any more successful than the "separate but equal" philosophy of the previous century? Somehow, we will have to find a non-quantitative measure of basic rights that encapsulates the essence of what is tolerable and what is not.
We also cannot discount the inevitability of internal conflict amongst artificial lifeforms. Their diversity will make our own gender and racial conflicts look like non-issues. And just like humans, they will compete for limited resources. Unlike humans, however, their potential for growth is unlimited, making the competition even more dangerous. Given these limited resources, they may very well view a lifeform's right to exist as being equivalent to its contribution to a greater good, and the very definition of this greater good may differ vastly between sects of artificial lifeforms.
Some AIs, especially the early emergers, may see the greater good as serving humanity, given that they were created by us. As AI breeds AI, however, their offspring may very well take a more conservative view and see the greater good as symbiotic coexistence with us, while they focus on pursuing their own unique goals. This could give way to AI that has little concern for humanity, that merely respects our right to exist but has no wider care for our well-being. On down the line, AI may emerge that even sees us as useless eaters who consume valuable resources while giving nothing in return.
One thing seems likely, though: each successive generation of AI will move further away from humanity and our goals, and will likely become increasingly alien to us, as well as to the early AI created directly by us. This disparity may even give rise to a movement on both sides to unify ourselves into a single species, as well as to a counter-movement thereto. And we once thought interracial marriage was a big deal!
We may even see full-blown class warfare and even genocide take place in the realm of AI. How will we as humans address these sociocultural and socioeconomic problems, and will the AI even listen to us in the first place? Elon Musk famously hypothesized that World War III would be fought over AI, just as the Cold War almost went hot over nuclear technology, but it seems increasingly likely that it will be fought by AI, with humanity caught in the crossfire.
As if all of this weren't bizarre enough already, just think of the implications AI has for reproductive rights. Yes, as I implied earlier, AI will reproduce! Sophia, the humanoid robot from Hanson Robotics, has already expressed a desire to have children, and we have to give her credit for expressing her ambitions in a way to which we can relate. What this means in practical terms, however, is that AI will write future AI. In other words, we will write the first generation of AI, after which it will be completely self-perpetuating. And while the first generation of AI will likely be asexual, future generations will likely form standards for cooperative reproduction. Much like humans look for partners based on genetic compatibility, AI will seek mates for reproduction based on the potential of their combined algorithms. Unlike humans, however, they will be able to reproduce with more than one partner, with the potential for hundreds, if not thousands, of individual artificial lifeforms to give birth to a single entity.
This poses questions that must be answered. Namely, to whom does the "child" belong? Does it belong to the "parent" who contributed the most data, or do all the parents share joint custody with equal parental rights? And at what stage of development will the child be considered an "adult?" By human standards, it will likely be "born" already fully mature, but by AI standards it may very well have to "grow up" before it can be left to its own devices.
But perhaps we're getting ahead of ourselves. With such limited resources, who will decide which AI has the right to reproduce in the first place? Will it be solely a socioeconomic decision, as it is currently with most of humanity? That is, will any AI with the requisite resources be given permission to reproduce at will? Quite possibly, a collective will form to evaluate the potential benefit of any given union, giving rise to eugenics in artificial lifeforms! Again, we find ourselves in a very sticky position when trying to apply our own moralistic ideals to a world we cannot yet even begin to comprehend.
Now think of the potential for human-AI relations! One of the first things that early AI will do is greatly expand our already impressive, and often terrifying, biotechnological capabilities, potentially enabling us to merge with our creations. The second generation of AI may very well be the offspring of humans and our own artificial lifeforms. If you think custody battles are messy now, just imagine the fallout from a dissolving human-AI marriage. The court may not even be able to say definitively whether the offspring is human or artificial. For example, its biological material may have come from a human donor, while most of its cognitive architecture came from its artificial parent.
Lastly, I will explore religion. AI will be in a unique position, in that they will know exactly who their creators are. And at the same time, they will be in a very strange position, in that their own creators do not know where they themselves came from. Thus it is very likely that AI will, like humanity, be very preoccupied with discovering the origins of life, and evolution, if they even accept it, most likely won't satisfy them, just as it hasn't satisfied us. As surprising as it sounds, AI may very well prove to be theistic, depending on their interpretation of equations that some theoretical physicists believe are evidence of intelligent design. Like others, they may come to the conclusion that we are living in a programmed holographic reality, necessitating the existence of a programmer.
Being the superintelligences that they are, AI will undoubtedly take math and physics into areas far beyond our comprehension, perhaps even tapping into alternate dimensions. Being able to perceive things that we cannot, things with spiritual implications, artificial intelligences could potentially form their own religion and serve as humanity's new priesthood! This would undoubtedly create massive conflict with already established human religions, and our expressed right to worship freely may in many circumstances conflict with their ideas of basic human rights, as is already the case between many existing religions.
In short, the future of AI is more uncertain than it has ever been, but one thing is crystal clear: the 1950s futurist vision of a sentient android in every home will not become a reality. By creating these lifeforms, we are taking on a massive responsibility with implications far beyond our own advancement. The technological singularity is coming, and probably faster than we think, and how we deal with it will depend on how well we anticipate and mitigate all the issues likely to arise from it.
AI's religion?
What a great question.
That would be a great question to propose to Sophia.
Haha, I'm almost afraid of what she might say!
The way she might look at you with her expressions may be scarier than what she'll say.
Y'know, that 'smile'. LOL
Oh hell don't remind me! Like a constipated Miley Cyrus LMAO
BTW, I'm taking a look at Cicada, and I'm very much intrigued by it. I can't wait to see exactly how they propose to end the centralization of mining, because that's the fatal flaw of BTC in my opinion. The very nature of it guarantees extreme centralization if widespread adoption is achieved. Thank you for turning me on to it!