RE: The Moral Machine - an interactive experiment.
That's a great and thought-provoking post @ankapolo! I did the judging and must say these moral dilemmas are always difficult even when you're granted time to think about them, so I'm not sure if AI's ability to think a gazillion times faster should or will make it more ethical than us humans. Speaking of humans: did you notice the experiment says "hoomans"? Just admit you're behind this devilish moral nut-grinder Cope!
I digress... Here's my result after my first try:
As you can see, I hate animals but love females, large people, children, and those at the bottom of the social ladder... Or... I think it's important not to read too much into these results on an individual level; the test is meant to aid in the education of AI, something that's supposed to do the thinking for us.
I agree completely with your assessment that we are not ready to delegate these decisions to a thinking machine as long as we're not clear about our moral behavior as a species. I don't even know if there will ever be a time when it's prudent to do so. When we talk about the moral behavior of human drivers, we're not talking about split-second decisions made during an accident, but about whether the driver was drunk, or whether it wasn't an accident at all but planned and deliberate. And I know that I can think for ages about some of these moral "choices" and never come to the "right" decision, because there simply is no right decision to be made. Things happen, people die, and there's not always a guilty party.
I do support this experiment though, because progress won't be stopped, and if we're about to delegate our thinking and decision-making to AI, what better way to learn than through input from as many diverse real minds as possible?
This is wonderful food for thought @ankapolo, and a great post. Thanks so much 😍 💜
@zyx066 thank you! Your comments always add another layer of thought to consider! This experiment truly demonstrates the need for a universally accepted standard for the value of life... That is really what morality comes down to... It's clear that the value placed on life varies across the globe, and this is the prime issue driving us to conflict, and also one that humanity will have to solve before delegating that power to AI... and I agree that we might never fully come to accept a single standard... perhaps an average of all the individual standards combined will be the best we can ever do...
Also, you made me realize that I forgot to add a very important paragraph that was lost in the spell check... This test might make a person feel embarrassed about their results, because the results don't show what situations you had to judge and all the nuances you had to consider... Sometimes a set is not very balanced and you will end up saving more thieves than you expected, and then it will appear as though you have a preference for criminals... It's important not to view this test as a personal moral evaluation, but rather as input for an AI's moral education. Thank you for pointing this out.