RE: Keeping systems accountable, machine ethics, value alignment or misalignment

in #philosophy • 7 years ago (edited)

It's true that hard-coding morality is likely the wrong approach to the problem, but we are going to have to find a way to teach our AIs how to tell right from wrong, and why something is right or wrong in the first place.

It's a difficult issue to address with programming, but I believe teaching a machine to understand emotions is the first step toward understanding morality. Emotional and social consequences help drive the rules of social morality. If a machine were able to feel hurtful emotions, then perhaps it could find value in morality and ethics.
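
To make the idea concrete, here is a minimal sketch (not anyone's actual proposal) of one way "emotional consequences" could enter a machine's decision-making: score each candidate action by its task value minus a penalty for predicted emotional harm. The actions, scores, and harm estimates below are all invented for illustration.

```python
# Toy sketch: penalize candidate actions by predicted emotional harm.
# All actions, task values, and hurt estimates are hypothetical.

def choose_action(actions, harm_weight=2.0):
    """Pick the action with the best task value after penalizing predicted hurt."""
    def score(action):
        return action["task_value"] - harm_weight * action["predicted_hurt"]
    return max(actions, key=score)

candidate_actions = [
    {"name": "blunt refusal",  "task_value": 1.0, "predicted_hurt": 0.8},
    {"name": "polite refusal", "task_value": 0.9, "predicted_hurt": 0.1},
    {"name": "ignore request", "task_value": 0.2, "predicted_hurt": 0.5},
]

best = choose_action(candidate_actions)
print(best["name"])  # -> "polite refusal" with these toy numbers
```

Of course, the hard part this sketch skips is exactly what the comment points at: where the `predicted_hurt` numbers would come from in the first place.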


While there is no universal right and wrong, if you have values and the AI knows your particular values, then it can learn your expected "right and wrong". This would have to include all sorts of things, from culture to social norms to position in society to expected outcomes.
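
As a toy illustration of what "learning your particular values" might look like, the sketch below treats it as ordinary supervised classification over situation features like the ones just mentioned (norms, harm, status). The feature names, the data, and the choice of scikit-learn's LogisticRegression are my assumptions for illustration, not an established method.

```python
# Toy sketch: learn one user's "right/wrong" judgments as a classifier
# over situation features. Features, data, and labels are all hypothetical.
from sklearn.linear_model import LogisticRegression

# Each row encodes a situation by hypothetical contextual scores in [0, 1]:
# [conformity_to_social_norms, expected_harm, expected_benefit, actor_status]
situations = [
    [0.9, 0.1, 0.8, 0.5],
    [0.2, 0.9, 0.1, 0.5],
    [0.7, 0.2, 0.6, 0.3],
    [0.3, 0.8, 0.2, 0.7],
]
labels = [1, 0, 1, 0]  # 1 = this user judged it "right", 0 = "wrong"

# Fit a model of this particular user's judgments.
model = LogisticRegression().fit(situations, labels)

# Estimate how the user would judge an unseen situation.
new_situation = [[0.6, 0.3, 0.5, 0.5]]
print(model.predict_proba(new_situation)[0][1])  # P(user calls it "right")
```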

My own approach would be to focus on producing the best outcomes for the human and for humanity as a whole, while taking into account the values humans hold dear. There aren't really universal values held by all humans, but there are values held by the consensus of humans interacting with the AI. So, for example, if Google search results were training an AI, then the AI might be able to figure out what the subconscious of humanity is from the search patterns, but even this is speculative.
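
In the same speculative spirit, here is an equally speculative toy: estimate a "consensus value profile" for a group of users by counting value-laden terms in their interaction logs. The queries and the word-to-value lexicon are made up, and a real system would need far more than keyword counting.

```python
# Speculative toy: infer collective values from interaction logs by
# counting value-laden terms. Queries and lexicon are invented.
from collections import Counter

queries = [
    "how to help a neighbor in need",
    "is it fair to break a promise",
    "protect family from harm",
    "fair wages debate",
    "help homeless shelters near me",
]

# Hypothetical lexicon mapping words to broad value categories.
value_lexicon = {
    "help": "care", "protect": "care", "harm": "care",
    "fair": "fairness", "promise": "fairness", "wages": "fairness",
}

counts = Counter()
for query in queries:
    for word in query.split():
        if word in value_lexicon:
            counts[value_lexicon[word]] += 1

# A crude "consensus value profile" for this group of users.
total = sum(counts.values())
for value, n in counts.most_common():
    print(f"{value}: {n / total:.0%}")
```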
