RE: Keeping systems accountable, machine ethics, value alignment or misalignment
While there is no universal right and wrong, if you hold values and the AI knows your particular values, then it can learn what you would consider "right and wrong". That model would have to incorporate all sorts of things, from culture and social norms to your position in society and the outcomes you expect.
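To make that concrete, here is a minimal toy sketch in Python. It assumes (purely for illustration) that a user's values can be flattened into numeric weights over features of an action; every name and number below is hypothetical, and real value learning would be vastly harder than a weighted sum.

```python
# Toy sketch: judging an action against one user's stated values.
# Assumes values are expressible as weights over action features --
# a huge simplification, used only to illustrate the idea.

# One user's values: positive weight = the user cares about that feature.
user_values = {
    "honesty": 0.9,
    "social_harmony": 0.6,   # stands in for culture / social norms
    "personal_gain": 0.2,    # position in society, expected outcomes
}

def judge(action_features: dict[str, float], values: dict[str, float]) -> str:
    """Score an action by how well its features match the user's values."""
    score = sum(values.get(f, 0.0) * v for f, v in action_features.items())
    return "right" if score >= 0 else "wrong"

# A white lie: dishonest, but preserves harmony and yields a small gain.
white_lie = {"honesty": -1.0, "social_harmony": 0.8, "personal_gain": 0.3}
print(judge(white_lie, user_values))  # -> "wrong" for this honesty-heavy user
```

The same action could come out "right" for a user who weighted social harmony above honesty, which is exactly the point: the judgment is relative to the values the AI has learned, not universal.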
My own approach would be to focus on producing the best outcomes for the individual human and for humanity as a whole, while taking into account the values humans hold dear. There aren't really universal values held by all humans, but there are values held by the consensus of the humans interacting with the AI. For example, if Google search data were used to train an AI, it might be able to infer something like the collective subconscious of humanity from search patterns, though even that is speculative.
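If values could be elicited as weights like the toy example above, that "consensus of humans interacting with the AI" could be caricatured as a simple average across users. The sketch below assumes that same hypothetical representation and is purely illustrative; it says nothing about how you would actually extract values from search data.

```python
# Toy sketch: approximating "consensus values" by averaging the value
# weights of many users interacting with the AI. Hypothetical throughout.
from collections import defaultdict

def consensus(users: list[dict[str, float]]) -> dict[str, float]:
    """Average each value weight across the users who expressed it."""
    totals, counts = defaultdict(float), defaultdict(int)
    for values in users:
        for feature, weight in values.items():
            totals[feature] += weight
            counts[feature] += 1
    return {f: totals[f] / counts[f] for f in totals}

users = [
    {"honesty": 0.9, "privacy": 0.7},
    {"honesty": 0.4, "privacy": 0.9, "novelty": 0.5},
    {"honesty": 0.7, "novelty": 0.1},
]
print(consensus(users))
# -> {'honesty': 0.666..., 'privacy': 0.8, 'novelty': 0.3}
```

Even in this caricature, the hard questions remain: who counts as an interacting user, whether averaging is the right aggregation at all, and whether revealed behavior (like searches) reflects the values people would actually endorse.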