Interesting topic @lanzjoseg.
I don't view the situation through the equation you present, because I see each of those words ("man," "ethics," "AI") as completely different classes (apples to oranges to pears).
I believe ethics (and morality) pertain only to an individual man's interpretation of his own "self-interest" versus "collective" (or non-self) interest. That is, ethics are a tool that helps him assess his (and others') behavior as it balances between the poles of his own self-interest and the self-interests of other men.
Artificial intelligence cannot apply to men, and it cannot equate to man's intelligence... the two intelligences are different in kind and can never be equated. "Artificial intelligence" will always simply be shorthand for a complex series of cause-and-effect relationships based on a program's "logic." Human intelligence includes something additional that "logic" can never encompass... "observed" self-awareness. Yes, a programmed AI can simulate awareness, and it can simulate observation, but it can never achieve either through true, ungoverned "meta-cognition." This part of my argument is, of course, debatable... and I must admit that it draws from experiential (non-provable) subjectivity and from belief in a separate and individual soul, which AI-governed machines have not (yet) developed.
AIs do not have a "self" (in the human "soul" sense) and are thus more akin to collections of cooperating cells that do not share a central governance... they have no developed "self."
If one makes the counter-argument that an AI does not require a "self" or "soul" to apply "ethics" to its own behavior, I would then say that without a "self," ethics has no function or application, because ethics ARE about how one's behavior balances the benefit of "self" against that of "others" (non-self).
What is "self?" ...if it's stated that a human's sense of self is also artificial, then, ethics as a moral question, again, is irrelevant as humans are then simply governed "programs" made up of complex cause and effect chains as well. Ethics involve a question of choice. We tend to operate under the assumption that humans (with a "soul") have choice, and that AI's, as a result of "program logic" don't truly have choices... and we tend to define ethics and "choices."
/opinion