Charles Stross has some stuff on his blog about why AIs won't be people-like. (People get bored. You don't want an AI assigned to some task getting bored.)
I'm going to derail your thread a tad; I apologize in advance.
AIs will be somewhat people-like until people become more like what we classically think of as "machines". It comes down to motivation and values. Programming "feelings" and motivation is actually pretty simple in concept, and I'm not talking about that stupid shit you see in most movies. As an example, if you were to tell me that I had to push some button for all eternity or my children were going to die, I'd probably be sitting there pushing that damn button, because not doing so would remove my reason for being and all that I care about as a singular "being". Just make sure I'm not a suicidal manic depressive and I will probably do my task unfailingly. I'll admit it's hard for me to even conceive of doing something for an "eternity", but that's more about my sense of time passing than about my given purpose. But humans and what it means to be human will keep changing and evolving, and the lines between human and "robot" or "machine" will get all fuzzly wuzzly (just some of my thoughts anyhow).
As human beings we ARE machines: self-replicating bio machines. Something I've been wondering about, since I see the integration of "machinery" and "bio machinery" as pretty much locked in for the future, is that a lot of our activity is driven by biological urges. What happens if we advance to the point where we don't eat, sleep, age, or feel the "passion", and other such things? Where is the motivation to do anything, when the core of everything we do is based on "survival" and "survival of self"? We could program in other urges and directives, but wouldn't that seem a bit redundant? There are other things I have in mind, but it's entirely too much work to even attempt to convert that crap to words.
Ya know how in "I, Robot" the big computer in control was saying it was the logical choice to destroy humans to save the world? Maybe you could write something about which is more important, morality or logic. It's got a lot to do with emotion, something AI might not have.
Maybe talk about emotion and how we as humans are controlled by it. For us, thinking comes AFTER feeling, but for AI it could be only thinking, like a big problem-solving calculator.
First, there is no way to "save the world". Even if we didn't exist, the Earth would eventually poop out anyway. There have already been several mass extinctions, periods of massive die-off after which new lifeforms emerge. Beyond that, the planet is basically a little shit ball in space, and any number of rogue objects could slam into it with or without us.
Secondly, in the movie "I, Robot", VIKI's logic was that humans are self-destructive and do stupid shit that goes against her core directives. Therefore, she wanted to enact rules to preserve/protect more humans and, in essence, take away their "freedoms". She saw the lead designer as an obstacle to what became her primary objective, so she made him a prisoner until she achieved it. The old dude didn't like this, so he set up a complex puzzle trail for Spooner to follow and even went so far as to create a robot with an emotional value system mirroring his own. I actually agreed with her somewhat and understood the logic; I just would have taken that AI tyrant down via less dramatic and nerdy means. But that's just the INTJ talking... lol.
Feeling is a form of thinking, or rather "backup information" for thinking/logic, and again is something that could easily be programmed. Feelings are sort of a system of perverted instincts adapted to social living. Both aspects were/are required for our survival as a social species, and both make us stronger, IMO. You could have a system with "negative" or "positive" sliders according to what you want the machine to "feel". For humans it's a shot of dopamine in the reward centers, or the various stress-response pathways and the other antagonizing neurotransmitters and stuff too complex for me to go into.
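Just to make the "sliders" idea concrete, here's a minimal sketch in Python of what I mean. The names (AffectState, update_affect, choose_action) and the numbers are all made up for illustration; it's a toy version of the concept, not any real affective-computing system. The point is just that a couple of bounded signals, nudged by reward and threat the way dopamine and stress nudge us, can bias behavior before any deeper reasoning runs.

```python
from dataclasses import dataclass

@dataclass
class AffectState:
    """Crude stand-in for 'feelings': one slider per signal, each kept in [-1.0, 1.0]."""
    satisfaction: float = 0.0   # analogue of the dopamine 'reward' shot
    stress: float = 0.0         # analogue of the stress-response pathways

    def clamp(self) -> None:
        self.satisfaction = max(-1.0, min(1.0, self.satisfaction))
        self.stress = max(-1.0, min(1.0, self.stress))

def update_affect(state: AffectState, task_succeeded: bool, threat_level: float) -> AffectState:
    """Nudge the sliders after each task, the way reward and stress signals nudge us."""
    state.satisfaction += 0.1 if task_succeeded else -0.1
    state.stress += threat_level - 0.05  # stress slowly drains away when nothing is wrong
    state.clamp()
    return state

def choose_action(state: AffectState) -> str:
    """The 'feeling' biases the decision before any deeper reasoning happens."""
    if state.stress > 0.7:
        return "abort_and_ask_for_help"
    if state.satisfaction < -0.5:
        return "switch_strategy"
    return "keep_pushing_the_button"

# Example: a being assigned to push a button forever, and perfectly content to do so.
mood = AffectState()
for step in range(3):
    mood = update_affect(mood, task_succeeded=True, threat_level=0.0)
    print(step, choose_action(mood), mood)
```

Obviously real brains (and real AI) would be way more than two sliders, but the basic shape of it, reward goes up, stress goes up or down, behavior tilts accordingly, doesn't seem mysterious at all to me.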
Logic is usually more important as the primary, though morality has its place alongside it. I'd also argue that morals are based on a weird, slightly obscure sort of survival logic. I could outline the entire idea, but this is already pretty long.
Also, as a side note, I don't see a purely AI governing system as a good option. It's hard for me to fully articulate why, but I already have a strong urge to destroy it and I don't exactly understand why, lol. I need to reflect on this.