Some argue that it is possible to give machines like robots human-like emotions, given the right motivation, passion, and knowledge. I think, however, that this would be a terrible plan if we want to use them as servants and friends.
Human reasoning operates primarily on the collection of ideas of which the person is immediately conscious; robots cannot do this.
We do not want robots to form ideas in ways we cannot control. Robots that must react against human behavior considered harmful should include such reactions in their goal structures, prioritized alongside their other goals and aims, rather than be driven by anger and pain. Undeniably, we humans tell ourselves to react logically to danger, insult, and injury; "panic" is our name for reacting to these things without thinking logically. We want perfect robots that do not panic in the face of danger, but the closest we could come to making them perfect is giving them feelings: happiness, joy, love, and respect, but also the negative side: hate, greed, anger, remorse, and pain.
Putting such a mechanism of emotions in a robot is certainly feasible. It could be done by maintaining a few numerical variables that are tweaked and adjusted, for example levels of fear, anger, and sorrow, which should not be difficult with today's technology. However, human-like emotional structure is not an automatic result of intelligence. For a robot or machine to have human emotions, it would have to think like a human, and being the creatures humans are, our actions toward other living things are more or less unpredictable.
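The numerical-variable mechanism described above can be sketched as follows. This is a minimal illustration only; the class name, the choice of emotions, the clamping rule, and the decay factor are all assumptions made for the example, not a standard or recommended design:

```python
# Minimal sketch of the "numerical emotion variables" idea described above.
# The emotion names, update rule, and decay factor are illustrative
# assumptions, not an established design.

class EmotionState:
    def __init__(self):
        # Each emotion is a level clamped to the range [0.0, 1.0].
        self.levels = {"fear": 0.0, "anger": 0.0, "sorrow": 0.0}

    def adjust(self, emotion, delta):
        """Tweak one emotion level by delta, clamping to [0, 1]."""
        level = self.levels[emotion] + delta
        self.levels[emotion] = max(0.0, min(1.0, level))

    def decay(self, factor=0.9):
        """Let all emotion levels fade gradually toward zero over time."""
        for emotion in self.levels:
            self.levels[emotion] *= factor


state = EmotionState()
state.adjust("fear", 0.5)   # a threatening event raises the fear level
state.decay()               # all levels fade a little on the next step
```

The point of the sketch is that the bookkeeping itself is trivial; the hard, unsolved part is deciding how these variables should influence the robot's reasoning, which is exactly where human-like unpredictability would enter.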
It is also practically important to avoid making robots that are reasonable targets for either human sympathy or dislike.