... to even understand why we consider certain judgements to be moral or immoral, I'm not sure how we're supposed to convey that to robots.
The classic example would be the Trolley Problem: there's an out-of-control trolley racing toward four strangers on a track. You're too far away to warn them, but you're close to a diversion switch - throwing it would save the four people, but the one stranger standing on the diversion track would die instead. Would you do it: sacrifice the one to save the four?
Most people say "yes", that that's the moral decision.
Okay, so now you're not next to the switch; you're on a bridge over the track. You still have no way to warn the people on the track. But there's a very fat man standing on the bridge next to you, and if you pushed him off to his death on the track below, his body would stop the trolley. Do you do it?
Most people say "no", and even most of those who say yes seem to struggle with it.
Just what the difference between these two scenarios is - what flips the perceived morality - has long been debated, and all sorts of variants of the problem have been proposed to try to elucidate it: a circular track where the fat man is going to get hit either way but doesn't know it, situations where you know negative things about the fat man, and so forth.

And it's no small issue that any "intelligent robots" in our midst get morality right! Most of us would want the robot to throw the switch, but not to start pushing people off bridges for the greater good. You don't want a robot doctor that, in the course of a routine checkup, discovers a patient has organs that could save the lives of several of its other patients and decides to kill and cut up that patient - sacrificing one to save several.
At least, most people wouldn't want that!