Well, I'm one of those people - my degree is in robotics, and yes, I've been whacked, hard, by robots while working with them. The second time taught me to be very careful, as it could have killed me if things had gone only a little differently.
It's hard to do a complete lock-out/tag-out type process when you're testing the robot, or more commonly, the interactions between the various devices in the workcell. (No, I'm not saying you shouldn't lock and tag...) There are some things that are much more easily debugged from up close - the danger comes when you *think* you're in a "safe" spot in the work envelope, but one or another of the various programs running has other ideas. (Keep in mind that the average robotic workcell may have a dozen or more controllers, each running its own control logic that is mostly or entirely independent of the others.)
In my experience, most robot-related accidents (which thankfully, only rarely lead to serious injury or death) are due to a combination of human error AND software error. (Hardware failures are both far less common and far less likely to result in injury.) As with plane crashes, the root cause may be attributed to human error, but there is almost always a set of contributing factors and conditions that stack up to produce a deadly accident. (And as with SCUBA diving, you ALWAYS need a buddy - but in this case, one with the big red switch in his hand.)
There are several bigger problems that need fixing. First, 20th-century robot technology (which is still practically all that's in use) builds robots that are stupid - really, really stupid. Unlike the robots of SciFi, they have no concept of people or other things, and only the most rudimentary idea of themselves. Generally, they can't feel at all (except *maybe* at the end effector - the "hand"), and almost none of them can independently avoid collisions even with other machines and static objects in the workcell, much less with unpredictable and strangely-shaped things like people.
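To make "can't avoid even static objects" concrete, here's a toy sketch (in Python, with everything - the keep-out boxes, the waypoints, the function names - made up for illustration) of the kind of minimal static collision check that most deployed controllers simply don't perform. A real system would need swept-volume checks over the whole arm and the whole path, not just Cartesian endpoints:

```python
# Toy sketch, not any real controller API: reject commanded waypoints that
# land inside known keep-out volumes in the workcell.
from dataclasses import dataclass

@dataclass
class Box:
    """Axis-aligned keep-out volume in the workcell frame (metres)."""
    lo: tuple  # (x, y, z) minimum corner
    hi: tuple  # (x, y, z) maximum corner

    def contains(self, p):
        return all(l <= c <= h for l, c, h in zip(self.lo, p, self.hi))

# Static obstacles we happen to know about (hypothetical geometry).
KEEP_OUT = [
    Box(lo=(0.4, -0.2, 0.0), hi=(0.7, 0.2, 0.5)),   # welding fixture
    Box(lo=(-0.1, 0.8, 0.0), hi=(1.2, 1.1, 0.9)),   # conveyor
]

def waypoint_is_safe(p):
    """True if a commanded Cartesian waypoint lies outside every keep-out box."""
    return not any(box.contains(p) for box in KEEP_OUT)

if __name__ == "__main__":
    for wp in [(0.5, 0.0, 0.3), (0.9, 0.5, 0.2)]:
        print(wp, "OK" if waypoint_is_safe(wp) else "REJECT: inside keep-out volume")
```

Even this trivial endpoint test is more world-awareness than most installed arms have; they will cheerfully drive through anything that happens to be in the taught path.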
Giving robots the ability to feel or detect impact (via skin-type force sensing) would go a long way, but the programming would have to catch up, too, so that there are good places to hang autonomic or low-level, high-importance safety loops. (BTW, this sort of multi-layered control scheme - the subsumption architecture - was what MIT's Rodney Brooks was originally working on at the AI Lab before he got seduced by shiny things. His early papers are still surprisingly relevant.) The vast majority of robots today still run what is more or less a series of GOTO instructions in threespace, sprinkled with conditionals, with little to no ability to do their own path planning or to react to anything they haven't been specifically preprogrammed to deal with.
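As a rough illustration (not anyone's real API - the sensor read, the motion step, the waypoint list, and the force limit are all hypothetical), here's what hanging a low-level safety loop above a dumb "GOTOs in threespace" program might look like, in the spirit of Brooks' layered approach: a reflex layer checks a contact-force signal every control tick and can shut the scripted motion layer down before it finishes its move list.

```python
# Minimal sketch of a layered ("subsumption"-style) control loop. In a real
# controller the reflex layer would run at a much higher rate, in certified
# safety hardware, not in application code like this.
import random
import time

FORCE_LIMIT_N = 50.0   # contact force beyond which we assume a collision
WAYPOINTS = [(0.3, 0.0, 0.4), (0.5, 0.1, 0.4), (0.5, 0.1, 0.2)]  # the "GOTOs in threespace"

def read_wrist_force():
    """Stand-in for a real force/torque or skin sensor read."""
    return random.uniform(0.0, 60.0)

def move_toward(target):
    """Stand-in for one small motion increment toward a Cartesian target."""
    print(f"  stepping toward {target}")

def emergency_stop():
    print("  REFLEX LAYER: force limit exceeded - motion halted, brakes on")

def run():
    for target in WAYPOINTS:          # the scripted, preprogrammed layer
        for _ in range(3):            # a few control ticks per waypoint
            # The reflex layer runs first every tick and can subsume the motion layer.
            if read_wrist_force() > FORCE_LIMIT_N:
                emergency_stop()
                return
            move_toward(target)       # motion layer only runs if the reflex layer allows it
            time.sleep(0.01)

if __name__ == "__main__":
    run()
```

The point isn't the toy code - it's that the scripted layer never has to know the reflex layer exists, which is exactly the kind of structure today's point-to-point robot programs give you nowhere to attach.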
As for fixing blame - that's really hard, and very situational. (If it's a software problem, is it due to insufficient safeguards in the underlying system, insufficient care by the implementor, or something that was reasonably unexpected?) Even knowing the full story (which obviously we don't here), it can be very difficult to sort out who is (or should be) responsible for what - especially when the law may not always be congruent with expectations. In general though, it's hardly fair to hold manufacturers responsible for unwise or insufficiently careful use of products that are known to be potentially dangerous. I often use a variant of this quote humorously to refer to Unix/Linux, but it's literally true when applied to robots: "Keep in mind that robots are power tools. And power tools can kill."