Some of these solicitations come from "on high", and a contract monitor at NSF was doing some eye-rolling about the notion that you could truly make an industrial robot safe to work alongside humans within its working envelope, or at least his speech inflections over the telephone were suggestive of rolling one's eyes. A research group in Canada offered a critical take on the safety claims for the Universal Robots offering, pointing out that other university people were taking those claims at face value and putting graduate students inside the robot "cage."
A safer robot may need strategies such as "depowering" the robot or offering (as UR does) a depowered "teaching mode", along with control systems that achieve the required accuracy at lower power. Beyond that, there is interest in vision systems and proximity sensors so the robot avoids striking people, along the lines of the sketch below.
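To make the idea concrete, here is a minimal, purely hypothetical sketch of what mode-dependent power limiting plus a sensor-based stop might look like. The mode names, limit values, and the `commanded_speed` function are all invented for illustration; they do not reflect Universal Robots' actual controller or any real safety standard.

```python
# Hypothetical sketch: mode-dependent speed limits plus a proximity stop.
# All names and numbers here are assumptions made for illustration only.

from dataclasses import dataclass
from enum import Enum, auto


class Mode(Enum):
    TEACHING = auto()   # operator hand-guides the arm at reduced power
    AUTOMATIC = auto()  # full production speed, no one expected nearby


@dataclass
class SafetyLimits:
    max_tcp_speed_mm_s: float   # ceiling on tool-center-point speed
    stop_distance_mm: float     # halt if a person is sensed closer than this (0 = no check)


# Assumed limit values, chosen only to make the example concrete.
LIMITS = {
    Mode.TEACHING: SafetyLimits(max_tcp_speed_mm_s=250.0, stop_distance_mm=0.0),
    Mode.AUTOMATIC: SafetyLimits(max_tcp_speed_mm_s=1500.0, stop_distance_mm=500.0),
}


def commanded_speed(mode: Mode, requested_mm_s: float, person_distance_mm: float) -> float:
    """Clamp the requested speed to the mode's limit; stop outright if a
    sensed person is inside the stop distance."""
    limits = LIMITS[mode]
    if limits.stop_distance_mm and person_distance_mm < limits.stop_distance_mm:
        return 0.0
    return min(requested_mm_s, limits.max_tcp_speed_mm_s)


if __name__ == "__main__":
    # In automatic mode, a person sensed at 300 mm forces a stop...
    print(commanded_speed(Mode.AUTOMATIC, 1200.0, person_distance_mm=300.0))  # 0.0
    # ...while teaching mode simply caps the speed instead of stopping.
    print(commanded_speed(Mode.TEACHING, 1200.0, person_distance_mm=300.0))   # 250.0
```

Of course, the whole question raised below is whether you would trust the design and testing behind logic like this with your body on the line.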
But here is the question: a chimp (Pan troglodytes) can tear a person apart, yet a chimp has sensors and a chimp can be trained to be around people. Would you trust that training? Would you rely on it? A robot with enough power to do the required factory tasks has the power to crush a person, but you can depower the robot depending on the operating mode and you can add sensors. Would you trust the algorithm design, the software, and the mechanical safety systems behind such an arrangement enough to enter the robot cage?
Would you trust a self-driving car, which likewise has the power to crush someone? I suppose you might, with enough sensors, algorithms, and testing, but even then you are not guiding a self-driving car by standing right in front of it, as NSF's co-robots solicitation suggests . . . are you?