The "researchers" did not prove anything to do with what the article claims. What the article really proved is that it is impossible for a robot to make an ethical decision, if that ethical decision is based on analyzing source code.
They created a scenario where the "robot" must determine whether a computer program was written correctly or maliciously, and an ethical decision hinges on that: if the program is written correctly, the robot must do one thing, and if it was written maliciously, it must do another. Then they point out that the halting problem makes it impossible to guarantee a verdict on whether the program is correct. And since the program is involved in a life-or-death decision, they conclude that robots can't make life-or-death decisions.
Using that logic, I can prove that a robot can't do anything. Let's try it: I will prove that a robot car cannot decide whether it is safe to make a left turn at an intersection. I do this by imagining a scenario where the software for the traffic light might be written incorrectly. So my robot car must first analyze the traffic light's software, determine whether it is written correctly, and only make the left turn if it is. Since the halting problem shows that it is impossible to create a general-purpose robot car that can analyze the source code of every other piece of software, the car cannot be guaranteed to make the right decision at this intersection. Ergo, robot cars are impossible and we should not make them.
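To spell out the halting-problem step I'm leaning on, here is a minimal sketch in Python of the classic diagonalization. The names `halts` and `contrary` are my own illustration, not anything from the article; the point is just why a general "analyze any source code and decide if it behaves correctly" routine can't exist.

```python
def halts(program, arg) -> bool:
    """Hypothetical oracle: reports whether program(arg) would ever finish.
    No correct, always-terminating implementation of this can exist."""
    raise NotImplementedError

def contrary(program):
    # Built to contradict the oracle: do the opposite of its prediction.
    if halts(program, program):
        while True:        # oracle said "halts", so loop forever
            pass
    return "halted"        # oracle said "loops forever", so halt right away

# Asking halts(contrary, contrary) has no consistent answer: whichever
# verdict the oracle returns, `contrary` does the opposite. That is the
# entire force of the halting-problem step in the researchers' argument.
```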
Actually, all I proved is that a robot can't decide whether it is safe to make a left turn if that decision is based on analyzing the traffic light's source code.
P.S. Yes, I simplified what the halting problem says. It doesn't say the robot absolutely can't analyze the software; it says the analysis itself may never finish, because the software may never halt and the robot has no general way to determine that in advance. I didn't want to go into that subtle difference in my TLDR analysis.
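To make that subtle difference concrete, here's a toy sketch of my own (not from the article) where "programs" are Python generators and each yield counts as one step: the robot can always simulate the target, but simulation only yields a definite answer when the target actually halts.

```python
def halts_within(program, max_steps):
    """Advance `program` (a generator: one yield == one step) for at most
    `max_steps` steps. Returns True if it halted, or None for 'gave up'."""
    it = program()
    for _ in range(max_steps):
        try:
            next(it)            # run one step of the target program
        except StopIteration:
            return True         # target halted: a definite answer
    return None                 # budget spent: still no answer either way

def quick():                    # halts after three steps
    for _ in range(3):
        yield

def forever():                  # never halts
    while True:
        yield

print(halts_within(quick, 1_000))    # True
print(halts_within(forever, 1_000))  # None; no budget ever turns this into "False"
```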