0) We all know that stopping in the middle of the highway is dangerous. But the way the laws are written in most countries, it's almost always your fault if you drive into the rear of another vehicle, especially if that vehicle didn't swerve into your path but merely braked suddenly, or worse, had been stationary for some time.
1) Thus, for legal and liability reasons, robot cars will strictly obey every convincing posted speed limit (even one that is absurdly slow by mistake or thanks to a prankster), and will stick to speeds at which they can brake in time to avoid collisions, or at least fatal collisions, whichever is slower.
2) In most danger situations the robot car will brake and try to come to a stop as quickly as possible, turning on its hazard lights as it does so. That shouldn't be too difficult at those speeds.
3) If people die because of tailgating, it's the tailgater's fault. The same goes if the driver behind fails to stop.
4) If there are hardware/software failures, it's some vendor's fault.
5) If braking won't avoid the problem even at "tortoise speeds", in most cases fancy maneuvers wouldn't either. In the fringe cases where a fancy maneuver would have helped but braking wouldn't, AND the robot car would be at fault for braking, the insurance companies would be more than willing to take those bets.
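The "whichever is slower" rule in point 1 amounts to capping speed at the slower of the posted limit and the fastest speed from which the car can still stop within the distance it can see is clear. A minimal sketch of that cap, using the standard reaction-gap-plus-braking-run stopping model; the function name and the deceleration and reaction-time figures are illustrative assumptions, not taken from any real self-driving stack:

```python
import math

def target_speed(posted_limit_mps: float,
                 clear_distance_m: float,
                 max_decel_mps2: float = 6.0,     # assumed comfortable hard braking
                 reaction_time_s: float = 0.5) -> float:
    """Slower of the posted limit and the fastest speed from which the
    car can still stop within the clear distance ahead.

    Stopping distance model: d = v*t + v^2 / (2*a)
    (travel during reaction time t, then braking at deceleration a).
    Solving the quadratic for v gives the safe speed below.
    """
    a, t = max_decel_mps2, reaction_time_s
    safe_v = a * (-t + math.sqrt(t * t + 2.0 * clear_distance_m / a))
    return min(posted_limit_mps, safe_v)

# With a long clear run, the posted limit governs; with only 50 m clear,
# the braking constraint forces the car well below a 30 m/s limit.
print(target_speed(10.0, 1000.0))
print(target_speed(30.0, 50.0))
```

Note the rule is deliberately conservative: it never exceeds the limit, and it slows further whenever the visible clear distance shrinks, which is exactly the behavior that shifts rear-end liability onto a tailgater.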
The odds of the car being designed to make fancier maneuvers to save lives are practically zero. If I were designing the car, I wouldn't do it: imagine the car getting confused, making some fancy maneuver to "avoid a collision", and killing some little kids. In contrast, if it got confused and simply came to a stop as quickly as possible, any deaths would more likely be someone else's fault.
If you are a human driver, cyclist, or motorcyclist, you had better not tailgate such cars.
Look at the Google car's accident history: most of the accidents were caused by other drivers. Perhaps I'm wrong, but my guess is that tailgating is the reason. Those drivers might still believe the AI car was doing something wrong, but the law wouldn't be on their side.