If they want it to default to showing a diverse spread when context doesn't specify, I think that's fine. It might make me roll my eyes a little bit when that spread inevitably doesn't reflect reality, but we're talking generic humans here, and I recognize that that eye-roll is probably on me. There's nothing wrong with it showing me a spread when I just ask for "people". In fact, it might be laudable.
Where it gets absurd and wrong, of course, is when the diverse spread is enforced even after context is specified. Racially diverse Nazis are getting the headlines here, but even a simple "show me a white scientist" was producing racially diverse images. That's dumb and wrong. And it gets worse when we notice the asymmetry: it does enforce the correct racial context for non-white subjects (e.g. it won't show you white Zulu warriors, even when you ask for them, because that wouldn't reflect reality).
This feels like yet another instance where efforts to be sensitive, non-racist, etc. actually had the opposite effect. It's pretty funny. Frustrating and cringe-inducing, but funny.
As for avoiding "dangerous" prompts ... I'm also leery of such efforts, but it's easy to think of undesirable uses. Let's say AI gets good enough that its images are indistinguishable from reality for a large share of people. If a neo-Nazi (to borrow the hate group du jour) uses the tool to generate racially diverse Nazis to help promote some sort of whackadoo agenda, I think that can reasonably be considered a "dangerous" use. It's a slippery slope, though, and it's all but certain that "pirate" AIs without these safety features will proliferate regardless of anyone's efforts. I'm not sure what the best answer is. I doubt anyone does.