You mention "70 percent", but the numbers can't be related that way from that statement. It does NOT say "70 of the wombs successfully produced babies, and 30 of the wombs had complications."
There's no way to derive a percentage from "number of operations" and "number of babies", so perhaps avoid drawing any conclusions in that direction until other/more numbers are available.
1) one womb can have multiple pregnancies.
2) Each pregnancy may or may not result in a healthy baby.
2a) because of the issues with the transplanted womb
2b) because of issues unrelated to the transplanted womb (e.g. genetics of the mother/father, environmental conditions, etc.)
3) as stated, some wombs may never experience a pregnancy
3a) because there is something wrong with the transplant and it's incapable of doing so
3b) the owner of the womb hasn't had an opportunity to (for whatever reason)
Because of that, it could be that only a handful of the 100 wombs are fully functional (and they are pumping out tons of babies), or nearly all of them are fully functional (but just haven't been used yet). Or it's a mixed bag where the good transplants can carry 100% of pregnancies to term, the bad transplants can only carry 10% of pregnancies to term, and everything in between.
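The point above can be sketched with some hypothetical numbers (all figures invented purely for illustration, not taken from the article):

```python
# Two invented scenarios that produce the SAME total baby count
# from 100 transplanted wombs, with very different success profiles.

# Scenario A: only a handful of wombs work, but each has multiple births.
good_wombs_a = 10
babies_per_good_womb_a = 5
babies_a = good_wombs_a * babies_per_good_womb_a  # 50 babies

# Scenario B: most wombs work fine, but only some have been used so far,
# each carrying a single pregnancy to term.
good_wombs_b = 90
wombs_used_b = 50
babies_per_used_womb_b = 1
babies_b = wombs_used_b * babies_per_used_womb_b  # 50 babies

# Identical baby counts, wildly different numbers of functional wombs:
assert babies_a == babies_b == 50
assert good_wombs_a != good_wombs_b
```

Both scenarios report "50 babies from 100 transplants", yet one implies a 10% functional rate and the other 90%, which is exactly why the baby count alone can't tell you the transplant success rate.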
tl;dr #babies =/= #good wombs.
Eeeeew. Something hit a nerve for sure! How dare anyone insult MiKr0zopht's AI?
I started lurking in 4K enthusiast groups to see if they were all they were cracked up to be. The arguments about the relative quality of various BD/4K releases aren't even the most interesting part.
It turns out that there are a lot of issues with set-top boxes playing particular discs. The discs themselves also seem terribly fussy.
The problem here is that developers can take responsibility for their actions while AI cannot. Humans do make mistakes, and that's OK; best practice is not to just can employees for messing up. Once is a mistake; twice is an HR event. When someone does something dumb, we forgive, but we also insist that meaningful steps are taken to prevent that problem in the future. AI can't really take those steps, because AI can't be accountable for "don't do it again." Taking down production because you dropped a table once is forgivable. Taking it down twice for the same reason is a different matter.
The developer can be accountable. And if HR fails to hold them to account for it, HR is accountable. And if HR isn't held accountable, leadership is. And if leadership isn't held accountable, the board is. And if the board isn't held accountable, the stockholders have some hard decisions to make. And if they choose not to make them, then it wasn't really that big a deal, was it?
But with an AI the option is "we stop using AI" or "we live with the result."
Everyone is so excited about not having to pay software engineers to write code that they've forgotten what engineers actually do. It's less common in the software world but go find a civil engineer or an electrical engineer or an aerospace engineer and follow them around for a week.
At some point, there's going to be a document in front of them laying out how something is going to be built and they're going to be asked to approve it. And when they do that they're taking responsibility for the design. If it falls down, if it catches on fire, or if it crashes into the mountains and kills people, they're the name on the form saying that won't happen. They're responsible.
Claude 4.5 Opus is very impressive, but if it writes a software application that kills people it can't take responsibility. It can't be punished. It can't even really be sued.
I just don't see how we, as a society, can trust fundamentally unaccountable entities to build systems that can do real harm if they go wrong. I suppose the alternative is that Anthropic accepts full legal liability for everything its models do. Their unwillingness to make that move tells you all you probably need to know about their own internal confidence in those models.
"Don't discount flying pigs before you have good air defense." -- jvh@clinet.FI