Comment Re:Socialism (Score 1) 71
Yeah, it was cool watching those four $146,000,000/each RS-25 engines return to the landing pad. Stupid capitalists.
Which is peanuts compared to the recurring $500-600k annual salary of a radiologist.
It's not peanuts at all, which is precisely why there is also competition in the medical industry to reduce the number of radiologists and push more work onto larger, centralized, corporately hosted radiology mills.
I don't get the sense that these models are ready to replace radiologists yet
They're not even close.
LLMs outperform doctors in diagnostics. ImageNets do not clearly outperform them*.
but the closer we get the more tempting it'll be for individuals like the CEO quoted in this article. The ROI is massive.
Radiologists today are being "replaced" (in that more work is piled onto fewer radiologists, now augmented by AI)
That is still replacement.
* there are some cases where ImageNets do outperform radiologists, and there are also cases where they perform significantly worse.
VLMs also have a well-noted problem in the way they're trained that leads them to produce reports that are rated quite poorly when evaluated by other radiologists (implying the trainers aren't using radiologists to evaluate the output).
Physicians are also only as good as their training data.
It's inherently different.
The ImageNet is trained to recognize images and pair them with the contextual information provided with the scan.
I.e., it is- at best- distilled from the knowledge of rad techs.
On the other hand, it's pretty easy (i.e. cheap) to train an image classifier on orders of magnitude more cases than any pathologist or radiologist could ever see in their lifetime.
Indeed. Many ImageNets have been trained on more dogs than I will ever see in my lifetime, and will still- a few percent of the time- call them a duck.
I find it funny that you start off talking about how doctors don't stay up on medicine (which is very true), and then point out that these aren't LLMs.
LLMs used in diagnostics regularly outperform doctors for precisely the reason you mentioned.
However, image classifiers match doctors at best, and are sometimes much worse in the case of false positives, most likely due to their inability to consider broader context than the image they're looking at.
This doesn't mean they don't have a place, but it's not really clear what that place is yet.
Studies are mixed. Sometimes doctors augmented with AI do worse (suspected anti-AI bias), sometimes they do considerably better. Sometimes doctors alone outperform AI (critical miss rate on low-sensitivity scans), sometimes they do worse (critical miss rate on high-sensitivity scans).
Doctors are not liable for missing something on a scan. They're liable for negligence in interpreting a scan.
The types of neural networks used in these classifiers are not LLMs.
Correct.
The expense and difficulty of collecting an image set is going to far outweigh the compute time used to train them.
Like any network, how good it is is a mix of how much data you throw at it and how many parameters it has.
Large ImageNets cost millions to train.
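For a rough sense of what "how many parameters" means, the count is just arithmetic over the layer shapes. A minimal sketch, with layer sizes invented purely for illustration (not taken from any real radiology model):

```python
def conv2d_params(in_ch, out_ch, kernel):
    # weights: out_ch * in_ch * kernel * kernel, plus one bias per filter
    return out_ch * in_ch * kernel * kernel + out_ch

def dense_params(n_in, n_out):
    # weights: n_in * n_out, plus one bias per output unit
    return n_in * n_out + n_out

# A toy 2-conv + 1-dense classifier (shapes are hypothetical):
total = (
    conv2d_params(3, 64, 3)      # 1,792
    + conv2d_params(64, 128, 3)  # 73,856
    + dense_params(128, 10)      # 1,290
)
print(total)  # 76938
```

Scale those shapes up to a modern image backbone and you land in the tens or hundreds of millions of parameters, which is where the training bill starts to climb.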
Lol, now you're accusing me of not writing my own posts here? Hilarious!
Not an accusation in the slightest. I was giving you an out for disclaiming authorship of that post.
It is impressively ignorant.
Show evidence. Like I did.
You showed evidence of the popularity of a thing within a tiny, non-representative sample, in a discussion about feature parity. Put scientifically- you showed nothing.
As mentioned, no amount of Claude Code's codebase is going to make you give OpenCode more GitHub stars. Idiots are led by a different metric than code.
As a wise man once said, "Space is big. You just won't believe how vastly, hugely, mind-bogglingly big it is. I mean, you may think it's a long way down the road to the chemist's, but that's just peanuts to space."
Some kind of hypothetical future situation- a "Super Kessler" where even mobile sats cannot possibly dodge the amount of debris up there- is just bad science fiction.
The rest, and ChatGPT's opinion on them, is just padding your word count:
"This is the brain of the operation." "leveraging its dead code elimination for feature flags and its faster startup times." "there's a command system as rich as any IDE."
And of course, they're also simply not fact-based in the slightest.
This is why elementary school is important.
"Virtual" means never knowing where your next byte is coming from.