Comment Re:Discovery Brings Us Closer Than Ever (Score 1) 40

My level of pessimism about things like regrowing limbs has declined a lot in recent years. I mean, there's literally a treatment to regrow whole teeth in human clinical trials right now in Japan, after it passed earlier trials in mice and ferrets.

In the past, "medicine" was primarily small molecules, or at best preexisting proteins. But we've entered an era where we can create arbitrary proteins to target other proteins, or to control gene expression, or all sorts of other things; the level of complexity open to us today is vastly higher than it used to be. And at the same time, our level of understanding about the machinery of bodily development has also been taking off. So it will no longer come across as such a huge shock to me if we get to the point where we can regrow body parts lost to accidents, to cancer, etc etc.

Comment Re:Checks (Score 1) 77

Whether someone is "curable" or not doesn't affect the GP's point. A friend of mine has ALS. He faced nonstop pressure from doctors to choose to kill himself. Believe it or not, being diagnosed with an incurable disease doesn't suddenly make you wish you weren't alive. He kept pushing back (often biting back what he wanted to say, which was "If I were YOU, I'd want to die too."). He also had to fight doctors on his treatment (for example, their resistance to cough machines, which have basically stopped him from drowning in his own mucus), implement extreme backup systems for his life-support equipment (he's a nuclear safety engineer), and struggle nonstop to get his nurses to do their jobs right and pay attention to the warning sirens (thanks to them, he has a life-threatening episode every couple of months, sometimes to the point of passing out from lack of air).

But he's gotten to see his daughter grow up, and she's grown up with a father. He's been alive for something like 12 years since his diagnosis, a decade fully paralyzed, and is hoping to outlive the doctor who told him he was going to die within a year and kept pushing him to die. He's basically online 24/7 thanks to an eye tracker, recently resumed work as an advisor to a nuclear startup, and is constantly designing (in CAD**) and "building" things (his father and paid labour function as his hands; he views the world outside his room through security cameras).

He misses food and getting to build things himself, and has drifted apart from old friends due to not being able to "meet up", but compared to not being alive, there was just no choice. Yet so many people pressured him over the years to kill himself. And he finds it maddening how many ALS patients give in to this pressure from their doctors, believing that it's impossible to live a decent life with ALS, and choose to die even though they don't really want to.

And - this must be stressed - medical institutions have an incentive to encourage ALS patients to die, because long-term care for ALS patients is very expensive; someone must be on call 24/7. So while they present it as "just looking after your best interests", it's really in their interest for patients to choose to die.

(1 in every 400 people will develop ALS during their lifetime, so this is not some rare occurrence. As a side note, for a disease this common, it's surprising how little funding goes into finding a cure.)

** Precision mouse control is difficult for him, so he often designs shapes in text, sometimes with Python scripts if I remember correctly.
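
I don't know which tool he actually uses, but as a rough illustration of what "designing shapes in text" can look like, here's a minimal sketch using CadQuery, a Python library for script-driven CAD. The part and all dimensions are made up for the example:

```python
# A minimal script-driven CAD sketch using CadQuery.
# Hypothetical part and dimensions -- just to show that a solid model
# can be defined entirely in text, no precision mouse work required.
import cadquery as cq

# A 40 x 30 x 10 mm plate with a centered 8 mm through-hole
# and 3 mm holes near the corners, expressed as chained operations.
plate = (
    cq.Workplane("XY")
    .box(40, 30, 10)                     # base solid
    .faces(">Z").workplane()             # work on the top face
    .hole(8)                             # centered through-hole
    .rect(30, 20, forConstruction=True)  # construction rectangle for corners
    .vertices()
    .hole(3)                             # hole at each corner vertex
)

cq.exporters.export(plate, "plate.stl")  # export for printing/machining
```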

Comment Re:They will panic... (Score 1) 53

Also, the whole point of VMWare is to save money off of buying the hardware. If the price gets high enough that it's cheaper to just buy the hardware, what's the point of using it at all?

Well, hardware consolidation is a use case, sure, but I think nowadays it's more about redundancy, fault tolerance, and rapid deployment/decommissioning. If you are doing it right, your "OS" boot volumes should be considered disposable, but a lot of shops do it wrong and want the OS volumes hosted in centralized storage, which is much easier with a virtual machine approach (yes, you can SAN/iSCSI boot, but it's not very appealing). A concrete sketch of "disposable" is below.
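
To make "disposable" concrete: with a hypervisor API, you can stamp whole machines out and tear them down programmatically. Here's a minimal sketch using the libvirt Python bindings - the domain name, image path, and sizing are made-up placeholders, not anyone's actual setup:

```python
# Minimal sketch: treat a VM (and its OS volume) as disposable via libvirt.
# Names, paths, and sizes are hypothetical placeholders.
import libvirt

DOMAIN_XML = """
<domain type='kvm'>
  <name>web-scratch-01</name>
  <memory unit='MiB'>2048</memory>
  <vcpu>2</vcpu>
  <os><type arch='x86_64'>hvm</type></os>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/web-scratch-01.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
  </devices>
</domain>
"""

conn = libvirt.open("qemu:///system")  # connect to the local hypervisor

dom = conn.defineXML(DOMAIN_XML)       # register the throwaway domain
dom.create()                           # boot it

# ... run the workload ...

dom.destroy()                          # hard power-off
dom.undefine()                         # and forget it ever existed
conn.close()
```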

Of course your first point stands: VMWare has competition with adequate capability, and until now VMWare could largely get by because customers were too lazy to move and the price wasn't high enough to make them look hard at the options. It's not mainframe-style lock-in, where the porting effort is supremely daunting, though your point that they don't need the lock-in to last long for it to have been worth it also stands.

Comment Re:greedy fucking liars!! (Score 1) 53

Of course, the nice meaty IBM locked-in ecosystem is far stickier than VMWare's.

That replacement for a mainframe - can it run exactly the same software? Generally not; it has to be ported, and porting is risk.

For VMWare, the replacements can run the exact same applications (the processor architecture and software stack have nothing to do with VMWare's part of the solution). A customer may be *somewhat* stickier if they bought into VMWare-centric solutions with partners, but as they migrate to newer hardware platforms, they can comfortably look at alternatives without terror that their applications are doomed if they try.

There's some friction against migration, of course, but nowhere near what the mainframe enjoys.

Comment Re:Translation (Score 1) 53

Reading the article and his choice of words: sure, some customers get a negotiated break, but the key is his use of the phrase:
complaints "don't play out"

What he means is that customers complain, but he considers the complaints invalid because the customer isn't using as much of the feature set as they *could* - they are getting what they paid for, even if it's useless to them.

He then goes on to either fabricate or cherry-pick a few examples of what a customer might say upon recognizing what fools they have been and how much more they could get out of their VMware purchase.

Comment Re:I Disagree (Score 2) 69

Well, yes -- the lies and the exaggerations are a problem. But even if you *discount* the lies and exaggerations, they're not *all of the problem*.

I have no reason to believe this particular individual is a liar, so I'm inclined to entertain his argument as being offered in good faith. That doesn't mean I necessarily have to buy into it. I'm also allowed to have *degrees* of belief; while the gentleman has *a* point, that doesn't mean there aren't other points to make.

That's where I am on his point. I think he's absolutely right, that LLMs don't have to be a stepping stone to AGI to be useful. Nor do I doubt they *are* useful. But I don't think we fully understand the consequences of embracing them and replacing so many people with them. The dangers of thoughtless AI adoption arise in that very gap between what LLMs do and what a sound step toward AGI ought to do.

LLMs, as I understand them, generate plausible-sounding responses to prompts; in fact, with the enormous datasets they have been trained on, they sound plausible to a *superhuman* degree. The gap between "accurately reasoned" and "looks really plausible" is a big, serious one. To be fair, *humans* do this too -- satisfy their bosses with plausible-sounding but not reasoned responses -- but the fact that these systems are better at bullshitting than humans isn't a good thing.

On top of this, the organizations developing these things aren't in the business of making the world a better place -- or if they are in that business, they'd rather not be. They're making a product, and to make that product attractive their models *clearly* strive to give the user an answer that he will find acceptable, which is also dangerous in a system that generates plausible but not-properly-reasoned responses. Most of them rather transparently flatter their users, which sets my teeth on edge, precisely because it is designed to manipulate my faith in responses which aren't necessarily defensible.

In the hands of people increasingly working in isolation from other humans with differing points of view, systems which don't actually reason but are superhumanly believable are extremely dangerous, in my opinion. LLMs may be the most potent agent of confirmation bias ever devised. Now, I do think these dangers can be addressed and mitigated to some degree, but the question is: will they be, in a race to capture a new and incalculably valuable market where decision-makers, both vendors and consumers, aren't necessarily focused on the welfare of humanity?

Comment Re:It almost writes itself. (Score 3, Insightful) 55

I don't think there's anything wrong with those sorts of general observations (I mean, who remembers dozens of phone numbers anymore now that we all have smartphones?), but that said, this non-peer-reviewed study has an awful lot of problems.

We can focus on the silly, embarrassing mistakes: their methodology for suppressing AI answers on Google was to append "-ai" to the search string, and the author insisted to the press that AI summaries mentioning the model used were a hallucination, when the paper itself says what model was used.

Or the style issues: how deeply unprofessional the paper is (such as the "how to read this paper" section), how hyped-up the language is, or the (nonfunctional) ploy to try to trick LLMs summarizing the paper.

Or we can focus on the more serious stuff: the sample size of the critical Section 4 was a mere 9 people, all self-selected, so basically zero statistical power; there's so much EEG data that false positives are basically guaranteed, yet they say almost nothing about the FDR correction meant to control for them (see the sketch below); essay writers were given far too little time and put under time pressure, all but assuring that LLM users would be copy-pasting rather than engaging with the material; they misunderstand the implications of dDTF; and there was a significant blinding failure - the teachers rating the essays could tell which ones were AI-generated (combined with the known bias that content believed to be AI-made gets rated lower), with no normalization for which essays they believed were AI. And so on.
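
For readers unfamiliar with the multiple-comparisons point: run enough tests and some will come up "significant" by pure chance, which is exactly why the FDR correction matters. A quick illustrative sketch in Python, on synthetic null data (nothing to do with the study's actual measurements):

```python
# Why thousands of EEG comparisons guarantee false positives,
# and what a Benjamini-Hochberg FDR correction does about it.
# Synthetic data only -- every "effect" here is pure noise.
import numpy as np
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)

n_tests = 2000                         # e.g., electrodes x bands x conditions
p_values = rng.uniform(size=n_tests)   # null data: no real effects at all

naive_hits = np.sum(p_values < 0.05)
print(f"Naive p<0.05 'findings' from pure noise: {naive_hits}")  # ~100 expected

# Benjamini-Hochberg false discovery rate correction
reject, p_adjusted, _, _ = multipletests(p_values, alpha=0.05, method="fdr_bh")
print(f"Findings surviving FDR correction: {np.sum(reject)}")    # ~0 expected
```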

But honestly, I'd say my biggest issue is with the general concept. They frame everything as "cognitive debt", that is, any decline in brain activity is treated as adverse. The alternative viewpoint - that this represents an increase in *cognitive efficiency* by removing extraneous load and allowing the brain to focus on core analysis - is not once considered.

To be fair, I've briefly talked with the lead author, and she took the critiques very well and was already familiar with some of them (for example, she knew her sample size was far too small), and was frustrated with some of the press coverage hyping it up like "LLMs cause brain damage!!!", which wasn't at all what she was trying to convey. Let's remember that preprints like this haven't yet gone through peer review, and - in this case - I'm sure she'll improve the work with time.

Comment Re:Sorry I just woke up… (Score 3, Interesting) 9

Doesn't ANYBODY but me remember that "Napster" was actually RealNetworks? You know, the old Real.com that was the Internet's first at-scale commercial streamer? Real became Rhapsody for several years. Rhapsody had no name recognition, so they bought the Napster name from its owners... BEST BUY.

It gets weirder. Rhapsody had been Sonos' partner streaming service - and Rhapsody is also... I HEART RADIO. Now the whole Napster lot got dumped in the lap of venture capital vultures.
