
Comment Re: does it, though? (Score 1) 197

That is true but also beside the point. Communicating like "a human" is the point here. WHICH human, exactly? We already have problems with hallucinations. If we now train them on huge data sets intentionally built around the human habit of saying the opposite of what you mean, we're adding another layer of problems. Maybe get the other ones solved first?

Comment Re:THEN STOP USING IT! (Score 1) 22

This is a common pattern on Slashdot. People saying, "Why doesn't everyone switch to <completely inadequate substitute>?" and getting moderated up. The answer is usually that it doesn't do what people need. There are alternatives to vSphere, but VirtualBox definitely isn't one of them.

Comment does it, though? (Score 1) 197

"We Politely Insist: Your LLM Must Learn the Persian Art of Taarof"

While that might be an interesting technical challenge, one has to wonder why. Just because something is "culture" doesn't mean it should be copied. Slavery was part of human culture for countless millennia. To the point where we haven't even gotten around to updating our "holy books", which all treat it as something perfectly normal. That's how normal slavery used to be.

(for the braindead: No, I'm not comparing Taarof to slavery. I'm just making a point with an extreme example.)

The problem is unintended consequences. In order to teach an LLM Taarof, you have to teach it to lie, to say things whose words don't carry the intended meaning, and to hear something different from what the user actually says. Our current level of AI already has enough problems as it is. Do we really want to teach it to lie and to misread? Just because some people made that part of their culture?

Instead of treating LLMs like humans, how about just treating them as the machines they are? I'm pretty sure the Persians don't expect their light switches to haggle over whether to turn on the light or not, right? I stand corrected if light switches in Iran only turn on after toggling them at least three times, but I don't think so. In other words: This cultural expectation only extends to humans. Maybe just let the people complaining know that AIs are not actually human?

Comment Re:kinda looks like a cash grab? (Score 1) 23

The money isn't being seized. The accounts are being frozen until you show up at a branch in person and register biometrics. The deadline for requiring biometric verification for all transactions over a (relatively low) threshold value is arriving, so people who haven't done this are losing the ability to make payments. This primarily affects people living outside the country who access their accounts entirely through the banks' web sites, since they'll need to make a trip to visit a bank branch. But the government isn't taking the money; it's treated just like any other inactive account.

Comment Re:You are once again responding to a bot (Score 1) 109

He's been claiming for months now that someone has trained an LLM on his comments. Typically it's an "anonymous coward" comment written in his voice, with his signature link pasted in, replying to one of his logged-in comments with his name on it. I guess this time he forgot to choose the "anonymous" option, so he's claiming a comment with his name on it is a bot impersonating him. I never really believed his story. No one would put the energy into impersonating him.

Comment He's replying to a comment you posted logged in (Score 1) 109

The comment he replied to was posted from your account. It isn't an anonymous coward replying to you, as is usually the case when you accuse someone of training an LLM on your comments. I guess you forgot to choose the anonymous radio button this time. I always thought your story about someone training an LLM on your comments seemed a bit implausible, and this confirms it: you post unhinged rants, then claim it's someone impersonating you.

Comment Re:Seems healthy. (Score 1) 26

I can see the argument that Nvidia has no obvious advantage in LLMs that would make them want to set up their own operation; it's basically everything else about the situation that would make me jumpy if I had bet on Nvidia.

"Investing $100 billion in OpenAI's spend $100 billion on Nvidia stuff initiative" sounds, at worst, like a slightly more legal version of the trick where you shuffle stock around between business units or stuff the channel and book that as sales because you suspect that your real sales numbers will disappoint; and, even if it's not quite that dire, Nvidia being willing to get paid in faith rather than in other people's money (or shift the stock to one of their customers that actually has money) looks very much like an indirect price cut, which gives the impression that either demand is outright softening, and Nvidia has units that it can't simply immediately shift to customers who are actually paying cash right now; or that Nvidia feels the need to help fill the gap between OpenAI's seemingly unlimited appetite for doubling-down money and the, sooner or later, limited supply of VC nose candy.

That said, it's not entirely novel; Nvidia's current holdings are something like 90% Coreweave (under 10% of Coreweave's total shares, but Coreweave shares are the bulk of the other-company shares Nvidia holds), and they have an agreement obliging them to purchase any of Coreweave's unused capacity through 2032; so they've been expressing confidence in AI-related companies and/or trying to keep the music going by paying some of their more fragile customers' bills even before this.

It could be that Nvidia isn't even trying to diversify; but the history of bad things happening when people underestimate correlated risks also doesn't make me feel great about the situation. Obviously it's going to be a bad day at Nvidia if 'AI' cools; the stock price will take a hit and they will be left holding at least some inventory plus TSMC and other vendor commitments. But it's going to be a worse day the more of their hardware they sold in exchange for stock in the 'AI' companies buying from them, rather than in exchange for money, since the fortunes of those companies are going to be fairly closely correlated with Nvidia's own, albeit likely to swing harder and have further to fall.

Comment Re:The ultimate spy tool (Score 3, Insightful) 22

Perhaps more troublingly, they'll allow facebook to see what the people you see do.

My good-faith advice to anyone who is considering letting zuck into their refrigerator just to solve the crushing problem of what to cook with available ingredients or whatever would be "probably not worth it"; but that's ultimately a them problem one way or the other.

The trouble is that much of the pitch here is that you are supposed to provide footage as you wander around; merrily making the you problem everyone else's problem as you do so. And, yes, 'no expectation of privacy, etc, etc.', but there's a fairly obvious distinction between "in principle, it wouldn't be illegal to hire a PI to follow you around with a camera while you are in public", which involves a typically prohibitive cost in practice, and "you paid them to upload geolocated footage, nice going asshole", where the economics of surveillance change pretty radically.

If people want to outsource their thinking to facebook themselves, I'd have to be feeling fairly paternalistic to intervene; but given that the normalization of these is, pretty explicitly, about facebook having eyes on everyone, I can only hope that 'glasshole' continues to be a genuine social risk to any adopters.
