
Comment Re:The Itsukushima girl is an absolute Karen (Score 1) 96

They had set out to descend after sunset, and I don't remember seeing any lights on the path. Even a paved road can be dangerous in pitch black.

This. I've had to descend a mountain once as the sun was going down (I got stuck at the top because of weather, and by the time it let up enough for a safe descent, it was late). It's absolutely no fun, even with some light left. Had it been dark, I think I would've taken my chances staying at the top rather than going down.

That said, anyone who isn't a complete idiot checks things like "time of the last cable car" a) in person, b) on the day, c) at the location. Because even if there is an official website and it is well maintained (and those are already two big ifs), things can change on site due to weather, sick workers, no tourists that day, or whatever.

Also, checking in person means at least one other person knows that you're up there.

Comment it's a tool like any other tool (Score 1) 39

AI is a tool. And like any tool, its introduction creates proponents and enemies.

Some might say I'm a semi-professional writer, in that I make money with things I write. From that perspective, I see both the AI slop and the benefits. I love that AI gives me an on-demand proofreader. I don't expect it to be anywhere near a professional in that field. But if I want to quickly check a text I wrote for specific things, AI is great, because unlike me it hasn't already been over that sentence 20 times, and it still parses it completely.

As for AI writing: for the moment it's still pretty obvious, and mostly low quality (unless a human has added their own editing).

In the same way that the car, the computer, e-mail, and thousands of other innovations made some jobs obsolete, some jobs easier, and some jobs entirely new, I don't see AI as a threat. And definitely not to my writing. Though good luck, Amazon, with the flood of AI-written garbage now clogging up your print-on-demand service.

Comment Re: does it, though? (Score 1) 244

The human using the LLM, obviously.

Trivially, obviously not. The LLM wasn't trained exclusively on texts written by the human using it, so it will never speak like that particular person.

If someone wants to train a specific "Taarof" LLM, go ahead. I'm simply advocating against poisoning the already volatile generic LLM training data with more human bullshit.

Comment Re: does it, though? (Score 1) 244

That is true, but also beside the point. Communicating like "a human" is the point here, but WHICH human, exactly? We already have problems with hallucinations. If we now train LLMs on huge data sets intentionally built around the human habit of saying the opposite of what you mean, we're adding another layer of problems. Maybe solve the existing ones first?

Comment does it, though? (Score 1) 244

"We Politely Insist: Your LLM Must Learn the Persian Art of Taarof"

While that might be an interesting technical challenge, one has to wonder why. Just because something is "culture" doesn't mean it should be copied. Slavery was part of human culture for countless millennia, to the point where we still haven't gotten around to updating our "holy books", which all treat it as something perfectly normal. That's how normal slavery used to be.

(for the braindead: No, I'm not comparing Taarof to slavery. I'm just making a point with an extreme example.)

The issue is something called unintended consequences. In order to teach an LLM Taarof, you have to teach it to lie, to say things that don't mean what the words mean, and to hear something different from what the user actually says. Our current level of AI already has enough problems as it is. Do we really want to teach it to lie and to misread, just because some people made that part of their culture?

Instead of treating LLMs like humans, how about treating them as the machines they are? I'm pretty sure Persians don't expect their light switches to haggle over whether to turn on the light. I stand corrected if light switches in Iran only turn on after being toggled at least three times, but I don't think so. In other words: this cultural expectation extends only to humans. Maybe just let the people complaining know that AIs are not actually human?

Comment Re:Seems healthy. (Score 1) 26

I get the unpleasant impression that we are doing a corporate reenactment of that period in European history where basically everyone was ruled by Habsburgs who were incestuous and incompetent in equal measure.

Nah, it's simpler than that. This is just good old fashioned gaming the stock market.
- Announce a big project or big investment that makes people think your company is doing great things
- Stock price goes up
- Profit!
- A year later, people have forgotten that you never actually built the big project you announced
- Lather, Rinse, Repeat
