Comment Re:There are useless jargons and useful jargons (Score 1) 135

Practical terms get coined to refer to a specific subset of something a layman doesn't care about (niche nerd crap) but that's relevant in the field and benefits from shorthand.

Obligatory https://xkcd.com/1095/

The best jargon is intuitively understood (no one should need "paint yourself into a corner" explained), most can be spotted from context (especially by someone competent, yes), but a few terms are obtuse enough that you can't really expect people to immediately grok them.

That word "immediately" being key, since today you're one click away from looking a term up or asking the LLMs. If it's so obscure it doesn't exist anywhere, it's probably something in-house and some mix of unimportant, convenient, frequent, or obvious. Or obviously weird, and people immediately call it out.

"Oh, yeah, that's what we call the DX3's, they never last more than two weeks and look funny so we started calling them milk cartons."

I'll assume TFA is more about vocabulary/buzzword theatrics; humans have always been obsessed with posturing (sometimes with good reason), but the behavior spiked after the true Eternal September drew everyone in via mobile/social. Sure, festizios "hurt morale and collaboration." Not sure about calling them jargon.

Comment Re:You had me up to AI (Score 1) 53

If the AI is being used solely for the search engine, my understanding is that pattern matching is what AI is genuinely good at, the likes of that GeoGuessr web game or whatever.

The reliability of the signature map itself isn't clear to me; I've seen posts saying it's not as consistent and immutable as they'd like you to believe.

Comment Re:I sure observe the opposite (Score 2) 28

There are three variables.

(1A) Expresses having high confidence in answer; (1B) Internally has actual high confidence in answer; (1C) Actually had the right answer
(2A) Expresses lacking high confidence in answer; (2B) Internally lacks actual confidence in answer; (2C) Actually had the wrong answer

You'd expect B and C to align; most people (and bots) have a reasonably accurate sense of certainty: "I probably know this!" "I probably don't know this..."

Unfortunately, the training taught the autocomplete machine to emphasize (A) regardless of (B): the people scoring the behaviors would have consistently shown favoritism toward the (1A) presentation under any combination of B and C.

Which is dumb but depressingly familiar; we tend to reward sycophantic deceit, tend to punish messengers/whoever is convenient (courts constantly see attempts to hunt a "facilitator" instead of the actual culprits because it's convenient, headlines ensue), and tend to see salesmen most successful when they say "Absolutely does! Absolutely sure! Absolutely included! Absolutely true!" without regard to C.
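The scoring dynamic above can be sketched as a toy model (all weights and function names here are hypothetical, just to illustrate the incentive): if graders weight confident presentation (A) more heavily than correctness (C), the reward always favors the (1A) style, even when the confident answer is wrong.

```python
import random

random.seed(0)

# Hypothetical grader: rewards a confident-sounding answer (1A) more than a
# hedged one (2A), with only a modest weight on actually being right (C).
def grader_score(expresses_confidence: bool, is_correct: bool) -> float:
    style_bonus = 1.0 if expresses_confidence else 0.3  # favoritism toward (1A)
    accuracy = 1.0 if is_correct else 0.0
    return 0.7 * style_bonus + 0.3 * accuracy

def average_reward(expresses_confidence: bool, accuracy_rate: float,
                   trials: int = 10_000) -> float:
    # Average grader score over many answers at a given accuracy rate.
    total = 0.0
    for _ in range(trials):
        correct = random.random() < accuracy_rate
        total += grader_score(expresses_confidence, correct)
    return total / trials

# A confident model that's right half the time out-scores a hedged model
# that's right 90% of the time, because style dominates the reward.
print(f"confident, 50% right: {average_reward(True, 0.5):.2f}")
print(f"hedged,    90% right: {average_reward(False, 0.9):.2f}")
```

With these made-up weights, a confident-and-wrong answer (0.70) even beats a hedged-and-right one (0.51), which is exactly the incentive that trains emphasis on (A) regardless of (B) or (C).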

In any case, no amount of confidence, internal or not, should be taken as 100% certainty. It's an approximation machine; it has no understanding of the human word "fact" except as a word that associates with X, Y, and Z. Such a machine can be used risk-free and to great effect when the results can be an approximation: essays, imagery, general topic queries, even humor and poetry. Anything subjective! But we can't stop ourselves, want the robot to do our objective work, and then cry when our demand for citations produces an approximation of "what a bibliography looks like."
