
Comment Re:It WILL Replace Them (Score 4, Insightful) 38

The illusion of intelligence evaporates if you use these systems for more than a few minutes.

Using AI effectively requires, ironically, advanced thinking skills and abilities. It's not going to make stupid people as smart as smart people; it's going to make smart people smarter and stupid people stupider. If you can't outthink the AI, there's no place for you.

Comment Re:No problem (Score 5, Insightful) 62

Big tech companies don't really know what to do with 10x engineers. 1x engineers they manage out or warehouse until the next layoff, and 1.5x engineers get promoted, but when they get a 10x engineer they try to make them into something different. Typically this means taking them away from hands-on engineering and trying to get them to do things that are more "high impact", such as engineering management or tech leadership. If they're not good at these things, this frustrates everyone involved. If they are... well, the company has probably traded a 10x engineer for a 1x manager or tech lead, which likely isn't a good trade.

Comment Innovation has nothing to do with it (Score 5, Insightful) 62

Most employees at big companies, including tech companies, don't innovate. They're not allowed to innovate, and if they try to do so they're told to keep working on their TPS reports or Jira tickets. Laying off such engineers won't reduce innovation at a big company.

The people big companies allow to innovate are either product/marketing types, or in tech companies people with titles like "principal" and "distinguished". Most of these people don't actually innovate either (and the innovation coming from the product/marketing types is usually bad), but occasionally you get people who can, and that's where all the innovation from big companies is.

If you want to innovate, become a founder. If you're at a big tech company, you can probably ask management and they'll tell you the same thing.

Comment Re:Too Simplistic (Score 1) 84

Karo is not HFCS, but yeah, a lot of kitchens have hydrogenated oils (a.k.a. "shortening", also "margarine"), artificial colors ("food coloring"), and artificial flavors (vanillin is probably the most common). HFCS would be unusual in a home kitchen, but "invert sugar" is less so and is pretty much the same thing. Sucrose itself is already highly processed; it doesn't exactly come out of the beet as a white granular substance.

The UPF thing is woo, pushed by people who should know better. At least the bro science people know they're bro science people. Or it's just a scam.

Comment Re:Oh, Such Greatness (Score 1, Interesting) 271

Lincoln was a Free Soiler. He may have had a moral aversion to slavery, but it was secondary to his economic concerns. He believed that slavery could continue in the South but should not be extended into the western territories, primarily because it limited economic opportunities for white laborers, who would otherwise have to compete with enslaved workers.

From an economic perspective, he was right. The Southern slave system enriched a small aristocratic elite—roughly 5% of whites—while offering poor whites very limited upward mobility.

The politics of the era were far more complicated than the simplified narrative of a uniformly radical abolitionist North confronting a uniformly pro-secession South. This oversimplification is largely an artifact of neo-Confederate historical revisionism. In reality, the North was deeply racist by modern standards, support for Southern secession was far from universal, and many secession conventions were marked by severe democratic irregularities, including voter intimidation.

The current coalescence of anti-science attitudes and neo-Confederate interpretations of the Civil War is not accidental. Both reflect a willingness to supplant scholarship with narratives that are more “correct” ideologically. This tendency is universal—everyone does it to some degree—but in these cases, it is profoundly anti-intellectual: inconvenient evidence is simply ignored or dismissed. As in the antebellum South, this lack of critical thought is being exploited to entrench an economic elite. It keeps people focused on fears over vaccinations or immigrant labor while policies serving elite interests are quietly enacted.

Comment Re:Computers don't "feel" anything (Score 1) 56

It's different from humans in that human opinions, expertise, and intelligence are rooted in experience. Good or bad, and inconsistent as it is, that is far, far more stable than AI. If you've ever tried to work on a long-running task with generative AI, the crash in performance as the context rots is very noticeable, and it's intrinsic to the technology. Work with a human long enough and you will see the faults in their reasoning, sure, but the reasoning is just as good or bad as it was at the beginning.

Comment Re:Computers don't "feel" anything (Score 3, Informative) 56

Correct. This is why I don't like the term "hallucinate". AIs don't experience hallucinations, because they don't experience anything. The problem they have would more correctly be called, in psychological terms, "confabulation" -- they patch up holes in their knowledge by making up plausible-sounding facts.

I have experimented with AI assistance for certain tasks, and find that generative AI absolutely passes the Turing test for short sessions -- if anything it's too good; too fast; too well-informed. But the longer the session goes, the more the illusion of intelligence evaporates.

This is because, under the hood, what the AI is doing is a bunch of linear algebra. The "model" is a set of matrices, and the "context" is a set of vectors representing your session up to the current point, augmented during each prompt response by results from Internet searches. The problem is that the "context" takes up lots of expensive, high-performance video RAM, and every user only gets so much of it. When you run out of space for your context, the older stuff drops out. This is why credibility drops the longer a session runs. You start with a nice empty context, you bring in some internet search results and run them through the model, and it all makes sense. Once you start throwing out parts of the context, it turns into inconsistent mush.
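To make the eviction effect concrete, here is a minimal Python sketch. The class, the names, and the word-count "tokenizer" are invented for illustration; real inference stacks manage a KV cache in GPU memory, but the consequence of dropping the oldest turns is the same.

from collections import deque

class BoundedContext:
    """Toy illustration: a context window with a fixed token budget.

    Hypothetical example, not any real library's API. Once the budget is
    full, the oldest turns fall out and the model can no longer "see" them.
    """

    def __init__(self, max_tokens: int):
        self.max_tokens = max_tokens
        self.turns = deque()          # (text, token_count) pairs, oldest first
        self.used = 0

    def add_turn(self, text: str) -> None:
        tokens = len(text.split())    # crude stand-in for a real tokenizer
        self.turns.append((text, tokens))
        self.used += tokens
        # Evict the oldest turns until the budget fits again.
        while self.used > self.max_tokens and self.turns:
            _, dropped = self.turns.popleft()
            self.used -= dropped

    def visible_text(self) -> str:
        """What the model actually conditions on for the next response."""
        return "\n".join(text for text, _ in self.turns)


if __name__ == "__main__":
    ctx = BoundedContext(max_tokens=20)
    ctx.add_turn("user: summarize the design doc about the payment service")
    ctx.add_turn("assistant: the doc proposes splitting payments into three services")
    ctx.add_turn("user: now write tests for the refund flow we discussed")
    # The first turn has already been evicted, so "the refund flow we
    # discussed" now refers to something the model can no longer see.
    print(ctx.visible_text())

The last turn refers back to an earlier exchange that has already been evicted, which is exactly the "inconsistent mush" failure mode described above.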

Comment Re:Hardware will be fine (Score 1) 56

OpenAI and Anthropic are betting that this time will be different, that the payoff will come fast enough to pay back the investment. Google is betting this somewhat, too, but Google has scale, diversity and resources to weather the bust -- and might be well-positioned to snap up the depreciated investments made by others.

I think this makes sense. OpenAI pays Google for compute, and Google uses that money to build more data center capacity. If OpenAI goes bankrupt, Google keeps the compute (and whatever it has already been paid), and it's very unlikely Google can't find other uses for that compute. So while Google would have been better off if OpenAI stayed around, it doesn't lose too badly.

Comment Re:Just make the penalty a fine (Score 1) 28

Make the fine for paying ransomware 3x any ransom paid. If a company is really set on paying the ransom, it will come at a much higher price, and that money can be used to fight cybercrime and protect infrastructure.

You might want to consider how the incentives for the government work in that situation.

Comment Re:Writing is kinda useful (Score 2) 245

We were just talking about how one of the most useful, long-term skills I picked up in school was my architectural drafting class in high school, where they drilled us on perfect print.

Sure, but that's print. As others have pointed out, most of the advantages of cursive have gone away since the introduction of the ballpoint pen. Some of the simplified letterforms (e.g. the lowercase 'a') are useful, but looping and joining aren't. Cursive is long obsolete as a writing form. At best it's more aesthetically pleasing while being less readable; more commonly it's just ugly, unreadable scrawl.
