Comment Re:Oh, Such Greatness (Score 2) 132

Lincoln was a Free Soiler. He may have had a moral aversion to slavery, but it was secondary to his economic concerns. He believed that slavery could continue in the South but should not be extended into the western territories, primarily because it limited economic opportunities for white laborers, who would otherwise have to compete with enslaved workers.

From an economic perspective, he was right. The Southern slave system enriched a small aristocratic elite—roughly 5% of whites—while offering poor whites very limited upward mobility.

The politics of the era were far more complicated than the simplified narrative of a uniformly radical abolitionist North confronting a uniformly pro-secession South. This oversimplification is largely an artifact of neo-Confederate historical revisionism. In reality, the North was deeply racist by modern standards, support for Southern secession was far from universal, and many secession conventions were marked by severe democratic irregularities, including voter intimidation.

The current coalescence of anti-science attitudes and neo-Confederate interpretations of the Civil War is not accidental. Both reflect a willingness to supplant scholarship with narratives that are more “correct” ideologically. This tendency is universal—everyone does it to some degree—but in these cases, it is profoundly anti-intellectual: inconvenient evidence is simply ignored or dismissed. As in the antebellum South, this lack of critical thought is being exploited to entrench an economic elite. It keeps people focused on fears over vaccinations or immigrant labor while policies serving elite interests are quietly enacted.

Comment Re:The two largest economies on the planet (Score 0, Troll) 31

AI and modeling are making robotics useful across a wide range of disciplines, very suddenly. Where before you might have needed years and scores of prototyping attempts to get a functional restaurant dishwashing robot, today videos of humans doing the job can be fed to the AI, a prototype designed in the computer, and thousands of actions and exceptions modeled in a few days before ever cutting a piece of metal.

When people who aren't smart enough to do anything more complex than wash dishes are laid off, where are they going to go?

Comment Re:Why compare to these schools? (Score 2) 22

They may as well have just said, "Chinese technical universities have outpaced American universities in everything."

This guy lives and works in China and writes about the business environment there (with occasional digressions). All his articles are lavishly supplied with links.

https://kdwalmsley.substack.co...

It has happened suddenly but decisively: Chinese universities now dominate the world rankings for the hard sciences.

The Nature Index is a comprehensive ranking of over 18,000 universities and colleges from around the world, and the scores are based on quality research output. These tables are sortable, as well, by scientific discipline. For example, in Physics, the United States didn’t show up in the top 10 ranking at all.

Sichuan University in Chengdu, across all scientific and engineering disciplines, is now ahead of Stanford, MIT, Oxford, and University of Tokyo.

Here are some other takeaways. 8 out of the top 10 research institutions are Chinese. Zhejiang University is in that bunch, and where Liang went. Of the top 50 universities, 26 are from China. US has 14. Have you ever been to Xiamen? Me either. But they have a university in Xiamen that’s ahead of Cal, Columbia, Cornell, and Chicago. I know about all of those. Half the top 100 are Chinese. . .

Comment Re:Really? (Score 1) 31

Broadly speaking, a lot of the 'cloud native' stuff consists of complex solutions to potentially complex problems that fit within the parameters those approaches can handle.

If you don't have those complex problems, then it's a premature optimization that is painful. If your use case is not the sort of use case resembling the bread and butter of the applications that instigated these approaches, then it's all pain, no gain.

There was a team that maintained a project broadly panned for not being good enough. The developers decided that the cure for what ailed them would be switching to a 'cloud native' approach, even though the complaints were really about limited functionality, not about performance or scaling. Now, on top of the same functionality complaints, they have performance and reliability complaints too, and they have no idea what they are doing: they just arbitrarily carved their single fixed instance of software into a couple of dozen fixed instances of services (they can't figure out how to scale arbitrarily, so they still have exactly one instance of every component). Not one of them is capable of debugging the convoluted network situation they've created, and the logging information is just a mess.

Comment Re:Computers don't "feel" anything (Score 1) 52

It's different from humans in that human opinions, expertise, and intelligence are rooted in experience. Good or bad, and inconsistent as it is, that is far, far more stable than AI. If you've ever tried to work on a long-running task with generative AI, the crash in performance as the context rots is very, very noticeable, and it's intrinsic to the technology. Work with a human long enough and you will see the faults in his reasoning, sure, but it's just as good or bad as it was at the beginning.

Comment Are we back in the '90s? (Score 5, Informative) 90

I remember a story on Slashdot from around the turn of the century: an audit of servers at the Pentagon found that the most common admin password was "Password", and the second-most common was "P@ssw0rd".

At my first real IT job, in 1996, if you knew the birthdates of the children of four out of five users, you knew their passwords. I wasn't allowed to insist on a change during user training.

Comment Re:Computers don't "feel" anything (Score 3, Informative) 52

Correct. This is why I don't like the term "hallucinate". AIs don't experience hallucinations, because they don't experience anything. The problem they have would more correctly be called, in psychological terms, "confabulation": they patch over holes in their knowledge by making up plausible-sounding facts.

I have experimented with AI assistance for certain tasks, and find that generative AI absolutely passes the Turing test for short sessions -- if anything it's too good; too fast; too well-informed. But the longer the session goes, the more the illusion of intelligence evaporates.

This is because, under the hood, what the AI is doing is a bunch of linear algebra. The "model" is a set of matrices, and the "context" is a set of vectors representing your session up to the current point, augmented during each prompt response by results from Internet searches. The problem is that the context takes up lots of expensive high-performance video RAM, and every user only gets so much of it. When you run out of space for your context, the older stuff drops out. This is why credibility drops the longer a session runs: you start with a nice empty context, bring in some Internet search results, run them through the model, and it all makes sense. Once you start throwing out parts of the context, it turns into inconsistent mush.
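The eviction behavior described above can be sketched as a toy fixed-capacity buffer. This is purely illustrative (the class and method names are made up, and real inference engines evict at the token level inside a KV cache), but the effect on a long session is the same: the earliest material silently disappears.

```python
from collections import deque

class ToyContext:
    """Illustrative fixed-capacity context window with FIFO eviction."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.entries = deque()  # oldest entry sits on the left

    def add(self, entry: str) -> None:
        self.entries.append(entry)
        while len(self.entries) > self.capacity:
            # Out of room: the earliest history is dropped, not summarized.
            self.entries.popleft()

    def visible(self) -> list[str]:
        return list(self.entries)

ctx = ToyContext(capacity=3)
for turn in ["premise", "constraint", "step 1", "step 2"]:
    ctx.add(turn)

# "premise" has already been evicted; later turns can no longer see it.
print(ctx.visible())  # ['constraint', 'step 1', 'step 2']
```

Once the original premise of the session falls out of the window like this, every subsequent answer is generated without it, which is exactly the "inconsistent mush" effect.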

Comment Re:Society had better be ready for (Score 1) 48

Stockton Rush is a rare example of a tech bro who didn't get to walk away from his monumental fuckup and leave the rest of us to clean up his mess. Fortunately for us, his collapsible submarine only murdered a few people, along with the "genius" himself. We're just lucky Rush didn't decide he liked reactors more than submarines. No doubt that deficiency in America's tech bro-volution will soon be rectified.

If I weren't an atheist, I'd say, "God help us all".
