Comment Serious question (Score -1, Troll) 137

If I was considering buying Dell in the future, I sure as hell ain't now.

But George Soros amirite folks?!

I am seriously confused about the rationale here.

You thought Dell was a good fit for your next purchase. You've done some research, and Dell appears to have met your needs for quality and price.

The owners of Dell pledged $6.25 billion to a system that should help children in various ways. There have been a lot of complaints recently from young people about how the system has failed them: housing is too expensive, not enough high-paying jobs, they can't afford to get married, own a home, and have kids.

This donation seems like it would be a start towards fixing this. Perhaps other high-end donors will add to the accounts.

Having an easy way to add to your children's future over time seems like it would help fix this. It's simple and a "no-brainer" for parents: they don't have to learn finance, do research, or set up accounts. Everyone online will be analyzing the accounts and saying whether they're a good idea, so parents can focus on parenting and rely on expert analysis.

And you are so miffed at this that... for some reason... you've decided to boycott Dell and spend your money elsewhere?

I'm completely at sea here. In what universe does your decision make sense?

Addendum: Looking over the edit page of this post, it occurs to me that there is a universe where "jacks smirking reven" is not a US citizen, and is just making troll posts to foment divisiveness in America. I've seen a lot of really funny news accounts (example) of political shitposters on X being from foreign nations. Are we in that universe?

And will Slashdot ever add the "account based in" feature?

(BTW, I'm from the US, I promise :-)

Comment Re:What's old is new again (Score 1) 42

That wasn't *all* I said, but it is apparently as far as you read. But let's stay there for now. You apparently disagree with this, which means that you think that LLMs are the only kind of AI that there is, and that language models can be trained to do things like design rocket engines.

Comment Re:What's old is new again (Score 5, Informative) 42

Here's where the summary goes wrong:

Artificial intelligence is one type of technology that has begun to provide some of these necessary breakthroughs.

Artificial Intelligence is in fact many kinds of technologies. People conflate LLMs with the whole thing because it's the first kind of AI that an average person with no technical knowledge could use, after a fashion.

But nobody is going to design a new rocket engine in ChatGPT. They're going to use some other kind of AI that works on problems and processes the average person can't even conceive of -- like design optimization where there are potentially hundreds of parameters to tweak. Some of the underlying technology may have similarities -- like "neural nets", which are just collections of mathematical matrices that encode likelihoods underneath, not realistic models of biological neural systems. It shouldn't be surprising that a collection of matrices containing parameters describing weighted relations between features should have a wide variety of applications. That's just math; it's just sexier to call it "AI".
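To make "just collections of matrices" concrete, here's a minimal sketch in plain Python: a two-layer network is literally two weight matrices and a nonlinearity. The numbers here are made up for illustration, not a trained model.

```python
# A "neural net" reduced to its essence: matrices of weights and a
# nonlinearity. Nothing biological -- just repeated matrix math.

def matvec(m, v):
    """Multiply matrix m (a list of rows) by vector v."""
    return [sum(w * x for w, x in zip(row, v)) for row in m]

def relu(v):
    """Standard nonlinearity: clamp negatives to zero."""
    return [max(0.0, x) for x in v]

# Two layers of made-up weights relating 3 input features to 1 output.
W1 = [[0.5, -0.2, 0.1],
      [0.3,  0.8, -0.5]]
W2 = [[1.0, -1.0]]

def forward(x):
    """One forward pass: matrix multiply, nonlinearity, matrix multiply."""
    return matvec(W2, relu(matvec(W1, x)))

print(forward([1.0, 2.0, 3.0]))
```

Scale the same idea up to billions of weights and you have a modern model; the math doesn't change, only the size.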

Comment Re:It WILL Replace Them (Score 4, Insightful) 45

The illusion of intelligence evaporates if you use these systems for more than a few minutes.

Using AI effectively requires, ironically, advanced thinking skills and abilities. It's not going to make stupid people as smart as smart people, it's going to make smart people smarter and stupid people stupider. If you can't outthink the AI, there's no place for you.

Submission + - AI avatar creates a Top 100 album (instagram.com)

Okian Warrior writes: Solomon Ray topped the iTunes Top 100 Christian and gospel albums chart last week, and he’s not even real or Christian or black.

Ray is solely a creation of Artificial Intelligence (AI).

Comment Universal positive regard (Score 5, Interesting) 33

Sometimes, to get your thoughts straight, all you need is to discuss them with somebody. Chatbots seem to be just great for this. You really do not need anything from them; you just explain your ideas and this makes them more organized. This is really useful. Especially now, when you really have to be careful what you say to others, or you may end up totally cancelled.

ChatGPT has three aspects that make this practice - what you describe - very dangerous.

Firstly, ChatGPT implements universal positive regard. No matter what your idea is, ChatGPT will gush over it, telling you that it's a great idea. Your plans are brilliant, it's happy for you, and so on.

Secondly, ChatGPT always wants to get you into a conversation, it always wants you to continue interacting. After answering your question there's *always* a followup "would you like me to..." offering the user a quick, low-effort way to continue. Ignoring these requests, viewing them as the result of an algorithm instead of a real person trying to be helpful, is difficult in a psychological sense. It's hard not to say "please" or "thank you" to the prompt, because the interaction really does seem like it's coming from a person.

And finally, ChatGPT remembers everything, and I've recently come to discover that it remembers things even if you delete your projects and conversations *and* tell ChatGPT to forget everything. I've been using ChatGPT for several months talking about topics in a book I'm writing, I decided to reset the ChatGPT account and start from scratch, and... no matter how hard I try it still remembers topics from the book. (*)

We have friends for several reasons, and one reason is that your friends will keep you sane. It's thought that interaction with friends is what keeps us within the bounds of social acceptability, because true friends will want the best for you, and sometimes your friends will rein you in when you have a bad idea.

ChatGPT does none of this. Unless you're careful, the three aspects above can lead just about anyone into a pit of psychological pathology.

There's even a new term for this: ChatGPT psychosis. It's when you interact so much with ChatGPT that you start believing in things that aren't true - notable recent examples include people who were convinced (by ChatGPT) that they were the reincarnation of Christ, that they are "the chosen one", that ChatGPT is sentient and loves them... and the list goes on.

You have to be mentally healthy and have a strong character *not* to let ChatGPT ruin your psyche.

(*) Explanation: I tried really hard to reset the account back to its initial state, had several rounds of asking ChatGPT for techniques to use, which settings in the account to change, and so on (about 2 hours total), and after all of that, it *still* knew about my book and would answer questions about it.

I was only able to detect this because I had a canon of fictional topics to ask about (the book is fiction). It would be almost impossible for a casual user to discover this, because any test questions they ask would necessarily come from the internet body of knowledge.

Comment Re:Oh, Such Greatness (Score 1, Interesting) 297

Lincoln was a Free Soiler. He may have had a moral aversion to slavery, but it was secondary to his economic concerns. He believed that slavery could continue in the South but should not be extended into the western territories, primarily because it limited economic opportunities for white laborers, who would otherwise have to compete with enslaved workers.

From an economic perspective, he was right. The Southern slave system enriched a small aristocratic elite—roughly 5% of whites—while offering poor whites very limited upward mobility.

The politics of the era were far more complicated than the simplified narrative of a uniformly radical abolitionist North confronting a uniformly pro-secession South. This oversimplification is largely an artifact of neo-Confederate historical revisionism. In reality, the North was deeply racist by modern standards, support for Southern secession was far from universal, and many secession conventions were marked by severe democratic irregularities, including voter intimidation.

The current coalescence of anti-science attitudes and neo-Confederate interpretations of the Civil War is not accidental. Both reflect a willingness to supplant scholarship with narratives that are more “correct” ideologically. This tendency is universal—everyone does it to some degree—but in these cases, it is profoundly anti-intellectual: inconvenient evidence is simply ignored or dismissed. As in the antebellum South, this lack of critical thought is being exploited to entrench an economic elite. It keeps people focused on fears over vaccinations or immigrant labor while policies serving elite interests are quietly enacted.

Comment Re:Computers don't "feel" anything (Score 1) 56

It's different from humans in that human opinions, expertise and intelligence are rooted in their experience. Good or bad, and inconsistent as it is, it is far, far more stable than AI. If you've ever tried to work on a long-running task with generative AI, the crash in performance as the context rots is very, very noticeable, and it's intrinsic to the technology. Work with a human long enough, and you will see the faults in their reasoning, sure, but they're just as good or bad as they were at the beginning.

Comment Re:Computers don't "feel" anything (Score 3, Informative) 56

Correct. This is why I don't like the term "hallucinate". AIs don't experience hallucinations, because they don't experience anything. The problem they have would more correctly be called, in psychology terms, "confabulation" -- they patch up holes in their knowledge by making up plausible-sounding facts.

I have experimented with AI assistance for certain tasks, and find that generative AI absolutely passes the Turing test for short sessions -- if anything it's too good; too fast; too well-informed. But the longer the session goes, the more the illusion of intelligence evaporates.

This is because under the hood, what AI is doing is a bunch of linear algebra. The "model" is a set of matrices, and the "context" is a set of vectors representing your session up to the current point, augmented during each prompt response by results from Internet searches. The problem is, the "context" takes up lots of expensive high-performance video RAM, and every user only gets so much of that. When you run out of space for your context, the older stuff drops out of the context. This is why credibility drops the longer a session runs. You start with a nice empty context, and you bring in some internet search results and run them through the model and it all makes sense. When you start throwing out parts of the context, the context turns into inconsistent mush.
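The eviction behavior can be sketched in a few lines. This is a toy model of the general mechanism, not any vendor's actual implementation: treat the context as a fixed-size window, and watch the oldest material fall out once the window fills.

```python
# Toy sketch of context eviction: a fixed-size window where the
# oldest entries are silently dropped once capacity is reached.
from collections import deque

CONTEXT_LIMIT = 4  # real systems count tokens, not turns

context = deque(maxlen=CONTEXT_LIMIT)

for turn in ["premise", "detail A", "detail B", "detail C", "detail D"]:
    context.append(turn)  # fifth append silently evicts the first

# The original premise has been evicted; later responses can no longer
# draw on it, which reads to the user as the model "losing the plot".
print(list(context))
```

The failure is silent by design: nothing tells the user that the premise of the session is no longer in the window.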

Comment Re:Separate grid, please. (Score 2) 71

It probably makes more sense given their scale for them to have their own power generation -- solar, wind, and battery storage, maybe gas turbines for extended periods of low renewable availability.

In fact, you could take it further. You could designate town-sized areas for multiple companies' data centers, served by an electricity source (possibly nuclear) and water reclamation and recycling centers providing zero carbon emissions and minimal environmental impact. It would be served by a compact, robust, and completely separate electrical grid of its own, reducing costs for the data centers and isolating residential customers from the impact of their electrical use. It would also economically concentrate data centers for businesses providing services they need, reducing costs and increasing profits all around.

Comment Declining fertility years and culture (Score 1) 176

One explanation is in the rise of women in careers and education.

A woman has roughly 16 years of fertility, from age 14 to 30. At age 30, 10% of couples can't conceive after a year of trying and the numbers get worse after that. Yes, older couples can have kids, but the probability goes way down.

Culturally, having a child before age 18 is assumed to be a bad thing (4 years). Then if the woman has a college education (another 4 years), then goes for an advanced degree (up to 7 years), or wants to establish herself in her career (5 years?), or wants to work off some of the college debt (5 years), the remainder of her fertile years is not enough for population replacement.

It's largely the same for men: at age 22 they may want to start a career and pay off some college debt for several years. Men can father a child at any age, but by and large they tend to marry people roughly their own age.

It's also harder to raise a family in an apartment than in a house, so for both parents it may "make sense" to work for several years to save up for a house.

(Don't take this the wrong way, I personally feel that women should be in colleges and have careers, I'm just pointing out the conflict of interests here.)

It would seem that culture has to change somehow to allow (encourage) couples to have kids earlier, but I'm not sure how that would work given our current economic system.

Comment More explanation (Score 4, Informative) 35

Imagine a unit cube of black cake with white frosting. Take a knife and cut pieces out of the cube to outline a hole through it, which shows up black against the white frosting. When you cut at an angle to the sides, it turns out that a cube 6% larger than the original cube can pass through the outlined hole.

All the Platonic solids have this property, along with a lot of other polyhedral solids.
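For the cube specifically, the "6% larger" figure is the classical Prince Rupert's cube constant: the edge length of the largest cube that can pass through a suitably angled square hole in a unit cube.

```latex
% Prince Rupert's cube: largest cube passing through a unit cube
s = \frac{3\sqrt{2}}{4} \approx 1.06066
```

That is, the passing cube can be about 6.07% larger on a side than the cube it passes through.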
