
Comment Re:I wonder what the real impacts would be. (Score 1) 303

The point, which you seem to be actively avoiding, is that capping credit card interest rates would make it more difficult for people to get credit cards, which would make it more difficult for them to build credit.

Yes, banks want that revenue. That's why they'll actively oppose this half-baked nonsense. Remember: they invented the credit score system for their benefit, not yours. Corporations are not your friend and predatory lending is big business.

Comment Re:I wonder what the real impacts would be. (Score 1) 303

wouldn't having cards cancelled and/or not issued be a good thing? [...] What's the downside?

Losing a credit card will hurt your score, not help it.

Remember that the amount of credit you have available, how long you've had it, and how much you're using are important factors when determining your credit score. Having a bunch of old cards is actually good for your credit.

Making it difficult for people to get credit cards will also make it more difficult for them to build and maintain credit.

Comment Re: I predict this will be short-lived (Score 1, Interesting) 63

Let's say we live in a fantasy land where LLMs are magically 95% accurate. Would you trust a car that only worked 95% of the time? What about brakes that only stopped your car 95% of the time?

What about legal advice? Would you hire a lawyer that would make up silly nonsense 5% of the time?
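To put numbers on how fast that hypothetical 95% compounds, here is a back-of-the-envelope sketch (my own illustrative figures, not from any benchmark):

```python
# Sketch: probability of at least one wrong answer over repeated,
# independent queries, assuming a fixed per-answer accuracy.
def p_at_least_one_error(per_answer_accuracy, n_answers):
    """Chance that at least one of n independent answers is wrong."""
    return 1.0 - per_answer_accuracy ** n_answers

# At the hypothetical 95% accuracy, twenty questions already give
# roughly a 64% chance of at least one fabricated answer.
print(round(p_at_least_one_error(0.95, 20), 2))  # -> 0.64
```

So even granting the fantasy number, anyone who asks more than a handful of questions should expect to be burned.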

Sorry, kid. LLMs just aren't the science fiction fantasy that you want them to be. Your AI girlfriend does not and can not love you. You're not going to have a robot slave. Whatever nonsense it is that you're hoping for isn't going to happen. Not with the technology we have today.

Comment Re:Does Trump hallucinate? (Score 5, Insightful) 63

Human mistakes are of an entirely different nature and quality than AI 'mistakes'. A human won't accidentally make up facts, cases, or sources. A human won't write summaries of things that don't exist. A human won't accidentally contradict a source while citing it. A human is also actually capable of identifying and correcting mistakes, unlike an LLM. Stop with this absurd nonsense that it's okay for LLMs to "make mistakes" because humans also "make mistakes". These things are not the same and you know it.

As for this 100% business, with AI, you'd be lucky to get 60% accuracy. A human with that kind of track record offering legal advice would be arrested.

Comment Re:What I want to see (Score 1) 180

His VP (who is likely just as corrupt as Maduro is) will just come to power and then what's even the point of all this?

Cults always die with the leader. Vance doesn't have a fraction of the influence or the inexplicable immunity. The press isn't afraid of him. Congress isn't afraid of him. He doesn't command an army of morons, and he doesn't have the charisma to corral the leftover legion.

Comment Re:Epstein files please (Score 1) 180

You do realize, if Trump were to be removed from office, JD Vance becomes president?

So what? He's far, far less dangerous.

A surprising percentage of Americans are firmly in the "this is what I voted for" camp

A rapidly shrinking minority. In 10 years' time, probably less, you'll have a very hard time finding anyone willing to admit they supported this insanity.

you're going to have a damn difficult time convincing them otherwise.

We don't need to convince them. Reality will take care of the bulk; the remainder were always beyond hope.

Comment Re: common sense (Score 1) 69

You're deeply confused. The word "mistake" doesn't make any sense and implies that they're doing far more than they objectively are. Again, all the model does is produce a set of next-token probabilities. That is a completely deterministic process. The final token selection is the only thing done probabilistically, but that only makes things worse for your particular delusion. As no internal state is retained between tokens, there is objectively no possibility for the model to "plan" a "response" beyond the current token, which it has only limited control over anyway.
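The loop described above can be sketched in a few lines (a toy example with a hypothetical `model` function standing in for the network; none of this is any real library's API):

```python
import math
import random

def softmax(logits):
    # Numerically stable softmax: deterministic, same logits in,
    # same probabilities out.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def generate(model, prompt_tokens, n_new_tokens):
    tokens = list(prompt_tokens)
    for _ in range(n_new_tokens):
        # Deterministic step: the model maps the current token sequence
        # to a set of next-token probabilities.
        probs = softmax(model(tokens))
        # The ONLY stochastic step: picking one token from that set.
        next_token = random.choices(range(len(probs)), weights=probs)[0]
        tokens.append(next_token)
        # Nothing else is carried between iterations; the next pass sees
        # only the token sequence itself, with no retained internal state.
    return tokens
```

Everything the model "knows" about its own output on the next step is whatever tokens ended up in the sequence. There is no separate plan or goal being tracked anywhere.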

I know you really want to pretend that these things are super-smart science-fiction robots, but they're not. They really are a glorified auto-complete. That isn't hyperbole. That's an objective fact. No amount of wishful thinking or mad rambling about "emergence" will change that.

Yes, it looks like the model is doing more, but so does Joe Weizenbaum's Eliza program. The illusion disappears as soon as you put any thought at all into it. That you're so completely dedicated to this fantasy is pitiful. At least, it would be if it wasn't also actively harmful.

Comment Re:Happened to me today (Score 1) 69

It doesn't matter. All the little links mean is that text from those pages was included in context. It will happily produce responses in direct contradiction to the source provided. Remember, it is not producing a summary of the linked page. These things can't actually summarize text, only produce text that looks like a summary.

Comment Re: common sense (Score 1) 69

It's not like it can actually evaluate the response. It's just as likely to "correct" one wrong answer with another, double-down, or even "correct" an accurate response with nonsense.

I don't know how many times this needs to be said, but LLMs do not operate on facts and concepts. They do not and cannot form a complete answer after careful consideration of the prompt. They just generate next-token predictions, deterministically, based exclusively on the current input; the actual token is then selected probabilistically. To the model, there is no such thing as a response.

The term 'neural network' seems to inspire all kinds of delusions about what LLMs are, what they do, and what they can do.

Comment Re: What is this "retrain" thing? (Score 1) 154

Can you code? I think that speaks for itself.

The only people who think programming requires some special talent or special mind are the idiots with no other skills and way too much of their ego wrapped up in their ability to write computer programs. I highly recommend that you get over yourself.

Comment Re: What is this "retrain" thing? (Score 1) 154

Learn2code was a failure for a reason.

Did you ever see any of those nonsense "learn to code" resources? It was a failure by design. Any idiot can learn to code (just look at all the idiots here with long careers creating tech debt) but absurdities like 'hour of code' seemed to go out of their way to make simple things needlessly complex. Many of their 'coding' exercises obscured essential concepts so completely that I'm convinced it must have been intentional.

Insecure professionals love needless complexity. Not only does it keep them from getting bored, it helps keep the number of developers low enough to keep salaries from falling through the floor. They love absurdities like Agile that let you justify unnecessarily large teams while keeping software quality low enough that individual developers are hard to replace. This nonsense wouldn't be sustainable if we taught basic programming in middle school.

We need fewer professional programmers and more professionals that can program. Even if they never write a single line of code, they can apply those skills in countless other ways. Even the idiots with no other skills know this, which is why they fight so hard to keep people from learning to code! It's why they hated VB back in the 90's -- you could hire a kid right out of high school for pennies and they'd be productive enough to justify their salary in a few months. Sure, they'd sometimes make a mess, but so would the much more expensive newly-minted CS grads.

I understand gatekeeping a profession, but programming? It's a skill that children under 10 can pick up on their own with almost no resources. It's a skill people in other professions can pick up over a weekend to make their real job easier. Let's stop pretending that it's a rare talent or requires a "special mind" or other silly nonsense.

Comment Re:What jobs? (Score 1) 154

You missed this bit:

at least until the bubble bursts.

The implication being that any actual job loss (and those claims are highly questionable) is based on lofty speculation about future cost and performance that is unlikely to materialize, given the nature of the technology.

Remember that LLMs were supposed to dramatically disrupt every industry, leading to mass unemployment and a realignment of the world economy ... more than two years ago. The overwhelming majority of AI projects fail to deliver any measurable value (upwards of 95%, according to a report by MIT's NANDA initiative); the few that succeed, I suspect, do so despite the use of an LLM, not because of it.

So people should be retrained to pick up garbage off the street?

Yes. Not because of AI, obviously. Litter is absolutely out of control, and I expect the positive downstream effects would be significant.

Comment Re:Seriously (Score 1) 154

I think people will have a limit of how much unnecessary stuff they are willing to buy.

Our disposable culture didn't come out of nowhere, it was engineered (BBC 1964).

For example most people are quite happy with owning just one washing machine per family, which usually lasts about 8 years

The obvious solution to that "problem" is to make sure that washing machines only last 6 years ... then 4 years ... then make them so unreliable that it makes sense for families to lease them, paying a monthly fee. You will own nothing and like it.
