Comment Re:Details? (Score 1) 37

The signalling and handling of money is done completely on the exchange end

That was far from universal, as anyone with a red box 30 years ago could tell you.

Later payphones were surprisingly 'smart'. Not only could they handle payments, they could even be scheduled to call home to report. With others, you had to initiate the call and hope no one answered! Sometime around the turn of the century, I worked on a project that used that data to create dynamic collection routes.

Comment Re:Today's models are not the old models. (Score 1) 37

but DEI has killed advertisements - it used to be 'sure I would' but now it's mostly 'nah.' Sex can't sell if everyone says 'nah.'

I hate to break it to you, but most people aren't bigoted freaks who can't handle seeing non-white people in ads without crying.

Corporations don't care about social issues. They only care about money. They put "DEI" in ads because it works. Get woke or go broke.

Comment Re:And the big question is... (Score 3, Insightful) 26

People will invest in that

Only stupid people who don't know the difference between profit and revenue.

Don't let the fact that these things are inescapable at the moment fool you. These things are absurdly expensive. Everyone is banking on the tech improving rapidly and the cost falling dramatically. Enjoy the affordable access while it lasts because it won't last long.

Comment Re:"AI" is just an artificial politician (Score 3, Insightful) 26

Is AI good at summarizing

It's astonishingly bad at summarizing text. It will ignore important details and 'hallucinate' others. Oh, and if the thing you want it to summarize isn't accessible or doesn't exist, it will still provide a 'summary'.

or do you just believe it's good because it's convenient?

The output looks really good if you don't bother to check it for accuracy.

Is it really doing what it says it's doing

They suck at summarizing text because they're not actually summarizing text. All these things do, all they can do, is next-token prediction. That's why it doesn't matter if there isn't any text to summarize. Next-token probabilities are produced the exact same way, no matter what the context happens to be.
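The point above can be sketched in a few lines. This is a toy illustration, not a real LLM: the "model" here is a hypothetical table of bigram probabilities, but the generation loop works the way the comment describes, one token at a time, with the same sampling step no matter what the context refers to.

```python
import random

# Hypothetical bigram "model": P(next token | previous token).
# A real LLM conditions on the whole context with a neural net,
# but the generation loop is the same one-token-at-a-time process.
MODEL = {
    "the": {"cat": 0.6, "summary": 0.4},
    "cat": {"sat": 1.0},
    "sat": {"down": 1.0},
    "summary": {"is": 1.0},
}

def next_token(context):
    # Probabilities are produced the same way whether or not the
    # context refers to anything real -- there is no "check" step.
    dist = MODEL.get(context[-1], {"<end>": 1.0})
    tokens, probs = zip(*dist.items())
    return random.choices(tokens, weights=probs)[0]

def generate(prompt, max_len=5):
    out = list(prompt)
    for _ in range(max_len):
        tok = next_token(out)
        if tok == "<end>":
            break
        out.append(tok)
    return out
```

Note that nothing in the loop plans beyond the current token; each step only asks "what token is likely next?"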

do you just have a shortcut in your brain that says confident speech is probably right so you don't have to waste time thinking about it?

To be fair, I think we're all guilty of that. If not when it comes to AI generated nonsense, then a book or some other media. It's impossible for us to be experts in everything, so we all lean on expert opinion. We also tend to associate confidence with certainty, which is fine most of the time, provided we don't also mistake certainty for accuracy!

Comment Re:Ambiguity? (Score 1) 62

a fairly basic and trivial problem that should've been covered in the first week of any CS class.

Nonsense. Multidimensional arrays are not a week 1 topic. Not even in AP CS, which doesn't cover anything other than programming.

I'm periodically asked to teach the AP CS class at a local private school. Most students have no programming experience at all. A few have done Hour of Code, but they're no better off for it. The first two weeks are all computational thinking. You can't just throw aggregate structures and nested loops at them and expect anything other than confusion and frustration.
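For context, here's the kind of "trivial" exercise at issue: summing a 2D array with nested loops. Obvious to a practitioner, but it stacks several ideas that are each new to a true beginner (aggregate types, iteration, loop nesting).

```python
# Sum every element of a 2D array (a list of lists).
grid = [
    [1, 2, 3],
    [4, 5, 6],
]

total = 0
for row in grid:        # outer loop: one row at a time
    for value in row:   # inner loop: each element in the row
        total += value

print(total)  # 21
```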

If those are the hardest questions on the exam, I have no idea how the pass rate is so low. My class met just once a week for one semester and I have a 100% pass rate. Once they have a handle on programming, the rest is just details. I have to wonder what the failing 40% were being taught...

Comment Re:Just like humans: How we train them. (Score 1) 55

The problem with AI is that it does not get this feedback and so does not learn & get better.

LLMs simply do not work that way. We do update models using human feedback, just not the way people imagine. These things are not science fiction robots. They do not work like biological brains. They do not learn and change as they're used, nor can they be taught through pretend conversations or by having them fix bugs or make other corrections. Remember that they operate strictly on relationships between tokens, not on facts and concepts. All they do is next-token prediction, with no internal state retained between tokens, making it objectively impossible for them to plan a response beyond the current token. Whatever higher-level reasoning you're imagining simply isn't there, making whatever feedback you'd want them to incorporate completely meaningless.

Comment Re:Oh grasshopper… (Score 1) 90

That's just what silly autodidacts tell themselves to feel important. He couldn't be more transparent:

nothing your professor says in school will matter. You'll learn a bit from the books and assignments on your own, but what really matters will be the connections you make and the stamp on your resume

"School is useless! You can learn everything on your own. The only reason I'm not more successful is because I don't have the 'stamp on my resume' or the professional connections you were able to make."

Comment Re:Serious (Score 1) 70

They weren't AI-based, they were just programs where you answered a bunch of questions about the patient, symptoms, test results etc. and it came up with the most likely diagnoses.

I hate to break it to you, but expert systems are AI. In fact, expert systems were the driving force behind the great AI boom 40+ years ago. See: XCON

They're expensive to make and expensive to maintain, but expert systems don't hallucinate. Their utility and reliability are why they're still in use, with new systems being developed and deployed all the time. They don't get a lot of attention in the press, but they are still an active area of research.
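The core of a classic expert system can be sketched in a few lines. This is a minimal forward-chaining engine with hypothetical made-up rules, loosely in the spirit of systems like XCON: facts go in, rules fire until nothing new can be derived, and the system can only conclude what its rules entail — which is exactly why it can't hallucinate.

```python
# Hypothetical rule base: (set of required facts, conclusion).
RULES = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected", "rapid_test_positive"}, "flu_confirmed"),
]

def infer(initial_facts):
    """Forward chaining: fire every rule whose conditions hold,
    repeat until no new facts are derived (a fixed point)."""
    facts = set(initial_facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts
```

The expense the comment mentions is in the rule base, not the engine: real systems like XCON carried thousands of hand-maintained rules.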

Comment Re:Poor couple. (Score 1) 81

The law is unconstitutional, as other similar laws have been found in the past. It hasn't been removed from the books only because nobody has been charged under it in a century, so nobody has had a chance to challenge it on those grounds. The exception is the military, which has the UCMJ and is allowed to impose stricter restrictions on behavior.

Comment Re:More things wrong with the world. (Score 4, Informative) 81

Yeah, none of this will happen. Let's assume they don't have a prenup (if they did, the settlement of assets would be dictated by it). The wife would get 50% of what was generated during their marriage, at best. That may include the house, but its value would be subtracted from what she got in cash. Alimony... depends on a lot of circumstances, but it's increasingly rare and generally for a limited time. Plus, we have no idea what the wife's income is; she may make as much or more.

Will he get a job again? Of course he will. Probably not as a CEO in the near term, but he'll absolutely get jobs where he isn't a visible presence for the company. And in a few years the CEO jobs will open again, because nobody is going to give a fuck a year from now.

As for going to jail: no. If the alimony (which is unlikely to exist) does exist and is set high, he goes back to court to get it lowered, because alimony is based on your income (with a few exceptions, e.g. purposefully staying unemployed). Given that he was just publicly fired, his current income potential is very low, so any alimony would be correspondingly low. There are formulas for these things.

So in other words, you're just spouting misogynistic bullshit.
