Comment Re:LLMs can't match even braces (Score 1) 54

The code compiles about 50% of the time. It often can't even match braces and will introduce semi-colons in the middle of statements.

That doesn't match my experience at all with C++ and Python. I use the Plus version of ChatGPT, and I've literally never seen it make a syntax error since the GPT-4-based models launched. It occasionally produces subtle logic bugs (but so do humans), but we're talking about once every few thousand lines of code - and never something as basic as mismatched braces or misplaced semicolons. Given that Java has a simpler grammar and is so popular, I would expect its proficiency in Java to be at least as good.

You mention ChatGPT. Are you sure you are using one of the latest models, and one that is recommended for coding? o4-mini-high is described as "Great at coding and visual reasoning".

Comment Re:Already done with markov chains (Score 1) 54

The only limitation of both old models like HMMs and the current chatbots is that they don't have a concept of the state of the chess board.

I'm not so sure. I was recently working with ChatGPT on some font-rendering software to parse the glyphs of TrueType files. Each glyph is described by closed loops of (x,y) points called contours, which define its Bézier curve outlines. I had parsed out the (x,y) points for the contours of the character 'a' in some rare font, and I gave ChatGPT these points (but I didn't tell it what character they were from). It suddenly occurred to me to ask, as a side question, whether it could recognize the character from these contour points, and it could. It said it "looked" like the letter 'a'.

How in the world can a language model do that? The data had been normalized in an unusual way and the font was very rare. There is zero chance it had memorized this data from the training set. It absolutely must have visualized the points somehow in its "latent space" and observed that it looked like an 'a'. I asked it how it knew it was the letter 'a', and it correctly described the shape of the two curves (hole in the middle, upward and downward part on right side) and its reasoning process.
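For concreteness, here is a minimal Python sketch of the kind of experiment described above. The contour points, the normalization scheme, and the prompt wording are all made-up placeholders for illustration, not the actual data or prompt from that session:

```python
import json

# Hypothetical contour data for one glyph: each contour is a closed loop
# of (x, y) points, as parsed from a TrueType 'glyf' table.
contours = [
    [(130, 0), (60, 70), (60, 380), (130, 450), (310, 450), (380, 380)],
    [(180, 120), (150, 160), (150, 300), (180, 340), (300, 340), (300, 120)],
]

def normalize(contours):
    """Scale all points into the unit square, preserving aspect ratio."""
    xs = [x for c in contours for x, _ in c]
    ys = [y for c in contours for _, y in c]
    span = max(max(xs) - min(xs), max(ys) - min(ys)) or 1
    return [[((x - min(xs)) / span, (y - min(ys)) / span) for x, y in c]
            for c in contours]

# Serialize the normalized points into a plain-text question for the model.
prompt = ("Which character do these glyph contours trace?\n" +
          json.dumps([[(round(x, 3), round(y, 3)) for x, y in c]
                      for c in normalize(contours)]))
```

The interesting part is that the model only ever sees the flat list of numbers in the prompt, yet can apparently infer the shape they trace.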

There is absolutely more going on in these Transformer-based neural networks than people understand. It appears that they have deduced how to think in a human-like way and at human levels of abstraction, from reading millions of books. In particular they have learned how to visualize like we do from reading natural language text. It wouldn't surprise me at all if they can visualize a chess board position from a sequence of moves.

Comment Re:"Top talent" (Score 1) 57

After a certain modest amount (less than you think), money can't buy happiness.

Overall, I rate your claim false.

Nobel laureates Daniel Kahneman and Angus Deaton agree with me: High income improves evaluation of life but not emotional well-being

From the paper:

We raise the question of whether money buys happiness

TL;DR: It doesn't.

Comment Re:"Top talent" (Score 1) 57

The recipes are not secret. Most AI researchers know the full recipe. DeepSeek has published detailed papers on all their techniques. Meta and DeepSeek have published their full weights and models, and there are hundreds of models from them and many other companies published on Hugging Face. The math involved is simple linear algebra and calculus - first-year undergrad math at every university. DeepSeek is staffed by new grads with 1-2 years' experience.
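To illustrate the "simple linear algebra" point, here is a minimal NumPy sketch of single-head scaled dot-product attention, the central Transformer operation. Shapes and values are arbitrary; this is the textbook formulation, not any particular lab's implementation:

```python
import numpy as np

def attention(Q, K, V):
    """Single-head scaled dot-product attention: two matmuls and a softmax."""
    scores = Q @ K.T / np.sqrt(K.shape[-1])      # query-key similarity
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w = w / w.sum(axis=-1, keepdims=True)        # softmax over keys
    return w @ V                                 # weighted mix of values

rng = np.random.default_rng(0)
Q, K, V = rng.standard_normal((3, 4, 8))         # 4 tokens, 8 dims each
out = attention(Q, K, V)                         # shape (4, 8)
```

Everything here is matrix multiplication plus one exponential - nothing beyond a first undergrad course.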

That said, there are probably a few minor proprietary tricks you would pick up from first-hand experience at the market leader, OpenAI. The ridiculously high pay is just supply and demand doing its thing: even the smallest edge is extremely valuable given the upside of winning the AI race (high demand), and it would be hard to get OpenAI folks to jump ship (low supply).

The other thing I would mention is something people who haven't had really high-paying jobs are not aware of: after a certain modest amount (less than you think), money can't buy happiness. You think it can, but it can't. Ask lotto winners how they're doing.

Comment Re:Seriously? (Score 1) 175

You've misunderstood Darwinism. Natural selection has nothing to do with who "deserves" anything; it's only about whose genes get propagated forward and whose do not. And it's not (necessarily) the stupidest among us who will likely die off, it's the least fit, for whatever definition of "fit" is pragmatically relevant for a genome's survival and reproduction under current circumstances. In today's world, stupidity might actually be a reproductive advantage.

Obligatory Idiocracy reference: Idiocracy (2006)

Comment If this is real... (Score 1) 175

If this is a true story (which I am a little skeptical about)...

I think this is an opportunity. Rather than AI reinforcing mental illness, it could be trained to do the opposite and get people's feet back on the ground.

Another thing to consider: suppose that is already the case, and 99 times out of 100 it does help stabilize the mentally ill. It's just 1 time in 100 that it gets into a feedback loop fueling the delusion. We're not likely to hear the stories about the 99 times, right? A kind of survivorship bias.
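The survivorship-bias argument can be made concrete with a toy simulation. The 99-in-100 rate is purely an assumption for illustration, not a measured figure:

```python
import random

random.seed(1)

# Assume the chatbot helps 99 times out of 100 and harms 1 time in 100,
# but only the harmful cases generate news stories.
N = 100_000
harmed = sum(1 for _ in range(N) if random.random() < 0.01)

print(f"overall failure rate: {harmed / N:.2%}")   # roughly 1%
print("failure rate among reported stories: 100%") # only failures make news
```

Even with a 99% success rate, every story that reaches us is a failure, so the reported sample says nothing about the true rate.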

Comment Re:You're going to have to learn it (Score 1) 68

And I'm sorry but if you refuse to learn that skill, or you just plain suck at it, you will not be performing up to expectations in the jobs of the near future. People who can do it will run rings around you.

I think you are correct; at minimum, human workers should learn to use AI immediately, if not sooner.

However, I'm pretty sure AI is going to completely replace, not just augment, the vast majority of current roles - and much faster than previous technological transformations. That's the big fear experts are sounding the alarm about. There have been big transformations before: on the main streets of NYC in ~1900, every vehicle was a horse-drawn carriage; ten years later, every single vehicle was a motor car. The humans survived that with no problem, BUT think about it from the horses' point of view. :) We are the horses in this scenario.

Comment Re:The writing is on the wall... (Score 1) 37

There is no evidence of actual creativity

Please describe a test that, if passed, would demonstrate that the entity taking the test has "actual creativity". I've seen first-hand AI making inferences between fields and drawing conclusions that are not in its training data. If that's not "actual" creativity, then what is?

The consensus (as in the vast majority) among experts in the field, experts in related fields, world leaders, and CEOs is that AI is already very good and is getting better at an alarming rate. Almost none of them believe AI won't become superhuman (as it did in chess); the debate has now turned to how long it will take.

It was just a month or two ago that I was getting downvoted for suggesting that AI would soon be able to produce photorealistic videos indistinguishable from reality - and then along comes Veo 3. This is going to keep happening, month after month, until the AI skeptics are as embarrassing as the flat-earthers.

Comment Re:The writing is on the wall... (Score 2) 37

And how will people and organizations be motivated to publish anything in that world? Right. Eventually it will just be old data and AI slop in these "knowledge engines".

Your question supposes that AI is and/or will continue to be incapable of producing new content that is as good or better than humans can.

I don't think this is the case. The consensus among experts in the field is that new AI content is currently on par with that of a college-level student in almost every field, up from around high-school level last year. As with chess, it is expected it won't be long before they are far better than the best human expert in every field.

I wouldn't be surprised if the opposite prejudice develops against human-produced work. That is, if someone publishes so-called "human slop" it will be considered a rude waste of time getting in the way of the far superior AI-generated content.

Further, your question supposes that AI is just regurgitating human-produced work. The existing body of human-produced data is indeed used to bootstrap AI systems, but that doesn't imply it will be needed on an ongoing basis. There is clear evidence that AI is able to develop new knowledge, in much the same way a human can.

Comment Re:in other words (Score 2) 181

If you're actually interested in this topic there is a really great survey of all the different theories of consciousness called A landscape of consciousness: Toward a taxonomy of explanations and implications by Robert Lawrence Kuhn (creator of PBS series "Closer To Truth").

My interpretation is that there are plenty of answers to these questions. The problem is that consciousness is a subjective experience, so there is no scientific way to distinguish objectively right answers from objectively wrong answers. That is, there is no test we can perform. Answers are not falsifiable. That is a barrier for a scientist, but philosophers are allowed to plow right through. :)

Comment The writing is on the wall... (Score 2) 37

It's obvious that AI will make the Google Search business model obsolete. Who on earth will still want to sift through all those pages/sites when you can get an AI to do it for you? The way ChatGPT is "monetized" is that you pay them money to use it (shock! horror!). People should welcome this. When you pay money for a product, the product is the product. When you use an ad-sponsored product for free, YOU are the product.

What Google should do is drop advertising and require a $/month subscription to use google.com. That way they would have no obligation to drive traffic to anyone, and they can focus on providing users with whatever information they are searching for.

That's not going to happen anytime soon because, after 20 years, the advertising business is now part of Google's DNA. So they are just going to die slowly over the next decade clinging to it, milking it to the last drop, until they are on their deathbed and make a big pivot, just like Microsoft did with shrink-wrapped software.

Comment Re:Complete outsider... (Score 1) 34

Free tier is limited to ~10-20 queries per day of the low-end models, whereas the $20 tier is virtually unlimited for the "dumbest" models, with moderate access to reasoning models, while the $200 tier gives you access to enhanced reasoning models that can think for multiple minutes at a time

That's a reasonable summary, but I think it downplays the $20/month access (Plus subscription) a little too much. For $20/month you get quota-based access to the best models (including the "Deep Research" ones that spend 30 minutes developing a research paper). My understanding is that for $200/month (Pro subscription) you get unlimited access to the best models.
