Comment How is open source authoritarian? (Score 2) 27

The culture of China's AI firms seems to be to open-source everything on Hugging Face, which is really awesome. That's what OpenAI was supposed to be, but instead the American firms try to keep everything closed-source, proprietary, and for-profit. And you have the nerve to call China's AI "authoritarian" and the US's "free". You've very much got it backwards.

Comment Give it time (Score 1) 45

Horse breeders didn't vanish the moment the Model T hit the market; horse breeding peaked seven years after the Model T launched. The web took about five years to hit 100 million users after Netscape Navigator 1.0. In contrast, ChatGPT hit 100 million users in two months, easily an all-time record. It's a revolution, and yes, it's going to take all the jobs. Get over it.

Comment Integration more interesting (Score 1) 36

I think what would be more interesting than using AI to do the game development (which is already happening and not news) would be to integrate AI into the games themselves. People are already experimenting with having AI control NPCs (think of a World of Warcraft-style open-world game where you can have a real conversation and develop a relationship with the NPCs). LLMs are actually very well-suited to that sort of thing (chatbots like ChatGPT are themselves just fictional NPCs written by the underlying foundation model - see this paper).

Comment Re:What's the use case for bipedal humanoid form? (Score 1) 92

The four-legged robot dogs make more sense.

I think I'd find it annoying to have an extra set of legs like a centaur. The hindquarters would just get in the way most of the time, and even though the extra stability would be useful in some situations, I suspect those situations would be in the minority. I think there's a reason such things haven't evolved.

Comment Re:We aren't close at all (Score 1) 92

We can make a robot that can look humanoid and act a bit human. But we're 50+ years from a robot that has the manual dexterity of a human. We won't see a humanoid robot that can sculpt like Michelangelo or paint like Rembrandt for at least 50 years, if not longer. Our linear actuator/motor technology has plateaued for many decades with no foreseeable improvement. The only hope is some sort of artificial muscle tech, but materials tech in that field has also plateaued for a few decades.

AI-generated art seems unarguably on par with the quality of Michelangelo or Rembrandt (setting aside the extent to which it is derivative rather than truly original), but I agree the manual dexterity of physical robots seems to be lagging for some reason. My sense is that the problem isn't computational but physical: we can't seem to build robots with the same raw weight/power/dexterity/flexibility ratios that animals possess in their muscles and skeletons. If we had a computer simulation of a cat with all its muscles and skeleton accurately simulated, I think we could get it to learn to do the same things control-wise. What we can't do is build a real metal/mechanical cat with all those same degrees of freedom. (Though corrections welcome.)

Comment Re:What does it do? (Score 2) 92

A general-purpose humanoid robot with human-like intelligence and dexterity will be able to substitute for a human being. It's true that special-purpose non-humanoid robots may do a better job at specific tasks, but due to the economy of scale (commoditization) of mass-produced general-purpose humanoid robots, they will be available at a lower price than lower-volume special-purpose robots. That's my understanding of the theory, anyway.

Comment Re:Sycophantic responses (Score 1) 41

Chatbots seem wired to ingratiate themselves with their users. I use one to help with coding, mainly to answer "how do I do this" or "explain what x does" questions so I can code better. If I point out an error in its reply, I get a "You are correct, great catch" type of response, and many questions are answered starting with "Great question..."

You'd think, after ingesting tons of message board posts, for coding at least, most answers would be "Noob, RTFM."

AI companies (like OpenAI) use a technique called RLHF (Reinforcement Learning from Human Feedback) to shape an AI bot's personality. The math is a little complicated, but the gist is that they show the model examples of the kind of behavior they like ("Great question", "Great catch", and so on) and the AI learns (is brainwashed) to exhibit that behavior.

It's only one of many sources of an AI's personality, but it is a very potent one.
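To make the gist concrete, here's a toy sketch of the preference-learning step at the heart of RLHF. All names and numbers are illustrative (this is not OpenAI's actual code): a reward model scores two candidate replies, and the standard Bradley-Terry pairwise loss pushes the human-preferred reply's score above the other's. The chatbot is then fine-tuned to maximize that learned reward, which is how "Great catch!" behavior gets baked in.

```python
import math

def preference_loss(reward_chosen: float, reward_rejected: float) -> float:
    """-log sigmoid(r_chosen - r_rejected): small when the human-preferred
    reply already scores higher, large when it scores lower."""
    margin = reward_chosen - reward_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# A friendly reply the labelers upvoted vs. a blunt "RTFM" reply:
loss_good_ordering = preference_loss(2.0, -1.0)  # preferred reply scores higher
loss_bad_ordering = preference_loss(-1.0, 2.0)   # preferred reply scores lower
```

Training drives the loss down, so the reward model (and, downstream, the chatbot) is steadily pulled toward whatever the labelers rewarded.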

Comment Re:LLMs can't match even braces (Score 1) 54

The code compiles about 50% of the time. It often can't even match braces and will introduce semi-colons in the middle of statements.

That doesn't match my experience at all with C++ and Python. I use the Plus version of ChatGPT, and I've literally never seen it make a syntax error since the GPT-4-based models launched. It occasionally produces subtle logic bugs (but so do humans), though we're talking once every few thousand lines of code, and never something as basic as mismatched braces or semicolons. Given that Java has a simpler grammar and is so popular, I would have expected its proficiency in Java to be at least as good.

You mention ChatGPT. Are you sure you're using one of the latest models, and one that is recommended for coding? o4-mini-high is described as "Great at coding and visual reasoning".

Comment Re:Already done with markov chains (Score 1) 54

The only limitation of both old models like HMMs and the current chatbots is that they don't have a concept of the state of the chess board.

I'm not so sure. I was recently working on some font-rendering software with ChatGPT, parsing the glyphs of TrueType files. Glyph outlines are described by closed loops of (x,y) points called contours, which define quadratic Bézier curves. I had parsed out the contour points for the character 'a' in some rare font, and I gave ChatGPT those points (without telling it what character they came from). It suddenly occurred to me to ask, as a side question, whether it could recognize the character from the contour points alone, and it could. It said it "looked" like the letter 'a'.

How in the world can a language model do that? The data had been normalized in an unusual way and the font was very rare, so there is zero chance it had memorized this data from the training set. It must have visualized the points somehow in its "latent space" and observed that they looked like an 'a'. I asked it how it knew it was the letter 'a', and it correctly described the shape of the two curves (hole in the middle, upward and downward strokes on the right side) and its reasoning process.
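For anyone unfamiliar with the format, the contour math involved is simple. A minimal sketch (with made-up point data, not from any real font): each segment of a TrueType contour is a quadratic Bézier curve, evaluated from an on-curve start point, an off-curve control point, and an on-curve end point.

```python
# Evaluate one quadratic Bézier segment of a TrueType contour.
# B(t) = (1-t)^2 * P0 + 2(1-t)t * P1 + t^2 * P2, for t in [0, 1].

def quad_bezier(p0, p1, p2, t):
    """Point on the curve at parameter t: p0/p2 are on-curve
    endpoints, p1 is the off-curve control point."""
    u = 1.0 - t
    x = u * u * p0[0] + 2 * u * t * p1[0] + t * t * p2[0]
    y = u * u * p0[1] + 2 * u * t * p1[1] + t * t * p2[1]
    return (x, y)

# One toy segment: a shallow arch from (0,0) to (1,0).
start, control, end = (0.0, 0.0), (0.5, 1.0), (1.0, 0.0)
mid = quad_bezier(start, control, end, 0.5)  # (0.5, 0.5)
```

The (x,y) lists I handed ChatGPT were just the P0/P1/P2 points of loops of such segments, which is what makes the recognition feat so striking.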

There is absolutely more going on in these Transformer-based neural networks than people understand. It appears that they have deduced how to think in a human-like way and at human levels of abstraction, from reading millions of books. In particular they have learned how to visualize like we do from reading natural language text. It wouldn't surprise me at all if they can visualize a chess board position from a sequence of moves.

Comment Re:"Top talent" (Score 1) 57

After a certain modest amount (less than you think), money can't buy happiness.

Overall, I rate your claim false.

Nobel laureates Daniel Kahneman and Angus Deaton agree with me: High income improves evaluation of life but not emotional well-being

From the paper:

We raise the question of whether money buys happiness

TL;DR: It doesn't.

Comment Re:"Top talent" (Score 1) 57

The recipes are not secret; most AI researchers know the full recipe. DeepSeek has published detailed papers on all their techniques. Meta and DeepSeek have published their full weights and models, and there are hundreds of models from them and many other companies on Hugging Face. The math involved is simple linear algebra and calculus, first-year undergrad material at any university. DeepSeek is staffed by new grads with 1-2 years of experience.
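To illustrate the "first-year math" point: scaled dot-product attention, the core Transformer operation, is a couple of matrix multiplies and a softmax. The shapes below are toy values, not any real model's dimensions.

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: each query's output is a
    softmax-weighted mix of the value rows."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # query-key similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over keys
    return weights @ V

rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((4, 8)) for _ in range(3))
out = attention(Q, K, V)  # shape (4, 8)
```

That's it: matrix products and an exponential. The secret sauce is in data, scale, and engineering, not exotic mathematics.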

That said, there are probably a few minor proprietary tricks you would pick up from first-hand experience at the market leader, OpenAI. The ridiculously high pay is just supply and demand doing its thing: even the smallest edge is extremely valuable given the upside of winning the AI race (high demand), and it would be hard to get OpenAI folks to jump ship (low supply).

The other thing I'd mention is something people who haven't had really high-paying jobs are not aware of: after a certain modest amount (less than you think), money can't buy happiness. You think it can, but it can't. Ask lotto winners how they're doing.
