Comment Re:What's the use case for bipedal humanoid form? (Score 1) 92

The four-legged robot dogs make more sense.

I think I'd find it annoying to have an extra set of legs like a centaur. The hindquarters would just get in the way most of the time, even though the extra stability would be useful in some situations - I suspect those situations would be in the minority. I think there's a reason such body plans haven't evolved.

Comment Re:We aren't close at all (Score 1) 92

We can make a robot that can look humanoid and act a bit human. But we're 50+ years from a robot that has the manual dexterity of a human. We won't see a humanoid robot that can sculpt like Michelangelo or paint like Rembrandt for at least 50 years, if not longer. Our linear actuator/motor technology has plateaued for many decades with no foreseeable improvement. The only hope is some sort of artificial muscle tech, but materials tech in that field has also plateaued for a few decades.

AI-generated art seems unarguably on par with the quality of Michelangelo or Rembrandt (setting aside the extent to which it is derivative or truly original), but I agree the manual dexterity of physical robots seems to be lagging for some reason. My sense is that the problem isn't computational but physical. We can't seem to build physical robots with the same raw weight/power/dexterity/flexibility ratios that animals possess in their muscles and skeletons. If we had a computer simulation of a cat with all its muscles and skeleton accurately simulated, I think we could get it to learn to do the same things control-wise. What we can't do is create a real metal/mechanical cat with all those same degrees of freedom. (Though corrections welcome.)

Comment Re:What does it do? (Score 2) 92

A general-purpose humanoid robot with human-like intelligence and dexterity will be able to substitute for a human being. It's true that special-purpose non-humanoid robots may do a better job at specific tasks, but due to the economy of scale (commoditization) of mass-produced general-purpose humanoid robots, they will be available at a lower price than lower-volume special-purpose robots. That's my understanding of the theory, anyway.

Comment Re:Sycophantic responses (Score 1) 41

Chatbots seem wired to ingratiate themselves with their users. I use one to help with coding, mainly to answer "how do I do this" or "explain what X does" questions so I can write better code. If I point out an error in its reply, I get a "You are correct, great catch" type response, and many answers open with "Great question..."

You'd think that, after ingesting tons of message board posts, most coding answers would be "Noob, RTFM."

AI companies (like OpenAI) use a technique called RLHF (Reinforcement Learning from Human Feedback) to shape their bots' personalities. The math is a little complicated, but the gist of it is that human raters score example responses, and the model learns (is brainwashed) to produce the kind of behavior that gets rated highly ("Great question", "Great catch", and so on).

It's only one of many sources of an AI's personality, but it is a very potent one.
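The idea above can be caricatured in a few lines. This is only a toy sketch of the reinforcement-from-preferences concept, not OpenAI's actual pipeline: a "policy" picks between two reply styles, a stand-in "human rater" rewards the ingratiating one, and a simple multiplicative-weights update shifts the policy toward whatever gets rewarded.

```python
import random

random.seed(0)

# Two hypothetical reply styles the "policy" can choose between.
styles = ["Great question! Here's the fix...", "Noob, RTFM."]
weights = [1.0, 1.0]  # unnormalized preference weights, one per style

def human_reward(reply):
    # Stand-in for human raters (or a reward model trained on their
    # labels): politeness scores +1, rudeness scores -1.
    return 1.0 if reply.startswith("Great question") else -1.0

for step in range(1000):
    total = sum(weights)
    probs = [w / total for w in weights]
    i = 0 if random.random() < probs[0] else 1  # sample a style
    r = human_reward(styles[i])
    # Reinforce rewarded choices, suppress punished ones.
    weights[i] = max(weights[i] * (1.0 + 0.01 * r), 1e-6)

p_polite = weights[0] / sum(weights)
# After training, the polite style dominates the policy.
```

Real RLHF optimizes a neural network with a learned reward model and policy-gradient methods, but the feedback loop it creates is the same shape.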

Comment Re:LLMs can't match even braces (Score 1) 54

The code compiles about 50% of the time. It often can't even match braces and will introduce semi-colons in the middle of statements.

That doesn't match my experience at all from C++ and Python. I use the Plus version of ChatGPT, and I've literally never seen it make a syntax error since the GPT-4 based models launched. It occasionally produces subtle logic bugs (but so do humans), but we're talking about once every few thousand lines of code - and never something as basic as mismatched braces or misplaced semicolons. Given that Java has a simpler grammar and is so popular, I would have expected its proficiency in Java to be at least as good.

You mention ChatGPT. Are you sure you're using one of the latest models, and one that is recommended for coding? o4-mini-high is described as "great at coding and visual reasoning".

Comment Re:Already done with markov chains (Score 1) 54

The only limitation of both old models like HMMs and the current chatbots is that they don't have a concept of the state of the chess board.

I'm not so sure. I was recently working on some font rendering software with ChatGPT, parsing the glyphs of TrueType files. The glyphs are described by contours: closed loops of (x,y) points that define quadratic Bézier curves. I had parsed out the (x,y) points for the contours of the character 'a' in some rare font, and I gave ChatGPT those points (but I didn't tell it what character they were from). It suddenly occurred to me to ask a side question: could it recognize the character from these contour points? It could. It said it "looked" like the letter 'a'.

How in the world can a language model do that? The data had been normalized in an unusual way, and the font was very rare. There is zero chance it had memorized this data from the training set. It absolutely must have visualized the points somehow in its "latent space" and observed that they looked like an 'a'. I asked it how it knew it was the letter 'a', and it correctly described the shape of the two curves (hole in the middle, upward and downward parts on the right side) and its reasoning process.
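For concreteness, here is a minimal sketch of the kind of normalization step described above. The contour points are made up for illustration (not from any real font), and this does no actual TrueType parsing - it just rescales a closed loop of (x,y) points into a unit box, the sort of "unusual normalization" that would rule out rote memorization of the raw coordinates.

```python
def normalize_contour(points):
    """Scale and translate a closed loop of (x, y) points so it fits
    in the unit box [0, 1] x [0, 1], preserving aspect ratio."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    min_x, min_y = min(xs), min(ys)
    # Uniform scale factor: the larger of the two extents.
    span = max(max(xs) - min_x, max(ys) - min_y) or 1.0
    return [((x - min_x) / span, (y - min_y) / span) for x, y in points]

# A made-up outer contour in font units (hypothetical, not the real 'a'):
outer = [(120, 0), (620, 0), (620, 520), (120, 520)]
unit_contour = normalize_contour(outer)
```

In a real pipeline these points would come from the `glyf` table of a TrueType file (e.g. via the fontTools library), with on-curve and control points distinguished.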

There is absolutely more going on in these Transformer-based neural networks than people understand. It appears that they have deduced how to think in a human-like way and at human levels of abstraction, from reading millions of books. In particular they have learned how to visualize like we do from reading natural language text. It wouldn't surprise me at all if they can visualize a chess board position from a sequence of moves.
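On the chess point: a board position is a pure function of the move sequence, so tracking it explicitly is trivial code - the interesting question is whether an LLM learns an equivalent internal representation from move text alone. A minimal sketch (coordinate moves only, no legality checks, captures handled by overwriting the destination square):

```python
def start_position():
    """Return the standard starting position as a square -> piece dict
    (uppercase = White, lowercase = Black)."""
    board = {}
    for file in "abcdefgh":
        board[file + "2"], board[file + "7"] = "P", "p"
    for file, piece in zip("abcdefgh", "RNBQKBNR"):
        board[file + "1"] = piece
        board[file + "8"] = piece.lower()
    return board

def apply_moves(moves):
    """Replay coordinate moves like 'e2e4'; the final dict is the
    board state implied by the sequence."""
    board = start_position()
    for mv in moves:
        src, dst = mv[:2], mv[2:]
        board[dst] = board.pop(src)  # overwrite = capture
    return board

pos = apply_moves(["e2e4", "e7e5", "g1f3"])
# pos now has a White pawn on e4, a Black pawn on e5, a knight on f3.
```

A full implementation (castling, en passant, promotion, legality) is what libraries like python-chess provide; the point here is only that the state is cheap to maintain explicitly.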

Comment Re:"Top talent" (Score 1) 57

After a certain modest amount (less than you think), money can't buy happiness.

Overall, I rate your claim false.

Nobel laureates Daniel Kahneman and Angus Deaton agree with me: High income improves evaluation of life but not emotional well-being

From the paper:

We raise the question of whether money buys happiness

TL;DR: It doesn't.

Comment Re:"Top talent" (Score 1) 57

The recipes are not secret. Most AI researchers know the full recipe. DeepSeek has published detailed papers on all their techniques. Meta and DeepSeek have published their full weights and models, and there are hundreds of models from them and many other companies published on Hugging Face. The math involved is simple linear algebra and calculus - first-year undergrad math at every university. DeepSeek is staffed by new grads with 1-2 years of experience.
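To make the "first-year math" claim concrete, here is a toy illustration at miniature scale: fitting a one-feature linear model by gradient descent needs nothing beyond Calc I derivatives and dot products, no libraries at all. (Real training scales this to billions of parameters, but the core update rule is the same.)

```python
# Data generated by y = 2x + 1; we recover w and b by gradient descent.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]

w, b, lr = 0.0, 0.0, 0.05
for _ in range(5000):
    # Partial derivatives of mean squared error w.r.t. w and b,
    # straight from first-year calculus.
    grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
    grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad_w
    b -= lr * grad_b

# w converges near 2, b near 1.
```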

That said, there are probably a few minor proprietary tricks you would pick up from first-hand experience at the market leader, OpenAI. The ridiculously high pay is just supply and demand doing its thing. Even the smallest edge is extremely valuable given the upside of winning the AI race (high demand), and it would be hard to get OpenAI folks to jump ship (low supply).

The other thing I would mention is something people who haven't had really high-paying jobs are not aware of: after a certain modest amount (less than you think), money can't buy happiness. You think it can, but it can't. Ask lotto winners how they're doing.

Comment Re:Seriously? (Score 1) 175

You've misunderstood Darwinism. Natural selection has nothing to do with who "deserves" anything; it's only about whose genes get propagated forward and whose do not. And it's not (necessarily) the stupidest among us who will likely die off, it's the least fit, for whatever definition of "fit" is pragmatically relevant for a genome's survival and reproduction under current circumstances. In today's world, stupidity might actually be a reproductive advantage.

Obligatory Idiocracy reference: Idiocracy (2006)

Comment If this is real... (Score 1) 175

If this is a true story (which I am a little skeptical about)...

I think this is an opportunity. Rather than AI reinforcing mental illness, it could be trained to do the opposite and get people's feet back on the ground.

Another thing to consider: suppose that is already the case, and 99 times out of 100 it does help stabilize the mentally ill. It's just 1 time in 100 that it gets into a feedback loop fueling the delusion. We're not likely to hear the stories about the 99 times, right? A kind of survivorship bias.

Comment Re:You're going to have to learn it (Score 1) 68

And I'm sorry but if you refuse to learn that skill, or you just plain suck at it, you will not be performing up to expectations in the jobs of the near future. People who can do it will run rings around you.

I think you are correct, at minimum human workers should learn to use AI immediately, if not sooner.

However, I'm pretty sure AI is going to completely replace, and not just augment, the vast majority of current roles - and much faster than previous technological transformations. That's the big fear experts are sounding the alarm about. There have been big transformations before. On the main streets of NYC around 1900, every vehicle was a horse-drawn carriage. Ten years later, every single vehicle was a motor car. The humans survived that no problem, BUT think about it from the horses' point of view. :) We are the horses in this scenario.

Comment Re:The writing is on the wall... (Score 1) 37

There is no evidence of actual creativity

Please describe a test that, if passed, would demonstrate that the entity taking the test has "actual creativity". I've seen first-hand AI making inferences between fields and drawing conclusions that are not in its training data. If that's not "actual" creativity, then what is?

The consensus (as in vast majority) among experts in the field, experts in related fields, world leaders, and CEOs is that AI is already very good and is getting better at an alarming rate. Almost none of them believe AI won't become superhuman (as it did in chess); the debate has now turned to how long it will take.

It was just a month or two ago that I was getting downvoted for suggesting AI would soon be able to produce photorealistic videos indistinguishable from reality - and then along comes Veo 3. This is going to keep happening, month after month, until the AI skeptics are as embarrassing as the flat-earthers.

Comment Re:The writing is on the wall... (Score 2) 37

And how will people and organizations be motivated to publish anything in that world? Right. Eventually it will just be old data and AI slop in these "knowledge engines".

Your question supposes that AI is and/or will continue to be incapable of producing new content that is as good or better than humans can.

I don't think this is the case. The consensus among experts in the field is that new AI content is currently on-par with about a college-level student in almost every field, up from around high-school level last year. Like chess, it is expected it won't be long before they are far better than the best human expert in every field.

I wouldn't be surprised if the opposite prejudice develops against human-produced work. That is, if someone publishes so-called "human slop", it will be considered a rude waste of time, getting in the way of far superior AI-generated content.

Further, your question supposes that AI is just regurgitating human-produced work. The existing set of human-produced data is indeed used to bootstrap AI systems, but that doesn't imply it will be needed on an on-going basis. There is clear evidence that AI is able to develop new knowledge, in much the same way as a human can.
