Comment Re: ChatGPT is not a chess engine (Score 1) 105

This is a badly conducted experiment by some random fuck on LinkedIn. Talk about unchecked data and garbage. Apparently everybody on Slashdot is now so hell-bent on disparaging anything AI that they'll take any bit of ragebait at face value.

The LinkedIn post: https://www.linkedin.com/posts...

Relevant quotes by the author:
- "Despite being given a baseline board layout to identify pieces, ChatGPT confused rooks for bishops, missed pawn forks, and repeatedly lost track of where pieces were — first blaming the Atari icons as too abstract to recognize, then faring no better even after switching to standard chess notation."
- "Regardless of whether we’re comparing specialized or general AI, its inability to retain a basic board state from turn to turn was very disappointing. Is that really any different from forgetting other crucial context in a conversation?"

Also note that he doesn't indicate which model was used, which means it was probably a combination of GPT-4o and GPT-4.1-mini. Both are far removed from the state of the art and were specifically built to be cheap at the expense of reasoning ability. This experiment tells us things we already knew, things that have since been improved on in other models (maybe even solved).

But I guess the engagement bait worked for this guy's LinkedIn post.

Comment Re:Total number of qbits (Score 3) 26

The summary is hilariously ironic:

This was the approach IBM focused on initially, but the company eventually realized that creating the hardware to support it was an "engineering pipe dream"

And then later:

"We feel confident it is now a question of engineering to build these machines, rather than science."

"Cracked the code" might be overselling it a bit at this point, I would say.

Comment Re:20+% growth is stalling? (Score 1) 18

When your head is firmly buried in the sand, it may look like that.

The volatile pricing of inference is correctly seen as hard to plan for, which simply means that inference providers will start offering more rate-limited flat-fee options and other pricing models attractive to customers who need predictability. This is nothing special (cloud computing has had similar issues and solutions for years) and definitely not evidence for "OMG AI IS HYPE YOU IDIOTS", with which you so consistently pollute Slashdot.
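A back-of-the-envelope illustration of why the flat-fee option appeals to anyone who has to budget for this (every number below is made up purely for illustration):

# Toy comparison of pay-per-token vs. a flat-fee plan; all figures invented.
price_per_million_tokens = 2.50               # hypothetical pay-as-you-go rate (USD)
monthly_usage_millions = [80, 300, 120, 900]  # volatile month-to-month token usage

pay_as_you_go = [m * price_per_million_tokens for m in monthly_usage_millions]
flat_fee = 600.0                              # hypothetical rate-limited flat plan

print(pay_as_you_go)       # [200.0, 750.0, 300.0, 2250.0] -> hard to budget for
print(max(pay_as_you_go))  # 2250.0 worst month, vs. a predictable 600.0 per month

The per-token bill swings by an order of magnitude; the flat plan trades some headroom for a number you can actually put in next year's budget, which is exactly the trade cloud providers have been selling for years.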

Comment Re: Foolishness. (Score 1) 68

You say that with the confidence of a chat bot

is a thinly veiled attack on both me and AI assistants, each of which constitutes an ad hominem in itself. "Chat bot" is being used as a pejorative here.

We're not talking about a knowledgeable personal tutor here but a next token predictor optimised on stringing together sentences that the user is most likely to believe is an actual answer to their question

Again, a massive understatement of what AI assistants can currently do. You can call them stochastic parrots or whatever negative description you can come up with as often as you want, but it doesn't change the billions and billions of times AI assistants have already provided knowledge and insight, and continue to do so. It doesn't change the fact that AI assistants already outperform humans on a multitude of very hard tasks and will only get better.

Now, I agree that they're not perfect, but this is just not true: "Their potential to be counterproductive is about as big as to be helpful."
It's like saying the chances of winning the lottery are 50-50 because either you win or you lose. It does not work that way.
Establishing that baseline would require counting the percentage of a student's interactions with such an AI that were helpful versus counterproductive. Even then it would depend heavily on how the AI was integrated into the process, what additional guardrails were in place, the system prompt used, and so on.

Looking at it as an engineering problem, I'd say it's pretty easy to make an educational tutoring AI assistant that is helpful and reliably so.
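As a minimal sketch of what I mean by "engineering problem" (assuming something like the OpenAI Python client; the model name, prompts, and validation pass are all invented for illustration, not any particular product):

# Toy guardrailed tutor: constrain the model to a fixed corpus and
# double-check each reply before it reaches the student.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "You are a Socratic tutor. Use only the supplied course material. "
    "Never give the final answer outright; guide the student step by step. "
    "If a question falls outside the course material, say so."
)

def tutor_reply(student_message: str, course_material: str) -> str:
    draft = client.chat.completions.create(
        model="gpt-4o",  # placeholder; any capable model would do
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user",
             "content": f"Course material:\n{course_material}\n\nStudent: {student_message}"},
        ],
    ).choices[0].message.content

    # Cheap second pass: reject replies that hand over the answer or
    # wander off the corpus, and fall back to a safe prompt instead.
    verdict = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "Answer PASS or FAIL: does this tutor reply stay within "
                        "the course material and avoid giving the final answer outright?"},
            {"role": "user", "content": draft},
        ],
    ).choices[0].message.content

    return draft if verdict.strip().upper().startswith("PASS") else \
        "Let's work through that one step at a time together."

The point is that the base model is only one component: the corpus restriction, the validation pass, and the system prompt are all ordinary software engineering.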

Comment Re:Foolishness. (Score 1) 68

You seem to think that it would be hard to take the base level of knowledge and intelligence of frontier LLMs and engineer software around it that is fine-tuned to a very specific environment and a very specific, uncontroversial corpus. You seem to think that it needs to be perfect to add value. You seem to think that it would consist of handing out answers to questions without any form of additional validation.

See it as an engineering challenge rather than an impossibility. How would you make LLMs usable in education?

Comment Re:Foolishness. (Score 1) 68

No, it's very clear that trying to ban the tools is going to be completely unsuccessful, and it's also clear that having a personal tutor can immensely increase the rate of learning. It is thus very far from a 'wild guess', so keep your stupid ad hominems to yourself.

Comment Re:Foolishness. (Score 4, Insightful) 68

This is not about that. This is about every student having a personal tutor, specifically designed for educational purposes. Khan Academy on steroids.

Will every student use it to actually learn things? No. Will it be far more productive than unsuccessfully trying to ban these tools? Yes.

The genie is out of the bottle, people. Adapt or die.

Comment Re:Seems reasonable (Score 2) 74

We've found that there is substantial societal benefit to having police communications public. This is an established fact.

Is it?
Can you elaborate?

I can see the requirement of the communications having to be recorded for legal purposes, but I can't think of a case for them having to be made public 'live'.

Comment Re:The question is... (Score 2) 361

This is misapplying the scenario

No, your scenario just sucked. You tried to counter "machines will take our jobs" with "well I do some stuff by hand for free for myself and that will never go away" (which I agree with, by the way: many people won't mind doing things for free, especially for themselves).

Building them will never be cheap, even though they are *very good* machines.

What? Don't say stupid things. It's a comparison. How the fuck are you going to transport people at 800 km/h through the air with just humans? This is a case where humans aren't merely less capable of doing the job; they are entirely incapable of doing it.

There is no reason to suppose that this will ever be possible.

You're wrong (the things that are unique to organics are a detriment for the tasks we're talking about), but it was a hypothetical, so don't deflect; answer the question:
What business owner would then choose the more expensive human employee over that robot?

Today, computers remain at roughly the same price each year.

No, they don't. Computing power and capability per dollar keep increasing; not at 40x a year, but definitely by more than 1.2x a year. The result is that computers made five years ago cost far less than they did new. Computer hardware loses value faster than almost any other product on the planet.

There are limits to how low prices can go.

This is a dumb argument. If you're getting a million times the capability for the same price, it doesn't matter what the lower bound on the price is; the product still becomes more attractive than the alternatives, which was the subject. Human labor will not become (much) cheaper. AI labor will. Millions of times cheaper.
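For scale, assuming the roughly 40x-per-year decline in inference cost at constant capability mentioned elsewhere in this thread (an assumption, not a measurement), the compounding gets you to "millions of times" very quickly:

# Rough arithmetic: compounding an assumed 40x annual drop in the cost
# of AI inference at constant capability.
factor_per_year = 40
years = 4
print(factor_per_year ** years)  # 2560000 -> millions of times cheaper within a few years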

Comment Re: The Users Are Worrying Too Late (Score 3, Insightful) 45

Don't be disingenuous. IRC cannot do all of the same things. Discord supports screen sharing, easy voice chat, and a whole slew of minor features that IRC can't touch on any level of convenience.

Remember that you have to think from the perspective of a non-tech random gamer. IRC ain't it.

Comment Re:The question is... (Score 1) 361

Why didn't I used such a machines? Simple...cost.

So you were paid absolutely nothing for your human labor. Yep, that is exactly the future.

Most *businesses* that build fences, also don't use such machines, for the same reason.

It's cost vs. benefit. If the current machines can't do it cheaper or better than you, they're not very good machines, now are they?
What if such a machine/robot is developed (and it will be), one that does whatever you do with regard to building fences better and cheaper than a human employee? What business owner would then choose the more expensive human over that robot?

The point is, using AI isn't cheap

https://epoch.ai/data-insights...

Yeah. You aren't getting cheaper by a factor of 40 to 100 every year. AI is.

Comment Re:They did (Score 2) 141

It is a contentious subject, but notice that you did not at any point address GP's point. One of the things I've noticed is that people on our side (yes, I am very much on the left) are very, very quick to dismiss or ignore the things GP is talking about.

The notion that men might in some way be disadvantaged in modern Western society is often not considered worth even a millisecond of thought: "These young men must be indoctrinated by misogynists, or otherwise they're just dumb assholes who don't see how they are the oppressors."
It's insane not to take any of it seriously and to think that if you just tell these kids often enough to stop complaining and stop hating women, they'll feel welcome with us again.

One of the strongest examples I've found of men being disadvantaged in modern Western societies is the draft, especially in Ukraine. Millions of human beings have been forced to become murderers, be traumatized for life, and risk life and limb, but only the men. The only thing separating that fate from being completely free to do whatever they wanted was whether they had a dick or not. Technically speaking, that is a huge violation of human rights, but in Western societies it is pretty much accepted that such an egregious thing is fine: "That's just how it is" (or worse: "Women don't start wars"). Now, I'm not commenting on whether the situation there should have been different; that's a separate discussion. The point is the absolute callousness with which we collectively approach and dismiss such male disadvantages. There's no denying it: straight-up life-or-death sexism hurting men, and a shrug is all it generally gets.

If you extend that further and start looking around it becomes very apparent that we show very little empathy towards men in most societies in a multitude of circumstances. We treat them as perpetrators and dangers, but almost never as victims (and for women it is the opposite, which on both sides shows the deeply rooted stereotype of "men strong, women weak"). Ironically, trans people often have the best perspective on this as they have experienced being treated as both man and woman. Trans men report going into a social desert: nobody ever touches you anymore, talking about your mental health is not an everyday occurrence anymore, and in public nobody sticks up for you; you're on your own. (They also report the other side of the coin, which is being taken more seriously in discussions etc., but that is not the point here.)

Turning this back to politics: I truly believe this is one of the biggest blind spots on our side. In our push to support the oppressed and be inclusive, we've turned antagonistic and have excluded the specific needs and grievances of a huge group of people. Until we start taking them seriously, they'll never feel welcome on our side, and the Trumps (and Tates) of the world will happily welcome them into their swamp.
