Comment Re:Before you rail on this... (Score 1) 105

The thing is, LLMs don't really demand a great deal of 'literacy', so it's a bit silly to devote a lot of cycles to teaching it. It's kind of like back in the day when you had a whole course devoted to learning Microsoft Word; that was ridiculous.

One of the biggest areas for getting used to LLMs is also one that academic settings are the least well equipped to handle: how incorrect results manifest. Academic fodder tends to play to LLM strengths, and trying to find counter-examples that illustrate LLMs breaking down is a game of whack-a-mole, since any noteworthy known example gets amended away and/or is unreliable to reproduce in the wild. So you may be stuck going over captured examples rather than having the students experiment much with the phenomenon themselves.

When we are learning math, we start by forbidding use of a calculator. Not because calculators are bad, but because we need to foster independent thought first. University probably should be mostly "LLM off" since the point is not to get academic material produced as quickly as possible, but to have people internalize some sampling of academic experience and equip them to operate independently.

Comment Re:Isn't that the point? (Score 1) 133

Depends.

Is it the case that everyone gets to stop working hard, or that some people get to do that while others have to work even harder to support it?

If Norway gets to slack off and have a fantastic lifestyle, but only because it can import tons of crap from more exploitative countries, that wouldn't be good.

If everyone around the world gets to slack off and still sustain a great lifestyle, then fantastic. That is in fact the goal: we should be thinking less about "can I get enough work?" and more about "can I have a good living?", but it needs to be applied as consistently as is feasible across the board.

This was one of the points in the story "Manna" that drove me nuts. Among other issues, it sincerely describes a utopian socialist paradise, but one that will only take so many people because that's all it can afford, leaving behind everyone who didn't buy into the utopia back when it could just as easily have been a scam, to suffer in dystopian societies. The characters explicitly express disdain for the unseen wealthy class for ignoring the plight of the less fortunate, then promptly save only a few people into the utopia while leaving most of humanity behind in poverty, not helping them either. Arguably they do even less, since the poverty class at least had housing and sustenance provided by the dystopia, while the utopia explicitly leaves them as "someone else's problem".

Comment Re:Yet another example of... (Score 1) 133

If you say the numerical representation of wealth can endlessly grow, sure, that's just a number. There's no such thing as objective, true numerical value, so if you have a system that is predicated on "line always goes up", you can make that system function even under static constraints by somehow changing how the numbers map to reality.

To his credit, he points to non-money indicators, like ability to succeed on tests.

It is fair to say that we can't consume exponentially more resources or create exponentially more physical goods every year. But ultimately we want to advance our standard of living and have folks do their fair part, so you don't end up with a hypothetical where one nation enjoys the high life on 8 hours worked per week while importing gobs of stuff from another country where people work 80 hours a week.

Being sustainable can and should be part of the model. If we had just coasted on our advancements from the '50s, we wouldn't have a lot of the potentially sustainable advancements we have now.

It's certainly a very weird thing to try to distill something as nuanced and complex as all of our resources, time, and thought into a single metric, yet we try, and that number is flexible enough to, in theory, support infinite growth. The problem is less the absolute value than the discrepancies in how those values manifest from person to person.

Comment Re: We're so back (Score 1) 66

Do you mean the mass market media where MBAs have moved studios toward milquetoast cheap slop when they get the chance?

Or the "brainrot" short-form content that is kind of similar to the super lame stuff we would do at that age, except now video production and distribution are free, so the whole world gets to see it?

Well, the good news is that at least sometimes the slop flops, and studios get reminded they can't just dump any old thing and find success.

The younger generation I interact with is more likely to know and like a song from some YouTuber out of nowhere, and half of the tracks I hear wouldn't have sounded out of place in the '60s, much to my surprise, with some decent musical depth and theory behind them, not just the nth assemblage of attractive people assigned the same chord progression to rake in the money.

Comment Re:Read the article about the maths Olympiad (Score 1) 103

25% still seems a bit high to me. I do wonder if they really have forgotten how to do long division, or simply forgot what the words 'long division' mean. Like, if you told them to work a division problem by hand, would they naturally just do long division, having forgotten that that's all 'long division' ever meant?

Comment Re:Get it while it's hot! (Score 1) 34

Because either:

a) It works as intended and the job inherently fast-tracks self-obsolescence.

b) It doesn't work as intended and this job evaporates as the hype money comes back down to earth.

No matter how well/poorly this current technology goes, this is a job that is not set to be a career.

Just like people claiming to be "prompt engineers": either the LLMs work and you are a useless middleman, or they don't work and people don't want to fool with you. Just like "webmaster" was a job you could have simply by being able to edit HTML files, and that evaporated in the early 2000s.

Comment Re:Will it make ICEs irrelevant (Score 2) 174

Even for those that don't need that much range, there can be benefits.

The reason they can tout a goal of 600-mile range is that solid state batteries have much more energy per kg. NMC batteries are roughly 200 Wh/kg; *maybe* someone can get to 350 Wh/kg in the most aggressive marketing claims I could find. Solid state batteries are more like 700-800 Wh/kg.

So if you say that for a given car and lifestyle you could accept a 150-mile range, then you could produce, for example, an electric Miata that weighs about the same as the ICE Miata (the ICE Miata's drivetrain plus fuel weighs about 400 lbs; a credible electric motor might weigh 200 lbs, and a 150-mile solid-state pack might also weigh about 200 lbs). A Miata is the sort of car that can likely get away with low range, as a 'fun' car you probably don't want to road trip in anyway. Or you could target a 300-mile range and be only 200 lbs heavier, instead of having to be 600 lbs heavier with NMC.
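If you want to play with that back-of-the-envelope math yourself, here's a rough sketch. The Wh-per-mile consumption and the pack-overhead factor are my own illustrative assumptions (not from any datasheet), picked so a 150-mile solid-state pack lands around the 200 lb ballpark above:

```python
# Rough pack-mass estimate for a target driving range.
# Assumptions (illustrative, not from any spec):
#   ~250 Wh consumed per mile for a small, light car
#   cell-level densities: NMC ~200 Wh/kg, solid state ~750 Wh/kg
#   pack overhead (enclosure, cooling, wiring) eats ~45% of cell density
LBS_PER_KG = 2.20462

def pack_mass_lbs(range_miles, cell_wh_per_kg,
                  wh_per_mile=250, pack_overhead=0.45):
    """Estimated battery pack mass in pounds for a given range."""
    pack_wh_per_kg = cell_wh_per_kg * (1 - pack_overhead)
    energy_wh = range_miles * wh_per_mile
    return energy_wh * LBS_PER_KG / pack_wh_per_kg

print(round(pack_mass_lbs(150, 750)))  # 150-mile solid state: ~200 lbs
print(round(pack_mass_lbs(150, 200)))  # same range in NMC: several times heavier
print(round(pack_mass_lbs(300, 750)))  # 300-mile solid state: ~400 lbs
```

The exact numbers shift a lot with the consumption and overhead guesses, but the ratio between the chemistries is the point: at 3-4x the cell density, solid state buys either much more range at the same weight or the same range at a fraction of it.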

Comment Re:why give AI the previliage? (Score 1) 151

For these people, the options are either making agentic AI able to do everything, or doing nothing at all, because they don't actually know how to do the work themselves. One of those options includes maybe making money for some period of time; the other has no opportunity for money at all.

I didn't read much about the other one, but the SaaStr guy was obviously a true believer. He had been making posts gleefully detailing his vibe coding journey, and then clearly felt betrayed by how quickly it all went south, outside of his control.

Comment Re:Not surprising (Score 2) 151

The thing is, the "vibe coding" movement is about not needing any of the technical skills that would have you understand testing/staging, let alone build an environment that would actually enforce that separation on an otherwise fully enabled "agentic" LLM.

Having another LLM fix the first LLM is just the blind leading the blind.

It is a solvable issue, but the solutions run counter to the expectations around the immense amount of money in play. LLMs are useful, but not as useful as the unprecedented investment would demand. After the bubble deflates a bit, maybe we will see good utilization of LLMs, but right now there's a lot of high-risk grifting in play and a lot of people getting in way over their heads, further than they formerly could manage.

Comment Re:This is not an AI failure (Score 1) 151

This is a failure of AI marketing, and how the AI companies encourage this behavior.

There are a *lot* of people who don't have the skillset but have seen the dollars. Either they watch from the outside, or they manage to become tech execs by bullshitting other non-tech executives.

Then AI companies talk it up: just a prose prompt and bam, you have a full-stack application. The experienced can evaluate that reasonably in the context of code completion or prompting for a specific function, with a manageable review surface and enough of their own experience to get a sense of how likely an attempted LLM usage is to be productive and how much fixing it will need. The inexperienced cannot, so they have a go at vibe-coding up what would be tutorial fodder, see a hopelessly intimidating full-stack application that does exactly what they asked, and erroneously conclude that the LLM must be generally capable.

So some folks can be happy vibe coding up a shovelware game, with pretty low stakes and a decent chance of success (though it sucks to dilute the game landscape with content utterly devoid of creativity). Some people think they can get rich quick by participating in a skilled industry without any skills (the SaaStr story is particularly funny: they purport to be a resource for other developers, but can't even develop themselves). Not great, but less of a risk.

The real risk is the tech execs high on BS and low on technical acumen, who are generally insecure about people that have an advantage over them. They see a great equalizer, and all the personal sources that could grade it for them are people they don't trust. So it's good to see stories like this, so those executives might possibly understand the risk when they talk about laying off all or nearly all their software developers. (Yes, a few weeks ago an executive with hundreds of developers told me this was basically his plan, and I was only safe because I understood my respective customer base better than marketing, sales, and the executives did. Most of his developers just do what he says, so while his "executive insight" is valuable, their work is prime to be replaced by executives just vibe coding things up directly instead of having developers do it.)

Comment Re:It just shows (Score 1) 64

I chose an RC boat because some people are using this competition at a "stupid human trick" as intrinsic proof that LLMs can supersede humans.

In the scenario of an Olympic swimming competition, an autonomous boat and a manned boat would show no difference from each other; both would complete the task much better than a human. It's a useless test for measuring general utility, just as a person swimming 1500 meters is not, on its own, a useful indicator of how useful they are. These Math Olympiads are similar in that they are not particularly indicative of people being useful. That we would stress a human in impressive ways does not mean a computer coming at the problem from a different approach should be considered to have broadly superseded humans.

Yes, a real boat is valuable, and LLMs can be valuable when utilized correctly, but in the face of exaggerated hype, some pessimistic reality checking is called for to balance expectations.

Comment LLMs can't think and they don't need to (Score 2) 103

LLMs have a great deal of utility and extend computing into a fair amount of scope that was formerly out of reach, but they don't "think", and the branding of the "reasoning" models is marketing, not substance.

The best evidence comes from reviewing so-called "reasoning chains" and watching how the mistakes behave.

Mistakes are certainly plausible in "true thinking", but the way they interact with the rest of the "chain" is frequently telling. The model flubs a "step" in the reasoning; if this were actual reasoning, that flub should propagate through the rest of the chain. However, when a mistake is made mid-chain, it's often isolated: the "next step" is written as if the previous step had said the correct thing, without the model ever needing to "correct" itself or otherwise recognize the error. What has been found is that if you have the model generate more content and dispose of the designated "intermediate" content, you get a better result. That intermediate throwaway content certainly looks like what a thought process might look like, but ultimately it's just more prose, and mistakes in it continue to show this interesting behavior of staying isolated rather than contaminating an otherwise OK result.
