Comment Re:Overwrought (Score 1) 28

This does not appear to be holding up in practice, at least not reliably.

It holds up in some cases, not in others, and calculating an average muddles that.

Personally, I use AI coding assists for two purposes quite successfully: a) more intelligent auto-complete and b) writing a piece of code using a common, well understood algorithm (i.e. lots of sources the AI could learn from) in the specific programming language or setup that I need.

It turns out that it is much faster, and almost as reliable, to have the AI do that than to find a few examples on GitHub and Stack Overflow, check which ones are actually decent, and translate them myself.

Anything more complex than that and it becomes a coin toss. Sometimes it works, sometimes it's a waste of time. So I've stopped doing that, because coding it myself is faster and the result is better than what I get from babysitting an AI.

And when you need to optimize for a specific parameter - speed, memory, etc. - you can just about forget AI.

Comment Re:Vibe coding is the new self-driving (Score 1) 28

The promises started long before the technology could fulfill them. Who is going to do the vibe-cleanup coding if it takes a decade or three for the tech to catch up to the hype?

People who understand how to write reliable maintainable code, of course... But the world seems poorly positioned to produce more of those.

I would argue that it may be impossible to produce many more of those, as it requires a specific mindset, specific skills, and specific motivations. Obviously, treating the existing ones badly makes the problem worse.

The tricky thing is that LLMs are actually pretty good at implementing homework assignments. It's when you need code beyond that scope that the illusion of competence starts to fall apart.

Exactly. Homework is simple because you need to be able to learn from it. Fail on the homework (or use an LLM) and you will not get anywhere.

Comment smoke and mirrors (Score 1) 28

Hey, industry, I've got an idea: If you need specific, recent, skills (especially in the framework-of-the-month class), how about you train people in them?

That used to be the norm. Companies would hire apprentices, train them in the exact skills needed, then at the end hire them as proper employees. These days, though, the training part is outsourced to the education system. And that's just dumb in so many ways.

Universities should not train the flavour of the moment. Because by the time people graduate, that may have already shifted elsewhere. Universities train the basics and the thinking needed to grow into nearby fields. Yes, thinking is a skill that can be trained.

Case in point: When I was in university, there was one short course on cybersecurity. And yet that's been my profession for over two decades now. There were zero courses on AI. And yet there are whitepapers on AI with me as a co-author. And of the seven programming languages I learnt in university, I have never used a single one professionally, and only one privately (C, of course. You can never go wrong learning C. If you have a university diploma in computer science and they didn't teach you C, demand your money back). OK, if you count SQL as a programming language, it's eight, and I did use that professionally a few times. But I consider none of them a waste of time. OK, Haskell maybe. The actual skill acquired was "programming", not a particular language.

Should universities teach about AI? Yes, I think so. Should they teach how to prompt engineer for ChatGPT 4? Totally not. That'll be obsolete before they even graduate.

So if your company needs people who have a specific AI-related skill (like prompt engineering) and know a specific AI tool or model - find them or train them. Don't demand that other people train them for you.

FFS, we complain about freeloaders everywhere, but the industry has become a cesspool of freeloaders these days.

Comment Re:Same old song (Score 1) 28

Indeed. It's almost as if people no longer know that you don't have to be part of every hype.

At the same time, Azure probably got completely compromised. Again. And they do not even know who got in and did what, because they have no logs. Maybe invest some real money into IT security? But no, empty promises of "security is our highest priority" are the extent of what they do about that. Meanwhile, billions go into AI. As if extending the fragile house of cards even further were a sane idea.

Comment uh... wrong tree? (Score 1) 68

"When the chef said, 'Hey, Meta, start Live AI,' it started every single Ray-Ban Meta's Live AI in the building. And there were a lot of people in that building,"

The number of people isn't the problem here.

The "started every" is.

How did they not catch that during development and find a solution? I mean, the memes where a TV ad triggers Alexa and orders 10 large pizzas are a decade old now.

Comment Sounds doomed... (Score 2) 17

This seems like the sort of advice that is going to be exceptionally hard to get followed because it's mostly so dull.

There can be some interesting futzing in principle to keep unnecessary sources of variation from getting folded into build artifacts, normally followed by the less-interesting making of those changes in practice across a zillion projects; and basically anything involving signing should at least be carefully copying the homework of proper heavyweight cryptographers; but most of the advice is of the "fix your shit" and "yes, actually, have 10 people, ideally across multiple orgs, despite the fact that you can get it for free by pretending that the random person in Nebraska won't make mistakes, get coopted by an intelligence agency, quit to find a hobby that doesn't involve getting yelled at on the internet for no money, or die" flavor; which is absolutely stuff you should do; but the sort of deeply unsexy spadework that doesn't have magic-bullet vendors lobbying for it to get paid for.

Comment Re:Of course... (Score 1) 68

What seems sort of damning is that the explanation is "our tech sucks".

The 'explanation' is that the demo triggered all the devices within earshot because, apparently, a device designed to perform possibly-sensitive actions on your behalf was assigned a model-line-wide, public audio trigger in order to make it feel more 'natural' or something, rather than some prosaic but functional solution like a trigger button/capacitive touch point/whatever; and that the device just fails silently, with not even informative feedback, in the event of server unresponsiveness or network issues. Both of these seem... less than totally fine... for something explicitly marketed for public use in crowded environments on what we euphemistically refer to as 'edge' network connectivity.

You obviously have limited control over the network in a situation like this, so nobody expects the goggles to fix the internet or Facebook's server resource allocations for you; but having some sort of "can't reach remote system" error condition has been ubiquitous basic functionality since around the time dirt was still in closed beta.
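The bare minimum being described here is trivial to implement: try the backend, and if it is unreachable or slow, tell the user so instead of doing nothing. A minimal sketch in Python, where `start_live_ai` and the endpoint URL are purely illustrative names (this is not Meta's actual API):

```python
import urllib.error
import urllib.request

def start_live_ai(endpoint, timeout=3.0):
    """Try to reach the assistant backend and return a user-visible status
    instead of failing silently. Hypothetical sketch; the function name
    and endpoint are illustrative, not any vendor's real API."""
    try:
        with urllib.request.urlopen(endpoint, timeout=timeout):
            return "ready"
    except urllib.error.HTTPError as e:
        # The server answered, but with an error status -- say which one.
        return f"server error ({e.code})"
    except (urllib.error.URLError, TimeoutError):
        # Unreachable, connection refused, or timed out -- say so.
        return "can't reach remote system"

# A refused/unreachable endpoint yields a clear message, not silence:
print(start_live_ai("http://127.0.0.1:9/", timeout=0.5))
```

The point is not the specific library, it's that every failure path maps to an explicit, user-visible message, which is the "ubiquitous basic functionality" the comment is referring to.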

Comment Re:Demo failure not a product failure (Score 1) 68

I suspect that this is symptomatic of the same phenomenon; but it seems especially weird that they'd trot the CTO out to give a postmortem that, from context, was apparently intended to be exculpatory, when the problems with a device you are meant to wear on your face, in public, are 'sensitive to an external trigger shared across the entire product line' and 'fails silently and stupidly if network conditions are suboptimal'.
