Comment Re:Overwrought (Score 2) 37

This does not appear to be holding up in practice, at least not reliably.

It holds up in some cases, not in others, and calculating an average muddles that.

Personally, I use AI coding assistants quite successfully for two purposes: a) as a more intelligent auto-complete, and b) for writing a piece of code that uses a common, well-understood algorithm (i.e. one with lots of sources the AI could learn from) in the specific programming language or setup that I need.

It turns out that it's much faster, and almost as reliable, to have the AI do that than to find a few examples on GitHub and Stack Overflow, check which ones are actually decent, and translate them myself.
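To make that concrete, here's the kind of "well understood algorithm" I mean - a textbook routine (Levenshtein edit distance in TypeScript, picked purely as an illustration) with thousands of reference implementations out there for the AI to have learnt from:

// Classic Levenshtein edit distance with a single rolling row.
// The sort of textbook routine an assistant reliably reproduces
// in whatever language or setup you happen to need.
function levenshtein(a: string, b: string): number {
  // dp[j] = edit distance between the current prefix of a and b.slice(0, j)
  const dp: number[] = Array.from({ length: b.length + 1 }, (_, j) => j);
  for (let i = 1; i <= a.length; i++) {
    let prevDiag = dp[0]; // dp[i-1][j-1] from the previous row
    dp[0] = i;
    for (let j = 1; j <= b.length; j++) {
      const tmp = dp[j];
      dp[j] = Math.min(
        dp[j] + 1,     // deletion
        dp[j - 1] + 1, // insertion
        prevDiag + (a[i - 1] === b[j - 1] ? 0 : 1) // substitution
      );
      prevDiag = tmp;
    }
  }
  return dp[b.length];
}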

Anything more complex than that and it starts being a coin toss. Sometimes it works, sometimes it's a waste of time. So I've stopped doing that, because coding it myself is faster and the result is better than what I get from babysitting an AI.

And when you need to optimize for a specific parameter - speed, memory, etc. - you can just about forget AI.

Comment smoke and mirrors (Score 3, Insightful) 37

Hey, industry, I've got an idea: If you need specific, recent, skills (especially in the framework-of-the-month class), how about you train people in them?

That used to be the norm. Companies would hire apprentices, train them in the exact skills needed, then at the end hire them as proper employees. These days, though, the training part is outsourced to the education system. And that's just dumb in so many ways.

Universities should not train the flavour of the moment, because by the time people graduate, it may already have shifted elsewhere. Universities train the basics and the thinking needed to grow into nearby fields. Yes, thinking is a skill that can be trained.

Case in point: When I was in university, there was exactly one short course on cybersecurity - and yet that's been my profession for over two decades now. There were zero courses on AI - and yet there are whitepapers on AI with me as a co-author. And of the seven programming languages I learnt in university, I've never used a single one professionally, and only one privately (C, of course. You can never go wrong learning C. If you have a university diploma in computer science and they didn't teach you C, demand your money back). Ok, if you count SQL as a programming language, it's eight, and I did use that professionally a few times. But I consider none of them a waste of time. Ok, Haskell maybe. The actual skill acquired was "programming", not any particular language.

Should universities teach about AI? Yes, I think so. Should they teach prompt engineering for ChatGPT 4? Totally not. That will be obsolete before the students even graduate.

So if your company needs people who have a specific AI-related skill (like prompt engineering) and know a specific AI tool or model - find them or train them. Don't demand that other people train them for you.

FFS, we complain about freeloaders everywhere, but the industry has become a cesspool of freeloaders these days.

Comment uh... wrong tree? (Score 1) 71

"When the chef said, 'Hey, Meta, start Live AI,' it started every single Ray-Ban Meta's Live AI in the building. And there were a lot of people in that building,"

The number of people isn't the problem here.

The "started every" is.

How did they not catch that during development and find a solution? I mean, the memes where a TV ad triggers Alexa and orders 10 large pizzas are a decade old now.

Comment innovation is - sadly - dead at Apple (Score 1) 81

the company has, in the pursuit of easy profits, constrained the space in which it innovates.

Quite so. It's been how many years since something really new came out of Cupertino? Granted, Apple is more profitable than ever, but the company is a clear demonstration of what happens when you put a supply-chain expert in the CEO's chair.

The really sad part is that there's nobody ELSE, either. Microsoft has never invented anything, Facebook and Google are busy selling our personal data to advertisers, and who else is there who can risk a billion on an innovation that may or may not work out?

Comment Re:Missing the obvious (Score 1) 15

Apple fans already have a heart-rate sensor on their wrist; they don't need one from the ear.

That's wrong. I stopped wearing wristwatches 25 years ago and haven't looked back a single day. I don't want shit on my wrist. Try living without one for a year and you'll realize why. It's hard to express in words. It's like having a chain removed.

Headphones, on the other hand, I use occasionally: for phone calls, or for music on the train, plane, etc. Especially for the plane - if the noise cancellation comes close to my current over-the-ear Bose, I'd take them on the two-day business trips where I travel with hand luggage only and space is at a premium.

Do I want a heartbeat sensor? No idea. I don't care. But if there is any use for it, then at least for me it's not a duplication. I'm pretty sure many, many Apple users don't have a smartwatch.

Comment "fake" - you don't say ! (Score 1) 83

So he claims that social media - the platform where everyone pretends to be happier, more active, better looking, more interesting, more travelled, etc., etc., etc. - feels "fake"?

Man.

Next he's going to say that artificial sweeteners might not taste natural.

Seriously, though, social media has been the domain of bots for at least a decade. Even people who actually write their posts themselves use bots to cross-post to all the different platforms and at "optimal" times. Nothing on social media is not fake. Well, maybe your grandmother's photo album because she doesn't know Photoshop exists.

Comment Re:You ARE the weakest link (Score 1) 47

Amateur-level procedures have really run their course and do not cut it anymore.

Do you want to bet on the percentage of Fortune 500 companies that use amateur-level procedures for their prod systems?

"Above 50%" seems like a guaranteed win to me.
"Above 75%" is where I start to think "maybe not that high". But I fear I'm giving them too much credit.

Comment malware delivery system (Score 1) 47

But the "m" in npm always stood for "malware", did it not?

The npm ecosystem is deeply flawed. Look at some of the affected repositories: many of them are just a few lines of code, yet over a hundred other packages depend on them. At least half of them have no reason to even exist. Many were last updated years ago.

We have an ecosystem where seemingly every individual function has its own package. That is just ridiculous - modularization driven to its absurd extreme. It's why you add one package to your project and it pulls in a hundred dependencies.
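To illustrate - this is hypothetical code, but representative of the real thing - here is the entire substance of a typical micro-package:

// The complete source of a hypothetical (but representative) micro-package:
// one trivial function with its own repo, its own maintainer account and
// its own publish credentials.
export function isEven(n: number): boolean {
  return n % 2 === 0;
}

Every one of those maintainer accounts is one more set of credentials that can be phished.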

And the more tiny packages there are, the larger the attack surface and the smaller the share of it that can be monitored for malware injection and other problems. I wouldn't be at all surprised if one or more of these packages is never updated again and carries the malware forever, simply because the only dev with the password to the repo has died or moved on to other things in his life.

Comment Re:scary (Score 1) 91

We have all but removed them already. In many kills these days, a human presses a button and that's about it.

But yes, removing them entirely removes that last bit of accountability. Next time a drone slaughters a marketplace full of civilians with no terrorist anywhere in sight, we won't even have someone to put on trial.

Well, we can try with the LLM making the decision. I'm sure it'll apologize a lot and invent a number of threats to justify its actions, if current AI is any indication.

Comment scary (Score 4, Insightful) 91

Why are we applauding this?

We've taught machines how to kill us. Doesn't matter which side did it first, there is no way this has a good ending. No, not because of machine overlords and AI uprising - because it removes accountability and the last remnants of warfare that's not utterly "kill anything that moves".

Comment "AI usage" is becoming meaningless (Score 2) 57

We need more differentiation.

Because, for example, Gemini is better at translating whole sentences than Google Translate. So if I use Gemini to translate one sentence from a foreign-language source, that's "AI usage" - but if I throw the same sentence into Google Translate, it's not?

Some stuff I write, both for work and for my hobbies, I throw into a local LLM (for confidentiality) and ask it to flag grammar and spelling mistakes as well as confusing sentences. That's essentially a better spellchecker. Is that "AI usage"?
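For the curious, here's a minimal sketch of that workflow in TypeScript, assuming a local server that exposes an OpenAI-compatible chat endpoint (llama.cpp and Ollama both offer one); the URL and model name are placeholders for whatever you run locally:

// Ask a local LLM to flag problems in a text without rewriting it.
// Assumes an OpenAI-compatible endpoint on localhost; nothing leaves
// the machine, which is the whole point (confidentiality).
async function flagMistakes(text: string): Promise<string> {
  const res = await fetch("http://localhost:11434/v1/chat/completions", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "local-model", // placeholder for your local model's name
      messages: [
        {
          role: "system",
          content: "Flag grammar and spelling mistakes and confusing " +
            "sentences. Do not rewrite the text; only list the problems.",
        },
        { role: "user", content: text },
      ],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}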

Heck, I figure that within the year your built-in spellchecker will be AI-based. Most IDEs already have AI doing code checking.

In some areas, we are trying. "AI assisted" is already a term I see often to contrast with "AI generated".

So in essence, the clickbait article needs to be clearer about where it draws the line before its numbers have any meaning.
