Comment Are today's "AI" companies important to the future? (Score 3, Interesting) 14

The Generative AI companies did their thing. It was overall very impressive, even if they massively overstated its usefulness. ChatGPT is a great early demo of this infantile, currently-almost-useless-but-very-promising tech! Now someone simply (heh) needs to get the compute requirements down two to four orders of magnitude.

If companies like OpenAI can (and want to) work on that, great! Or others can build on the work that's been done up to now. I don't think anyone will miss the current companies, though the people they currently employ likely have a leg up (thanks to their familiarity with the subject) in addressing the compute-resources problem.

But whenever (if ever) it gets done, people are going to run it on their own machines, not your servers and jail. Lock-in has always been, and will always be, an adversarial force to be eliminated by progress. If that means OpenAI's long-term plans won't work out, well, too bad.

Comment Hmmm (Score 3, Insightful) 48

I currently work hybrid. It reduces my effective pay by around 10%, which is a hell of a cut. It gains me nothing, since all meetings - even when we're all in the same room - are held via Teams, because company policy.

I see no added value from visiting the office.

Comment Limits of applied psychology? (Score 0) 35

Are you sure that you actually cancelled your Prime account? How long until you are sure that you really did it?

I think this is a sort of joke, but my guess is that you only got far enough to convince yourself that you could cancel it, but somewhere along the way you changed your mind and decided not to. Sort of like "I can quit gambling/drinking/gaming whenever I feel like it, so I'm not addicted." If you had gotten too close to actually cancelling your membership, then they would have pulled out the big psycho-weapons until you backed down.

In general I think psychology and psychiatry are full of BS, but the applied psychologists have gotten too good at pulling people's strings for the sake of selling deodorant, laundry soap, and politicians who really stink to high heaven notwithstanding any amount of deodorant and soap. The applied psychologists have an enormous advantage. They are basically behaviorists and they don't worry about the value of the human soul, the nature of evil, or collateral damage. None of that trivial stuff matters when you have widgets and snake oil to sell.

Disclaimer needed: I haven't had any direct contact with Amazon in decades. My second and final Amazon purchase was that long ago. I evaluated what Amazon was doing with my personal information and decided that I wanted no part of it. Nothing that I have seen in the years since has improved my opinion of the cancer.

Comment Measuring blood pressure indirectly (Score 1) 34

Even though you're apparently feeding a troll, I think there is a more substantive answer here, involving a solution approach based on 'light AI' technology. It's actually a topic I've been researching for some years, even though the doctors have never really convinced me I need to worry about my blood pressure.

So the fundamental problem is that most direct (external) measurements of blood pressure involve comparing the blood pressure to air pressure, so they take a substantial amount of power to pressurize some kind of balloon. Major problem for small battery devices like watches.

An alternative approach is based on timing, though maybe it has flopped. The key term is HRV (Heart Rate Variability), which lets you track and time individual pulses as they reach different parts of the body. The last research I read was a couple of years ago, and I still haven't seen any products on the market. The basic idea would be to take timing data from different locations and use it to calculate the blood pressure. You would need a couple of separate pulse detectors in stable locations, but the real problem is that the arteries are not uniform, either over distance or over time. That means you would need to train a fairly sophisticated model to figure out which blood pressures really correspond to which timing differences. You'd probably need to train it for each person, too.
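
To make that concrete, here is a minimal sketch of the idea, assuming (hypothetically) two pulse detectors and a handful of cuff readings for per-person calibration. The sensor names, numbers, and the simple ridge regression are illustrative stand-ins for whatever "fairly sophisticated model" a real product would need:

    # Hypothetical sketch: estimate systolic blood pressure from pulse
    # transit time (PTT) between two pulse detectors, calibrated per person
    # against real cuff readings. All names and values are illustrative.
    import numpy as np
    from sklearn.linear_model import Ridge

    # Calibration phase: PTT (seconds) paired with cuff readings (mmHg).
    ptt_samples = np.array([[0.210], [0.198], [0.186], [0.175], [0.168]])
    cuff_systolic = np.array([112.0, 118.0, 123.0, 129.0, 134.0])

    model = Ridge(alpha=1.0)
    model.fit(ptt_samples, cuff_systolic)

    # On the wearable: both detectors timestamp the same pulse wave.
    t_wrist, t_fingertip = 12.384, 12.566   # hypothetical timestamps, seconds
    ptt = t_fingertip - t_wrist
    estimate = model.predict([[ptt]])[0]
    print(f"Estimated systolic BP: {estimate:.0f} mmHg")

A shorter transit time generally corresponds to stiffer, higher-pressure arteries, so the calibration slope comes out negative; the per-person training mentioned above is exactly that calibration step.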

Comment Three parrots flew into a bar (Score 1) 72

Ouch, ouch, ouch.

Okay, you got your funny mod points, but did you have to propagate the vacuous Subject, too? On the grounds of your Funny, I forgive you for getting me to look at AC's tripe, though I didn't actually try to read it. Enough to know he was too ashamed to even attach a handle to the tripe...

(Now if I were an actual humorist I would have figured out a way to work a wind turbine into my joke, but that was just a replacement/filler joke because I couldn't figure out how to create a Subject full of punctuation in response to your joke. (And my replacement joke is really due to a similar joke I saw on another website.))

Comment Hard to say; what standards do they support? (Score 1) 22

Can you use the hardware without any Meta services? Can you use competing hardware with Meta's services? And then beyond just services, can you fully replace the whole software stack?

Any "no"s above will make the utility dubious, such that there's little point in spending much time getting to know the product (except for RE purposes). OTOHs "yes"s will indicate that these types of wearables are starting to become viable.

Comment Re:Overwrought (Score 2) 62

This does not appear to be holding up in practice, at least not reliably.

It holds up in some cases, not in others, and calculating an average muddles that.

Personally, I use AI coding assists for two purposes quite successfully: a) more intelligent auto-complete and b) writing a piece of code using a common, well understood algorithm (i.e. lots of sources the AI could learn from) in the specific programming language or setup that I need.

It turns out that it is much faster, and almost as reliable, to have the AI do that than to find a few examples on GitHub and Stack Overflow, check which ones are actually decent, and translate them myself.
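
As an illustration of the kind of task meant in (b), the sketch below is a textbook algorithm (edit distance) with thousands of public implementations, which is exactly why an assistant can reproduce it reliably in whatever language a project happens to use. The snippet here is a hand-written illustration, not assistant output:

    def levenshtein(a: str, b: str) -> int:
        """Classic dynamic-programming edit distance between two strings."""
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            curr = [i]
            for j, cb in enumerate(b, 1):
                curr.append(min(prev[j] + 1,                # deletion
                                curr[j - 1] + 1,            # insertion
                                prev[j - 1] + (ca != cb)))  # substitution
            prev = curr
        return prev[-1]

    assert levenshtein("kitten", "sitting") == 3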

Anything more complex than that and it starts being a coin toss. Sometimes it works, sometimes it's a waste of time, so I've stopped doing that: coding it myself is faster, and the result is better than what I get from babysitting an AI.

And when you need to optimize for a specific parameter - speed, memory, etc. - you can just about forget AI.

Comment smoke and mirrors (Score 4, Interesting) 62

Hey, industry, I've got an idea: If you need specific, recent, skills (especially in the framework-of-the-month class), how about you train people in them?

That used to be the norm. Companies would hire apprentices, train them in the exact skills needed, then at the end hire them as proper employees. These days, though, the training part is outsourced to the education system. And that's just dumb in so many ways.

Universities should not train for the flavour of the moment, because by the time people graduate, it may already have shifted elsewhere. Universities train the basics and the thinking needed to grow into nearby fields. Yes, thinking is a skill that can be trained.

Case in point: when I was in university, there was one short course on cybersecurity, and yet that's been my profession for over two decades now. There were zero courses on AI, and yet there are whitepapers on AI with me as a co-author. And of the seven programming languages I learnt in university, I have never used a single one professionally and only one privately (C, of course. You can never go wrong learning C. If you have a university diploma in computer science and they didn't teach you C, demand your money back). OK, if you count SQL as a programming language, it's eight, and I did use that professionally a few times. But I consider none of them a waste of time. OK, Haskell maybe. The actual skill acquired was "programming", not a particular language.

Should universities teach about AI? Yes, I think so. Should they teach how to prompt engineer for ChatGPT 4? Totally not. That'll be obsolete before they even graduate.

So if your company needs people who have a specific AI-related skill (like prompt engineering) and know a specific AI tool or model - find them or train them. Don't demand that other people train them for you.

FFS, we complain about freeloaders everywhere, but the industry has become a cesspool of freeloaders these days.

Comment uh... wrong tree? (Score 1) 75

"When the chef said, 'Hey, Meta, start Live AI,' it started every single Ray-Ban Meta's Live AI in the building. And there were a lot of people in that building,"

The number of people isn't the problem here.

The "started every" is.

How did they not catch that during development and find a solution? I mean, the memes where a TV ad triggers Alexa and orders 10 large pizzas are a decade old now.
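
For illustration, the usual fix is some form of wake-word arbitration: every device that hears the phrase reports a detection score, and only the device with the strongest signal (presumably the one nearest the speaker) actually activates. The sketch below is a guess at the shape of such a scheme; the device names, scores, and reporting mechanism are assumptions, not Meta's actual implementation.

    # Hypothetical sketch of wake-word arbitration between nearby devices.
    def arbitrate(detections: dict[str, float]) -> str:
        """Pick the single device that should respond to a shared wake phrase."""
        return max(detections, key=detections.get)

    # Three pairs of glasses in the same room hear "Hey Meta, start Live AI",
    # with different signal strengths depending on distance to the speaker.
    detections = {"chef-glasses": 0.92, "guest-glasses-1": 0.31, "guest-glasses-2": 0.18}
    print(f"Only {arbitrate(detections)} starts Live AI; the rest stay idle.")

This is roughly the arbitration trick that keeps a room full of smart speakers from all answering the same command at once.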

Comment Re:Don't you understand yet ? (Score 1) 31

Mod parent Funny but actually too true to be funny.

However I have a new theory about moderation on Slashdot. First you have to pass a reverse CAPTCHA test. The mod points are then only given to accounts that can prove they are not human.

On the story itself, I think we are all so fscked that it doesn't even make sense to think about solution approaches. The giant corporate cancers can always find a suitable jurisdiction where they fsck you if'n they want to.

However it would reach a new level of Funny if she has never even visited jolly ol' England.

Comment Re:I think AI is great! (Score 1) 61

All the moderators missed the joke and failed to give you a Funny?

Or perhaps all the moderators are AI? Slashdot's new (and secret) moderation policy is to only give mod points to accounts that can pass a reverse CAPTCHA to prove they are NOT human?

Just asking questions? Of course not?!?

Comment Re:Russian nesting dolls of scams (Score 1) 48

Mod parent Funny, but I think the humor is at a level that will sail over the moderators' heads. Assuming they have heads and Slashdot hasn't adopted a policy of giving all the mod points to AI accounts. Slashdot could use a reverse CAPTCHA where you have to prove you're not human before you can have any mod points to bestow.

For my next failure to be funny, consider the threat of learning not to think, machine-style, by learning not to think about any question the machine won't answer. A combination of "Nothing to see there" and "How dare you even think of such a question?" And I predict most people won't even notice the "guidance" of their "thinking".
