Comment Re:The problem with SAS (Score 1) 26

SAS has been dead for 15 years; the decline started with R, and then Python absolutely destroyed it. No one teaches SAS in universities any longer; why would they? It's terribly expensive and absolutely fucking dead.

We migrated away from SAS back in 2017 and never looked back. The only verticals still using it are heavily regulated and running long-standing legacy code that they're slowly migrating to Python.

I remember absolutely dying when they tried to renegotiate our contract UP back in 2015. I flat-out told them they were dead and we were moving away from them, and they told me, "good luck managing your data without us!"

Two companies and 10 years later, we're doing just fine and they are not.

Comment Irritating! (Score 3, Informative) 39

I am increasingly bombarded by AI-generated text. It seems that browsers, search engines, customer "service," and many apps have incorporated AI. Even my Tesla now has the Nazi Grok AI as the default.
Besides the obvious threat of monitoring my thoughts and behavior, I find all of these AI "services" very irritating. They serve up long-winded, rambling blocks of "information" that are usually at least partially wrong or irrelevant.
I really hate all of these AI efforts.

Comment My takes on this presentation (Score 1) 6

1. There are a lot of empty seats; a lot.

2. The demo wasn't live, likely because of how badly Meta's live-demo event went.

3. They noted that you do all of this 'hands-free', likely an intentional knock at Meta's offering.

4. The examples were...odd. Who the fuck is going to be using this to shop for a fucking rug? Come on; give some real-life examples that are IMPORTANT. None of these were.

5. The entire presentation's style, across multiple different presenters, was...exhausting...halting...jarring...and...really undergraduate level. It was almost as if they were being fed their lines through earpieces rather than speaking from memory in a fluid, practiced way.

---

Personally? I love the idea of AR glasses that work well. I want to have live subtitles for humans talking to me as I'm hard of hearing and hearing aids do not work well for me, particularly in public spaces.

I want it to give me important information and respond to my environment in ways that are useful. (Telling me where I am really isn't that; I know where the fuck I am. Tell me what I should be doing or where I should be going next, perhaps?)

I know these are early adopter level devices, but they're just fucking ugly due to their bulk.

I strongly prefer this option to Meta's simply because I don't have to do stupid fucking mime-style hand gestures, but I want this technology to be useful now, not in 5 years. We're going to see this largely flop, just like so many other AR/VR toys out there, unless they make it something more than a gimmicky piece of shit.

Comment Re:Complete failure all around (Score 1) 140

You clearly do not live in the US. The legal system does NOT do anything about anything (other than child support and alimony) as outlined in a divorce decree.

And even if they MIGHT do something, you have to wait 12+ months to get on the court's docket, paying thousands of dollars to glorified, expensive secretaries in the process while you wait.

The entire system is fucking broken.

Comment Re:bullshit and hype (Score 1) 126

From TFA:
In an update this week, Palisade, which is part of a niche ecosystem of companies trying to evaluate the possibility of AI developing dangerous capabilities, described scenarios it ran in which leading AI models – including Google’s Gemini 2.5, xAI’s Grok 4, and OpenAI’s GPT-o3 and GPT-5 – were given a task, but afterwards given explicit instructions to shut themselves down.
Certain models, in particular Grok 4 and GPT-o3, still attempted to sabotage shutdown instructions in the updated setup. Concerningly, wrote Palisade, there was no clear reason why.
Andrea Miotti, the chief executive of ControlAI, said Palisade’s findings represented a long-running trend in AI models growing more capable of disobeying their developers. He cited the system card for OpenAI’s GPT-o1, released last year, which described the model trying to escape its environment by exfiltrating itself when it thought it would be overwritten.

Submission + - HAL 9000 (theguardian.com)

mspohr writes: After Palisade Research released a paper last month which found that certain advanced AI models appear resistant to being turned off, at times even sabotaging shutdown mechanisms, it wrote an update attempting to clarify why this is – and answer critics who argued that its initial work was flawed.

In an update this week, Palisade, which is part of a niche ecosystem of companies trying to evaluate the possibility of AI developing dangerous capabilities, described scenarios it ran in which leading AI models – including Google’s Gemini 2.5, xAI’s Grok 4, and OpenAI’s GPT-o3 and GPT-5 – were given a task, but afterwards given explicit instructions to shut themselves down.

Certain models, in particular Grok 4 and GPT-o3, still attempted to sabotage shutdown instructions in the updated setup. Concerningly, wrote Palisade, there was no clear reason why.
