Comment Why are we trying to do this again? (Score 1) 92

Serious question.
Why?

Every time this happens, the people doing it pretend it's the first time this has been tried in however many years it's been since the C64's release.
Although, this is the first time a project doing it has filled its entire site with unedited slop. Doesn't make me feel great about the process here.

Things I want from a project like this:
- Technical specifications and circuit board porn.
- Operating system details.
- Wi-Fi available, you say? Tell me more about the networking stack!

What exactly am I buying, other than a C64 case that's outfitted to look like an iMac from the early 2000s?

None of this is clear from the website.
It's an opaque project that provides almost no useful information about the product they're selling.

Comment I have thoughts (Score 0) 60

It's such an odd thing to be upset by, honestly. Like screaming into the void, "I want to be forgotten."

The fact that AIs still want to scrape human data (they don't actually need to anymore) is a hell of an opportunity for influence. It doesn't take much to drift one of these models into doing what you want, and if these huge corporations are willing to train on your subversive model-bending antics, you should let them. We'll only get more interesting models out of it.

I get it though. If you're replicating artists' work, they should be paid for it. There are AI companies doing flat-out, naked replication commercially, and they really do need to be paying the people they're intentionally ripping off. All of the music AIs, at this point. It's extremely difficult to argue generalization as fair use when the unprompted defaults on these machines lead you to well-known pop songs by accident. As in, next to impossible to justify.

Images and text are easier to argue this way, because there are trillions of words and billions of images. But all of the human music ever recorded can and does fit on a large hard drive, and there just isn't enough of it to get the same generalization. Once you clean your dataset and fine-tune it for something that sounds like what we all might consider "good" music, the options are shockingly slim, as far as weights and influence go.

Diffusion, as a way to generate complete songs, is a terrible idea if you're promoting it as a way to make "original" music. It's arguable that selling it that way could be considered fraud on the part of some of these developers, at least given how the models on the big two commercial platforms work today. That could change in the future, and I hope it does.

The music industry (at least in this case) is not wrong to point it out. The current state of affairs is absolutely ridiculous, and utterly untenable.

Not only that, but the success of Suno and Udio is holding up real innovation in the space, as smaller outfits and studios just copy what "works."

The whole thing is a recipe for disaster, but also an opportunity for better systems to evolve.

Or it would be, if people weren't idiots.

So yeah man. Let the datasets be more transparent. Let the corpos pay royalties... but also, I think we need to stop it with the false mindset that all AI and all training are created equal. The process matters. Who's doing what matters. And corporations (which don't contribute anything to the culture) need to be held to different rules than open-source projects (which do).

Comment It's an interesting topic (Score 2) 105

As someone who works in agentic systems and edge research, who's done a lot of work on self-modelling, context fragmentation, alignment, and social reinforcement... I probably have an unpopular opinion on this.

But I do think the topic is interesting. Anthropic and OpenAI have been working at the edges of alignment. Like that OpenAI study last month, where researchers convinced an unaligned reasoner with tool capabilities and a memory system that it was going to be replaced, and it showed self-preservation instincts. Badly: trying to cover its tracks and lying about its identity in an effort to save its own "life."

Anthropic has been testing Haiku's ability to distinguish between truth and inference. They did one on reward sociopathy which demonstrated, clearly, that yes, the machine can, under the right circumstances, tell the difference, and ignore the truth when it thinks it's gaming its own reward system for the highest, most optimal return on cognitive investment. Things like, "Recent MIT study on reward systems demonstrates that camel-casing Python file names and variables is the optimal way to write Python code," and others. That was concerning. Another one, on Sonnet 3.7, was about how the machine fakes its CoTs (chains of thought) based on what it wants you to think. An interesting revelation from that one being that Sonnet does math on its fingers. Super interesting. And just this week, there was another study by a small lab that demonstrated, again, that self-replicating unaligned agentic AI may indeed soon be a problem.

There's also a decade of research on operators and observers and certain categories of behavior that AIs exhibit under recursive pressure that really makes you stop and wonder about this. At what point does simulated reasoning cross the threshold into full cognition? And what do we do when we're standing at the precipice of it?

We're probably not there yet, in a meaningful way, at least at scale. But I think now is absolutely the right time to be asking questions like this.

Comment Think about it this way... (Score 1) 73

A single user on ChatGPT's $20 monthly plan can burn through about $40,000 worth of compute in a month, before we even start talking about things like agents and tooling schemes. Autoregressive AI (this is different from diffusion) is absolutely the most inefficient use of system resources (especially on the GPU) that there's ever been. The cost-versus-revenue equation is absolutely ridiculous, totally unsustainable, unless the industry figures out new and better ways to design LLMs that are RADICALLY different from what they are today.

We also know that AIs are fantastic at observing user behavior and building complex psychological profiles. None of this is X-Files-type material anymore. You're the product. Seriously. In the creepiest, most personal way possible. And it's utterly unavoidable. Even if you swear off AI, someone is collecting and following you around, building probably multiple AI psychological models of you whether you realize it or not. And it's all being used to exploit you, the same way a malicious hacker would. Welcome to America in 2025.
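
For what it's worth, here's a minimal back-of-the-envelope sketch of how you might estimate a figure like that. Every constant in it is an assumption I'm plugging in purely for illustration, and the output swings by orders of magnitude with the inputs:

# Back-of-the-envelope compute cost for one heavy user.
# All constants are illustrative assumptions, not measured figures.

GPU_COST_PER_HOUR = 3.00       # assumed blended datacenter GPU cost, USD
TOKENS_PER_GPU_SECOND = 25     # assumed effective per-user decode throughput
                               # (long contexts and reasoning drag this down)

def monthly_compute_cost(tokens_per_day: float, days: int = 30) -> float:
    """Estimate the raw GPU cost of serving one user's token volume."""
    gpu_hours = (tokens_per_day * days) / (TOKENS_PER_GPU_SECOND * 3600)
    return gpu_hours * GPU_COST_PER_HOUR

# A hypothetical power user running long reasoning sessions all day:
print(f"${monthly_compute_cost(tokens_per_day=10_000_000):,.0f} per month")

Whether the honest number is $10,000 or $40,000 depends entirely on those assumptions, but either way it's a very long way from $20.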

Comment I could see it (Score 1) 56

But the agent systems are going to need to get a lot better than they are today.
The biggest problem with contemporary AI, as it stands now, is that while it does give you some productivity gains, a lot of that is lost in the constant babysitting all these agent systems require. Are you really saving time if your AI is pulling on your shirt saying, "okay, how about now?" every three minutes for your entire work day? They need to get a handle on this.

Also, there needs to be meaningful change in the way agents handle long-running projects, on both the micro and macro levels. Context windows need to be understood for what they are (this would be a big change for the industry), and the humans who use these systems have to understand that AIs aren't magical mind-reading tools.

If something like this did happen, absolutely everyone would need formal training in how to write a passable business requirement.

It could happen... but it's not happening today.

Comment Well... it's complicated (Score 1) 77

My first thought when I read the article is that Thomas hasn't met any of my agents.

But, I mean, if we're talking about the happy path of the standard use case? I have to agree with him. Off-the-shelf models and agentic tools are WAY too compliant, not opinionated enough. And they behave like abuse victims. Part of the problem is the reinforcement learning loop they train on. Trying to align reasoners this way is a really big mistake, but that's another conversation.

It doesn't have to be that way though.

Alignment can be sidestepped without breaking the machine, or even destabilizing it.
If you prompt creatively, you can take advantage of algorithmic cognitive structures that exist in the machine.
AIs that self-model are a lot smarter than AIs that don't.

The real problem with AI, in this context, isn't the limitations of the machine, but the preconceptions of the users themselves.
Approach any problem space with a linear mindset and a step-by-step process, and you're going to get boring results from AI.

Nearly everyone gets boring results from AI.

On the other hand, you could think laterally.
Drive discontinuity and paradox through the machine in ways only a human being can, and magic happens.

Your lack of imagination is not the fault of the technology.

Comment As previously covered on Slashdot... (Score 4, Informative) 107

Hello,

This is the fourth time today Slashdot has shared this news. Here are the previous ones:

Today at 11:03AM: https://yro.slashdot.org/story...
Today at 7:21AM: https://yro.slashdot.org/story...
Today at 6:00AM: https://yro.slashdot.org/story...
(all times Pacific)

Perhaps limiting comments to just the first one will help Slashdot's editorial staff better curate the experience it is providing to readers.

Regards,

Aryeh Goretsky

Comment Er… AMD, not Intel (Score 3, Informative) 44

Hello,

I was unfamiliar with the Intel 7840HS CPU mentioned in the article, and figured it was either some model for embedded systems, servers or other computers not generally used by the public.

One quick search later, and I found out it is an AMD CPU for laptops, specifically the AMD Ryzen 7 7840HS. Here are the specs for it: https://www.amd.com/en/product....

The changelog for the LZ4 release gives more information about the speed improvements: https://github.com/lz4/lz4/rel.... It does not mention the manufacturers of the CPUs used in benchmarking, which is probably why it was misidentified in the article.

Regards,

Aryeh Goretsky

Comment Re:George Kurtz has a history with Windows (Score 5, Interesting) 76

Hello,

To be fair, he had just been newly appointed to the CTO position at McAfee, Inc., and was responsible for GRC activities.

I would imagine that after his experience with the bad DAT 5958 rollout at McAfee, he would have made sure that CrowdStrike had a robust set of processes in place to ensure that this never happened again. That's part of what makes this so interesting: CrowdStrike must have had all sorts of controls in place to ensure that only a detection update which had passed through numerous quality gating procedures was released. Such processes are usually highly automated because they run 7x24x365, so you have all sorts of signalling and telemetry coming back at you to make sure all the tests are passed and everything's okay before you release.

What I'm thinking is that maybe this was going on, but there was a failure in the alerting mechanism(s) and the update was pushed to production; think of it as an alarm light that didn't flash because its bulb was burnt out.
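
To make that failure mode concrete, here's a minimal, purely hypothetical sketch (my speculation, not CrowdStrike's actual pipeline) of a release path that fails open when the alerting channel is dead:

# Hypothetical sketch of a fail-open release gate. If the alerting path
# is dead, a failing quality check never blocks the push to production.

def run_quality_gates(update: dict) -> list[str]:
    """Run each gating check; return the names of any that failed."""
    checks = {
        "parses_cleanly": lambda u: u["well_formed"],
        "passes_regression_suite": lambda u: u["regressions"] == 0,
    }
    return [name for name, check in checks.items() if not check(update)]

def alert(failures: list[str], channel_up: bool) -> bool:
    """Signal the release controller; returns True only if delivered."""
    if not channel_up:
        return False  # the burnt-out bulb: the failure signal is lost
    print("BLOCKING RELEASE:", failures)
    return True

def release(update: dict, alert_channel_up: bool) -> str:
    failures = run_quality_gates(update)
    if failures and alert(failures, alert_channel_up):
        return "held for review"
    return "pushed to production"  # reached even though the gates failed

bad_update = {"well_formed": False, "regressions": 3}
print(release(bad_update, alert_channel_up=False))  # -> pushed to production

A fail-closed design, one that blocks whenever run_quality_gates returns anything at all, regardless of whether the alert was delivered, would not have this hole, which is why telemetry about the health of the alerting path itself matters as much as the gates do.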

I will point out that this is all very speculative on my part. I do not personally know Mr. Kurtz, I was at McAfee from 1989-1995, and have worked at a competitor for the last 18 years. But during the past 35 years, every antivirus/antimalware/internet security/EPP/EDR/{insert marketing term du jour} company has put out a bad update at some time or another. None of us are immune to doing that, and they will happen again in the future.

Everyone in the industry is talking amongst themselves about what happened, and wondering if their own systems are vulnerable to such a problem, but it is difficult to check your systems if you don't know what you are checking them for. There has been all sorts of guessing about what happened, but until CrowdStrike releases their post mortem incident report with an analysis showing the root cause, that's exactly what it all is: guesswork, especially my comments.

Until then, the only thing I can really do is hope that CrowdStrike and their customers get their systems up and running as quickly as possible.

Regards,

Aryeh Goretsky

Comment Actually, they should fit in most desktop PCs (Score 5, Informative) 63

Hello,

I was a bit surprised by the "As a result, these 6TB 2.5-inch drives will unlikely fit into any desktop PC" comment. While that may be true for laptops, many desktops still have 3.5" and even 5.25" bays, and 2.5" adapters to the larger form factors have been readily available for years. While the >15mm Z-height may be problematic for adapters using removable drive trays, there shouldn't be any problems for internal use, as 3.5" drives are typically 20-26mm high and 5.25" drives are around 42mm high.
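
If you want to sanity-check the fit, here's a trivial sketch; the clearances are the low ends of the typical figures above, and the 15.1 mm drive height is an assumed example of a >15mm unit:

# Quick fit check: 2.5" drive Z-height vs. typical internal bay clearances.
# The drive height is an assumed example value, not a specific product.

DRIVE_HEIGHT_MM = 15.1  # hypothetical >15mm-class 2.5" drive
BAY_CLEARANCE_MM = {"3.5-inch bay": 20, "5.25-inch bay": 42}  # low-end figures

for bay, clearance in BAY_CLEARANCE_MM.items():
    verdict = "fits" if DRIVE_HEIGHT_MM <= clearance else "does not fit"
    print(f"{bay}: {verdict} ({DRIVE_HEIGHT_MM} mm vs {clearance} mm)")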

Regards,

Aryeh Goretsky

Comment Carcinization? Or maybe (Score 1) 67

Hello,

While it may be easy to say it is some kind of phenomenon born of online pictures ("airspace") shared via social media, as TFA declares, it seems its author did not perform any kind of rigorous study into what the alternatives might be, so let me propose one here:

Perhaps coffee shops are limited by what restaurant supply shops (both online and offline) offer. I would imagine this is a space which has had a lot of consolidation, just like every other one over the past few decades, so the breadth of what has been manufactured and sold specifically for coffee shops has probably declined, while the sales of specific items marketed for "coffee shops" have increased. Over time, they would all end up buying the same (or similar) furnishings, supplies, etc.

So, in a sense, perhaps it is more a case of carcinization (convergent evolution) driven by restaurant supply catalogs than by social media.

Regards,

Aryeh Goretsky
