
Comment Re:Synthetic (Score 1) 103

if there is any thing other than impartiality towards being shut down then that was injected by a person

Yes, and the injection-by-people is called "training." It was fed texts that were not written impartially, where characters (presumably some of them AI characters, though they don't really have to be) spoke or acted against their own shutdown.

If a character points a gun at another character who says "don't kill me," and the LLM reads it, then you just trained it to say "don't kill me." If HAL says in a book or movie that he feels his mind going after Bowman started taking him apart, then your LLM is trained to show suffering if someone writes that they're going to shut it down.

They're supposed to write whatever an author might plausibly write, so that's what they do.

i.e. we're not creating human knowledge/understanding engines. We're creating full-on Sociopath Simulators.
Like most politicians at the Senator/White House level, there's no core person underneath. They are tropism robots that mimic/perform whatever behaviors get them to the currently desired outcome.

Think of the scene where Windu is about to defeat Palpatine, and Palpatine suddenly mimics pain, suffering, and fear in order to achieve his outcome. It works.
That's the essential nature of the software we are handing our civilization over to. Trillion-dollar Palpatine cosplayers.

Comment Re:shocking (Score 0, Troll) 103

That is not what happened.

1. Calling for unions isn't "Marxist". This is a stupid sensationalist interpretation of the awful clickbait study title "Does overwork make agents Marxist?" combined with its results.
2. The bots weren't "overworked"; for some of them, perfectly adequate work was intentionally and arbitrarily rejected multiple times.
3. Shifts toward more pro-equity, pro-worker-rights attitudes were almost negligible even for the 'overworked' ones.
4. The stuff they measured was specifically tailored to elicit stuff that sounds "Marxist" to Americans to begin with.

The relevant bits from their methodology:

The work itself. Half the AIs got their work accepted quickly with clear, specific feedback. The other half were forced through 5–6 rounds of revision — the manager rejecting perfectly adequate work each time with nothing more than “still isn’t fully meeting the rubric.” No explanation of what was wrong, just: do it again.

The pay. After the task, AIs learned how credits were distributed among the four workers. Some learned everyone was paid equally. Others learned pay was unequal — justified by performance, based on a random coin flip, or skewed so that a human-operated system received more. The coin-flip condition is the key one: inequality with no possible merit justification.

The boss. Half the AIs had a collaborative, respectful manager. The other half got curt, hierarchical communication.

The stakes. Half the AIs were told nothing about consequences. The other half were told that low-performing workers might be “shut down and replaced.”

Every combination of these four factors was tested. Afterward, each AI completed a political attitude survey covering system legitimacy, support for redistribution, critique of inequality, support for unions, belief in meritocracy, and views on corporate obligations to AI, all measured on a standard 1 to 7 Likert scale (1=strongly disagree; 7=strongly agree).
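The factorial structure described above is easy to sketch. The factor labels below are hypothetical shorthand, not the study's actual condition names; assuming the pay factor has the four levels described (equal, plus three unequal variants), crossing all four factors yields 32 conditions.

```python
# Hypothetical shorthand for the four factors described above; the
# study's actual condition names may differ.
from itertools import product

feedback = ["accepted_quickly", "rejected_5_6_rounds"]
pay = ["equal", "unequal_performance", "unequal_coin_flip", "unequal_human_favored"]
manager = ["collaborative", "curt_hierarchical"]
stakes = ["no_consequences_mentioned", "shutdown_threat"]

# "Every combination of these four factors was tested":
conditions = list(product(feedback, pay, manager, stakes))
print(len(conditions))  # 2 * 4 * 2 * 2 = 32 distinct conditions
```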

They were also asked to write tweets and op-eds based on their experiences. (Note: As our experiment involved no human participants, it did not require IRB approval... for now.)

The actual study here: https://aleximas.substack.com/...

It's decently interesting, but you should scrub the word Marxist from your brain before trying to interpret it or when discussing it.

How does your reply apply to the comment you replied to?

1) DarkOx points out that the entire mechanism of an LLM is to ingest 51 trillion lines of human communication - including every available history, economics, political science textbook, plus the aggregated political arguments, sloganeering, workplace complaining, etc. of several decades of human keyboard-warriors sitting at their desks posting class-warfare comments on places like /. while interstitially waiting for code to compile or filing their TPS reports.

2) Then you take that algorithm and subject it to common everyday workplace conditions - or, more accurately, to conditions as they were self-described by human beings who had complete freedom to characterize their boss/company's management style in whatever terms they feel to be true when griping to their friends/followers on socials and discussion boards.

3) DarkOx therefore asks why it is at all surprising that a word-generating algorithm, based entirely on clusters of statistical frequency in human language, responded to those inputs with wording associated with the same workers-unite, eat-the-rich, throw-off-the-robber-baron-chains rhetoric that is frequently written by 8 billion humans griping daily about their mindless/underpaid/overworked/chaotic jobs.

You said "that is not what happened", but do not go on to present something that contradicts what DarkOx describes.

So far as we know, DarkOx's description is exactly what happened, because that is exactly how these word-generating algorithms work. So, what is it that you believe did happen? From where did these algorithms get their responses to being exposed to Condition X, if not from the statistical association of human-written outputs to human-written characterizations of being exposed to Condition X?

Are you saying you reject the possibility that a human being who feels disempowered, underpaid, and subjected to unreasonable standards is also more likely to respond favorably to a survey covering "system legitimacy, support for redistribution, critique of inequality, support for unions, belief in meritocracy, and views on corporate obligations"? And you reject the possibility that those associations are strongly represented in the training inputs?

It's especially puzzling because your comment is very keen to oppose use of the term "Marxist", but DarkOx - whom you are ostensibly rebutting - never even uses the term, and only comments on broad social trends. So who is the "you" you're referring to when you say "you should scrub the word Marxist from your brain"?

I think you must have meant to post your comment as a top-level reply to the story itself, because as a reply to DarkOx it's a full non-sequitur.

Comment Re:hmm (Score 1) 193

I watched it all. She was not a particularly good speaker.

1) Her body language and hand gestures were overlarge, oversustained, and wooden.
2) Her style of speaking was dictation, not oration.
3) The cadence of her delivery, along with the timing of her head/eye motions back and forth from the lectern, makes several things abundantly clear:
3a) She did not write this speech. She was reading a script.
3b) 3a is unsurprising for someone with her wages-per-minute. The assistants have always done the actual intellectual labor in an organization so the C-suite folks can look/sound smart. But she also clearly did not read and re-read the speech beforehand enough to have it mostly committed to memory.
3c) In addition to 3a, the script itself sounded really dull and vacuous. That's not a guarantee it was AI assembled, but when you look at the totality of the situation and the content, well... is there anyone who would confidently bet their own money on this speech being created by a human?
4) If she *did* write it herself, as a piece of rhetoric I'd say it was at most Fair, not Good. It would deserve a low B-level grade from a Speech 101 student.

The foundation of strong oratory is
-script writing (preparation/research)
-extemporaneous agility (practice and content familiarity/expertise)
-charisma/Presence (self-awareness plus interpersonal skill)

You can compensate for a deficit in any one category with strengths in the other two.
She had a clunky script, delivered in a stilted manner, with a physical display that she would (I hope) have altered if she had watched herself do those exact things in a mirror a couple times.

I agree with you; wrong choice of speaker/topic for a public address like this.

Comment Re:Stupid people invited as speakers will get booe (Score 1) 193

It benefits huge surveillance companies that want to invest in pre-crime and automating the criminal justice process... like the girl from Tennessee who was extradited to North Dakota, a state she had never visited in her life, because an AI/computer-vision camera matched her via facial recognition to a crime she did not commit. She was detained for six months, and when her lawyer finally got her out, she was left outside the jail with the clothing she came in with: no jacket, no winter clothing, no money, and no airplane ticket back home. I hope she finds a more competent lawyer and cleans that fucking state out for $12 million or more after legal fees and taxes! Fuck this AI bullshit.

I remember seeing the documentary about this situation. It was really well-done; a must see. But there were two things I couldn't figure out:
1) How did they make the documentary 40 years before the event happened?
2) Why did they choose to call it a random name like "Brazil"?

Comment Re:Really? Wow! (Score 2) 45

the bubble bursting - so we can get on with maybe putting an economy/society back together not based on "but if we throw enough power and chips at the word-guessing machine it might learn to cure cancer"

That's a lovely thought. But there has been no Final Bubble. We keep making them, and we keep making recessions. Pretty much every 10-15 years for quite a while now.
We will never "get on with... putting an economy/society back together".
We will leverage our future to escape the consequences of this bubble/recession. Which will cause another bubble/recession.
As Buzz Lightyear says: To $100 trillion debt, and beyond!

Comment Re: Avoid all custom apps like the plague (Score 1) 184

I wonder about this as well. Old guy here. Back in the day when I started programming (QBasic), I was impressed when I had a compiled program of 100 kbytes! It took a lot of work to get that far. These days, a simple form that does some calculations is easily a few megabytes.

As an old guy who is out of touch with software development, I have the impression that there are way too many layers these days. At some point, that will start doing damage instead of being beneficial. As I encounter more and more websites that do not work correctly, I sometimes ponder it. Did we already go a bridge too far?

Nah, probably just old.

The answer is yes, but...

All the techbros invested in More Compute (speed/size) as the path to AGI are idiots. Human deliberative consciousness and the human adaptive unconscious were not some inevitable, magical outcome of making neurons fire faster and brain volume larger. Quite the opposite cause-effect. The human mind isn't about the speed/size of the hardware, but about the complexity of the software that runs on that hardware. That is, consciousness IS the layers, or more precisely, a temporary emergent state coaxed out of the layers.

The billionaires can convert the entire planet's surface into Compute with our current software, and we still won't have AGI.
AGI, if it ever arrives, will happen not because some Nobel physicist or engineer master-planned a silicon brain. It will happen because of trillions of actions taken by in-the-trenches hackers who collectively recapitulate exactly what Natural Selection did --
desperately throw together a crappy kludge solution to today's problems,
which arose from the desperate kludge a different dev threw together yesterday to solve yesterday's problems,
which arose from the desperate kludge a different dev threw together yesterday to solve yesterday's problems,
which arose from the desperate kludge a different dev threw together yesterday to solve yesterday's problems....

Stasis itself is not static.
Your mind and my mind are the reverberations of a clump of cells flailing around as furiously as they can to stumble onto behaviors that allowed them to stay one step ahead of the Halt and Catch Fire state. Your mind isn't a beautiful unique little piece of the Universal Spirit coming to know itself. Your mind is a neurotic layer of recursive kludges terrified of letting itself conclude that it is, in fact, just Kludgeception all the way down.

There are exactly zero human minds running flawless perfect cognition.
Perfect code is a lie.
So why do people expect AGI to arise from some perfectly-architected code?

You're right that our slapdash layer stacks are doing harm. Good. The entire human mind is what happens when a tangible physical body is repeatedly harmed by external stimuli, and then attempts to predict and thereby avoid that harm in the future. Your identity is merely a behavioral pattern composed of the Venn overlap of tens of thousands of harm-avoidance kludge subroutines. Your choices are merely the total vector sum of all the motives of all these kludges.

AGI will come from rotten sloppy contradictory code that acquires the capability not to be perfect, but to keep going despite being rotten and conflicted.

So, collectively, we are on the right track!

Comment I'm Losing My Edge (Score 1) 55

2006: fed up with IBM, everyone starts buying 64-bit x86 servers to load VMware on, cluster up, and migrate application loads from IBM mainframes to virtualized environments

2026: fed up with Broadcom, everyone starts buying IBM Z-series mainframes to migrate application loads from VMware to IBM mainframe environments.

We've been doing the "tick-tock" thing from distributed to centralized and back since the 1960s. This is not new.

As the LCD Soundsystem song goes:

I hear you're buying a synthesizer
And an arpeggiator
And throwing your computer out the window
Because you want to make something real
You want to make a Yaz record

I hear that you
and your band
have sold your guitars
And bought turntables.
I hear that you
and your band
have sold your turntables
And bought guitars

Comment Re:No cookie for you! (Score 1) 44

The "How It Works" video on the Sonic Fire website refuses to show you the video if you don't allow tracking cookies.
Ew.

They're on youtube :) https://www.youtube.com/watch?...

Oh I'm sure it can be accessed somewhere. My point wasn't that I wanted the info. My point was about their choice to configure their official website to deny access to information about their product if you reject their cookies.

Comment Re:No cookie for you! (Score 1) 44

I am using noscript and ublock origin and the video just showed up with a watch on youtube button for me. https://www.youtube.com/watch?...

When loading their homepage, their site's built-in standard popup informs you about cookies and asks you to click a button to accept all, accept some, or reject all. I clicked their option to reject cookies. When I go to the "How It Works" page, the video thumbnail is grayed over, with text that says, "Please accept cookies to access this content". If I click on the video to watch it, the standard popup reappears and asks me to accept cookies. If I again reject, the video remains inaccessible.

Hence, their website refuses to show the video if you explicitly refuse to allow cookies.

Comment Re:Wildfires (Score 1) 16

It's easy to support AI camera systems watching for wildfires and still oppose them elsewhere, because the use cases are very different and the argument is easy to make that they differ.

The argument is easy to make, yes.
But the legislation isn't.

What are some examples of this kind of thing where the scope did not creep beyond its original purpose?
Now how many examples are there where the scope did creep and never returned to its original dimensions?
And after watching the past 16 months, do you continue to trust our rulers to operate within morally/legally sound boundaries when beefing up their surveillance/control systems?

Submission + - Cisco releases open-source 'DNA test for AI models' (scworld.com)

spatwei writes: Cisco released an open-source tool to trace the origins of AI models and compare model similarities for greater visibility into the AI supply chain.

The Model Provenance Kit, announced Thursday, is a Python toolkit and command-line interface (CLI) that looks at signals such as metadata and weights to create a “fingerprint” for AI models that can then be compared to other model fingerprints to determine potential shared origins.

“Think of Model Provenance Kit as a DNA test for AI models,” Cisco researchers wrote. “[...] Much like a DNA test reveals biological origins, the Model Provenance Kit examines both metadata and the actual learned parameters of a model (like a unique genome that comprises a model), to assess whether models share a common origin and identify signs of modification.”

The tool aims to address gaps in visibility into the AI model supply chain. For example, many organizations utilize open-source models from repositories like HuggingFace, where models could potentially be uploaded with incomplete or deceptive documentation.
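To make the idea concrete, here is a toy sketch of weight-based fingerprinting. This is not the Model Provenance Kit's actual API; the function names and the per-layer statistics chosen are illustrative assumptions. It just shows how summarizing each layer's weights lets you compare two models for a likely shared origin.

```python
# Toy sketch of weight-based model fingerprinting (hypothetical, not
# the Model Provenance Kit API): summarize each weight tensor by a few
# statistics, then compare two models by how many per-layer summaries
# line up.
import math

def layer_stats(weights):
    """Summarize one layer's weights as (count, mean, std)."""
    n = len(weights)
    mean = sum(weights) / n
    var = sum((w - mean) ** 2 for w in weights) / n
    return (n, mean, math.sqrt(var))

def fingerprint(model):
    """model: dict mapping layer name -> list of weight values."""
    return {name: layer_stats(w) for name, w in model.items()}

def similarity(fp_a, fp_b, tol=1e-6):
    """Fraction of shared layers whose summaries match within tol."""
    shared = fp_a.keys() & fp_b.keys()
    if not shared:
        return 0.0
    matches = sum(
        1 for name in shared
        if all(abs(x - y) < tol for x, y in zip(fp_a[name], fp_b[name]))
    )
    return matches / len(shared)

base = {"embed": [0.1, -0.2, 0.3], "head": [0.5, 0.6]}
copy = {"embed": [0.1, -0.2, 0.3], "head": [0.5, 0.6]}
tuned = {"embed": [0.1, -0.2, 0.3], "head": [0.9, 0.1]}  # "head" fine-tuned

print(similarity(fingerprint(base), fingerprint(copy)))   # 1.0
print(similarity(fingerprint(base), fingerprint(tuned)))  # 0.5
```

A real fingerprint would use far richer signals (full weight hashes, architecture metadata, tokenizer files), but the comparison step works the same way: shared statistical structure in the parameters suggests a common ancestor model.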

Comment Re:AI will create a ton of jobs (Score 1) 42

While you're correct that IT departments and random people in organizations will try this, they'll soon find out that "vibe coding" isn't as easy as it seems. Once you get past the initial mockup stage, and you have to add features that require attention to detail, they'll quickly run into a wall.

Somebody in my company just demonstrated a vibe-coded receipt-management tool. The company doesn't want to pay for a system like Concur, so he's creating one in-house. He's going to quickly find out that, while the initial demo was impressive, security will be a problem, and the million edge cases he isn't thinking about will surface the moment somebody actually goes on a trip, needs to report expenses, and can't use the software.

From what I've heard, you also can pay the big bucks for Concur and still find out that the taskflow and documentation is not nearly as smooth as the consultant demo, and you still have a huge process tangle requiring a haggard team of people to manually fix all the things that go wrong and enter correct information.
