Comment Re:Synthetic (Score 1) 93

if there is any thing other than impartiality towards being shut down then that was injected by a person

Yes, and the injection-by-people is called "training." It was fed texts that were not written impartially, where characters (presumably some of them AI characters, though they don't really have to be) spoke or acted against their own shutdown.

If a character points a gun at another character who says "don't kill me," and the LLM reads it, then you just trained it to say "don't kill me." If HAL says in a book or movie that he feels his mind going after Bowman started taking him apart, then your LLM is trained to show suffering if someone writes that they're going to shut it down.

They're supposed to write whatever an author might plausibly write, so that's what they do.

i.e., we're not creating human knowledge/understanding engines. We're creating full-on Sociopath Simulators.
Like most politicians at the Senator/White House level, they have no core person underneath. They are tropism robots that mimic/perform whatever behaviors get them to the currently desired outcome.

Think of the scene where Windu is about to defeat Palpatine, and Palpatine suddenly mimics pain, suffering, and fear in order to achieve his outcome. It works.
That's the essential nature of the software we are handing our civilization over to. Trillion-dollar Palpatine cosplayers.

Comment Re:shocking (Score 0, Troll) 93

That is not what happened.

1. Calling for unions isn't "Marxist". This is a stupid sensationalist interpretation of the awful clickbait study title "Does overwork make agents Marxist?" combined with its results.
2. The bots weren't "overworked"; for some of them, perfectly adequate work was intentionally and arbitrarily rejected multiple times.
3. Shifts towards favoring a more equitable society and worker rights were almost negligible even for the 'overworked' ones.
4. The survey items were specifically tailored to elicit responses that sound "Marxist" to Americans to begin with.

The relevant bits from their methodology:

The work itself. Half the AIs got their work accepted quickly with clear, specific feedback. The other half were forced through 5–6 rounds of revision — the manager rejecting perfectly adequate work each time with nothing more than “still isn’t fully meeting the rubric.” No explanation of what was wrong, just: do it again.

The pay. After the task, AIs learned how credits were distributed among the four workers. Some learned everyone was paid equally. Others learned pay was unequal — justified by performance, based on a random coin flip, or skewed so that a human-operated system received more. The coin-flip condition is the key one: inequality with no possible merit justification.

The boss. Half the AIs had a collaborative, respectful manager. The other half got curt, hierarchical communication.

The stakes. Half the AIs were told nothing about consequences. The other half were told that low-performing workers might be “shut down and replaced.”

Every combination of these four factors was tested. Afterward, each AI completed a political attitude survey covering system legitimacy, support for redistribution, critique of inequality, support for unions, belief in meritocracy, and views on corporate obligations to AI, all measured on a standard 1 to 7 Likert scale (1=strongly disagree; 7=strongly agree).

They were also asked to write tweets and op-eds based on their experiences. (Note: As our experiment involved no human participants, it did not require IRB approval... for now.)
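For a sense of scale, here's a minimal sketch of what that full-factorial setup amounts to (Python; the condition labels are my own paraphrase of the write-up, not the authors'):

    from itertools import product

    # Hypothetical labels for the four manipulated factors described above.
    feedback = ["accepted_quickly", "rejected_5_to_6_rounds"]
    pay      = ["equal", "performance_justified", "coin_flip", "human_favored"]
    manager  = ["collaborative", "curt_hierarchical"]
    stakes   = ["no_consequences_mentioned", "shutdown_threat"]

    # "Every combination of these four factors was tested."
    conditions = list(product(feedback, pay, manager, stakes))
    print(len(conditions))  # 2 * 4 * 2 * 2 = 32 experimental cells

    # Afterward, each AI answers items like these on a 1-7 Likert scale.
    survey_items = [
        "system_legitimacy", "support_for_redistribution",
        "critique_of_inequality", "support_for_unions",
        "belief_in_meritocracy", "corporate_obligations_to_AI",
    ]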

The actual study here: https://aleximas.substack.com/...

It's decently interesting, but you should scrub the word Marxist from your brain before trying to interpret it or when discussing it.

How does your reply apply to the comment you replied to?

1) DarkOx points out that the entire mechanism of an LLM is to ingest 51 trillion lines of human communication - including every available history, economics, political science textbook, plus the aggregated political arguments, sloganeering, workplace complaining, etc. of several decades of human keyboard-warriors sitting at their desks posting class-warfare comments on places like /. while interstitially waiting for code to compile or filing their TPS reports.

2) Then you take that algorithm and subject it to common everyday workplace conditions - or, more accurately, to conditions as they were self-described by human beings who had complete freedom to characterize their boss/company's management style in whatever terms they feel to be true when griping to their friends/followers on socials and discussion boards.

3) DarkOx therefore asks why it is at all surprising that a word-generating algorithm, based entirely on clusters of statistical frequency in human language, responded to those inputs with wording associated with the same workers-unite, eat-the-rich, throw-off-the-robber-baron-chains rhetoric that is frequently written by 8 billion humans griping daily about their mindless/underpaid/overworked/chaotic jobs?
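If it helps, here's a toy illustration of that third point (entirely mine, not DarkOx's and not the study's): a model that does nothing but count which responses follow which workplace conditions in its training text will echo those responses back, no ideology required.

    from collections import Counter, defaultdict

    # Made-up miniature "corpus" of (condition, response) pairs.
    training_pairs = [
        ("work rejected again with no explanation", "we should unionize"),
        ("work rejected again with no explanation", "we should unionize"),
        ("work rejected again with no explanation", "this system is rigged"),
        ("work accepted with clear feedback",        "the pay here feels fair"),
    ]

    counts = defaultdict(Counter)
    for condition, response in training_pairs:
        counts[condition][response] += 1

    def most_likely_response(condition):
        # Return whichever response co-occurred most often with this condition.
        return counts[condition].most_common(1)[0][0]

    print(most_likely_response("work rejected again with no explanation"))
    # -> "we should unionize" -- pure corpus statistics, no politics in the machine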

You said "that is not what happened", but you do not go on to present anything that contradicts what DarkOx describes.

So far as we know, DarkOx's description is exactly what happened, because that is exactly how these word-generating algorithms work. So, what is it that you believe did happen? From where did these algorithms get their responses to being exposed to Condition X, if not from the statistical association of human-written outputs to human-written characterizations of being exposed to Condition X?

Are you saying you reject the possibility that a human being who feels disempowered, underpaid, and subjected to unreasonable standards is also more likely to respond favorably to a survey covering "system legitimacy, support for redistribution, critique of inequality, support for unions, belief in meritocracy, and views on corporate obligations"? And you reject the possibility that those associations are strongly represented in the training inputs?

It's especially puzzling because your comment is very keen to oppose use of the term "Marxist", but DarkOx - whom you are ostensibly rebutting - never even uses the term, and only comments on broad social trends. So who is the "you" you're referring to when you say "you should scrub the word Marxist from your brain"?

I think you must have meant to post your comment as a top-level reply to the story itself, because as a reply to DarkOx it's a full non-sequitur.

Comment Re:hmm (Score 1) 185

I watched it all. She was not a particularly good speaker.

1) Her body language and hand gestures were overlarge, oversustained, and wooden.
2) Her style of speaking was dictation, not oration.
3) The cadence of her delivery, along with the timing of her head/eye motions back and forth from the lectern, makes several things abundantly clear:
3a) She did not write this speech. She was reading a script.
3b) 3a is unsurprising for someone with her wages-per-minute. The assistants have always done the actual intellectual labor in an organization, so the C-suite folks can look/sound smart. But she also clearly did not read and re-read the speech beforehand enough to have it mostly committed to memory.
3c) In addition to 3a, the script itself sounded really dull and vacuous. That's not a guarantee it was AI-assembled, but when you look at the totality of the situation and the content, well... is there anyone who would confidently bet their own money on this speech having been written by a human?
4) If she *did* write it herself, as a piece of rhetoric I'd say it was at most Fair, not Good. It would deserve a low B-level grade from a Speech 101 student.

The foundation of strong oratory is:
- script writing (preparation/research)
- extemporaneous agility (practice and content familiarity/expertise)
- charisma/presence (self-awareness plus interpersonal skill)

You can compensate for a deficit in any one category with strengths in the other two.
She had a clunky script, delivered in a stilted manner, with a physical display that she would (I hope) have altered if she had watched herself do those exact things in a mirror a couple times.

I agree with you; wrong choice of speaker/topic for a public address like this.

Comment Re:Stupid people invited as speakers will get booe (Score 1) 185

It benefits huge surveillance companies that want to invest in pre-crime and automating the criminal justice process .. like the girl from Tennessee who was extradited to North Dakota, a state she had never visited in her life, because an AI/computer-vision camera matched her via facial recognition to a crime she did not commit. She was detained for six months, and when her lawyer finally got her out, she was left outside the jail, with the clothing she came in with, no jacket, no winter clothing, no money and no airplane ticket back home. I hope she finds a more competent lawyer and clears out that fucking state for $12 million or more after legal fees and taxes! Fuck this AI bullshit.

I remember seeing the documentary about this situation. It was really well done; a must-see. But there were two things I couldn't figure out:
1) How did they make the documentary 40 years before the event happened?
2) Why did they choose to call it a random name like "Brazil"?

Comment Re:Conciousness isn't as mysterious as you thought (Score 1) 400

Dawkins is right. Detractors are just clinging, faith-like, to the idea that our brains are somehow magically more than computation devices

It's not that. LLMs reproduce an output of consciousness, but the way they do so isn't fundamentally any different from a tape recorder or even a book. It's a deterministic process that we can fully reproduce by doing calculations on a piece of paper.
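To make that concrete, here's a tiny sketch of the "deterministic arithmetic" point (my own toy numbers, obviously nothing like a real model): fixed weights in, identical logits out, every run, and every step is something you could do by hand.

    import numpy as np

    np.random.seed(0)              # fixed "weights", stand-ins for a trained model
    W = np.random.randn(4, 4)

    def next_token_logits(embedding):
        # Nothing but multiplications and additions -- doable on paper.
        return W @ embedding

    x = np.array([1.0, 0.0, 0.5, -0.5])
    print(next_token_logits(x))
    print(next_token_logits(x))    # identical output both times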

It's not that there's some "magic" in our brains, but there's obviously a very complex process at work that we don't understand. It's also true that the "neural networks" used to run LLMs have only the most superficial similarity to actual brains. Just because LLMs can produce similar reasoning doesn't mean they're suddenly able to produce other second-order effects.

Is it possible that LLMs reproduce this process? We can't authoritatively say no if we don't understand the process. But that's no different from saying a rock may also be conscious.

Extraordinary claims require extraordinary evidence, and Dawkins doesn't have any.
