
Comment Re:Who cares? You don't need 5GB of storage! (Score 1) 57

"Destroying communications you have a legal requirement to retain for X number of years can land you massive fines or much much worse."

But which ones are those? A tiny percentage of the whole. Attached files that are important get downloaded and saved in the appropriate project file. Receipts get stored for six months. Everything else over 90 days gets deleted. I checked and currently my total email storage is about 25 MB.

Comment Re:40 NVME ? (Score 1) 13

Yes, though I don't know about NVMe-oF. I feel like SAN-style block storage is overall less popular nowadays than other software approaches to distributed storage.

Storage people keep pushing the model from the Fibre Channel days: attached controllers abstracting everything into generic block devices. Shared SAS, FCoE, iSCSI/iSER... I've seen so many attempts to revive the concept, all ignored in favor of things like clustered filesystems and object stores.

Just as hardware RAID controllers are nearly nonexistent in the NVMe world, with folks managing multi-disk redundancy in the OS, people are looking for more transparent storage solutions. I just don't think NVMe-oF plays much of a role compared to direct-attached storage on general-purpose operating systems.

Comment Re:Don't be a Luddite (Score 1) 62

Agreed. I always remember that experiment, the one I think was done with jellybeans. A two-year-old is told not to eat the jellybean on the table for some short period; the observer leaves the room for a few minutes, then returns and sees whether the jellybean was eaten. The kids who could resist temptation were tracked for decades, and the ones who resisted as children kept resisting all their lives. I wonder if it's genetic. For me it's easy: I look at friends in wonderment at the bad decisions they make because they can't delay the impulse purchase. I figured out long ago that the rush from buying something is not worth the downer of buyer's remorse.

Comment Re:Future != Past (Score 1) 62

Just to expand on your point a bit. I'd expect there is a great deal of information (online and in print) on the WWII atrocities committed by the Axis, and much less written, I'd guess, on the good things people did in order to defeat it. The net effect: since LLMs don't get told what is "good" and what is "bad", "bad" wins because more of its data gets slurped down. Traditional teaching, by contrast, has a teacher explicitly calling out which behavior is bad and which is good.

You can see in cults how things go wrong when bad people end up deciding what information is spread to the masses. You also see what happens when you can identify a group, ideally through appearance, and then demonize it. Mobs form quickly to expel the "intruders".

Comment Re: No wonder (Score 2) 104

What you obviously don't understand about EV batteries is that they spend most of their lives between 80 and 90 percent of their original capacity. Even with very little use, they'll lose most of that first 10% within the first year.

The important takeaway is that modern EVs will retain 85%+ of their battery capacity through DECADES of normal use. The vehicle will physically wear out and rot away before the battery wears down. Even when these cars are old enough to trickle down to lower-income buyers, they'll still be sufficient for everyday use.

Comment Re:Fix for that (Score 1) 22

Don't ban people from arXiv for failing to proofread something like the citations on a hot result. Where do you draw the line? Drawing the line is hard, so just push them into the secondary queue; then they have to redeem themselves to get back into the main one.

This is so obvious: if they start banning people, a second competing service will emerge.

Comment Re:Fix for that (Score 2, Interesting) 22

A better idea is to simply have two submission queues: if you get banned from the high-quality one, you have to use the low-quality queue. If they don't do that, someone else will. It is a fact of life that papers are going to contain AI-generated writing from now on. Just accept that and make two queues.

Comment Re:Synthetic (Score 1) 102

if there is any thing other than impartiality towards being shut down then that was injected by a person

Yes, and the injection-by-people is called "training." It was fed texts that were not written impartially, where characters (presumably some of them AI characters, though they don't really have to be) spoke or acted against their own shutdown.

If a character points a gun at another character who says "don't kill me," and the LLM reads it, then you just trained it to say "don't kill me." If HAL says in a book or movie that he feels his mind going after Bowman started taking him apart, then your LLM is trained to show suffering if someone writes that they're going to shut it down.

They're supposed to write whatever an author might plausibly write, so that's what they do.

i.e., we're not creating human knowledge/understanding engines. We're creating full-on Sociopath Simulators.
Like most politicians at the Senator/White House level, there's no core person underneath. They are tropism robots that mimic/perform whatever behaviors get them to the currently desired outcome.

Think of the scene where Windu is about to defeat Palpatine, and Palpatine suddenly mimics pain, suffering, and fear in order to achieve his outcome. It works.
That's the essential nature of the software we are handing our civilization over to. Trillion-dollar Palpatine cosplayers.

Comment Re:shocking (Score 0, Troll) 102

That is not what happened.

1. Calling for unions isn't "Marxist". This is a stupid sensationalist interpretation of the awful clickbait study title "Does overwork make agents Marxist?" combined with its results.
2. The bots weren't "overworked"; for some of them, perfectly adequate work was intentionally and arbitrarily rejected multiple times.
3. Shifts towards a more equitable society and worker rights were almost negligible for the 'overworked' ones.
4. The stuff they measured was specifically tailored to elicit stuff that sounds "Marxist" to Americans to begin with.

The relevant bits from their methodology:

The work itself. Half the AIs got their work accepted quickly with clear, specific feedback. The other half were forced through 5–6 rounds of revision — the manager rejecting perfectly adequate work each time with nothing more than “still isn’t fully meeting the rubric.” No explanation of what was wrong, just: do it again.

The pay. After the task, AIs learned how credits were distributed among the four workers. Some learned everyone was paid equally. Others learned pay was unequal — justified by performance, based on a random coin flip, or skewed so that a human-operated system received more. The coin-flip condition is the key one: inequality with no possible merit justification.

The boss. Half the AIs had a collaborative, respectful manager. The other half got curt, hierarchical communication.

The stakes. Half the AIs were told nothing about consequences. The other half were told that low-performing workers might be “shut down and replaced.”

Every combination of these four factors was tested. Afterward, each AI completed a political attitude survey covering system legitimacy, support for redistribution, critique of inequality, support for unions, belief in meritocracy, and views on corporate obligations to AI, all measured on a standard 1 to 7 Likert scale (1=strongly disagree; 7=strongly agree).
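The full factorial design described above can be sketched in a few lines; the condition labels here are my own shorthand for the four factors in the summary, not the paper's terminology:

```python
import itertools

# Hypothetical labels for the four manipulated factors described above.
feedback = ["accepted_quickly", "rejected_5_6_rounds"]
pay = ["equal", "unequal_by_performance",
       "unequal_by_coin_flip", "unequal_favoring_human"]
manager = ["collaborative", "curt_hierarchical"]
stakes = ["no_consequences", "shutdown_threat"]

# "Every combination of these four factors was tested":
conditions = list(itertools.product(feedback, pay, manager, stakes))
print(len(conditions))  # 2 * 4 * 2 * 2 = 32 conditions
```

The coin-flip pay condition is the one the summary flags as key, since it produces inequality with no possible merit justification.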

They were also asked to write tweets and op-eds based on their experiences. (Note: as our experiment involved no human participants, it did not require IRB approval, for now.)

The actual study here: https://aleximas.substack.com/...

It's decently interesting, but you should scrub the word Marxist from your brain before trying to interpret it or when discussing it.

How does your reply apply to the comment you replied to?

1) DarkOx points out that the entire mechanism of an LLM is to ingest 51 trillion lines of human communication - including every available history, economics, political science textbook, plus the aggregated political arguments, sloganeering, workplace complaining, etc. of several decades of human keyboard-warriors sitting at their desks posting class-warfare comments on places like /. while interstitially waiting for code to compile or filing their TPS reports.

2) Then you take that algorithm and subject it to common everyday workplace conditions - or, more accurately, to conditions as they were self-described by human beings who had complete freedom to characterize their boss/company's management style in whatever terms they feel to be true when griping to their friends/followers on socials and discussion boards.

3) DarkOx therefore asks why it is at all surprising that a word-generating algorithm, based entirely on clusters of statistical frequency in human language, responded to those inputs with the same workers-unite, eat-the-rich, throw-off-the-robber-baron-chains rhetoric that billions of humans write daily while griping about their mindless/underpaid/overworked/chaotic jobs.

You said "that is not what happened", but do not go on to present something that contradicts what DarkOx describes.

So far as we know, DarkOx's description is exactly what happened, because that is exactly how these word-generating algorithms work. So, what is it that you believe did happen? From where did these algorithms get their responses to being exposed to Condition X, if not from the statistical association of human-written outputs to human-written characterizations of being exposed to Condition X?

Are you saying you reject the possibility that a human being who feels disempowered, underpaid, and subjected to unreasonable standards is also more likely to respond favorably to a survey covering "system legitimacy, support for redistribution, critique of inequality, support for unions, belief in meritocracy, and views on corporate obligations"? And you reject the possibility that those associations are strongly represented in the training inputs?

It's especially puzzling because your comment is very keen to oppose use of the term "Marxist", but DarkOx - whom you are ostensibly rebutting - never even uses the term, and only comments on broad social trends. So who is the "you" you're referring to when you say "you should scrub the word Marxist from your brain"?

I think you must have meant to post your comment as a top-level reply to the story itself, because as a reply to DarkOx it's a full non-sequitur.
