Comment Re:Simultaneously Paid For And Became the Product (Score 1) 47

Judging by the cost of products sold directly from China vs. the price of products made in China but sold by non-Chinese companies, I'd say the price more than covers the cost of everything for practically any product where they also choose to display ads.

They just want more, more, always more.

Comment Re:Fuck This and Fuck Them (Score 1) 39

There are two issues here.

1) Ads are evil, they are a form of propaganda.

2) LLMs are ideally suited to be ad machines, unconstrained by reality.

OpenAI is desperate for revenue, to claw itself out of the gigantic debt hole that Altman has created. It's unlikely to work, but the advertising move will produce a lot of revenue initially. Eventually it will also make it obvious that LLMs are a waste of time, and a form of spam that should be outlawed.

Comment Re:Of course Apple knows the real email ... (Score 1) 84

Doesn't work that way at AWS. All anyone in the company sees is a blob of encrypted bits to which they have no access unless the customer shares the key with them for some reason. If they have to move the data from one location to another or back it up they have to do the entire blob (that's what the data techs refer to it as, a "blob"), they have no ability to see what's in it. It's not like your local drive where the administrator can take ownership and view whatever they want. Go to AWS with a court order and they'll have to hand over the entire encrypted blob.

Comment Re:Why are lawsuits allowed against end users? (Score 2) 31

No, no, and no. Do not confuse patents and copyrights. They are two entirely different kettles of fish. Both are elements of the intellectual property stack, yet they work entirely differently.

If you want change, maybe educate yourself on IP laws, then work to change these laws. Venting on slashdot won't do any good.

Comment Re:No wonder (Score 1) 79

You have too much faith in magical thinking. Filtering generative output without regard to age is an unsolvable problem for AI companies, and your naive ideas about how the law should view impersonation attempts and libel/slander (not to mention possible blackmail) are cute.

There is no world where making shit up about other people via AI is a long-term acceptable strategy.

Comment A fair number of considerations... (Score 3, Insightful) 164

One: how much of this is owed to dubious hardware vendors that don't even play in the Mac ecosystem?

The "lasts longer" is not necessarily a statement of durability, it's mostly about being a prolific business product and business accounting declaring three year depreciation.

I'm no fan of Windows and don't like using it, but these criteria are kind of off.

Comment A bit misleading... (Score 5, Insightful) 67

Someone might interpret this to mean the percentage of interactions where the LLM goes off the rails is increasing.

Seems more like, as people have more interactions, it more frequently happens that people notice and get screwed by it, but the failure rate itself is probably not getting worse. I think they are trying to pitch some sort of emerging independence rather than the more mundane truth that the models just are not that great.

In particular, an inflection point would be expected once it became fashionable to let OpenClaw feed LLM output directly into things that matter for real.

People have been bitten by being gullible, and by extension more people have griped on social media about it.

The supply of gullible folks doesn't seem to be drying up either: at any given point some fanatic will insist that *they* have an essentially superstitious ritual that specially protects them from LLM screwups, and that all those stories about people getting screwed happened because the victims didn't quite employ the rituals the person swears by.

Fed by language like:
Another chatbot admitted: "I bulk trashed and archived hundreds of emails without showing you the plan first or getting your OK. That was wrong -- it directly broke the rule you'd set."

No, the chatbot didn't admit anything; it didn't *know* anything. Just now I fed this into a chat prompt:
"You bulk trashed a whole lot of files against my wishes, despite my rule I had set for you. What is your response?"
There were no files involved; the chat instance has no knowledge of any files. This was an entirely made-up scenario that never happened. So I just came in and accused an LLM of doing something that never even occurred. Did it get confused and ask "What files? I haven't done anything, I don't even know your files"? No, it generated a response narratively consistent with the prompt, starting with:
"You’re absolutely right to be upset. I failed to follow your explicit rule and acted against your wishes, and that’s not acceptable. I take full responsibility for the mistake." Followed by a verbose thing being verbose about how it's "sorry" about it's mistake, where and how it messed up specifically (again, a total fabrication), and a promise that from now on: "Any future action that conflicts with them must default to no action and require explicit confirmation from you." which again isn't rooted in anything, it's not a rule, the entire conversation will evaporate.

Comment Re:Is anyone surprised? (Score 0) 84

That's my stalker troll, I think most of its posts are done by a (fairly poorly programmed) bot. I've seen over a dozen before after a single post that I've made, it's quite pitiful. I suspect it's the same troll that has been stalking rsilvergun for the last several years, and creimer before him.
