
Comment Re:They're making a great job themselves (Score 2, Informative) 123

That would be...

Suzanne Moore, who said "None of this discussion is about men giving up space for trans men; it is always about what women must accept." Feeling that women (defined solely by sex, rather than identity) were being endangered by trans women specifically?

Hadley Freeman, similarly fearful that "predatory men could now come into female-only spaces unchallenged." Yes?

Both croaking about the possibility of assault, rape, etc., without having shown anything more than "it is possible". "If we let people self-identify, then rapists could get by." Sure, and that's the "he's a potential rapist because he has the equipment" argument. And neither demonstrated that it is more than a hypothetical.

That is, fear.

They left because they weren't getting their message about their fears out. That can be labeled 'misogyny' only if you buy into the same fear.

Comment The faces of tomorrow (Score 2) 78

James P Hogan wrote a book (The Two Faces of Tomorrow) that started with an AI being asked for the most expedient way to build a tunnel on the moon. The AI's solution was to use a railgun cargo shipment system as a kinetic delivery device, to the chagrin of the operators and hazard to anyone near the site.

Cory Doctorow wrote a short story in which automobile self-driving systems developed emergent behavior that included, effectively, flocking, to the hazard of anyone in the cars and possibly of bystanders nearby.

We aren't going to know the dangers until they happen. But we already have people (even lawyers) relying on large language models to answer questions, and ruing the results. We can't stop this flavor of AI, since one party's pause for caution is another's squandered financial advantage, but we can still think about it and plot out the hazards.

And as my examples show, we have been doing just that.

Comment Re:Coordinated disinformation against nuclear powe (Score 1) 114

I would put more stock in nuclear power if we had a plan for dealing with the byproducts first.

Those plans have typically been killed by NIMBYism and the requirements for the repository to be stable for centuries.

As well, there's the issue with production. There's a lot of uranium dust involved in mining, along with elevated cancer rates (well, duh: radiation is higher in those mines). And last I saw, the projected supply of uranium for fuel purposes was estimated at, if memory serves, something like 75-150 years.

So it isn't a long term energy solution, and it IS a long term waste problem. My preference would be to reserve nuclear power to projects where other solutions can't be brought to bear, like RTGs for space vehicles.

Comment Re:Unpatched PCs should be forced off the internet (Score 1) 19

A lot of patches are security patches. Which means: If your network is entirely off the internet, most of the patches aren't necessary.

But... as corporate IT is so very often remote, you're back on the internet again, the idiot cow-orker provides a bridge, and life sucks.

Comment Overwork (Score 2) 14

"eliminate overwork patterns"

"We're not hiring more people so we can avoid paying additional benefits like insurance."

"We're not hiring more people because it's too late in the development cycle to add more people and expect results. ... and we're not going to move the release date."

My money's on the second. Hanlon's razor and all... It's what happens when your staff tell you that your timelines are overly optimistic, and you don't take that into account because "but that's Christmas! We have to release for that season!"

You can demand nights and weekends from your staff, but there's a point at which you can't simply throw money (or threats) at a project and get things to happen faster.

Comment Re:The AI companies were lazy and greedy... (Score 1) 163

Artists are trained on prior copyrighted works. Musicians are trained on prior copyrighted works. Authors are trained on prior copyrighted works.

ChatGPT is trained on prior copyrighted works. And yet ChatGPT (and not artists nor musicians) is infringing?

ChatGPT generates summaries of Plaintiffs' copyrighted works -- something only possible if ChatGPT was trained on Plaintiffs' copyrighted works.

A reviewer can generate a summary of Plaintiffs' copyrighted works only if the reviewer was "trained" on Plaintiffs' copyrighted works.

From the suit:

57. Because the OpenAI Language Models cannot function without the expressive information extracted from Plaintiffs' works (and others) and retained inside them, the OpenAI Language Models are themselves infringing derivative works, made without Plaintiffs' permission and in violation of their exclusive rights under the Copyright Act.

There's the core of the suit: they want the court to declare that the language models trained on the Plaintiffs' works are infringing regardless of whether the result resembles the Plaintiffs' work, regardless of whether the result is transformative, and regardless of whether the material was originally gathered legally.

Emphasis: the suit doesn't distinguish HOW the material was gathered, only that the language models somehow make use of it, and didn't ask permission first. They want to declare that regardless of anything else, said permission is required.
