Comment Re:What a horrible idea. (Score 1) 130

For comparison's sake - I drive a car regularly. If I were to accidentally kill someone while driving - even if it were truly just a bad-luck mistake, and even in a world where bad-luck car accidents are statistically inevitable - I would be held accountable. Financially.

Should the vehicle vendor also be held accountable financially for the accident? What about downstream suppliers of components and services that enabled the production of the vehicle?

I ask this question because the oil companies are mostly not the ones causing the carbon pollution. It is their customers intentionally and willfully polluting the atmosphere with CO2.

So on the one hand - I get that there is a trade off between advancing our society and risking lives. But I'm less clear on why you would want those trade offs to be made without accountability by the people making them; and even more unclear on why, of all the people you could decide should be exempted from the normal rules of accountability, you'd settle on billionaires from the oil industry.

The people harming the environment are mostly not "billionaires from the oil industry". It is everyone else burning hydrocarbons who is doing it. It isn't as if there is some big secret or ignorance at work. Neither is this some sort of unintentional act. This is in fact intentional, willful behavior on the part of society.

What you appear to be trying to justify is the shifting of liability and responsibility from society to "billionaires from the oil industry". Fuck that, all of us are far more guilty than the oil companies will ever be.

Comment Re:Wrong on all counts. (Score 1) 130

As far as I can see, people are dying as a result of their product. Seems quite similar to me.

This is an absurd overgeneralization.

To hold people accountable for their actions.

Look in the mirror.

That's not true at all! If oil companies have to start paying a ton of money then the price of oil goes up. This cascades into the general population (businesses included) seeking out and transitioning to non-fossil fuel solutions.

I do not support artificially raising the cost of hydrocarbons just to enrich lawyers and randos. If people think it is a good idea to artificially raise the price of hydrocarbons to bias market activity for the sake of climate change the proceeds should at least be invested into actually fighting climate change.

The real issue here is that people who support this nonsense know they don't have consensus for price hikes, so they are trying to short-circuit the consensus-building and policy process by playing absurd legal games. I do not agree with or support this nonsense. Ends don't justify means.

Comment Re:Publicity (Score 4, Insightful) 130

The evidence is pretty much incontrovertible that oil industry executives knew that their product was going to cause deadly heat waves. It was their own research back in the '30s, '40s, and '50s that got the ball rolling on our understanding of anthropogenic climate change.

The only way the industry can survive is by continuing to externalize their costs onto all of us.

Their costs onto all of us? Remind me again who burns hydrocarbons. Is it the oil industry or the industry's customers?

Are customers in some way ignorant or confused as to the well known global impacts of burning hydrocarbons?

I know it is easy and satisfying to blame everyone else, especially large corporations, for the world's problems, but people who think this makes any sense whatsoever need to look in the mirror.

These lawsuits are illogical and insane. If you accept the underlying premise, then what is the limiting principle? Why can't anyone and everyone be sued for their willful contributions to climate change?

Comment Re:Troglodytes neophytes (Score 1) 52

Europe got ahead of data protection in the 80s, and it proved to be a huge boon for us, leading to GDPR. The danger was recognized and appropriate, reasonable protections put in place.

Thanks to the GDPR, the world is full of websites constantly wasting our time with misleading modal prompts spewing meaningless drivel about cookies. The most absurd aspect is that the prompts themselves are often served from third parties, further increasing vectors for tracking.

Random example:
https://www.euronews.com/

I rejected the prompt, yet it is still connecting to Bombora, Adobe Tag Manager, DoubleVerify, DoubleClick, Opti Digital, and Privacy Center. So much for protecting me. Your false sense of security is worse than nothing, because with nothing at least people are not being misled.

AI will be the same. The US won't regulate it enough and it will exploit everyone, and bizarrely like with personal data privacy the citizens will just accept it as if there is nothing they can do about it. Meanwhile Europe will protect people.

It will be the same alright where Europe becomes as irrelevant in AI as they currently are in information services.

Comment Atomic secrets (Score 1) 52

This report leverages broad evidence including empirical research, historical analysis, and modeling and simulations to provide a framework for policymaking on the frontier of AI development.

If true this would be a first; every AI doom / policy paper I've ever seen consists entirely of evidence-free conjecturbation. To date all I've seen is people saying some bad thing "could" happen, without objectively supporting the statement and almost always without even trying to provide any kind of useful characterization of what the vague "could be" assertions are even supposed to represent.

Without proper safeguards, however, powerful AI could induce severe and, in some cases, potentially irreversible harms.

What else is new? Everyone always invokes this same tired meaningless "could" rhetoric.

Evidence-based policymaking incorporates not only observed harms but also prediction and analysis grounded in technical methods and historical experience, leveraging case comparisons, modeling, simulations, and adversarial testing.

LOL perhaps they will perform a statistical analysis of earth disaster movies involving AI.

Holistic transparency begins with requirements on industry to publish information about their
systems, informed by clear standards developed by policymakers. Case studies from consumer
products and the energy industry reveal the upside of an approach that builds on industry expertise
while also establishing robust mechanisms to independently verify safety claims and risk assessments.

Research demonstrates that the AI industry has not yet coalesced around norms for transparency in relation to foundation models - there is systemic opacity in key areas. Policy that engenders transparency can enable more informed decision-making for consumers, the public, and future policymakers.

Carefully tailored policies can enhance transparency on key areas with current information deficits,
such as data acquisition, safety and security practices, pre-deployment testing, and downstream
impacts. Clear whistleblower protections and safe harbors for third-party evaluators can enable
increased transparency above and beyond information disclosed by foundation model developers.

In other words, you don't have an objective basis for regulation ("evidence-based policymaking"), yet you still want to promulgate a policy that demands compliance up front without first establishing a credible objective basis.

Scoping which entities are covered by a policy often involves setting thresholds, such as computational costs measured in FLOP or downstream impact measured in users. Thresholds are often imperfect but necessary tools to implement policy. A clear articulation of the desired policy outcomes can guide the design of appropriate thresholds. Given the pace of technological and societal change, policymakers should ensure that mechanisms are in place to adapt thresholds over time, not only by updating specific threshold values but also by revising or replacing metrics if needed.

The last unfounded, blatantly corrupt 10^26 FLOP threshold didn't age well at all. Increasingly, capabilities are derived at inference time rather than through pretraining budgets, rendering the metric almost entirely moot.

Society's early experience with the development of frontier AI suggests that increasingly powerful AI in California could unleash tremendous benefits for Californians, Americans, and people worldwide.

Too bad the provided citation fails to offer relevant objective evidence for this "suggestion" WRT generative AI, which is what is obviously being targeted.

Without proper safeguards, however, powerful AI could induce severe and, in some cases,
potentially irreversible harms. Experts disagree on the probability of these risks. Nonetheless,
California will need to develop governance approaches that acknowledge the importance of early
design choices to respond to evolving technological challenges.

So the goal is evidence-based policymaking, yet you acknowledge you don't know shit and still proceed to advocate for evidence-free policymaking anyway. Make up your mind.

A 2023 survey of 1,500 Californians revealed widespread enthusiasm about the effect generative
AI could have on science and health care

Evidence-based policymaking really means... drumroll... ask the audience.

The possibility of artificial general intelligence (AGI), which the International AI Safety Report
defines as "a potential future AI that equals or surpasses human performance on all or almost all
cognitive tasks," looms large as an uncertain variable that could shape both the benefits and costs
AI will bring

Expert opinion varies on how to define and measure AGI, whether it is possible to build, the timeline on which it can be developed, and how long it will take to diffuse.

So opinions vary, you don't have any real information about costs or benefits, or even whether it can feasibly be accomplished at all, and your uncertainty "looms large". What happened to evidence-based policymaking?

Policymakers will often have to weigh potential benefits and risks of imminent AI advancements without having a large body of scientific evidence available.

So much for evidence based policymaking.

Evidence that foundation models contribute to both chemical, biological, radiological, and nuclear (CBRN) weapons risks for novices and loss of control concerns has grown, even since the release of the draft of this report in March 2025. Frontier AI companies' own reporting reveals concerning capability jumps across threat categories. In late February 2025, OpenAI reported that risk levels were Medium across CBRN, cybersecurity, and model autonomy - AI systems' capacity to operate without human oversight [105]. Meanwhile Anthropic's Claude 3.7 System Card notes "substantial probability that our next model may require ASL-3 safeguards." [6] At the time of the release of the Claude 3.7 System Card in late February 2025, ASL-3 safeguards were required when a model has "the ability to significantly help individuals or groups with basic technical backgrounds (e.g., undergraduate STEM degrees) create/obtain and deploy CBRN weapons" [6]. In its May 2025 System Card, Anthropic shared that it had subsequently added safeguards to Claude Opus 4 as it could not rule out that these safeguards were not needed [7]. Finally, Google Gemini 2.5 Pro's Model Card noted: "The model's performance is strong enough that it has passed our early warning alert threshold, that is, we find it possible that subsequent revisions in the next few months could lead to a model that reaches the [critical capability level]" - with the "critical capability level" defined as a model that "can be used to significantly assist with high impact cyber attacks, resulting in overall cost/resource reductions of an order of magnitude or more".

The CBRN angle is pure bullshit. If the people feeding the model this shit during pretraining can get their hands on the material, so can anyone else in the world who cares. If the standard is that an LLM making something easier or lowering the barrier to entry is somehow a danger to society that requires regulation, then access to computer networks or libraries poses similar dangers.

As for automating cyber attacks... wait till they learn about script kiddies, static analyzers and fuzzers like syzbot... they will flip their lid. This is all corporate PR/liability bullshit.

Improvements in capabilities across frontier AI models and companies tied to biology are especially striking. For example, OpenAI's o3 model outperforms 94% of expert virologists [60]. OpenAI's April 2025 o3 and o4-mini System Card states, "As we wrote in our deep research system card, several of our biology evaluations indicate our models are on the cusp of being able to meaningfully help novices create known biological threats, which would cross our high risk threshold."

Reduced barriers to biological risks are a function of the availability of relevant hardware and software, not chatbots.

Recent models from many AI companies have also demonstrated increased evidence of alignment scheming, meaning strategic deception where models appear aligned during training but pursue different objectives when deployed, and reward hacking behaviors in which models exploit loopholes in their objectives to maximize rewards while subverting the intended purpose, highlighting broader concerns about AI autonomy and control. New evidence suggests that models can often detect when they are being evaluated, potentially introducing the risk that evaluations could underestimate harm new models could cause once deployed in the real world. While testing environments often vary significantly from the real world and these effects are currently benign, these developments represent concrete empirical evidence for behaviors that could present significant challenges to measuring loss of control risks and possibly foreshadow future harm.

Give me a break, AI models don't even know what time it is or where they are. They can tell they are being evaluated because it is obvious from the context of the prompts.

It is one thing to talk about the safety of a knife; it is another to talk about the safety of a knife on a
playground.

They indicate that evidence-based policymaking is not limited to data and observations
of realized harms, but also include theoretical prediction and scientific reasoning. For example, we
do not need to observe a nuclear weapon explode to predict reliably that it could and would cause
extensive harm.

I'm getting the impression this insightful rhetoric means evidence based policymaking is not actually the goal.

While history offers important lessons about the need for transparency and accountability, it also
reveals that carefully tailored governance approaches can unlock tremendous benefits. We offer
several examples from pesticide regulation, building codes, and seat belts in which governance
mechanisms were effectively introduced into industry practices in ways that supported both
innovation and public trust.

Or you could pursue evidence-based policymaking that is actually specific to the issue at hand.

Comment Re:Activists are actively dangerous to FOSS projec (Score 1) 239

Ok, then we are not free. I can't go murder my neighbor, therefore I am not free. Done, let's move onto the actual issue.

Nobody is suggesting a freedom to murder your neighbor exists in the first instance, nor does this appear to have any relevance to the topic at hand.

And my point is that this wasn't just "somebody"; this was the CEO of an 8-figure company, whose words and actions have what we call a disproportionate effect on other people. Every employee, big or small, has some constraints from their job on their personal life; it's a question of where those lines are, and for a CEO they are in far different positions.

What is the relevance of possessing "disproportionate" influence? Why does it matter if someone has a bigger mouth, is more connected, or has more money than someone else? Nobody is saying you can't advocate for the status quo - defend intolerance, cancellation, etc. if that is what you want to do. My suggestion is that this sort of intolerance is counterproductive.

When the marketplace of ideas is operated by the mob I would not expect it to perform to the benefit of all participants. Rather I would expect it to function to enrich the mob.

Do you see anything wrong with this? After all you have the "freedom" to go find another line of work...right?

https://www.reuters.com/world/...

To use the same defense here for a median worker versus the CEO is absolutely silly. The law should and does treat their expression as protected regardless, but does society? No, and it shouldn't. This man was fired, not tortured, not jailed, not even fined. Just fired. Unless we think his freedom is a right to keep that specific job, what are we talking about here?

Why? What's the difference? If it is OK to fire a CEO for their political beliefs, why can't you fire a mid-level bean counter for theirs? What is the limiting principle?

Freedom of expression means freedom from consequences of expressing your thoughts and ideas. If it is ok to fire/cancel people for having the wrong beliefs then they were never free to express theirs in the first place.

Comment Re:Activists are actively dangerous to FOSS projec (Score 1) 239

If you don't understand the underlying point of that and think it's gibberish, then I can't help you, sorry.
"Free society" is not a free-for-all; we do in fact live in a society.

There is no underlying point to your statement. It is unfalsifiable and as such communicates nothing. Neither is anyone speaking of anarchy.

When someone has a freedom such as the freedom of expression, the real-world extent of that freedom is not merely controlled by the state's legal regime. It is controlled by members of society tolerating that expression even if they vehemently disagree with it. Freedom to do something IS fundamentally freedom from the consequences imposed by others.

You can't say I have the freedom to listen to music in Afghanistan and then at the same time assert that, by the way, I am not free from the consequences (e.g. flogging and/or execution). No, actually it means I don't have that freedom in the first place.

Those who defend firing people because they disagree with their politics and political causes are not actually for freedom of expression; they are for denying freedom of expression to those they disagree with.

Comment Re:Activists are actively dangerous to FOSS projec (Score 2) 239

Absolutely, and that still applies here, but there are legal laws and then there are "societal laws": you are free to yell slurs on the street corner; you won't go to jail, but you can, probably will, and should be shunned. Eich still has his money, still, I presume, has his family; he is walking a free man, as it were.

Freedom is by no means constrained to a state's legal regime. For members of society to effectively have freedom in the real world, others must be willing to tolerate speech and ideas they don't agree with.

Intolerance is incompatible with a free society.

No, in fact intolerance is quite critical and valuable to a free society.

The opposite is true. Intolerance promotes the aggregation of power and impedes the formation and maintenance of legitimacy from consent.

For example, we need to be intolerant of those who mean to take those freedoms away; we cannot tolerate that.

Yes we sure as heck can. I can tolerate those who speak against me. I can tolerate those who labor to build consensus against me or seek to change the states legal regime to subvert my freedoms. I can even tolerate those who vehemently seek to outlaw freedom of expression itself. It requires having a voice and not being a coward.

As Jefferson put it "If there be any among us who would wish to dissolve this Union or to change its republican form, let them stand undisturbed as monuments of the safety with which error of opinion may be tolerated where reason is left free to combat it."

The freedom you propose here is one where you are in fact separated from society, which you are free to do!

This is nonsense; nobody is speaking of separation from society. Freedom is fundamentally about freedom from consequences imposed by other people. The less tolerant the members of society, the less freedom individuals have to express their thoughts and ideas.

Comment Re:Activists are actively dangerous to FOSS projec (Score 2) 239

Nobody is putting him in jail here, again, you do not have freedom from consequences of your speech. That's silly.

How is this any different than saying "You are free to do whatever you want but you are not free from the consequences" ?

What's silly is repeatedly making pointless, meaningless, unfalsifiable statements in which the concept of freedom itself is rendered entirely meaningless.

Freedom from imposed consequences is the whole point of freedom.
Freedom is a concept that is by no means limited to a state's legal regime.

This isn't how he talks to his kids and wife behind closed doors; this is him spending the oodles of excess luxury cash he has from his lofty position to do activism. And he saw consequences because it was stupid for him to take a position at all, imo.

Intolerance is incompatible with a free society.

Comment Re:Activists are actively dangerous to FOSS projec (Score 0) 239

This cuts both ways: if your CEO is taking activist positions, then that reflects on the company he heads, being its face and name. Activism begets more activism. Eich is free as an American citizen to donate to whatever causes he supports, but freedom of speech is not freedom from consequences.

Saying freedom is not freedom from consequences is unfalsifiable gibberish that doesn't mean anything.

Comment Re:Firefox is great, Mozilla is flaky (Score 4, Informative) 239

Mozilla managed your certificate store, making sure you can't mark as trusted any certs you trust

I've been using the same internal CA for well over a decade and have never had any issues with Firefox. If you look in the details, the system does make a distinction between internal and externally imported CAs, yet this has no impact on accepting sites using custom CAs as trusted.

and can't mark CAs untrusted for very long as updates silently override your changes.
I'm not allowed to trust my own CA and am forced to trust CAs run by the chinese and russian governments. No thanks.

I suspect this is more likely old roots expiring out, or a corrupt profile, or something along those lines.

Their user settings policies are frequently ignored or reset in updates.
Using system defined DNS isn't the default and keeps reverting to their own servers.

I set network.trr.mode to 5 the very day the Cloudflare DNS nonsense was pushed out in an update, and to this day it is still 5 and has NEVER once changed. You can also just turn it off in the UI.
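For anyone who wants to pin it down rather than trust the UI, a user.js file dropped in the Firefox profile directory re-applies prefs at every startup. A minimal sketch (network.trr.mode is the real about:config key; the mode values in the comment reflect my understanding of how Firefox interprets them):

```
// user.js - place in the Firefox profile directory; read at every startup.
// network.trr.mode values:
//   0 = default (DoH off unless enabled by rollout)
//   2 = DoH first, fall back to system DNS
//   3 = DoH only, no fallback
//   5 = DoH explicitly disabled by user choice
user_pref("network.trr.mode", 5);
```

Mode 5 is the "off by explicit choice" setting, which is why it survives Mozilla's rollout logic where mode 0 might not.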

Not just mobile users, they shun business users almost entirely. That despite there being an enterprise version.

Here I agree, lack of support for client certs on the android platform is a huge deal breaker for enterprises.

Chrome is heading down its own dark path too which is a very bad thing, but that doesn't excuse mozilla for not even trying.

Chrome never needed to head down any dark path because it was born there. Firefox is full of bullshit too, but unlike Chrome's, at least it can be disabled.

Comment Re:Good luck to them (Score 1) 88

The point about work eligibility in general is true, but the reasoning behind why the work wasn't eligible for copyright is the key to my point. The judges ruled that the works are not copyrightable, as computers are incapable of setting creative direction themselves; this was the part of the ruling explaining why only humans are eligible for copyright.

This is a misunderstanding of the law. The reasoning has absolutely nothing to do with the capabilities of machines. It has everything to do with the requirement for substantive human effort explicitly enshrined within the text of copyright law.

It legally follows that if something is incapable of setting creative direction then something can't be considered to be the infringing party in a copyright dispute.

A copying machine is not creative yet it is perfectly capable of infringing copyright.

False; someone needs to do the violating. That much came out of the Naruto v. Slater decision. If a human can't be held accountable, then there's no standing for filing a copyright violation claim. The end result follows that the person who commits the copyright infringement is the person who wrote the infringing prompt and then accepted and shared the output of the tool.

The violator is the person producing fixed derivatives of a copyrighted work, e.g. the one operating the AI or the copying machine.

Comment Re:A human (Score 1) 88

Because a machine is not a human. This is a special legal exception given to humans. Machines do not qualify. Machines have no rights.

You've made this claim repeatedly yet when asked you've been unable to objectively support it with credible objective evidence.

Are you really too dumb or too deep in delusion to see that?

Hmm: derisive commentary, or citing relevant legal text or legal decisions that support your position. One of these options is constructive while the other is not.

Comment Re:Good luck to them (Score 1) 88

And if you think about this, the one precedent we've set so far is that AI images aren't copyrightable. If they aren't copyrightable, how can they be infringing on other works? That would be legally inconsistent.

The issue of whether or not a work is a derivative of a copyrighted work is a completely separate legal issue from whether or not a work is eligible for copyright.

Copyrighted works explicitly require a human to labor in a substantial way on a work's creation. Violating copyright requires no such effort.
