
Comment Re:Seems like Airbus's 737 Max (Score 1) 33

Sounds like it's more likely an issue with the engines. Engine oil is getting into the cabin air, which is outside air that comes into the engine, gets heated, and then has a small amount siphoned off to the cabin. The leak is in the engine somewhere.

There will be filters for that air, and it sounds like airlines have been pressuring Airbus to reduce the maintenance on them, which now and then means they fail to stop the vaporized oil getting through.

Simply increasing the maintenance should be enough to resolve the issue.

Comment If you're talking about Charlie Kirk (Score 1) 71

He was killed by MAGA. The kid wasn't attacked by Charlie Kirk. The kid was radicalized by right-wingers in right-wing spaces. The memes the kid shared are pipeline memes. They're little bits of nastiness that look apolitical but are designed to lure disaffected young men into right-wing extremism.

It's part of a cult called groyper, and for the sake of your own mental stability I recommend you don't look that up.

What we have with Charlie Kirk is something called stochastic terrorism. It's terrorism created by throwing out a constant stream of undirected incitement to violence.

You never actually call for violence with stochastic terrorism; you just constantly hint at it.

There is usually a target, but because you never directly go after that target, what can happen (and in Charlie Kirk's case did happen) is that when one of the mentally ill people you are trying to trigger into terrorism finally pops off, that person can go after anyone.

The right wing would normally let this go once they figured out that the kid wasn't a minority they could attack, but they are hoping to trigger more stochastic terrorism so they can do a crackdown and maybe install Trump as president for life after suspending elections.

The problem the right wing is having is that the centrists and the left wing just won't do violence. It's just not in us anymore, if it ever was.

So what they are doing is trying to get their own side to commit random acts of violence because we won't.

For anyone who doesn't have hundreds of millions, if not billions, of dollars to buy security, that is of course incredibly dangerous: by covering this shooter so much they are encouraging copycats, and those copycats are likely to go after right-wing personalities because that's where the attention is. But these people get paid a lot of money to create violence, so they're willing to take the risk.

Comment It will of course still audit regular taxpayers (Score 1) 19

The Republican party has consistently cut enforcement against the ultra-rich elite but made it a point to make sure there was still enforcement for regular slobs like you and me.

It's like the old saying goes: fascism requires an in-group that the law protects but does not bind, and an out-group that the law binds but does not protect.

Lots of people who are in the out-group seem to be under the mistaken impression that they are in the in-group.
Privacy

Google Releases VaultGemma, Its First Privacy-Preserving LLM 7

An anonymous reader quotes a report from Ars Technica: The companies seeking to build larger AI models have been increasingly stymied by a lack of high-quality training data. As tech firms scour the web for more data to feed their models, they could increasingly rely on potentially sensitive user data. A team at Google Research is exploring new techniques to make the resulting large language models (LLMs) less likely to 'memorize' any of that content. LLMs have non-deterministic outputs, meaning you can't exactly predict what they'll say. While the output varies even for identical inputs, models do sometimes regurgitate something from their training data -- if trained with personal data, the output could be a violation of user privacy. In the event copyrighted data makes it into training data (either accidentally or on purpose), its appearance in outputs can cause a different kind of headache for devs. Differential privacy can prevent such memorization by introducing calibrated noise during the training phase.

Adding differential privacy to a model comes with drawbacks in terms of accuracy and compute requirements. No one has bothered to figure out the degree to which that alters the scaling laws of AI models until now. The team worked from the assumption that model performance would be primarily affected by the noise-batch ratio, which compares the volume of randomized noise to the size of the original training data. By running experiments with varying model sizes and noise-batch ratios, the team established a basic understanding of differential privacy scaling laws, which is a balance between the compute budget, privacy budget, and data budget. In short, more noise leads to lower-quality outputs unless offset with a higher compute budget (FLOPs) or data budget (tokens). The paper details the scaling laws for private LLMs, which could help developers find an ideal noise-batch ratio to make a model more private.
The work the team has done here has led to a new Google model called VaultGemma, its first open-weight model trained with differential privacy to minimize memorization risks. It's built on the older Gemma 2 foundation and sized at 1 billion parameters, and the company says it performs comparably to non-private models of similar size.

It's available now from Hugging Face and Kaggle.
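
For readers curious what "introducing calibrated noise during the training phase" looks like in practice, here is a minimal, illustrative Python sketch of DP-SGD-style training on a toy linear model. This is not Google's VaultGemma training code; the data, model, and hyperparameters below are assumptions chosen only to show the two core mechanics the article describes: clipping each example's gradient and adding Gaussian noise whose impact shrinks as the batch grows (the noise-batch ratio).

    # Illustrative DP-SGD-style sketch on a toy linear regression (NumPy only).
    # Not VaultGemma's actual pipeline; all names and values are assumptions.
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy data: y = 3x + small noise
    X = rng.normal(size=(1024, 1))
    y = 3.0 * X[:, 0] + 0.1 * rng.normal(size=1024)

    w = np.zeros(1)          # model weight
    clip_norm = 1.0          # per-example gradient clipping bound C
    noise_multiplier = 1.1   # noise std relative to C
    batch_size = 256
    learning_rate = 0.25

    for step in range(200):
        idx = rng.choice(len(X), size=batch_size, replace=False)
        xb, yb = X[idx], y[idx]

        # Per-example gradients of the squared-error loss w.r.t. w
        preds = xb @ w
        per_example_grads = 2.0 * (preds - yb)[:, None] * xb   # shape (B, 1)

        # 1) Clip each example's gradient to L2 norm <= clip_norm
        norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
        scale = np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
        clipped = per_example_grads * scale

        # 2) Sum, add Gaussian noise calibrated to the clipping bound,
        #    then average over the batch. Larger batches dilute the noise,
        #    which is the noise-batch ratio trade-off described above.
        noise = rng.normal(scale=noise_multiplier * clip_norm,
                           size=per_example_grads.shape[1])
        noisy_mean_grad = (clipped.sum(axis=0) + noise) / batch_size

        w -= learning_rate * noisy_mean_grad

    print(f"learned weight: {w[0]:.3f} (true value 3.0)")

The trade-off the paper formalizes is visible even in this toy: with a fixed noise multiplier, a larger batch shrinks the noise added to the averaged gradient, which is why more compute (bigger batches, more steps) or more data can buy back the quality lost to the privacy noise.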
Privacy

UK's MI5 'Unlawfully' Obtained Data From Former BBC Journalist (theguardian.com) 17

Bruce66423 shares a report from The Guardian: MI5 has conceded it "unlawfully" obtained the communications data of a former BBC journalist, in what was claimed to be an unprecedented admission from the security services. The BBC said it was a "matter of grave concern" that the agency had obtained communications data from the mobile phone of Vincent Kearney, a former BBC Northern Ireland home affairs correspondent. The admission came in a letter to the BBC and to Kearney, in relation to a tribunal examining claims that several reporters in Northern Ireland were subjected to unlawful scrutiny by the police. It related to work carried out by Kearney for a documentary into the independence of the Office of the Police Ombudsman for Northern Ireland (PONI). Kearney is now the northern editor at Irish broadcaster RTE.

In documents submitted to the Investigatory Powers Tribunal (IPT), MI5 conceded it obtained phone data from Kearney on two occasions in 2006 and 2009. Jude Bunting KC, representing Kearney and the BBC, told a hearing on Monday: "The MI5 now confirms publicly that in 2006 and 2009 MI5 obtained communications data in relation to Vincent Kearney." He said the security service accepted it had breached Kearney's rights under article 8 and article 10 of the European convention on human rights. They relate to the right to private correspondence and the right to impart information without interference from public authorities. "This appears to be the first time in any tribunal proceedings in which MI5 publicly accept interference with a journalist's communications data, and also publicly accept that they acted unlawfully in doing so," Bunting said. He claimed the concessions that it accessed the journalist's data represented "serious and sustained illegality on the part of MI5."
Bruce66423 comments: "The good news is that it's come out. The bad news is that it has taken 16 years to do so. The interesting question is whether there will be any meaningful consequences for individuals within MI5; there's a nice charge of 'malfeasance in public office' that can be used to get such individuals into a criminal court. Or will the outcome be like when the CIA hacked the US Senate's computers, lied about it, and nothing happened?"

Comment Re:Either the recordings are still available or no (Score 1) 34

IA is in a bad place right now. Not enough staff, ancient and brittle code base that frequently breaks, very poor connectivity outside parts of the US, and of course huge legal problems due to a combination of bad decisions and apparently ignoring legal advice (if they ever took it).

It's unfortunately very difficult to build an archive like that, but it should be a priority, and it should be located somewhere in Europe.

Businesses

Online Marketplace Fiverr To Lay Off 30% of Workforce In AI Push 29

Fiverr is laying off 250 employees, or about 30% of its workforce, as it restructures to become an "AI-first" company. "We are launching a transformation for Fiverr, to turn Fiverr into an AI-first company that's leaner, faster, with a modern AI-focused tech infrastructure, a smaller team, each with substantially greater productivity, and far fewer management layers," CEO Micha Kaufman said. Reuters reports: While it isn't clear what kinds of jobs will be impacted, Fiverr operates a self-service digital marketplace where freelancers can connect with businesses or individuals requiring digital services like graphic design, editing or programming. Most processes on the platform take place with minimal employee intervention as ordering, delivery and payments are automated.

The company's name comes from most gigs initially starting at $5, but as the business grew, the firm introduced subscription services and raised the bar for service prices. Fiverr said it does not expect the job cuts to materially impact business activities across the marketplace in the near term and plans to reinvest part of the savings in the business.

Comment Re:Politics poisoned your mind (Score 1, Interesting) 19

They have basically gutted the SEC and virtually all regulation enforcement.

Remember it doesn't matter if something is a crime if there's nobody enforcing the law.

Instead of going after scam artists, JD Vance and Donald Trump are now openly attacking left-wing organizations with the full force of the Department of Justice. Their words, not mine.

Under those circumstances scam artists know they can get away with all sorts of crap.

If you have elderly relatives, get ready to have them moving in with you after they have their retirement stolen from them.
AI

OpenAI's First Study On ChatGPT Usage (arstechnica.com) 14

An anonymous reader quotes a report from Ars Technica: Today, OpenAI's Economic Research Team went a long way toward answering the question of how people actually use ChatGPT, on a population level, releasing a first-of-its-kind National Bureau of Economic Research working paper (in association with Harvard economist David Deming) detailing how people end up using ChatGPT across time and tasks. While other research has sought to estimate this kind of usage data using self-reported surveys, this is the first such paper with direct access to OpenAI's internal user data. As such, it gives us an unprecedented direct window into reliable usage stats for what is still the most popular application of LLMs by far. After digging through the dense 65-page paper, here are the seven most interesting and/or surprising findings about how people are using ChatGPT today:

1. ChatGPT is now used by "nearly 10% of the world's adult population," up from 100 million users in early 2024 to over 700 million users in 2025. Daily traffic is about one-fifth of Google's at 2.6 billion GPT messages per day.

2. Long-term users' daily activity has plateaued since June 2025. Almost all recent growth comes from new sign-ups experimenting with ChatGPT, not from established users increasing their usage.

3. 46% of users are aged 18-25, making ChatGPT especially popular among the youngest adult cohort. Factoring in under-18 users (not counted in the study), the majority of ChatGPT users likely weren't alive in the 20th century.

4. At launch in 2022, ChatGPT's user base was roughly 80% male. By late 2025, the balance has shifted: 52.4% of users are now female.

5. In 2024, work vs. personal use was close to even. By mid-2025, 72% of usage is non-work related -- people are using ChatGPT more for personal, creative, and casual needs than for productivity.

6. 28% of all conversations involve writing assistance (emails, edits, translations). For work-related queries, that jumps to 42% overall, and 52% among business/management jobs. Furthermore, the report found that editing and critiquing text is more common than generating text from scratch.

7. 14.9% of work-related usage involves "making decisions and solving problems." This shows people don't just use ChatGPT to do tasks -- they use it as an advisor or co-pilot to help weigh options and guide choices.

Comment This is great news (Score 2, Informative) 19

Now idiots who don't know any better can lose money even faster!

Venture capitalists can afford to lose lots and lots of money and make it all back on one big hit. Small retail investors can't do that.

Furthermore, small retail investors generally aren't sophisticated enough to tell the difference between scam artists and actual businesses. Hell, we just had a run of Chinese scam artists covered by the greatest rap channel on YouTube. One of them managed to pump their value up 60,000% before cashing out with everybody's money.

Once again, this is a Chesterton's fence moment. Don't take down the fence if you don't know why it was put up.

But with the current administration, Jesus fucking Christ, it's open season. If you're a scam artist it's like shooting fish in a barrel. As long as you keep a few million around for a pardon and make sure you do your crimes in a red state so there's a governor to bribe on top of the president, you are probably okay.
