Comment Re:Delusional partisanship (Score 1) 60

It is our business if he expects to bring it up in a political debate and use it as a motivating anecdote. If he can't say why, then it's not a real argument.

Almost everything else he said was a lie as well, in terms of the authorization for domestic spying and military powers. Democrats had plenty of political opportunities to roll back surveillance powers in the Obama era, and they did the opposite.

Oh, but that was just 'representing us'. Funny how whenever leftists win elections it is always 'elections have consequences, this is what EVERYONE WANTED', but when someone in the center like Trump wins, it is 'he's a fascist!'

Just STFU, moron.

Privacy

Google Releases VaultGemma, Its First Privacy-Preserving LLM 2

An anonymous reader quotes a report from Ars Technica: The companies seeking to build larger AI models have been increasingly stymied by a lack of high-quality training data. As tech firms scour the web for more data to feed their models, they could increasingly rely on potentially sensitive user data. A team at Google Research is exploring new techniques to make the resulting large language models (LLMs) less likely to 'memorize' any of that content. LLMs have non-deterministic outputs, meaning you can't exactly predict what they'll say. While the output varies even for identical inputs, models do sometimes regurgitate something from their training data -- if trained with personal data, the output could be a violation of user privacy. In the event copyrighted data makes it into training data (either accidentally or on purpose), its appearance in outputs can cause a different kind of headache for devs. Differential privacy can prevent such memorization by introducing calibrated noise during the training phase.
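The "calibrated noise during the training phase" described above is commonly implemented as DP-SGD: clip each example's gradient to a fixed norm, then add Gaussian noise scaled to that clipping bound before averaging. A minimal sketch of that idea follows; the function name and parameters are illustrative assumptions, not code from VaultGemma or Google's paper.

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """One differentially private gradient update in the DP-SGD style.

    Each per-example gradient is clipped to at most `clip_norm`, the
    clipped gradients are summed, and Gaussian noise with standard
    deviation `noise_multiplier * clip_norm` is added before averaging.
    """
    rng = rng or np.random.default_rng(0)
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down any gradient whose norm exceeds the clipping bound.
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    total = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return (total + noise) / len(per_example_grads)
```

With `noise_multiplier=0` this reduces to ordinary clipped gradient averaging; larger multipliers give a stronger privacy guarantee at the cost of noisier updates, which is exactly the accuracy trade-off discussed below.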

Adding differential privacy to a model comes with drawbacks in terms of accuracy and compute requirements. Until now, no one had quantified the degree to which that alters the scaling laws of AI models. The team worked from the assumption that model performance would be primarily affected by the noise-batch ratio, which compares the volume of randomized noise to the size of the original training data. By running experiments with varying model sizes and noise-batch ratios, the team established a basic understanding of differential privacy scaling laws, which describe a balance between the compute budget, privacy budget, and data budget. In short, more noise leads to lower-quality outputs unless offset with a higher compute budget (FLOPs) or data budget (tokens). The paper details the scaling laws for private LLMs, which could help developers find an ideal noise-batch ratio to make a model more private.
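The noise-batch trade-off described above can be made concrete with a toy calculation. The exact definition in Google's paper may differ, so treat this as a sketch of the intuition: a fixed amount of per-step privacy noise is effectively diluted by a larger batch.

```python
def noise_batch_ratio(noise_multiplier: float, clip_norm: float, batch_size: int) -> float:
    """Illustrative noise-batch ratio: per-step noise scale relative to batch size.

    Holding the privacy noise fixed, a larger batch lowers the ratio,
    which (per the scaling laws described above) should improve output
    quality at the same privacy level.
    """
    return (noise_multiplier * clip_norm) / batch_size
```

For example, going from a batch of 128 to a batch of 1024 at the same noise level cuts this ratio by 8x, which is why the paper frames the problem as balancing compute budget (bigger batches cost more FLOPs) against privacy and data budgets.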
This work has led to a new Google model called VaultGemma, the company's first open-weight model trained with differential privacy to minimize memorization risks. It's built on the older Gemma 2 foundation and sized at 1 billion parameters, and the company says it performs comparably to non-private models of similar size.

It's available now from Hugging Face and Kaggle.

Comment Re: Why 'better algorithms'? (Score 1) 101

That idea seems naive, but I very much agree. So much has changed since then, and I feel rather naive myself saying that Instagram, as a refuge for photographers first and then artists, seemed rather benevolent and non-evil before Facebook bought it and drowned it in the bathtub.

You could follow the content you wanted without interference; voting, and therefore popularity, came only from people. It seemed to work.

F*ck KimK and the beautiful people they forced to the top.

Feed Google News Sci Tech: The Fed is likely to cut rates for the first time this year. Here's what that means for credit cards and housing costs - Business Insider (google.com)

Feed Google News Sci Tech: Apple Preparing Four New 2nm Chipsets In 2026, With At Least Two Of Them Adopting An Advanced Packaging Technology - Wccftech (google.com)

Feed Google News Sci Tech: Robert Redford, actor, director, environmentalist, dead at 89 - CNN (google.com)
