Comment Re:Maybe (Score 1) 104

The upstream Linux kernel doesn't differentiate between security bugs and "normal" bug fixes. So the new kernel.org CNA just assigns CVEs to all fixes. They don't score them.

Look at the numbers from the whitepaper:

"In March 2024 there were 270 new CVEs created for the stable Linux kernel. So far in April 2024 there are 342 new CVEs:"

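For context on those numbers: the kernel.org CNA publishes its CVE records as JSON files in a public git repo (https://git.kernel.org/pub/scm/linux/security/vulns.git), so per-month tallies like the ones in the whitepaper are easy to reproduce. Here's a minimal sketch in Python, assuming a local checkout of that repo; the cve/published/<year>/ layout is my assumption about the tree, and datePublished is the standard CVE Record Format field:

```python
#!/usr/bin/env python3
"""Tally kernel CVEs by publication month.

A sketch, not an official tool: assumes a local checkout of the
kernel.org CNA announcement repo, with published records stored as
CVE Record Format JSON under cve/published/<year>/.
"""
import json
from collections import Counter
from pathlib import Path

def cves_per_month(repo_root: str) -> Counter:
    counts = Counter()
    for record in Path(repo_root).glob("cve/published/*/*.json"):
        meta = json.loads(record.read_text())["cveMetadata"]
        counts[meta["datePublished"][:7]] += 1  # key by "YYYY-MM"
    return counts

if __name__ == "__main__":
    for month, n in sorted(cves_per_month("vulns").items()):
        print(month, n)
```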
Comment Re:Yeah (Score 1) 104

Yes! That's exactly the point. Trying to curate and select patches for a "frozen" kernel fails due to the firehose of fixes going in upstream.

And in the kernel, many of these could be security bugs. No one is evaluating that; there are simply too many fixes in such a complex code base to check.
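To get a feel for the size of that firehose, you can count the commits between adjacent stable point releases; by stable-kernel rules, everything in a point release is supposed to be a fix of some kind. A minimal sketch, assuming a local clone of the linux-stable tree; the tags shown are just an example pair, substitute any two adjacent releases:

```python
#!/usr/bin/env python3
"""Count commits between two stable point releases.

A sketch, assuming a local clone of linux-stable; every commit in a
stable point release is, by policy, some kind of fix.
"""
import subprocess

def commits_between(repo: str, old: str, new: str) -> int:
    out = subprocess.run(
        ["git", "-C", repo, "rev-list", "--count", f"{old}..{new}"],
        capture_output=True, text=True, check=True,
    )
    return int(out.stdout.strip())

if __name__ == "__main__":
    # Example tags only; substitute any adjacent pair of stable releases.
    print(commits_between("linux-stable", "v6.1.80", "v6.1.81"))
```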

Comment Re:Maybe (Score 1) 104

You're missing something.

New bugs are discovered upstream, but the vendor kernel maintainers either aren't tracking them or are being discouraged from backporting the corresponding fixes into the "frozen" kernel.

We even discovered one case where a RHEL maintainer fixed a bug upstream but then neglected to apply the fix to the vulnerable vendor kernel. So it isn't as if they didn't know about the bug. Maybe they just didn't check whether the vendor kernel was vulnerable.

I'm guessing management policy discouraged such things. It's easier to just ignore such bugs if customers haven't noticed.
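The check that apparently got skipped is mechanical, which is what makes it frustrating. A minimal sketch of one way to do it, assuming a repo that contains both the upstream history and the vendor branch, and leaning on the common backport convention of quoting the upstream SHA in the cherry-picked commit message; the branch name and SHA below are hypothetical placeholders:

```python
#!/usr/bin/env python3
"""Check whether an upstream fix ever reached a vendor branch.

A sketch: assumes the repo contains both the upstream history and the
vendor branch, and that backports cite the upstream SHA in their
commit messages (the usual "commit <sha> upstream" convention).
"""
import subprocess

def fix_present(repo: str, branch: str, upstream_sha: str) -> bool:
    # Case 1: the upstream commit itself is an ancestor of the branch.
    direct = subprocess.run(
        ["git", "-C", repo, "merge-base", "--is-ancestor", upstream_sha, branch],
        capture_output=True,
    ).returncode == 0
    # Case 2: a backport cherry-pick cites the upstream SHA in its message.
    cited = subprocess.run(
        ["git", "-C", repo, "log", "--oneline", f"--grep={upstream_sha}", branch],
        capture_output=True, text=True,
    ).stdout.strip() != ""
    return direct or cited

if __name__ == "__main__":
    # Placeholder branch and SHA, purely for illustration.
    print(fix_present("linux", "vendor/frozen-branch", "deadbeefcafe"))
```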

Submission + - Why a 'frozen' distribution Linux kernel isn't the safest choice for security (zdnet.com) 1

Jeremy Allison - Sam writes: Cracks in the Ice: Why a 'frozen' distribution Linux kernel isn't the safest choice for security

https://ciq.com/blog/why-a-fro...

This is an executive summary of research that my colleagues Ronnie Sahlberg and Jonathan Maple did, published as a whitepaper with all the numeric details here:

https://ciq.com/whitepaper/ven...

Steven Vaughan-Nichols is covering the release of this data here:

https://www.zdnet.com/article/...

Comment Linus Torvalds and I both enjoyed the QL (Score 1) 124

(A comment I also added to The Register article, but I like /. too :-))

I offered to go with Linus to the São Paulo zoo once to help him avoid having to meet Lula, the president of Brazil, which he really didn't want to do :-). I did so only on the condition that he do an interview with me. I was fed up with people asking Linus about Linux, so I only asked him questions about the Sinclair QL, which we both enjoyed. The interview is still available on YouTube here:

https://www.youtube.com/watch?...

AI

'What Kind of Bubble Is AI?' (locusmag.com) 100

"Of course AI is a bubble," argues tech activist/blogger/science fiction author Cory Doctorow.

The real question is what happens when it bursts?

Doctorow examines history — the "irrational exuberance" of the dotcom bubble, 2008's financial derivatives, NFTs, and even cryptocurrency. ("A few programmers were trained in Rust... but otherwise, the residue from crypto is a lot of bad digital art and worse Austrian economics.") So would an AI bubble leave anything useful behind? The largest of these models are incredibly expensive. They're expensive to make, with billions spent acquiring training data, labelling it, and running it through massive computing arrays to turn it into models. Even more important, these models are expensive to run.... Do the potential paying customers for these large models add up to enough money to keep the servers on? That's the 13 trillion dollar question, and the answer is the difference between WorldCom and Enron, or dotcoms and cryptocurrency. Though I don't have a certain answer to this question, I am skeptical.

AI decision support is potentially valuable to practitioners. Accountants might value an AI tool's ability to draft a tax return. Radiologists might value the AI's guess about whether an X-ray suggests a cancerous mass. But with AIs' tendency to "hallucinate" and confabulate, there's an increasing recognition that these AI judgments require a "human in the loop" to carefully review their judgments... There just aren't that many customers for a product that makes their own high-stakes projects better, but more expensive. There are many low-stakes applications — say, selling kids access to a cheap subscription that generates pictures of their RPG characters in action — but they don't pay much. The universe of low-stakes, high-dollar applications for AI is so small that I can't think of anything that belongs in it.

There are some promising avenues, like "federated learning," that hypothetically combine a lot of commodity consumer hardware to replicate some of the features of those big, capital-intensive models from the bubble's beneficiaries. It may be that — as with the interregnum after the dotcom bust — AI practitioners will use their all-expenses-paid education in PyTorch and TensorFlow (AI's answer to Perl and Python) to push the limits on federated learning and small-scale AI models to new places, driven by playfulness, scientific curiosity, and a desire to solve real problems. There will also be a lot more people who understand statistical analysis at scale and how to wrangle large amounts of data. There will be a lot of people who know PyTorch and TensorFlow, too — both of these are "open source" projects, but are effectively controlled by Meta and Google, respectively. Perhaps they'll be wrestled away from their corporate owners, forked and made more broadly applicable, after those corporate behemoths move on from their money-losing Big AI bets.

Our policymakers are putting a lot of energy into thinking about what they'll do if the AI bubble doesn't pop — wrangling about "AI ethics" and "AI safety." But — as with all the previous tech bubbles — very few people are talking about what we'll be able to salvage when the bubble is over.
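For readers wondering what "federated learning" actually looks like: the basic move, usually called federated averaging (FedAvg), has each machine train briefly on its own local data and ship only model parameters back to be averaged, so raw data never pools anywhere. A toy sketch, with linear regression standing in for a real model; everything here is illustrative and not from the article:

```python
#!/usr/bin/env python3
"""Toy federated averaging (FedAvg) on linear regression.

Illustrative only: each "client" keeps its data local and runs a
gradient step; the server only averages the returned weights.
"""
import numpy as np

rng = np.random.default_rng(0)

def local_step(weights, X, y, lr=0.1):
    """One gradient-descent step on one client's private data."""
    grad = 2 * X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

# Five clients, each holding a private shard that never leaves "the device".
true_w = np.array([3.0, -2.0])
clients = []
for _ in range(5):
    X = rng.normal(size=(40, 2))
    clients.append((X, X @ true_w + 0.1 * rng.normal(size=40)))

global_w = np.zeros(2)
for _ in range(50):  # communication rounds
    local_ws = [local_step(global_w, X, y) for X, y in clients]
    global_w = np.mean(local_ws, axis=0)  # the server-side average

print("learned:", global_w.round(2), "true:", true_w)
```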

Thanks to long-time Slashdot reader mspohr for sharing the article.
AI

Meta's New Rule: If Your Political Ad Uses AI Trickery, You Must Confess (techxplore.com) 110

Press2ToContinue writes: Starting next year, Meta will play the role of a strict schoolteacher for political ads, making them fess up if they've used AI to tweak images or sounds. This new 'honesty policy' will kick in worldwide on Facebook and Instagram, aiming to prevent voters from being duped by digitally doctored candidates or made-up events. Meanwhile, Microsoft is jumping on the integrity bandwagon, rolling out anti-tampering tech and a support squad to shield elections from AI mischief.
