
Comment Re:Disclosure Timing Drama Part 2.0 (Score 1) 21

Now that anyone can throw the kernel source code, and any publicly submitted patches, at AI, the idea that you can just keep quiet about a vulnerability until everyone gets around to patching it is questionable at best. The chances of the same flaw being discovered in parallel have increased massively.

Big companies that run millions of servers can at least detect when vulnerabilities are being exploited in the wild, and delay disclosure until that point or until the patch is widely implemented. Not so easy for open source developers.

Comment Re: Year of the Patch (Score 2) 21

It's the realization that the old "many eyes make all bugs shallow" thing was never really true. Once code is working, people tend to ignore it. Only NSA types were doing proper security audits. Once AI tools became available to find bugs, this was inevitable.

Same thing happened with Firefox. Turn AI on it, and it finds hundreds of bugs, many of them security related. The fact that only 3 people use Firefox now is probably all that saved it from being exploited earlier.

Comment Re:Amazing (Score 1) 38

To be fair they can't realistically test all the hardware configurations out there. They could have systems with AMD and Nvidia GPUs, but how many different generations, how many different configurations of GPU architecture, memory, power management? How many different brand SSDs, going back how many years?

Then you have the interaction between the integrated Intel GPU and the discrete Nvidia GPU, when a particular chipset is used. The number of possible configurations grows exponentially every year, and people whine if you deprecate 8 year old hardware support.

At the scale they are operating, the best they can do is test the most common configurations, and some known problematic ones, and then react to issues as they appear.

Nobody else is doing better. There isn't some guy testing open source Linux drivers on 100 different configurations for every release. Apple has very tightly controlled hardware so realistically can test every configuration, and still occasionally screws it up. Google introduces bugs affecting its Pixel phones when updating the OS.

If you can think of a better way of doing this, let us know.

Comment Re:Blue Screens (Score 1) 38

Microsoft has been steadily fixing this, but it's taken decades.

Vista started moving some drivers out of the kernel, and provided crash recovery for the ones that couldn't be extracted. Subsequent versions pushed it even further, to the point where Windows 11 runs as little as possible in the kernel without sacrificing massive amounts of performance, and most of what is still in there is provided by Microsoft. Even things like graphics drivers are mostly outside the kernel now.

Comment Re:As the late Grumpy Cat would've said (Score 5, Interesting) 20

The idea that Musk is worried about other people abusing AI is... Well, he seems to believe his own hype, put it that way.

This is the guy who pushed Tesla to release "full self driving" long before it was ready, killing people. More than a decade later he has been forced to scale back its capabilities significantly, and seems to have abandoned all the people who paid him for the feature all those years ago because their cars don't have the hardware for it.

Comment Re:Captains of industry (Score 1) 20

AI has some interesting and useful capabilities that put it way beyond a mere spell checker. It also has some major limitations.

I've been using it to get started researching new topics that I'm not familiar with, but I have to make an effort to verify what it is saying and only use it as a starting point. It will often design solutions without being prompted to, or try to get you to ask it to do more work for you, but you have to ignore that and make sure you actually understand the subject.

I also use it for reviewing schematics and code. Like rule-based tools such as the ones built into CAD software, or cppcheck, it often flags issues that aren't really issues, or that stem from it not knowing things about the wider system. It does, however, come up with some useful suggestions, so like those rule-based tools it is valuable to someone who understands how to interpret and evaluate what it is saying.

Comment Re: It's all about definitions. (Score 1) 165

Problem is that it's very hard to produce a unique exam every year that is exactly as difficult as all the previous years.

That's why they usually adjust the grade boundaries so that some fixed percentage get each grade, based on the expected bell curve you would see when testing thousands of students. It starts to break down when only testing a single class though, as it may just be that the quality of the teaching was better compared to another class, or the students were unusually good/bad that year.
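That boundary adjustment can be sketched as simple percentile cutoffs. This is a hypothetical illustration, not any exam board's actual method; the grades and quotas here are made up.

```python
# Norm-referenced grading sketch: pick mark boundaries so a fixed share of
# candidates lands in each grade, regardless of how hard the paper was.
def grade_boundaries(scores, quotas):
    """quotas: list of (grade, fraction of candidates), top grade first."""
    ranked = sorted(scores, reverse=True)
    n = len(ranked)
    boundaries = {}
    cumulative = 0.0
    for grade, fraction in quotas:
        cumulative += fraction
        # Lowest mark that still falls inside this grade's cumulative share.
        cutoff_index = min(int(round(cumulative * n)) - 1, n - 1)
        boundaries[grade] = ranked[cutoff_index]
    return boundaries

scores = [82, 74, 71, 68, 65, 63, 60, 55, 51, 40]
quotas = [("A", 0.2), ("B", 0.3), ("C", 0.3)]
print(grade_boundaries(scores, quotas))  # {'A': 74, 'B': 65, 'C': 55}
```

With thousands of candidates these cutoffs track the bell curve well; with a single class of 30, one unusually good cohort or teacher shifts every boundary, which is exactly the breakdown described above.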

Comment Re: It's all about definitions. (Score 2) 165

We used to take advantage of that back at school. There were two different groups, an upper and lower group, and the lower group took a less challenging exam that was more suited to C and B level students. I think the idea was not to make them feel lost with the more complex stuff, which could get them into a doom loop of thinking they were just no good at the subject.

Of course, if you were good at the subject, then being in that lower group pushed you to the upper end of the bell curve and theoretically made it easier to get an A. The exam board swore that it didn't, but they also claimed a lot of other obvious BS about fairness and quality.

Comment Re:Stupid; but cynical. (Score 1) 25

As best I can tell, the target market is the ignorant and/or confused, even by the standards of openclaw enthusiasts.

If you want 'local', those specs are going to be a fairly harsh limit; I suspect it is not for nothing that they avoid anything that even resembles a benchmark or a performance claim. And if you aren't doing the bot stuff locally, the fact that the hardware is sitting on your desk gets you basically nothing in security or privacy vs. having an EC2 nano instance or whatever cheap VPS spilling its guts to Sam Altman on your behalf.

Depending on who they are rebadging, this thing might even be a perfectly fine low-end mini PC, if you want one of those; people have been making them for years with whatever why-care-more CPU occupies the bottom of Intel's range, and they can be entirely suitable if you just need a generalist appliance and don't really want to play embedded ARM just to get the thing running. But it is absolutely being insinuated that it's suitable for things it is not, and that it offers benefits it won't in the expected configuration.

Comment Re:Taxpayer-funded should always mean Open Source (Score 1) 64

It's not that simple, because some of the data in there might be proprietary. Sometimes manufacturers of electronic components will only supply parts, or their documentation, if you sign an NDA. Accidentally releasing even things like schematic symbols and their associated notes could cause problems for CERN.

I had a quick look because this is of great interest to me, and it seems like they don't have any 3D models. My guess would be that it's for licencing reasons, because even though the models are often freely provided on manufacturer websites, that doesn't mean they are free to distribute. Presumably the footprints and schematic symbols were all made by CERN.
