That's actually a very insightful interpretation.
Actually, at the largest scales, approaching the total volume of the observable universe, the universe is quite homogeneous. As I recall, the variance is on the order of 1 in 10,000. This is why inflationary cosmology was developed: to explain why the universe lacks the lumpiness we would expect if the Big Bang alone were responsible.
Now that anyone can throw the kernel source code, and any publicly submitted patches, at AI, the idea that you can just keep quiet about a vulnerability until everyone gets around to patching it is questionable at best. The chances of the same flaw being discovered in parallel have increased massively.
Big companies that run millions of servers can at least detect when vulnerabilities are being exploited in the wild, and delay disclosure until that point or until the patch is widely implemented. Not so easy for open source developers.
It's the realization that the old "many eyes make all bugs shallow" thing was never really true. Once code is working, people tend to ignore it. Only NSA types were doing proper security audits. Once AI tools became available to find bugs, this was inevitable.
Same thing happened with Firefox. Turned AI on it, found hundreds of bugs, many of them security related. The fact that only 3 people use Firefox now is probably all that saved it from being exploited earlier.
To be fair, they can't realistically test all the hardware configurations out there. They could have systems with AMD and Nvidia GPUs, but how many different generations? How many different combinations of GPU architecture, memory, and power management? How many different brands of SSD, going back how many years?
Then you have the interaction between the integrated Intel GPU and the discrete Nvidia GPU when a particular chipset is used. The number of possible configurations grows exponentially every year, and people whine if you deprecate support for 8-year-old hardware.
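As a back-of-envelope sketch (the component counts below are made up, not anyone's actual support matrix), even a handful of independent choices multiplies out into tens of thousands of combinations:

    from math import prod

    # Hypothetical numbers of variants per component, purely to show the multiplication.
    variants = {
        "CPU generation": 6,
        "discrete GPU": 8,
        "integrated GPU": 4,
        "chipset": 5,
        "SSD model": 12,
        "power profile": 3,
    }

    total = prod(variants.values())
    print(f"{total} distinct configurations to test")   # 6*8*4*5*12*3 = 34,560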
At the scale they are operating, the best they can do is test the most common configurations, and some known problematic ones, and then react to issues as they appear.
Nobody else is doing better. There isn't some guy testing open source Linux drivers on 100 different configurations for every release. Apple has very tightly controlled hardware, so it can realistically test every configuration, and it still occasionally screws up. Google introduces bugs affecting its own Pixel phones when updating the OS.
If you can think of a better way of doing this, let us know.
Microsoft has been steadily fixing this, but it's taken decades.
Vista started moving some drivers out of the kernel, and provided crash recovery for the ones that couldn't be extracted. Subsequent versions pushed it further, to the point where in Windows 11 as little as possible runs in the kernel without sacrificing massive amounts of performance, and most of what is still in there is provided by Microsoft. Even things like graphics drivers are mostly outside the kernel now.
I use calculators all the time, but I'm glad that I did have to do maths by hand at school. It gives me the ability to estimate, or at least have a feel for, what result to expect, which helps me spot errors made operating the calculator or in the numbers and assumptions I'm plugging into it.
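A trivial example of the kind of sanity check I mean (numbers picked purely for illustration):

    estimate = 4 * 500     # round 3.9 to 4 and 512 to 500 in your head: about 2000
    exact    = 3.9 * 512   # 1996.8 -- same ballpark as the estimate, so it's plausible
    slipped  = 39 * 512    # 19968  -- an order of magnitude off, so a keystroke clearly slipped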
The idea that Musk is worried about other people abusing AI is... Well, he seems to believe his own hype, put it that way.
This is the guy who pushed Tesla to release "full self driving" long before it was ready, killing people. More than a decade later he has been forced to scale back its capabilities significantly, and seems to have abandoned all the people who paid him for the feature all those years ago because their cars don't have the hardware for it.
AI has some interesting and useful capabilities that put it way beyond a mere spell checker. It also has some major limitations.
I've been using it to get started when researching new topics I'm not familiar with, but I have to make an effort to verify what it says and use it only as a starting point. It will often design solutions without being prompted to, or try to get you to ask it to do more work for you, but you have to ignore that and make sure you understand the subject yourself.
I also use it for reviewing schematics and code. Like rule-based tools such as the ones built into CAD software, or cppcheck, it often finds issues that aren't really issues, or that are the result of it not knowing things about the wider system. It does sometimes come up with useful suggestions, though, so like those rule-based tools it is valuable to someone who understands how to interpret and evaluate what it is saying.
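A classic instance of the "doesn't know the wider system" false positive, sketched in Python just to keep it short (the module name is made up): a rule-based linter like flake8 will flag an import as unused (rule F401) even when the import exists purely for its side effect of registering something elsewhere in the application.

    # Hypothetical plugin module that registers itself with the application's
    # plugin registry as a side effect of being imported.
    import myapp.plugins.audit  # noqa: F401  (flake8 would otherwise flag this as unused)

    def main() -> None:
        # By the time we get here the audit plugin is already hooked in,
        # which a file-local, rule-based check has no way of knowing.
        ...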
Problem is that it's very hard to produce a unique exam every year that is exactly as difficult as all the previous years.
That's why they usually adjust the grade boundaries so that some fixed percentage get each grade, based on the expected bell curve you would see when testing thousands of students. It starts to break down when only testing a single class though, as it may just be that the quality of the teaching was better compared to another class, or the students were unusually good/bad that year.
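Roughly what that adjustment looks like (a sketch with made-up grade shares and scores, not any exam board's actual procedure):

    # Hypothetical norm-referenced grading: fix the share of candidates per grade,
    # then derive the mark boundaries from the observed score distribution.
    GRADE_SHARES = {"A": 0.20, "B": 0.30, "C": 0.30, "D": 0.20}

    def grade_boundaries(scores):
        ranked = sorted(scores, reverse=True)
        boundaries, start = {}, 0
        for grade, share in GRADE_SHARES.items():
            count = max(1, round(share * len(ranked)))
            band = ranked[start:start + count]
            boundaries[grade] = band[-1]   # lowest mark that still earns this grade
            start += count
        return boundaries

    # With thousands of scores the cut-offs are stable; with one small class they jump around.
    print(grade_boundaries([41, 55, 62, 68, 71, 74, 77, 80, 83, 90]))
    # -> {'A': 83, 'B': 74, 'C': 62, 'D': 41}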
We used to take advantage of that back at school. There were two different groups, an upper and lower group, and the lower group took a less challenging exam that was more suited to C and B level students. I think the idea was not to make them feel lost with the more complex stuff, which could get them into a doom loop of thinking they were just no good at the subject.
Of course if you were good at the subject then being in that lower group pushed you to the upper end of the bell curve, and theoretically made it easier to get an A. The exam board swore that it didn't, but they claimed a lot of obvious BS about fairness and quality.
Nuclear reactors use mostly surface water, not ground water.
Datacentres are no pickier. You can even cool a datacentre with saltwater; you just need a heat exchanger.
Also, closed loop does not evaporate. The loop is not closed if stuff escapes from it.
You're arguing with the actual terminology used in the nuclear industry. "Closed loop" or "closed cycle" designs pump the water in a cycle through cooling towers. The towers lose some water to evaporation, which carries heat away with it, but the rest of the water is returned to be reheated. "Open loop" or "open cycle" designs have no cooling towers: the water is heated and simply discharged hot. They withdraw much more water (over an order of magnitude more), but most of it is returned to the source. Closed loop designs are more common, but you see open loop in some older designs and in seawater-cooled reactors.
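To make the withdraw-versus-return distinction concrete, here's a sketch with round, purely illustrative numbers (not figures from any real plant):

    # Illustrative only: "withdrawn" is water taken in, "consumed" is water lost
    # (mostly to evaporation) rather than returned to the source.
    open_cycle   = {"withdrawn": 100.0, "returned": 99.0}  # no towers: take a lot, give most back warm
    closed_cycle = {"withdrawn": 3.0,   "returned": 1.0}   # towers: take little, evaporate most of it

    for name, w in (("open cycle", open_cycle), ("closed cycle", closed_cycle)):
        consumed = w["withdrawn"] - w["returned"]
        print(f"{name}: withdraws {w['withdrawn']}, consumes {consumed}")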
"How often do you think I print?"
Seemingly not very.
I've printed many hundreds of kg on my P1S, thanks.
I do not consider having to write data out to a card and transport it back and forth between the printer and the computer to be the pinnacle of convenience. That's something that would be considered embarrassingly inconvenient for a 1980s printer, let alone a modern net-connected device. And it's designed to be inconvenient for non-cloud prints for a reason.
Sure, but they probably had to spend time and money checking before they could release it.
Congratulations! You are the one-millionth user to log into our system. If there's anything special we can do for you, anything at all, don't hesitate to ask!