
Comment Re:Free market (Score 2) 107

Natural gas prices will rise as data centers increase consumption. Producers will increase production as the price goes up; that is basic economics. Therefore more CO2 will be generated from natural gas.

Also, the summary notes that data centers use less efficient gas turbines because more efficient ones are not currently available in the required numbers. Again, more CO2 because of the sudden data center buildup.

Comment Re:We need humility, not arrogance (Score 1) 172

Correction: the only way to prove you have found all bugs is with formal verification. It's completely possible for other tools to find all of them. You just won't know for sure whether it found them all.

How can you claim it is possible for some tool to find all the bugs if you cannot know that it found them all?
You cannot claim a tool found all the bugs without a proof that it did.
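A toy illustration of why "the tool reported no bugs" is not a proof: this hypothetical leap-year function (my own example, not from any real tool) survives an exhaustive test sweep because the buggy corner case lies outside the swept range.

```python
from calendar import isleap

def days_in_february(year):
    # Buggy: missing the "divisible by 400" exception of the Gregorian rule.
    return 29 if (year % 4 == 0 and year % 100 != 0) else 28

# Exhaustive checking over 1901..1999 finds no bug: no year in that
# range is divisible by 100, so the missing rule is never exercised.
for y in range(1901, 2000):
    assert days_in_february(y) == (29 if isleap(y) else 28)

# The tool "found all bugs" in its search space, yet the code is wrong:
print(days_in_february(2000))  # prints 28, but February 2000 had 29 days
```

The sweep is a proof only over the inputs it covered; formal verification is what extends the claim to all inputs.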

Comment Re:25,000 lines of code (Score 1) 78

The LLM and the compiler and the formatter will get the low-level details right.

Maybe in about 90% of cases if you are lucky. That still leaves about a 10% error rate, which is way too much.

Your job is to make sure the structure is correct and maintainable, and that the test suites cover all the bases,

Depends on the definition of "bases". A passing test suite does not show your program is correct. And if your test suite is also AI-generated, then you are back at the problem of whether the tests themselves are correct.
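A minimal sketch of that circularity, with an invented `median` function: if the expected values in the tests were derived from the implementation itself (as generated tests often are), the suite passes while enshrining the bug.

```python
def median(xs):
    # Subtly wrong: for even-length input it returns the upper of the
    # two middle elements instead of their average.
    xs = sorted(xs)
    return xs[len(xs) // 2]

# A test suite "generated from the code": each expected value was
# produced by running the implementation, so it encodes the bug.
assert median([3, 1, 2]) == 2       # odd length: happens to be correct
assert median([1, 2, 3, 4]) == 3    # passes, but the true median is 2.5
```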

and then to scan the code for anomalies that make your antennas twitch,

Vibe error detection goes nicely with vibe programming. That being said, experienced programmers have a talent for detecting errors. But detecting some errors here and there is far from a full code review. Well, you can ask an LLM to do that as well, and many of the proposals it provides are good. Greg Kroah-Hartman estimates that about 2/3 are good and the rest are only marginally usable.

then dig into those and start asking questions -- not of product managers and developers, usually, but of the LLM!

Nothing goes as nicely as a discussion with an LLM. The longer you are at it, the more askew it goes.

My point is that 25k LOC a month (god forbid a week) is a lot. It may look like it works from the outside, but it is likely full of (hopefully only small) errors. Especially when you decide that you do not need to human-review all of the LLM-generated code. But if you count, e.g., the lines of an XML file defining your UI (which you drew in some GUI designer) as valid LOC, then yeah, 25k is not a big deal. Not all LOCs are equal.

Comment Split space bar is a poor start to thumb clusters (Score 1) 58

A split space bar is better than a standard one, but it is only a poor start compared to separated thumb clusters, which provide somewhere between 6 and 9 easily reachable keys per thumb.

Check out, e.g., the K80CS layout. That is a custom build, and there are many similar (and, from my point of view, slightly worse) custom keyboards in the community. Most people do not want to build their own keyboard, but they can get a usable commercial alternative that is not too bad. There are several of them, e.g. the Kinesis Advantage 360.

A split keyboard with thumb clusters is the most important thing. If it is also contoured, even better.

Comment Re:renewables (Score 2) 184

It is dispatchable if you overbuild cooling to 100% of thermal power.

Most reactors build cooling only to about 50% of thermal power. They do this because they assume they can always sell at least 50% of their electrical power to the grid. They can assume this because they can lower their price to zero (and outcompete everyone else) when needed. They can afford that because nuclear fuel is only about 2% of the overall costs, so there is no serious problem with wasting fuel to heat the environment around the plant ... provided they do not cool only into a local river, which they cannot overheat without killing the river life.
Look at nuclear electricity production in France around the summer solstice. You will see that it fluctuates quite a lot.
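The bid-to-zero logic can be sketched with back-of-envelope arithmetic. The 2% fuel share is from the comment above; the absolute $/MWh figure is invented for illustration.

```python
# Illustrative numbers only: cost share from the comment, LCOE assumed.
lcoe = 70.0          # $/MWh, total levelized cost of nuclear (assumed)
fuel_share = 0.02    # fuel is ~2% of overall costs (from the comment)

fuel_cost = lcoe * fuel_share     # $/MWh actually saved by not generating
fixed_cost = lcoe - fuel_cost     # $/MWh owed whether or not the plant runs

# Selling at $0/MWh loses only the fuel cost per MWh (~$1.40), while
# idling still incurs the full fixed cost with no revenue at all, so
# underbidding everyone down to zero is the rational move.
print(fuel_cost, fixed_cost)
```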

Comment Re:renewables (Score 1) 184

They did: from about 9.6% in 2004 to about 25.2% in 2024. That is for the European Union only (Eurostat data). I do not know what the renewable percentage was in 1990, but it was definitely lower than in 2004.

Of course, it does not help them much, since renewable sources are not dispatchable. The result is that electricity prices are about 10 times lower at noon than in the early morning or late evening; sometimes they even go negative at noon. This holds for spring/summer. The price difference is not as big in autumn/winter, when solar does not work as well and fossil fuels must work more, i.e. prices are high even at noon.

Comment Re:No shit (Score 1) 100

I guess his point is that LLMs only do rote memorization, with so little proper reasoning that we may as well consider them incapable of understanding.

It is also very hard to distinguish an LLM that is simply spitting out a learned answer from one that is reasoning its way to the answer from a more generic model. If the LLM was taught an answer to your question, it can just reproduce the learned text without any (deeper) understanding of it; it may have only applied some simple substitutions to the memorized data to tailor the output to your specific question. This is a big deal from my point of view. We do not know whether the model inside an LLM is simple enough compared to the model humans have (i.e. whether the Kolmogorov complexity of the LLM's model is not too much bigger than a human's).

It has been shown that LLMs can reason at least to a very limited level; it is not only memorization of the training data. They can do at least one reasoning step (e.g. a simple substitution rule or a simple modus ponens rule ... here and there ... mostly correctly :-D ). But it is hard for users to estimate how much of an LLM response is rote memorization and how much is a reasoned response from a smaller, more generic model. We do not know whether roughly the same question we are asking was in the training data.
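The memorization-versus-reasoning distinction can be made concrete with a toy contrast (my own example, not a claim about any real LLM): a lookup table and a generic rule are indistinguishable on the training distribution and only diverge out of distribution.

```python
# "Training data": a few question/answer pairs for addition.
training = {(2, 3): 5, (10, 7): 17, (1, 1): 2}

def memorizer(a, b):
    return training.get((a, b))   # knows only what it has seen

def reasoner(a, b):
    return a + b                  # a small generic model of addition

# Indistinguishable on the training questions...
assert all(memorizer(a, b) == reasoner(a, b) for (a, b) in training)

# ...but an out-of-distribution question reveals which one you have.
print(memorizer(123, 456), reasoner(123, 456))  # prints: None 579
```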

Comment Re:No shit (Score 1) 100

AI models don't "understand" anything.

A popular sentiment, it seems. Can you please explain what you mean by the word in scare-quotes? What is the intended point? I really can't understand what you mean, and I'm human.

Understanding comes from learning (symbolic) models of reality in our brains and from the ability to reason about those models to an arbitrary degree. Reasoning allows us to validate our internal models, update them with new facts, and derive proper consequences (i.e. predict the likely future based on them). That is the whole point of intelligence: predict the future so that we can optimize our current behavior to do better in the future (i.e. increase our chance of surviving into the future).

Additional data collection and reasoning about the future happen in steps, and each step must be performed correctly to reach the right conclusion. LLMs can properly execute a smaller number of steps than skilled humans. LLMs reason only within their context window; they discard any data that overflows it, and they are more likely to ignore data deeper (more ancient) in the window. The fuller the context window, the more likely they are to make a mistake in each particular step. The result is that LLMs tend to go awry sooner than skilled humans over time.
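The compounding argument above can be sketched numerically. Assuming (purely for illustration) that each step succeeds independently with probability p, a chain of n steps succeeds with p**n, which decays fast even for high per-step reliability.

```python
# Sketch of error compounding over a reasoning chain; the 0.98 and
# 0.999 per-step success rates are assumed, not measured.
def chain_success(p_step, n_steps):
    return p_step ** n_steps

for p in (0.98, 0.999):
    print(p, [round(chain_success(p, n), 3) for n in (10, 50, 200)])
```

At p = 0.98 a 200-step chain succeeds under 2% of the time; a slightly more reliable reasoner (or human) keeps a workable success rate far longer.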

Comment Re:the race continues (Score 3, Informative) 26

Already, at Intel's 1.8nm, we're looking at ~16 atoms.

The process numbers have not meant feature size for a long time now. They are more like: what feature size would the old process need to achieve the same part count per unit of area? Let's call that number the new "feature size".
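The marketing arithmetic can be sketched as follows (a simplification assuming density is the only input; real node naming is looser still): if a new process packs k times as many transistors per unit area, the "equivalent feature size" is the old number divided by sqrt(k), since area scales with length squared.

```python
import math

def equivalent_node(old_node_nm, density_gain):
    # Area scales with the square of the linear feature size, so a
    # k-fold density gain maps to a 1/sqrt(k) "equivalent" node number.
    return old_node_nm / math.sqrt(density_gain)

# e.g. doubling density relative to a 7 nm-class process:
print(round(equivalent_node(7.0, 2.0), 2))  # prints 4.95
```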
