Comment Re:Here it comes (Score 1) 71
Lifetime vs. orbital height, for ballistic coefficient m/(Cd*A) = 166.67 kg/m^2:
mean lifetime at a 400 km initial height is 110 days; the maximum is 7 years.
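For concreteness, here is a minimal sketch of the ballistic coefficient formula; the mass and area below are hypothetical values chosen only so that the result matches the 166.67 kg/m^2 figure, and the Cd is a typical default, not data about any particular satellite:

```python
# Hypothetical satellite parameters; chosen only so the ballistic
# coefficient matches the 166.67 kg/m^2 figure above.
m = 1100.0   # mass, kg (assumed)
cd = 2.2     # drag coefficient (typical for a tumbling satellite)
area = 3.0   # cross-sectional area, m^2 (assumed)

bc = m / (cd * area)   # ballistic coefficient, kg/m^2
print(round(bc, 2))    # -> 166.67
```

A heavier, more compact object (higher BC) punches through the thin upper atmosphere longer, which is why lifetime grows with this number.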
The LLM and the compiler and the formatter will get the low-level details right.
Maybe in about 90% of cases, if you are lucky. That still leaves roughly a 10% error rate, which is way too much.
Your job is to make sure the structure is correct and maintainable, and that the test suites cover all the bases,
Depends on the definition of "bases". A passing test suite does not show that your program is correct. And if your test suite is also AI-generated, then you are back at the same problem: whether the tests themselves are correct.
and then to scan the code for anomalies that make your antennas twitch,
Vibe error detection goes nicely with vibe programming. That being said, experienced programmers do have a talent for detecting errors. But detecting some errors here and there is far from a full code review. Well, you can ask an LLM to do the review as well, and many of the proposals it provides are good. Greg Kroah-Hartman estimates about 2/3 are good and the rest are marginal, somewhat usable at best.
then dig into those and start asking questions -- not of product managers and developers, usually, but of the LLM!
Nothing goes quite like discussing things with an LLM. The longer you are at it, the more askew it goes.
My point is that 25k LOC a month (god forbid a week) is a lot. It may look like it works from the outside, but it is likely full of (hopefully only small) errors, especially when you decide that you do not need to human-review all the LLM-generated code. But if you count e.g. the lines of an XML file defining your UI (which you drew in some GUI designer) as valid LOC, then yeah, 25k is not a big deal. Not all LOCs are equal.
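To make the "full of small errors" point concrete, here is a purely illustrative bit of arithmetic; the per-line defect rate is an assumption for the sake of the example, not a measured figure:

```python
# Purely illustrative; the defect rate is an assumption, not a
# measured figure.
loc_per_month = 25_000
defect_rate = 0.001   # assumed: 1 defect per 1000 generated lines

expected_defects = loc_per_month * defect_rate
print(expected_defects)   # -> 25.0
```

Even at one defect per thousand lines, unreviewed generated code accumulates dozens of latent bugs per month at that pace.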
A split space bar is better than a standard one, but it is only a modest step toward separate thumb clusters, which provide somewhere from 6 to 9 easily reachable keys per thumb.
Check out e.g. the K80CS layout. That is a custom build, and there are many similar (and, from my point of view, slightly worse) custom keyboards in the community. Most people do not want to build their own keyboard, but they can get a usable commercial alternative that is not too bad; there are several of them, e.g. the Kinesis Advantage 360.
A split keyboard with thumb clusters is the most important feature. If it is also contoured, even better.
It is dispatchable if you overbuild cooling to 100% of thermal power.
Most reactors build cooling to only about 50% of thermal power. They do so because they assume they can always sell at least 50% of their electrical output to the grid. They can assume this because they can lower their price to zero (and outcompete others) when needed, and they can afford that because nuclear fuel is only about 2% of overall costs. So there is no serious problem with wasting fuel on heating the environment around the plant.
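A back-of-the-envelope sketch of why dumping output is affordable: the all-in cost figure below is an assumption for illustration; only the ~2% fuel share comes from the argument above:

```python
# Back-of-the-envelope; total cost is assumed, fuel_share is the
# ~2% figure mentioned above.
total_cost_per_mwh = 80.0   # assumed all-in generation cost, EUR/MWh
fuel_share = 0.02           # fuel is ~2% of overall costs

fuel_cost_per_mwh = total_cost_per_mwh * fuel_share
# Selling at 0 EUR/MWh (or dumping the heat) only "wastes" the fuel
# component; capital and staff costs are incurred either way.
print(fuel_cost_per_mwh)   # -> 1.6
```

With marginal fuel cost that low, undercutting everyone down to zero during a glut costs the plant almost nothing.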
Look at nuclear electricity production in France around the summer solstice. You will see that it fluctuates quite a lot.
They did: from about 9.6% in 2004 to about 25.2% in 2024. This is for the European Union only (Eurostat data). I do not know what the renewable percentage was in 1990; definitely lower than in 2004.
Of course, it does not help them much, since renewable sources are not dispatchable. The result is that electricity prices are about 10 times lower at noon than in the early morning or late evening, and sometimes they even go negative at noon. This is true for spring/summer; the price difference is not as big in autumn/winter, when solar does not work as well and fossil fuels must do more of the work.
I guess his point is that LLMs mostly do rote memorization, with so few proper reasoning steps that we may as well consider them incapable of understanding.
It is also very hard to distinguish an LLM simply spitting out a learned answer from one doing actual reasoning from a more generic model to arrive at the answer. If the LLM was taught an answer to your question, it can just reproduce the learned text without any (deeper) understanding of it; it may have done only some simple substitutions on the memorized data to tailor the output to your specific question. This is a big deal from my point of view: we do not know whether the model inside the LLM is simple enough compared to the model humans have (i.e., whether the Kolmogorov complexity of the LLM's model is not too much bigger than that of a human's).
It has been shown that LLMs can reason to at least a very limited degree. It is not only memorization of the training data: they can do at least one reasoning step (e.g. a simple substitution rule or a simple modus ponens rule).
AI models don't "understand" anything.
A popular sentiment, it seems. Can you please explain what you mean by the word in scare-quotes? What is the intended point? I really can't understand what you mean, and I'm human.
Understanding comes from learning (symbolic) models of reality in our brains and from an ability to reason about those models to an arbitrary degree. The reasoning allows us to validate our internal models, update them with newer facts, and derive proper consequences (i.e. predict the likely future based on them). That is the whole point of intelligence: predict the future so that we can optimize our current behavior to do better later (i.e. increase our chance of survival).
Additional data collection and reasoning about the future happen in steps, and each step must be performed correctly to reach the right conclusion. LLMs can properly execute fewer steps than skilled humans. They reason only within their context window: they discard any data that overflows it, and they are more likely to ignore data deeper (older) in the window. The fuller the context window, the more likely they are to make a mistake in each particular step. The result is that LLMs tend to go awry sooner than skilled humans over time.
Already, at Intel's 1.8nm, we're looking at ~16 atoms.
The process numbers have not meant actual feature size for a long time now. They are more like: what feature size would the old process need to achieve the same part count per unit area? Let's call that number the new "feature size".
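As a toy illustration of that "equivalent feature size" idea (all numbers below are hypothetical, not actual foundry data): if density scales roughly as 1/feature^2, you can back out the node name from a density ratio against some planar-era reference:

```python
import math

# Toy model only; densities and the reference point are hypothetical,
# not real process data.
def equivalent_feature_nm(density, ref_density, ref_feature_nm):
    # Assume density ~ 1/feature^2, so feature ~ 1/sqrt(density).
    return ref_feature_nm * math.sqrt(ref_density / density)

# Hypothetical: a process with 4x the reference density gets half
# the reference "feature size" as its node name.
print(equivalent_feature_nm(density=400.0, ref_density=100.0,
                            ref_feature_nm=10.0))   # -> 5.0
```

Nothing on the chip actually has to measure 1.8 nm for the marketing number to come out that way.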
They produce more tire particles since they are heavier and can accelerate more aggressively.
They also use slower-wearing tires, and wear their tires less to accelerate (hence why they can use lower-wearing tires) on account of their advanced traction and throttle control.
Well, one can use slower-wearing tires on ICEs too; that is a feature of the tires, not the cars, and tires are easy to replace. There may be something to your argument that EVs have better traction control (which is possibly harder to do with ICEs, or maybe just not as common on them). But does this alone compensate for the higher weight and higher accelerations of EVs? If so, do you have some good links explaining this?
allowing for an equivalent-mass EV to perform better on a much slower-wearing tire
EVs are typically 30% heavier (not equivalent mass). They produce more tire particles because they are heavier and can accelerate more aggressively, and fewer brake pad/disc particles because they use regenerative braking.
There is no distinction between any AI program and some existing game.