Comment Re:Why would warming... (Score 1) 106
Globally, higher temperatures result in increased humidity, so more clouds. It may be drier in certain places, but it's more than made up for by the rest.
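Rough numbers, if anyone wants them: the saturation vapor pressure of air climbs about 6-7% per degree C. A quick sketch using the Magnus approximation (the coefficients below are one common parameterization, nothing exotic):

    import math

    def saturation_vapor_pressure_hpa(temp_c):
        # Magnus approximation for saturation vapor pressure over water, in hPa.
        return 6.112 * math.exp(17.62 * temp_c / (243.12 + temp_c))

    for t in (15.0, 16.0, 17.0):
        print(f"{t:4.1f} C: {saturation_vapor_pressure_hpa(t):6.2f} hPa")

    # Each extra degree raises the moisture ceiling by roughly 6-7%, which is the
    # Clausius-Clapeyron scaling behind "warmer air -> more water vapor available".
    increase = saturation_vapor_pressure_hpa(16.0) / saturation_vapor_pressure_hpa(15.0) - 1.0
    print(f"Increase per degree near 15 C: {increase:.1%}")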
Well, stratospheric aerosol injection is exactly that. Except it uses gasses and not a solid shield.
All warfare is based on deception. There's a concept called need-to-know where you don't tell all your troops the exact date.
You can also spread misinformation. Each brigade gets a different date. Now when you see a date show up in public, you can point to one specific brigade that leaked it.
Your enemy will notice the hardware and personnel build up anyways, so the best you can do is fake it a few times to get them to drop their guard against the real thing.
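The bookkeeping for the different-date-per-brigade trick is trivial, by the way; a toy sketch with made-up brigades and dates:

    # Toy canary trap: give each unit a unique (fake) attack date, then trace a leak
    # back to the unit that was told the date that shows up in public.
    # All names and dates here are invented for the example.
    decoy_dates = {
        "1st Brigade": "June 3",
        "2nd Brigade": "June 7",
        "3rd Brigade": "June 11",
    }

    def trace_leak(leaked_date):
        # Return the unit that was told the leaked date, if any.
        for unit, date in decoy_dates.items():
            if date == leaked_date:
                return unit
        return "unknown source"

    print(trace_leak("June 7"))  # -> 2nd Brigade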
I'd say touchscreens are worse.
I haven't tried it on an ICE-only car, but it's great for hybrids that don't need the engine right away. The engine doesn't kick in until you're over 25 mph anyways.
Probably because the title had "Chinese robots" in it and triggered a lot of people's subconscious negative emotions.
Whether the robots are Chinese or not makes no difference to the story, so I don't know why it's even included.
What you're describing is the appearance of reasoning, which is not the same as actually reasoning. Joe Weizenbaum's Eliza program gave the appearance of understanding and empathizing with the user. That illusion was so convincing that even people who understood how the program worked were taken in, a fact that Weizenbaum found disturbing.
You're going to have to define "reasoning" if you want to make that argument. Otherwise it's a no true Scotsman fallacy.
This was easier to see with earlier models, where it took very little effort to show that the system was just producing text that looked like reasoning, not actually reasoning. For example, while the model would initially appear to be able to solve river-crossing puzzles, it would fail in amusing ways if you made small changes to the problem. Something as simple as changing the order of the items or the kinds of items would result in silly things like the risk of the cabbage eating the wolf or leaving the goat alone with the cabbage to spare the wolf. While newer models seem better, it's important to remember that nothing fundamental has changed.
The newer model, operating agentically, is able to generate code to solve the river-crossing puzzle. That code is no longer vulnerable to a more complex set of inputs or to having the items reordered. Moreover, an example of an LLM making a mistake is not a good counterargument against intelligence. Humans make mistakes all the time. Given enough time and effort, I'm sure you could find a human who would make the exact same mistake as the LLM.
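For example, the solver it generates for the classic wolf/goat/cabbage setup is typically just a breadth-first search over which bank everything is on. A rough sketch of that kind of code (my sketch, not any model's actual output); note that it doesn't care how you rename or reorder the items as long as the unsafe pairs are stated:

    from collections import deque

    # Classic wolf/goat/cabbage crossing solved by breadth-first search over bank states.
    ITEMS = frozenset({"wolf", "goat", "cabbage"})
    UNSAFE = [{"wolf", "goat"}, {"goat", "cabbage"}]  # pairs that can't be left unattended

    def safe(bank, farmer_here):
        # A bank is fine if the farmer is on it or no unsafe pair is left alone there.
        return farmer_here or not any(pair <= bank for pair in UNSAFE)

    def solve():
        start = (ITEMS, True)        # (items on the left bank, farmer on the left?)
        goal = (frozenset(), False)  # everything, farmer included, on the right bank
        queue, seen = deque([(start, [])]), {start}
        while queue:
            (left, farmer_left), path = queue.popleft()
            if (left, farmer_left) == goal:
                return path
            here = left if farmer_left else ITEMS - left
            for cargo in [None, *sorted(here)]:  # cross alone, or take one item along
                new_left = set(left)
                if cargo is not None:
                    if farmer_left:
                        new_left.remove(cargo)
                    else:
                        new_left.add(cargo)
                state = (frozenset(new_left), not farmer_left)
                if state in seen:
                    continue
                # After the move the farmer is on the opposite bank; both banks must be safe.
                if safe(state[0], not farmer_left) and safe(ITEMS - state[0], farmer_left):
                    seen.add(state)
                    queue.append((state, path + [(cargo or "nothing",
                                                  "right" if farmer_left else "left")]))

    for cargo, direction in solve():
        print(f"take {cargo} to the {direction} bank")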
This really should come as no surprise, as we designed these things to operate on statistical relationships between tokens, not on facts and concepts. They really do produce text one token at a time, functionally retaining no internal state between them. That is, they have no mechanism by which planning a response beyond the current token could be managed. If that weren't enough, the model proper doesn't even select the next token; it only produces next-token probabilities from which the next token is ultimately selected. (Imagine trying to write a response when all you can do is roll the dice on a set of probable next words!) While they give the appearance of producing a well-considered, holistic response to your prompt, such a thing is very clearly not possible.
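If that decoding loop sounds abstract, here's a toy version of it (the vocabulary and probabilities are invented; a real model just has a vastly bigger mapping baked into its weights):

    import random

    # Toy version of the decoding loop: the "model" only maps a context to
    # next-token probabilities; a separate sampling step commits to the token.
    def fake_next_token_probs(context):
        table = {
            (): {"The": 0.6, "A": 0.4},
            ("The",): {"cat": 0.5, "dog": 0.3, "model": 0.2},
            ("The", "cat"): {"sat": 0.7, "ran": 0.3},
        }
        return table.get(context, {"<eos>": 1.0})

    def generate(max_tokens=10):
        tokens = []
        for _ in range(max_tokens):
            probs = fake_next_token_probs(tuple(tokens))
            # Roll the dice on the probable next words, exactly as described above.
            token = random.choices(list(probs), weights=list(probs.values()))[0]
            if token == "<eos>":
                break
            tokens.append(token)
        return " ".join(tokens)

    print(generate())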
Tokens represent facts and concepts. If something is able to operate on those, even if only statistically, then it is thinking in some sense. An agent built on top of LLMs is able to leverage the reasoning capabilities of programming, so it is no longer simply selecting from statistical probabilities.
That's why I said that a quick look "behind the curtain" at how LLMs function should be more than enough to completely dispel the notion that anything remotely like factual reasoning is happening. Like Weizenbaum, I also find it disturbing that people cling to those mistaken beliefs even though they should know better. The illusion is compelling, sure, but we know that it's just an illusion.
If you look "behind the curtain" at the human brain, you might come to the same conclusion. After all, your brain operates on a purely physical level. All of its neurons are subject to the laws of quantum mechanics. There is nothing particularly intelligent about that. However, when you put them all together, we have what people call intelligence. That's emergence, which is when a complex entity gains properties that its constituent parts do not have on their own.
It's true that the neurons in our brain operate under a different model than probabilistic inference. However, it's not at all obvious to me that emergence can only occur in one of these and not the other.
LLM's training data does contain facts. They're fully capable of regurgitating those facts given the correct prompt. If regurgitation is sufficient to count as "operate", then they do operate on facts. However, a book does that too, so it's not a novel (heh) capability.
What LLMs do on top of regurgitation is merging those facts into coherent sentences. This is also a sort of operation on top of facts. However, what people think of as intelligence is more than simply forming sentences. Intelligent operations might include deductive reasoning, consistency checking, modeling / simulating, abstraction, and probabilistic reasoning. When you build an AI agent on top of an LLM, you can get the combined system to do a very basic form of these.
Earlier today I asked Gemini to estimate the number of warships China could build if they redirected their unemployed population towards shipbuilding. It wrote a bit of software, then ran the numbers and gave me an answer (about 10 million tons displacement per year, or more than 2 entire US navies). While that is a very simple form of modeling, I would argue that it is an intelligent operation on top of facts.
Of course it's also very limited. It initially produced a completely unreasonable estimate of 4 billion tons per year. I had to tell it that a significant number of people would have to go into producing the raw materials and the supply chain, and that productivity should be discounted because building military ships is significantly more complex. However, once I provided those constraints, it was able to successfully model the hypothetical situation.
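I don't have the exact script it wrote, but the model it converged on is essentially a back-of-envelope calculation like this (every input below is an illustrative guess of mine, not Gemini's actual figures):

    # Back-of-envelope model of the shipbuilding hypothetical discussed above.
    # Every number here is an illustrative assumption, not data from Gemini or elsewhere.
    unemployed_workers = 50_000_000         # hypothetical redirected labor pool
    share_in_yards = 0.25                   # rest go to steel, components, logistics
    tons_per_worker_year = 5.0              # commercial shipyard productivity guess
    military_complexity_penalty = 0.15      # warships take far more labor per ton

    yard_workers = unemployed_workers * share_in_yards
    tons_per_year = yard_workers * tons_per_worker_year * military_complexity_penalty

    print(f"Shipyard workers:        {yard_workers:,.0f}")
    print(f"Warship output per year: {tons_per_year:,.0f} tons displacement")
    # With these made-up inputs the answer lands around 10 million tons per year,
    # i.e. more than two US Navies' worth of displacement, as in the comment above.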
No. The minimal amount of information they need is whether you are of age or not. That's a boolean value. They don't need to know the specific number. They definitely don't need your selfie or government ID.
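Put in API terms, all the site needs back from whoever does the checking is something like this (a hypothetical interface, just to make the point):

    # Hypothetical age-check interface: the site only ever receives a single boolean
    # from some trusted verifier, never the birthdate, selfie, or government ID itself.
    from dataclasses import dataclass

    @dataclass
    class AgeAttestation:
        is_of_age: bool   # the one bit the site actually needs
        verifier: str     # whoever vouched for it

    def allow_access(attestation):
        return attestation.is_of_age

    print(allow_access(AgeAttestation(is_of_age=True, verifier="example-id-provider")))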
They've been failing to enforce that since the moment they started. The law might as well not exist.
If a tiny country like Israel can do it, so can anyone else. It would be grossly irresponsible to buy anything from a potential adversary or hostile-aligned nation, especially if you don't have the expertise to reverse engineer their product. And I'd argue for software, reverse engineering it to ascertain its safety is basically impossible.
As for the morality of the Middle East situation, I'm not going to judge. My government may no longer be sovereign, and it has no problem ignoring the First Amendment (or the rest of the Constitution, for that matter).
Israelis did penetrate a supply chain of pagers that originated in China
Only if you think Taiwan is a part of China. But even with that, your statement is incorrect. The pagers were made by the Israelis, not by Taiwan.
Even looking from the Hezbollah side, they ordered the pagers from what they thought was a European company. If they had actually ordered from the PRC, they would've been fine. In fact, Iran used PRC-produced communication devices without any issues. Why Iran didn't simply give those devices to Hezbollah, I'm not sure, but perhaps the Iran-Hezbollah connection is not nearly as strong as some would like us to believe.
Sooner or later we'll have geopolitically aligned software. The Israeli pager attack showed how dangerous it is to not have a friendly superpower produce your electronics. The same problem exists in software, perhaps to an even greater degree.
In that world I expect open source to win, because that's the only way to create trustworthy software while avoiding doing a huge amount of duplicate work.
So if you just lift the cheesy poof fast enough, it's a life-extending workout?
You would have to accelerate the cheesy poof and your arm to 220 mph and then bring it to a stop without losing your arm.
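Running the numbers on that, with the 220 mph from the post above and the masses as rough guesses:

    # Kinetic energy needed to get a cheesy poof plus your forearm up to 220 mph.
    # 220 mph is from the post above; the poof and arm masses are rough guesses.
    MPH_TO_MS = 0.44704
    speed = 220 * MPH_TO_MS              # ~98 m/s
    poof_mass = 0.002                    # kg, a couple of grams
    arm_mass = 3.5                       # kg, ballpark forearm + hand

    kinetic_energy_j = 0.5 * (poof_mass + arm_mass) * speed ** 2
    kcal = kinetic_energy_j / 4184       # 1 food Calorie = 4184 J

    print(f"{kinetic_energy_j:,.0f} J per rep, roughly {kcal:.1f} kcal")
    # A single poof is on the order of 10 kcal, so even this absurd rep only works
    # off part of it, and that's before you try to stop your arm again.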
The case of Google is interesting because they're not protecting themselves from small investors, but rather defending against large investors. The 2 founders have the final say in all decisions because they control more than 50% of the votes.
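The mechanism is dual-class shares: Class B stock carries 10 votes per share while the publicly traded Class A gets 1 and Class C gets none. A toy illustration of the arithmetic with made-up share counts:

    # Toy dual-class voting arithmetic. The vote weights match Alphabet's structure
    # (Class A: 1 vote/share, Class B: 10, Class C: 0); the share counts are invented.
    share_classes = {
        "A (public)":   {"shares": 300_000_000, "votes_per_share": 1},
        "B (founders)": {"shares":  85_000_000, "votes_per_share": 10},
        "C (public)":   {"shares": 320_000_000, "votes_per_share": 0},
    }

    total_shares = sum(c["shares"] for c in share_classes.values())
    total_votes = sum(c["shares"] * c["votes_per_share"] for c in share_classes.values())
    founders = share_classes["B (founders)"]
    founder_votes = founders["shares"] * founders["votes_per_share"]

    print(f"Founders: {founders['shares'] / total_shares:.0%} of shares, "
          f"{founder_votes / total_votes:.0%} of votes")
    # With these made-up counts: roughly 12% of the shares but ~74% of the votes.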
It's not that removing the death penalty makes people safer; the causality runs the other way. Safe countries are able to remove the death penalty. Countries with general lawlessness and high homicide rates have to use stronger policy tools.
If you compare equally stable countries, such as Japan and Norway, Japan has a lower crime rate in every category. It's even worse if you compare Norway to the infamously strict Singapore. Norway has 20x as many assaults, 80% more murders and 8 times as many rapes.
They need to be better than humans in all situations.
They already are in most, especially the safety-critical ones, but as long as the remaining situations haven't been handled, a lot of people will distrust them. The social inertia of human driving is very strong.