Comment Re: They sound nice. (Score 1) 71
If I were the journalist, I would post "after numerous threats to my personal safety, I retract my earlier report. The incoming missile was intercepted successfully."
Before, they had "think of the children". Now they have "Russia" and "China".
They really never needed any reason to assert control over people. They only needed the excuse.
Offensive ability is always ahead of defense. Stick requires another stick to defend against. Arrows require shields. Guns require heavy armor. Bombs require bunkers. Every step, offense moves further ahead of defense.
At the limit of offensive tech, you have bioweapons, which are nearly impossible to defend against, and relativistic impactors, which have no defense at all. It's physically impossible to predict the impactor's course because it's outside your light cone (i.e. it doesn't exist yet in your frame of reference). Nor is it possible for you to move a black hole into place quickly enough to block it even if you magically detect its approach.
But yeah, please keep thinking one party is so much better than the other.
No. One side definitely starts more wars than the other.
You do know where Russia has been getting its drones, right? And that there aren't any more coming?
Yes, China and Europe. They buy off-the-shelf parts and assemble it themselves. There's plenty more where that came from.
The US elite has a lot of interest. Ordinary people do not. We are self-sufficient in energy; if you count Canada, we're a major energy exporter. There's nothing in the Middle East that we want.
Meanwhile, it just so happens that the US elite is composed of a huge number of Israeli supporters who receive money from AIPAC. And Israel happens to be situated in the Middle East.
Missiles? Nukes? Big deal. Russia and China have way more and their missiles can actually reach us, unlike Iran's. As for "allies", they're all dictatorships. Iran is actually one of the more democratic Muslim nations.
Would you call it a successful regime change without deposing the ayatollah?
The emperor is not simply a figurehead. The Japanese attach great ideological importance to him, so if he agreed to US demands, then the Japanese would go along with it. If he had been removed, there could've been loyalist resistance all over Japan.
We had previously convinced Khamenei to issue a fatwa against nuclear weapons. If we had kept going down that route, Iran would never have one. Now that he has (reportedly) been removed, I suspect we will start seeing attempts to overturn the fatwa.
Globally, higher temperatures result in increased humidity, so more clouds. It may be drier in certain places, but it's more than made up for by the rest.
Well, stratospheric aerosol injection is exactly that. Except it uses gasses and not a solid shield.
All warfare is based on deception. There's a concept called need-to-know where you don't tell all your troops the exact date.
You can also spread misinformation. Each brigade gets a different date. Now when you see a date show up in public, you can point to one specific brigade that leaked it.
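The per-brigade-date scheme is essentially a canary trap. A minimal sketch of the idea (all brigade names and dates here are made up for illustration):

```python
# Toy canary trap: each brigade is briefed with a different fake date,
# so any date that shows up in public identifies the leaking brigade.
fake_dates = {
    "1st Brigade": "2025-03-04",
    "2nd Brigade": "2025-03-07",
    "3rd Brigade": "2025-03-11",
}

def identify_leaker(leaked_date):
    """Return the brigade that was given the leaked date, if any."""
    for brigade, date in fake_dates.items():
        if date == leaked_date:
            return brigade
    return None

print(identify_leaker("2025-03-07"))  # -> 2nd Brigade
```

The real date is told to no brigade at all, so every leak is attributable.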
Your enemy will notice the hardware and personnel build-up anyway, so the best you can do is fake it a few times to get them to drop their guard against the real thing.
I'd say touchscreens are worse.
I haven't tried it on an ICE-only car, but it's great for hybrids that don't need the engine right away. The engine doesn't kick in until you're over 25 mph anyways.
Probably because the title had "Chinese robots" in it and triggered a lot of people's subconscious negative emotions.
Whether the robots are Chinese or not makes no difference to the story, so I don't know why it's even included.
What you're describing is the appearance of reasoning, which is not the same as actually reasoning. Joe Weizenbaum's Eliza program gave the appearance of understanding and empathizing with the user. That illusion was so convincing that even people who understood how the program worked were taken in, a fact that Weizenbaum found disturbing.
You're going to have to define "reasoning" if you want to make that argument. Otherwise it's a no true Scotsman fallacy.
This was easier to see with earlier models, where it took very little effort to show that the system was just producing text that looked like reasoning, not actually reasoning. For example, while the model would initially appear to be able to solve river-crossing puzzles, it would fail in amusing ways if you made small changes to the problem. Something as simple as changing the order of the items or the kinds of items would result in silly things like the risk of the cabbage eating the wolf or leaving the goat alone with the cabbage to spare the wolf. While newer models seem better, it's important to remember that nothing fundamental has changed.
The newer model, operating agentically, is able to generate code to solve the river-crossing puzzle. That approach is no longer vulnerable to a more complex set of inputs or to swapping the order of the items. Moreover, an example of an LLM making a mistake is not a good counterargument against intelligence. Humans make mistakes all the time. Given enough time and effort, I'm sure you could find a human who would make the exact same mistake as the LLM.
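For reference, the kind of code an agent might generate is trivial: the puzzle is a small graph search, and if the danger pairs are data rather than hard-coded logic, renaming or reordering the items doesn't break anything. A minimal BFS sketch:

```python
# BFS solver for the classic river-crossing puzzle. The constraint
# pairs are plain data, so swapping or renaming items still works.
from collections import deque

ITEMS = ("wolf", "goat", "cabbage")
# Pairs that cannot be left together without the farmer present.
DANGER = [("wolf", "goat"), ("goat", "cabbage")]

def is_safe(farmer, pos):
    banks = dict(zip(ITEMS, pos))
    return not any(banks[a] == banks[b] != farmer for a, b in DANGER)

def solve():
    # State: (farmer_bank, per-item banks); bank 0 is start, 1 is goal.
    start = (0, tuple(0 for _ in ITEMS))
    goal = (1, tuple(1 for _ in ITEMS))
    queue, seen = deque([(start, [])]), {start}
    while queue:
        (farmer, pos), path = queue.popleft()
        if (farmer, pos) == goal:
            return path
        # Farmer crosses alone, or with one item from his own bank.
        for m in [None] + [i for i, p in enumerate(pos) if p == farmer]:
            new_pos = list(pos)
            if m is not None:
                new_pos[m] = 1 - farmer
            state = (1 - farmer, tuple(new_pos))
            if is_safe(*state) and state not in seen:
                seen.add(state)
                queue.append((state, path + ["alone" if m is None else ITEMS[m]]))
    return None

print(solve())  # prints a shortest sequence of 7 crossings
```

Swap the entries in ITEMS and DANGER for any other items and the same code solves the variant, which is exactly the robustness the older models lacked.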
This really should come as no surprise, as we designed these things to operate on statistical relationships between tokens, not on facts and concepts. They really do produce text one token at a time, functionally retaining no internal state between them. That is, they have no mechanism by which planning a response beyond the current token could be managed. If that wasn't enough, the model proper doesn't even select the next token; it only produces next-token probabilities from which the next token is ultimately selected. (Imagine trying to write a response when all you can do is roll the dice on a set of probable next words!) While they give the appearance of producing a well-considered holistic response to your prompt, such a thing is very clearly not possible.
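The split described above, where the model only emits a probability distribution and a separate sampling step picks the token, can be sketched like this (the "model" here is a stub returning fixed probabilities, not a real LLM):

```python
# Toy next-token loop: the model yields only a distribution over the
# vocabulary; the actual token is chosen by a separate sampling step.
import random

VOCAB = ["the", "cat", "sat", "mat", "."]

def fake_model(context):
    # A real LLM would compute these from the context; fixed here.
    return [0.1, 0.3, 0.3, 0.2, 0.1]

def sample_next(context, temperature=1.0):
    probs = fake_model(context)
    # Temperature rescales the distribution before sampling.
    weights = [p ** (1.0 / temperature) for p in probs]
    return random.choices(VOCAB, weights=weights, k=1)[0]

tokens = ["the"]
for _ in range(4):
    tokens.append(sample_next(tokens))  # one token at a time, no lookahead
print(" ".join(tokens))
```

Note that nothing in the loop carries a plan forward: each iteration sees only the tokens emitted so far.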
Tokens represent facts and concepts. If something is able to operate on those, even statistically, then it is thinking in some sense. An agent built on top of an LLM is able to leverage the reasoning capabilities of programming, so it is no longer simply selecting from statistical probabilities.
That's why I said that a quick look "behind the curtain" at how LLMs function should be more than enough to completely dispel the notion that anything remotely like factual reasoning is happening. Like Weizenbaum, I also find it disturbing that people cling to those mistaken beliefs even though they should know better. The illusion is compelling, sure, but we know that it's just an illusion.
If you look "behind the curtain" at the human brain, you might come to the same conclusion. After all, your brain operates on a purely physical level. All of its neurons are subject to the laws of quantum mechanics. There is nothing particularly intelligent about that. However, when you put them all together, we have what people call intelligence. That's emergence, which is when a complex entity gains properties that its constituent parts do not have on their own.
It's true that the neurons in our brain operate under a different model than probabilistic inference. However, it's not at all obvious to me that emergence can only occur in one of these and not the other.
LLM's training data does contain facts. They're fully capable of regurgitating those facts given the correct prompt. If regurgitation is sufficient to count as "operate", then they do operate on facts. However, a book does that too, so it's not a novel (heh) capability.
What LLMs do on top of regurgitation is merging those facts into coherent sentences. This is also a sort of operation on top of facts. However, what people think of as intelligence is more than simply forming sentences. Intelligent operations might include deductive reasoning, consistency checking, modeling / simulating, abstraction, and probabilistic reasoning. When you build an AI agent on top of an LLM, you can get the combined system to do a very basic form of these.
Earlier today I asked Gemini to estimate the amount of warships China can build if they redirected their unemployed population towards shipbuilding. It wrote a bit of software, then ran the numbers and gave me an answer (about 10 million tons displacement per year, or more than 2 entire US navies). While that is a very simple form of modeling, I would argue that is an intelligent operation on top of facts.
Of course it's also very limited. It initially produced a completely unreasonable estimate of 4 billion tons per year. I had to tell it that a significant number of people would have to go into producing the raw materials and supply chain, and to discount productivity because building military ships is significantly more complex. However, once I provided constraints, it was able to successfully model the hypothetical situation.
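The underlying calculation is just a product of a few factors. A back-of-envelope version, where every parameter is a made-up placeholder (not a sourced figure, and not what Gemini actually used); the point is the shape of the estimate, not the numbers:

```python
# Hypothetical back-of-envelope shipbuilding estimate.
# All parameters below are illustrative placeholders.
unemployed = 50_000_000        # workforce hypothetically redirected
shipyard_fraction = 0.1        # rest go to raw materials, supply chain
tons_per_worker_year = 4.0     # commercial productivity, tons displacement
military_discount = 0.5        # warships are far more labor-intensive

tons_per_year = (unemployed * shipyard_fraction
                 * tons_per_worker_year * military_discount)
print(f"{tons_per_year / 1e6:.0f} million tons displacement per year")  # -> 10
```

Leaving out the shipyard_fraction and military_discount factors is exactly the kind of omission that produces the wildly inflated first answer.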
No. The minimal amount of information they need is whether you are of age or not. That's a boolean value. They don't need to know the specific number. They definitely don't need your selfie or government ID.
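A minimal-disclosure sketch of that idea: the check runs wherever the birthdate already legitimately lives (an issuer or attestation service, hypothetical here), and the site only ever receives the boolean result.

```python
# Minimal-disclosure age check: the site learns only a boolean,
# never the birthdate, selfie, or ID. The "issuer" side is assumed.
from datetime import date

def is_of_age(birthdate, today=None, threshold=18):
    """Return True iff the person is at least `threshold` years old."""
    today = today or date.today()
    years = today.year - birthdate.year - (
        (today.month, today.day) < (birthdate.month, birthdate.day))
    return years >= threshold

# Only this one bit ever needs to cross the wire to the site:
print(is_of_age(date(2000, 1, 1), today=date(2025, 1, 1)))  # -> True
```

Everything beyond that single bit is over-collection.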
The world is coming to an end ... SAVE YOUR BUFFERS!!!