Re:Crazy (Score 1)
It's illegal in most places to do business with criminal organizations. If not, it's easy enough to make it so.
The quote you provided didn't say LLM, it said neural network. Neural networks, like any model, can interpolate or extrapolate, depending on whether the inference is between training samples or not.
LLMs are neural networks. You seem to be referring to a particular method of producing output where they predict the next token based on their conditioning and their previously generated text. It's true in the simplest sense that they're extrapolating, and reasonable for pure LLMs, but probably not really true for the larger models that use LLMs as their inputs and outputs. The models have complex states that have been shown to represent concepts larger than just the next token.
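For what it's worth, here's a minimal sketch of that decoding loop (the bigram table is a made-up stand-in for a real network; the sample-append-repeat structure is the point):

# Minimal sketch of autoregressive decoding: predict a next token from
# the text so far, append it, repeat. A toy bigram table stands in for
# the neural network.
import random

BIGRAMS = {
    "the": ["cat", "dog"],
    "cat": ["sat", "ran"],
    "dog": ["ran", "sat"],
    "sat": ["down", "<eos>"],
    "ran": ["away", "<eos>"],
}

def generate(prompt, max_tokens=10):
    tokens = prompt.split()
    for _ in range(max_tokens):
        # Condition on the previously generated text (here, just the
        # last token; a real model conditions on the whole context).
        nxt = random.choice(BIGRAMS.get(tokens[-1], ["<eos>"]))
        if nxt == "<eos>":
            break
        tokens.append(nxt)
    return " ".join(tokens)

print(generate("the"))  # e.g. "the cat sat down"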
WTF is a proper pronoun?
Are you referring to the "altruistic" in the summary? It's used correctly, as an adjective. They could have said "behave altruistically" but they did not.
Gas powered cars don't explode, but they definitely burn sometimes.
You must have read a lot of Slashdot: there's no elemental lithium in lithium batteries. The stuff that burns is the electrolyte, which is basically an oil.
This isn't some kind of 'our neutrino observatory is bigger than your neutrino observatory' contest.
That's exactly what it is. When your science depends on a big, expensive piece of hardware that few others have, or (best of all) nobody else has, that's what you tend to talk about, especially in press releases and grant applications.
Neural networks generally don't extrapolate, they interpolate
You could test that if someone were willing to define what they mean by "generally" I suppose. I think it's fairly safe to say that they work best when they're interpolating, like any model, but you can certainly ask them to extrapolate as well.
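Easy enough to demonstrate, as a sketch (numpy assumed; a polynomial fit stands in for the neural network, but the inside-the-training-range versus outside-it behavior is the same in spirit):

import numpy as np

# Train on samples from [0, 1] only.
rng = np.random.default_rng(0)
x_train = np.linspace(0.0, 1.0, 50)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.05, x_train.shape)

# Stand-in for the model: a degree-9 polynomial fit.
model = np.poly1d(np.polyfit(x_train, y_train, deg=9))

x_in = 0.37   # between training samples: interpolation
x_out = 1.5   # beyond the training range: extrapolation

print(model(x_in), np.sin(2 * np.pi * x_in))    # close to the truth
print(model(x_out), np.sin(2 * np.pi * x_out))  # typically wildly off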
I thought not. Your "main point" is based on two logical fallacies. You might be familiar with the saying "two wrongs don't make a right." Your "reply" was a third.
It was based on solving a maths equation.
True.
There's a big and very obvious difference between "scientific research" and "mathematics".
Ehhhhh
Nobody was out there putting clocks on satellites
Technically true, but they were definitely doing experiments. The inconsistencies between Maxwell's electrodynamics and previous physics were the hot topic of late 19th-century physics, to the point where various people thought resolving them would put the finishing touches on the field. Even the popular account includes the Michelson-Morley experiment.
Einstein himself says in "On the Electrodynamics of Moving Bodies" (i.e. the special relativity paper):
It is known that Maxwell’s electrodynamics—as usually understood at the present time—when applied to moving bodies, leads to asymmetries which do not appear to be inherent in the phenomena. Take, for example, the reciprocal electrodynamic action of a magnet and a conductor. The observable phenomenon here depends only on the relative motion of the conductor and the magnet, whereas the customary view draws a sharp distinction between the two cases in which either the one or the other of these bodies is in motion....

Examples of this sort, together with the unsuccessful attempts to discover any motion of the earth relatively to the “light medium,” suggest that the phenomena of electrodynamics as well as of mechanics possess no properties corresponding to the idea of absolute rest. They suggest rather that, as has already been shown to the first order of small quantities, the same laws of electrodynamics and optics will be valid for all frames of reference for which the equations of mechanics hold good.
There were a whole bunch of relevant experiments. Lorentz reviews many of them in "On the influence of the earth's motion on luminiferous phenomena", published in 1886.
Anyway, the author's point is not that AI can't think because it can't find the consequences of equations. Regular old numerical simulations and logic engines are pretty good at that, no AI required. His point is that AI can't think because it cannot generate ideas out of thin air, presumably the "pure reason" of ancient Greek philosophy, and he uses Einstein as an example.
And as a supporting argument he used a fallacy. That's my point.
As such, they remain functional because nobody is weaponizing their state of indebtedness.
No weaponization necessary. For domestic debt, as long as your citizens keep buying bonds you're fine, and Japan's citizens keep buying bonds. If they stopped then you'd have to cut back government services. It's kind of a tax, maybe.
Foreign debt requires foreigners to keep buying your bonds. If they stop, then the services you have to cut back are not theirs but your own citizens'. It's not entirely unlike the situation Saudi Arabia is in, except with debt instead of oil.
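The rollover logic, back-of-the-envelope (invented numbers, obviously):

# While bondholders keep refinancing, debt service pays for itself;
# the year they stop, the gap comes out of government services.
debt = 1000.0    # outstanding bonds, arbitrary billions
rate = 0.01      # average interest rate on the debt
budget = 300.0   # annual spending on services

interest = debt * rate
buyers_keep_buying = True  # flip this to see the squeeze

if buyers_keep_buying:
    services = budget             # new bond sales cover the interest
else:
    services = budget - interest  # interest comes out of revenue instead

print(f"interest due: {interest}, services funded: {services}")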
The US has the additional issue that a decent amount of that foreign debt is held by countries they have declared to be their enemies, which does add the possibility of hostile action. Most of it is held by allies they have decided to attack though, which I think in American baseball is called an "unforced error."
$207 billion over five years is about one Northrop Grumman's worth of revenue over the same period.
Ah, clever. Would you also care to argue that rocks can't do arithmetic, which is what I actually said?
I'm not sure how any of that "makes it right," though. It rather sounds like you're arguing against the author's apparent point that such things emerge out of whole cloth from the magic that is human intelligence.
I don't think we're having the same conversation. The OP asked about how not buying stuff decreases productivity. I explained that "productivity" in this sense is GDP / capita and not buying stuff decreases GDP. I'm not discussing social policy and certainly have not "missed the wealth gap." If you would like to discuss social policy, there are lots of Slashdot articles where such things happen.
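For the record, the arithmetic I meant is just this (made-up numbers):

# "Productivity" in this sense is GDP per capita, and consumption is a
# component of GDP (GDP = C + I + G + NX), so buying less lowers it.
population = 100_000_000

def gdp(c, i, g, nx):
    return c + i + g + nx

before = gdp(14e12, 4e12, 5e12, -1e12)
after = gdp(13e12, 4e12, 5e12, -1e12)  # people buy $1T less stuff

print(before / population)  # 220000.0
print(after / population)   # 210000.0 -- lower, nothing else changed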
The UNIX philosophy basically involves giving you enough rope to hang yourself. And then a couple of feet more, just to be sure.