Although this is a small first step, long overdue, it augers the beginning of the end of total immunity for social media conglomerates.
augurs. Verb. What an augur does (an ancient fortune teller).
augers. Plural. From auger (an ancient drilling tool used in agriculture).
There's no actual way to know if a game was made with AI or not.
Just look for the copied artwork. It's pretty simple, really.
Unless the part of the penalty that doesn't compensate Indian consumers and businesses is punitive damages, meant to ensure the behavior isn't repeated.
Gas powered cars don't explode, but they definitely burn sometimes.
You must have read a lot of Slashdot: there's no elemental lithium in lithium batteries. The stuff that burns is the electrolyte, which is basically an oil.
This isn't some kind of 'our neutrino observatory is bigger than your neutrino observatory' contest.
That's exactly what it is. When your science depends on a big, expensive piece of hardware that few others, or (better yet) nobody else, has, that's what you tend to talk about. Especially in press releases and grant applications.
Well, right in the summary it says ChatGPT gave the kid a "pep talk" encouraging him to actually carry out the suicide.
Neural networks generally don't extrapolate, they interpolate
You could test that if someone were willing to define what they mean by "generally" I suppose. I think it's fairly safe to say that they work best when they're interpolating, like any model, but you can certainly ask them to extrapolate as well.
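The interpolate-vs-extrapolate distinction is easy to demonstrate with any fitted model. A minimal sketch, using a polynomial fit as a stand-in for a neural network (both tend to behave well inside the training range and badly outside it):

```python
import numpy as np

# Fit only on sin(x) over [0, 2*pi] -- this is the "training range".
x_train = np.linspace(0, 2 * np.pi, 50)
y_train = np.sin(x_train)

# A degree-9 polynomial captures the training interval quite well.
coeffs = np.polyfit(x_train, y_train, deg=9)
model = np.poly1d(coeffs)

# Interpolation: error stays small inside the training range.
x_in = np.linspace(0.1, 2 * np.pi - 0.1, 100)
err_in = np.max(np.abs(model(x_in) - np.sin(x_in)))

# Extrapolation: just one period outside, the fit diverges badly.
x_out = np.linspace(2 * np.pi, 4 * np.pi, 100)
err_out = np.max(np.abs(model(x_out) - np.sin(x_out)))

print(f"max interpolation error: {err_in:.2e}")
print(f"max extrapolation error: {err_out:.2e}")
```

The extrapolation error comes out several orders of magnitude larger than the interpolation error, which is the sense in which such models "work best when they're interpolating."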
I thought not. Your "main point" is based on two logical fallacies. You might be familiar with the saying "two wrongs don't make a right." Your "reply" was a third.
It was based on solving a maths equation.
True.
There's a big and very obvious difference between "scientific research" and "mathematics".
Ehhhhh
Nobody was out there putting clocks on satellites
Technically true, but they were definitely doing experiments. The inconsistencies between Maxwell's electrodynamics and earlier physics were the hot topic of late-19th-century physics, to the point where various people thought resolving them would put the finishing touches on physics. Even the popular account includes the Michelson–Morley experiment.
Einstein himself says in "On the Electrodynamics of Moving Bodies" (i.e. the special relativity paper):
It is known that Maxwell’s electrodynamics—as usually understood at the present time—when applied to moving bodies, leads to asymmetries which do not appear to be inherent in the phenomena. Take, for example, the reciprocal electrodynamic action of a magnet and a conductor. The observable phenomenon here depends only on the relative motion of the conductor and the magnet, whereas the customary view draws a sharp distinction between the two cases in which either the one or the other of these bodies is in motion....

Examples of this sort, together with the unsuccessful attempts to discover any motion of the earth relatively to the “light medium,” suggest that the phenomena of electrodynamics as well as of mechanics possess no properties corresponding to the idea of absolute rest. They suggest rather that, as has already been shown to the first order of small quantities, the same laws of electrodynamics and optics will be valid for all frames of reference for which the equations of mechanics hold good.
There were a whole bunch of relevant experiments. Lorentz reviews many of them in "On the influence of the earth's motion on luminiferous phenomena”, published in 1886.
Anyway, the author's point is not that AI can't think because it can't find the consequences of equations. Regular old numerical simulations and logic engines are pretty good at that; no AI required. His point is that AI can't think because it cannot generate ideas out of thin air, presumably the "pure reason" of ancient Greek philosophy, and he uses Einstein as an example.
And as a supporting argument he used a fallacy. That's my point.
You're conflating several things, imho, which are important to note.
Firstly, not everyone is entitled to voice their experiences publicly; it depends on where one lives and what legal precedents there are. For the avoidance of any doubt, even in a country such as the USA there are limits to voicing one's experiences (such as when a judge issues a gag order).
Secondly, fake experiences are extremely likely and, in fact, have been common on the Internet for at least a generation (20 years). Businesses use fake reviews to hurt competitors all the time. Businesses also use fake reviews to mislead customers. So do individuals with an axe to grind.
Historically speaking, the original justification Google and Facebook gave for requiring real identities in the late 2000s was precisely a silver-bullet proposal to fight anonymous fake reviews, which were causing real damage to real businesses and people.
Thirdly, user reviews are simply a bad idea. The system suffers from all sorts of statistical biases, including survivorship bias, self-selection bias, payola, etc.
The idea of a review itself, i.e., a critical account from personal experience by someone you trust, is actually sound.
The Internet companies, however, do not offer sound reviews. They offer accounts that may or may not be critical, from experiences that may or may not be made up, by people whose identity may or may not be made up and whose motives you may or may not want to trust. That is, imho, an accurate description of user reviews. Allowing deliberate anonymity only compounds the problems.
Gravity is a myth, the Earth sucks.