Although this is a small first step, long overdue, it augers the beginning of the end of total immunity for social media conglomerates.
augurs. Verb: what an augur does (an ancient Roman omen-reader).
augers. Plural of auger (a drilling tool used in agriculture).
There's no actual way to know if a game was made with AI or not.
Just look for the copied artwork. It's pretty simple, really.
You're conflating several things, imho, which are important to note.
Firstly, not everyone is entitled to voice their experiences publicly; it depends on where one lives and what legal precedents there are. For the avoidance of any doubt, even in a country such as the USA there are limits to voicing one's experiences (such as when a judge issues a gag order).
Secondly, fake experiences are extremely likely and have in fact been common on the Internet for at least a generation (20 years). Businesses use fake reviews to hurt competitors and to mislead customers all the time. Individuals with an axe to grind do it all the time, too.
Historically speaking, the original justification by Google and Facebook for requiring real identities in the late 2000s was precisely a silver-bullet proposal to fight anonymous fake reviews, which were causing real damage to real businesses and people.
Thirdly, user reviews are simply a bad idea. The system suffers from all sorts of distortions, including survivorship bias, self-selection bias, payola, etc.
The idea itself of a review, i.e. a critical account from personal experience by someone you trust, is actually sound.
The Internet companies, however, do not offer sound reviews. They offer accounts that may or may not be critical, drawn from experiences that may or may not be made up, by people whose identities may or may not be made up and whose motives you may or may not want to trust. That is, imho, an accurate description of user reviews. Allowing deliberate anonymity only compounds the problems.
In other news, the Catholic Church is suing OpenAI because they had the idea of simply making suicide illegal a thousand years ago and have been using it ever since.
Not sure if it's a trade secret or a copyright case; the news often doesn't mention the fine details.
That's a good point. Here on
The movie analogy is old and outdated.
I'd compare it to a computer game. In any open world game, it seems that there are people living a life, going to work, doing chores, going home, etc., but it's a carefully crafted illusion. "Carefully crafted" insofar as the developers have put exactly what is needed into the game to suspend your disbelief and let you think, at least while playing, that these are real people. But behind the facade, they are not. They just disappear when entering their homes; they have no actual desires, just a few numbers and conditional statements to switch between different pre-programmed behaviour patterns (see the sketch at the end of this comment).
If done well, it can be a very, very convincing illusion. I'm sure that someone who hasn't seen a computer game before might think that they are actual people, but anyone with a bit of background knowledge knows they are not.
For AI, most people simply don't (yet?) have that bit of background knowledge.
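To make the "few numbers and conditional statements" point concrete, here is a minimal sketch; the class, state names, and hour thresholds are all invented for illustration and not taken from any real game engine:

```python
# A made-up NPC with no inner life: just a clock and conditionals
# selecting one of a few pre-programmed behaviour patterns.

class NPC:
    def __init__(self, name: str):
        self.name = name
        self.state = "idle"

    def update(self, hour: int):
        # No desires, no memory: thresholds pick a pattern, nothing more.
        if 8 <= hour < 17:
            self.state = "working"
        elif 17 <= hour < 22:
            self.state = "doing_chores"
        else:
            # "Going home" is a despawn: the NPC stops existing off-screen.
            self.state = "despawned"

    def __repr__(self):
        return f"{self.name}: {self.state}"

npc = NPC("villager_42")
for hour in (9, 18, 23):
    npc.update(hour)
    print(npc)  # villager_42: working, then doing_chores, then despawned
```

That is the entire "life" of the character: a handful of numbers and branches, convincing only as long as you don't look behind the facade.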
And yet, when asked if the world is flat, they correctly say that it's not.
Despite hundreds of flat-earthers who are quite active online.
And it doesn't even budge on the point if you argue with it. So for whatever it's worth, it has learned more from scraping the Internet than at least some humans.
It's almost as if we shouldn't have included "intelligence" in the actual fucking name.
We didn't. The media and the PR departments did. In the tech and academic worlds that seriously work with it, the terms are LLMs, machine learning, and so on: the actual terms describing what the thing does. "AI" is the marketing term used by marketing people. You know, the people who professionally lie about everything in order to sell things.
professions that most certainly require a lot of critical thinking. While I would say that that is ludicrous
It is not just ludicrous; it is outright dangerous.
For any (current) LLM, whenever you interact with it you need to remember one rule of thumb (not my invention; I read it somewhere and agree): the LLM was trained to generate "expected output". So always assume that your prompt implicitly starts with "give me the answer you think I want to read on the following question".
Giving an EXPECTED answer instead of the most likely to be true answer is literally life-threatening in a medical context.
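One way to make that rule of thumb concrete is to treat the implicit framing as text you could have typed yourself. The sketch below is purely illustrative; no model literally prepends such a prefix, and the prompts are made-up examples:

```python
# Illustrative sketch of the "expected output" rule of thumb.
# Nothing here is a real API; the strings are the whole point.

IMPLICIT_PREFIX = (
    "Give me the answer you think I want to read "
    "on the following question:\n"
)

# What the user believes they are asking:
naive_prompt = "Is this mole on my arm harmless?"

# What the training objective effectively turns it into:
effective_prompt = IMPLICIT_PREFIX + naive_prompt

# A partial counter-measure: make the opposite framing explicit.
hedged_prompt = (
    "Answer the following as accurately as you can, even if the answer "
    "is unwelcome, and say so when you are unsure:\n" + naive_prompt
)

print(effective_prompt)
print(hedged_prompt)
```

The hedged framing is no guarantee, but it at least pushes against the sycophancy baked in by training instead of amplifying it.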
To the landlord belong the doorknobs.