Comment Re:Poor decisions? (Score 1) 55
Indeed. Not that this is a surprise.
Yes. But shocking? Not really. There were ample warnings and nobody that did any real research would have ever used that sub unless they were suicidal.
Probably. And the human in charge screwed up on top of that. They were lucky to catch this early. I wonder what other mistakes are in there now, waiting to get triggered at some point in the future. Better not depend on Google for anything.
So Google has joined the cloud version of the race to the bottom now. Not much of a surprise.
The orange felon can just pardon himself if not...
I mean, even thinking about this seriously is far more expensive than paying the fee. There must be some extreme corporate dysfunctionality at Novo Nordisk.
Yep, went like that in the last few AI hypes as well. Grand promises, tons of morons thinking the world will fundamentally change, small actual results and impact.
Average American still dumb as fuck.
First, there is no need. And second, there is no proof that quantum effects are random at all. So far it is just the only _model_ we have. The headline is a lie.
Yep. "Countable" and "non-countable" are distinct, and the distinction is reflected in grammar. My English classes did cover that. Apparently some supposed native speakers (?) were never taught that, or failed to understand it when the subject came up.
No, it is not plagiarism.
It actually would be. "Plagiarism is the representation of another person's language, thoughts, ideas, or expressions as one's own original work." It does not matter whether an LLM told you it was fine. If you cite the LLM in a manner that shows you agree (!) and the LLM attributes a work of somebody else to you, then the conditions are fulfilled.
Why is it that so many people lose all natural intelligence when a problem with LLMs gets pointed out?
The law is already in place, you are just too ignorant to know it. Humans get an exception for memorization, as that legally does not count as data processing. But as soon as a human publicly performs a copyrighted work from memory, they must have a license. Look up "Happy Birthday".
This is a key point. A human can memorize the entire contents of a book, and that act of memorization is neither plagiarism nor copyright violation. It's only when that memorized information is externalized and distributed that legal issues might come into play. Even if that human externalized the entire book by reciting it to himself, that wouldn't be a violation. If the human answered questions from 1000 people and quoted excerpts that were individually fair use, simply answering more questions is not necessarily a breach of fair use.
I imagine that an AI model would have to be treated the same. Simply knowing the entire book should not be a violation. However, how that information is externalized and shared is the question.
No. An AI that "knows" the entire book actually has the book stored in digital form. It does not matter if the storage is indirect. And that happens to be an unauthorized copy, because an AI is a machine and what it has stored is a copy of that data.
However, if one quotes an LLM, no matter what the LLM produces, no matter where it comes from, it cannot be plagiarism, and that's simply an immutable fact.
Obviously, that is untrue. You very likely can get an LLM to claim that you authored something with quotes of the work. Then quoting that is plagiarism.
Ah, sure. But who in their right mind commits commercial plagiarism? Oh, my bad, LLMs are involved. Of course, then all bets are off.
Humans with photographic memory are for sure copyright violators as soon as they perform those memories publicly. And that is legally what this is about. LLAMA may privately hallucinate as much as it likes, but this is about the version offered publicly.
grep me no patterns and I'll tell you no lines.