OpenAI Is Faulted by Media for Using Articles To Train ChatGPT (bloomberg.com) 89
Major news outlets have begun criticizing OpenAI and its ChatGPT software, saying the lab is using their articles to train its artificial intelligence tool without paying them. From a report: "Anyone who wants to use the work of Wall Street Journal journalists to train artificial intelligence should be properly licensing the rights to do so from Dow Jones," Jason Conti, general counsel for News Corp's Dow Jones unit, said in a statement provided to Bloomberg News. "Dow Jones does not have such a deal with OpenAI." Conti added: "We take the misuse of our journalists' work seriously, and are reviewing this situation." The news groups' concerns arose when the computational journalist Francesco Marconi posted a tweet last week saying their work was being used to train ChatGPT. Marconi said he asked the chatbot for a list of news sources it was trained on and received a response naming 20 outlets including the WSJ, New York Times, Bloomberg, Associated Press, Reuters, CNN and TechCrunch.
OpenAI has at least 3 fair use claims (Score:5, Insightful)
I understand why these companies and their artists/writers are concerned and upset, but OpenAI has at least 3 very strong fair use claims in their favor.
1) Educational use: the articles were used to train the AI, and the output is not a copy.
2) The use of the articles is different from what was originally intended: training an AI rather than sale or publication.
3) The output is decidedly transformative.
Basically, if the publishers win, then ANY use of any copyrighted material is in violation. Artists would not be able to save a copy of a photo to create transformative art from.
Re: (Score:2, Interesting)
(1) Does educational use get them an exemption from copyright rules? I don't think teachers are allowed to copy entire articles just because they are using them to educate children, but correct me if I'm wrong.
Of course the first hurdle is getting a court to agree that it is education, and not some kind of engineering process. Education generally only applies to humans, with even animals being trained rather than educated.
I don't see the relevance of (2). If anything it's an argument in the publisher's favor
Re: (Score:3, Insightful)
It doesn't really matter *what* they use the articles for. Copyright prevents people from redistributing the original or derivative works, or (sometimes) displaying/performing the work publicly. That's it.
And the AI systems aren't doing any of that.
If you can't point to a derivative work and say "look, this part right here is obviously copied from this piece of my work", then it's all but guaranteed to NOT be considered a derivative work.
More conceptual derivatives - "inspired by", "done in the style of", e
Re: (Score:1)
It does matter what they use the articles for. Specific purposes are exempted in the law, like education and parody.
"It educated an AI" sounds like a legal contortion that will only work with a crooked judge, though. When I download a movie off the Pirate Bay, it "educates" my laptop on how to display it. The problem is my laptop is an inanimate object and it cannot be educated.
Re:OpenAI has at least 3 fair use claims (Score:5, Insightful)
They are, but those are only relevant if you have otherwise infringed the copyright.
Exactly how is letting an AI "learn" from your article infringing copyright?
If you can point at an AI-generated work and say "this part right here was clearly copied from this work of mine", *then* you can make a copyright infringement claim on that particular AI-generated work. Until then you've got nothing.
Re: (Score:2, Insightful)
If it were allowed it would seem to open the floodgates. No point paying for an expensive celebrity voice actor, just "educate" an AI with their voice and get it to produce a "derivative" work.
The only time that would work is for parody. Maybe we will see AI voices used there soon.
Re: (Score:1)
If it were allowed it would seem to open the floodgates. No point paying for an expensive celebrity voice actor, just "educate" an AI with their voice and get it to produce a "derivative" work.
Current copyright laws weren't written with this technology in mind though. You can't currently copyright the sound of your voice or your artistic style. Trying to stop it at the training phase, which is what we're talking about, saying "don't use our copyrighted works to train your AI", idk, I'm not a lawyer, so we'll see.
Re: (Score:2, Insightful)
If it were allowed it would seem to open the floodgates. No point paying for an expensive celebrity voice actor, just "educate" an AI with their voice and get it to produce a "derivative" work.
In the US we have the right of publicity [cornell.edu], and you get to control use of your likeness for non-fair-use purposes. So no.
Re: OpenAI has at least 3 fair use claims (Score:2)
Copyright also prevents you from "copying" an asset and storing it on your internal hard drive. E.g. if you download a movie for personal use, that is making a copy, still illegal even if you don't further distribute (in most jurisdictions).
Which is what ChatGPT is doing in some form here, they are copying the work for commercial use without paying for the privilege and storing it in a highly compressed format (given the output from ChatGPT, it does 'reproduce' exact copies
Re: (Score:2)
We're in the digital age - you cannot read this comment without having first copied it onto your computer.
And copyright (as enforced) doesn't restrict *copying*, it restricts *distribution* (including public performance, where you're "distributing" it into other people's brains). If you want to print stills from Disney images as posters for your walls, you're fine. Try to sell one of those posters (or give it away) and Disney can come down on you.
And yes, if ChatGPT makes an exact copy of something (or cle
Re: (Score:2)
Correct, but the EULA of Slashdot, NYT etc probably states that I can only do this 'copying' for personal purposes and not commercial reasons, I can't copy and paste the data, put it in a chatbot and make it appear as if it was coming up with this stuff all by itself. Even just quoting or reciting from memory, I can't do commercially, pass it off as my own or reproduce without attribution unless there is some fair use clause.
If I access NYT in the library, they often will have some sort of agreement that th
Re:OpenAI has at least 3 fair use claims (Score:5, Interesting)
The court precedent for #2 is Google books. Google was copying the books in their entirety, letting users search the books, showing the users clips of the books that matched their search criteria, and then offered to sell them the books.
The courts found that all of this was fair use.
Re: (Score:2)
(1) Does educational use get them an exemption from copyright rules? I don't think teachers are allowed to copy entire articles just because they are using them to educate children, but correct me if I'm wrong.
Here we get into the rather rubbery definition of "copy". If I read a news article on the Web, then do the contents of my computer screen constitute a copy? What about what's in my browser cache?
Given that ChatGPT was likely trained by being given URLs, I suspect that any copyright claim here is not only dead in the water, it's dead before it even gets wet.
Re: (Score:3)
What copyrightable material, under copyright, can ChatGPT produce verbatim? I add that adjective because basic facts are not copyrightable. Please be specific.
I'll note that Google Books scanning in copyrighted books, en masse, without permission, and showing blurbs and even whole pages to users without compensation, was deemed by the courts to be transformative and fair use.
Re: OpenAI has at least 3 fair use claims (Score:2)
GitHub Co-Pilot and ChatGPT both have reproduced my own code (which is a small set of open source code in a very niche field) to a level of what most would consider outright plagiarism (including mistakes). As long as you are precise enough with your search terms, you can basically get to a plagiarized version of the 'source material' which I'm assuming is what NYT and co is going to have to prove in court,
Re: (Score:2)
I can believe this.
Because Neural Nets are a form of lossy "compression" in one sense.
So at times it could reproduce code verbatim (or text from an article) or paraphrase without giving credit (plagiarism).
So this is a bit of a legal minefield.
It will be interesting to see how it can be resolved.
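Here's a toy illustration of that "lossy compression" point (this is not how GPT actually works, just a sketch over a made-up corpus): even a crude next-token model blends common material, yet regurgitates a rare training sentence verbatim once the prompt pins it down.

```python
# Toy next-token model (word trigrams). Not GPT, just an illustration of how a
# "lossy" statistical model can still spit out rare training text verbatim.
from collections import defaultdict
import random

corpus = [
    "the cat sat on the mat",
    "the dog sat on the mat",
    "the cat ate the fish",
    # A "rare" sentence that appears only once, like a niche article:
    "dow jones general counsel jason conti criticized openai in a statement",
]

# Count which word follows each pair of words.
follows = defaultdict(list)
for sentence in corpus:
    words = sentence.split()
    for a, b, c in zip(words, words[1:], words[2:]):
        follows[(a, b)].append(c)

def generate(prompt, max_words=20):
    words = prompt.split()
    while len(words) < max_words:
        candidates = follows.get((words[-2], words[-1]))
        if not candidates:
            break
        words.append(random.choice(candidates))
    return " ".join(words)

# A common prefix picks among several sources and mixes their statistics...
print(generate("the cat"))
# ...but a prefix unique to one source reproduces it word for word.
print(generate("dow jones"))
```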
Re: (Score:2)
We're not talking about GitHub Co-Pilot, we're talking about ChatGPT. Let's see it. Come on, how hard is it to be specific?
Re: (Score:2)
GitHub Co-Pilot and ChatGPT both
Perhaps learn to read.
Re: (Score:2)
And I'LL repeat, so YOU learn to read: we're NOT talking about GitHub Co-Pilot, so stop trying to introduce it into the conversation. Rather than trying this straw man / red herring approach, you have a simple task: present ACTUAL EXAMPLES of CHATGPT (not GitHub Co-Pilot) reproducing copyrighted material in a non-transformative manner.
Re: (Score:2)
The legal standard is how substantially transformative it is, the amount of reproduced materials, the goals of reproduction, and a number of other factors.
But let's see some examples. A claim has been made, let's see them.
Re: (Score:2)
(1) Does educational use get them an exemption from copyright rules? I don't think teachers are allowed to copy entire articles just because they are using them to educate children, but correct me if I'm wrong.
Yes, they can [university...fornia.edu], if the entire article is needed for an educational purpose.
Re: (Score:2)
"Does educational use get them an exemption from copyright rules?"
I'd have to think not. Otherwise, textbooks wouldn't cost hundreds of dollars since students would/could scan them and make free copies without consequence.
Re: (Score:2)
Of course technically speaking it's not the same thing but if a professor assigned their article as class reading I wonder if they would be upset?
Would they be upset if students or anyone else for that matter accepted and explored their ideas? We generally cite facts and opinions from other authors, but we don't generally cite broad, widely reproduced ideas and concepts. You would not put a citation after the sentence 'Many people believe the ability to set interest rates is an important monetary tool.' you
Re: (Score:2)
But what part of GPT makes sure that it isn't paraphrasing, or that it limits itself to "general knowledge"?
Quite to the contrary it very well may copy niche information or expression so the argument could be made that it can violate copyright and/or anti-plagiarism standards.
This is an area where lawmakers will need to legislate, and in the present climate where politicians see tech companies "not doing enough for X or Y", I don't see it going well for tech companies.
Re:OpenAI has at least 3 fair use claims (Score:5, Insightful)
I wonder where those journalists got their writing ideas from?
Was it from reading other journalists work?
I'm sure they didn't learn in a vacuum.
Re: (Score:3)
Exactly. If I learn what certain financial terms mean by reading the Financial Times, I don't owe the publisher for training me.
Re: (Score:1)
Apparently, I owe tons of publishers extra money for reading their books, journals, websites and using that accumulated knowledge in my career. Go figure. Wish I knew earlier, wouldn't have read so much.
Also a cautionary tale for teachers if they use any online resources or books to teach students.
A thought about teaching it (Score:2)
Speaking of education, if someone wants to learn a skill, they need a teacher who is usually paid unless they're intelligent enough to deduce the process themselves.
An AI that doesn't know anything needs training because it can't teach itself, thus it also needs a teacher, which are in this case the writers of articles.
Re: (Score:3)
Speaking of education, if someone wants to learn a skill, they need a teacher who is usually paid unless they're intelligent enough to deduce the process themselves.
If that teacher makes you read a load of history books and you go on to become a history teacher, do you have to pay him/her and all the book authors royalties for all the things they taught you?
Does everybody pay royalties to their university after they get a job and start earning money?
Re: (Score:3)
You don't even need to get to fair use. Copyright protects a set of exclusive rights: the right to copy, to prepare derivative works, to distribute and to display publicly. If you're not doing one of those, then you're not infringing. Further, copyright only protects *expression* -- the underlying *ideas* are not protected. Derivative works are things like translations, screenplays, and so on. ChatGPT isn't doing that, AND it's not re-using the original expression.
Re: (Score:2)
copyright only protects *expression* -- the underlying *ideas* are not protected. Derivative works are things like translations, screenplays, and so on.
Derivative works are defined by a combination of their origin, and recognizable elements from it. If ChatGPT is trained on something, and then produces something literally indistinguishable from that thing, then there is an argument to be made that it's violating copyright. And it's actually capable of doing that, because unlike the image-generating diffusion models, it's producing text output.
Re: (Score:2)
What's more, from what I understand, unlike a corpus, LLMs don't retain the copies of the texts in the corpus, only the resulting processing parameters after training. I
Re: (Score:2)
The problem is that modern language models such as GPT don't just retain the rules of grammar, but actually retain the data, the style and expression of millions of articles. And that's why they can reproduce some of it verbatim if it's rare enough.
Re: (Score:2)
And they have one very strong argument against them: They copied the copyrighted works, and that copying diminished the value of the copyrighted works.
That they violated copyright is not disputed. That the use was "fair use" is what OpenAI will have to establish to defend themselves against the claim of copyright infringement. Since OpenAI can negatively affect the value of the original authors' works to such a large degree, effectively making it worthless, OpenAI has a rather steep hill to climb.
Re: (Score:2)
OpenAI's best argument is Authors Guild, Inc. v. Google, Inc.
In that court case, Google, through the Google Books website, was copying books in their entirety, saving them in their entirety into a database, letting users search the books for free without notice or license to the publishers or authors, showing users image snippets of the published books that matched their search criteria, and then offering to sell them the books.
Courts decided that was totally cool.
Re: (Score:2)
No, that's not what happened. The deals you mention that google tried to cut fell through (and probably would have been met with major anti-trust hurdles). The lawsuits went to trial and Google won everything.
Re: OpenAI has at least 3 fair use claims (Score:2)
Because Google only showed 1 image from a book. It was narrow enough not to be a problem, because a book is 300+ pages so you're impacting less than 1% of the piece.
ChatGPT, given enough time and prompts, can reproduce pretty much any article; if the limits are removed, it could probably even reproduce the books it has stored to a very high degree of accuracy. The question is whether rewording and reproducing an article on any particular subject is copyright infringement (give me the opinion on cert
Re: (Score:2)
ChatGPT cannot reproduce any books because that's not how the data is stored in its database. There are no complete works stored. It stores and processes the data through patterns. When you type in a prompt, it runs an algorithm over its learned patterns. This is why it has such a hard time reporting correct sources, and why asking for sources is the easiest way for teachers to catch students using ChatGPT for essays. No sources of any kind are stored to be referenced.
Re: (Score:2)
I don't believe that, it stores relations between words, which is what a book is. I don't write books, but I have seen it produce code and text verbatim from somewhere on the Internet that I myself published. Others claim it can produce similar texts to existing text.
Now, it may not 'knowingly' have done that, it simply stores the relations between search terms, and if your search term is niche or precise enough, that relation is close to 1-on-1, so it reproduces that 1 relationship it "knows" in its database
Re: (Score:2)
And that's what they want. They want Google to pay them for linking to their articles, and they want twitter to pay them for linking to their articles behind paywalls, which makes their links as good as spam.
Burn down any media that tries to have it both ways, a subscription model and an ad model.
I'm not saying you shouldn't pay for WaPo and NYT, but these companies kinda have no trust to them since they put the article behind a paywall, thus allowing misinformation to flow around it, since free news sites
Factual or just word prediction? (Score:4, Interesting)
Does this AI expose whether its results are factual or just derived from its language-learning algorithm? My understanding is you cannot take anything it says, even about itself, as being "real".
Re: (Score:2)
I'm surprised that Microsoft didn't make it include sources when they integrated it into Bing. I know nothing about the ChatGPT API so maybe it can't simply supply a list of sources.
Re:Factual or just word prediction? (Score:5, Insightful)
In some specific cases a human can identify what in the training set leads to certain behaviors. For instance, an image of a riot involving tear gas might be identified as "water buffalo" by an ImageNet-trained classifier. Why? Because many pictures of water buffalo involve a dusty haze. So we learn that "dusty haze" was learned by the network to be a feature of "water buffalo". A human can figure this out after the fact, but it is impossible for the network to articulate the concept of "dusty haze", since that was never a training input, much less to trace the decision back through the 21 DNN layers to the 50 of 100 water buffalo training images that contained dusty haze.
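A rough sketch of how a human probes that after the fact is occlusion sensitivity: cover one patch of the image at a time and watch how much the "water buffalo" score drops. The `predict` function and the class index below are placeholders, not any real framework's API.

```python
# Rough occlusion-sensitivity sketch: gray out patches and record how much the
# target class score drops. `predict` is a placeholder for whatever classifier
# you are probing; it should return class probabilities for an HxWxC array.
import numpy as np

def occlusion_map(image, predict, class_idx, patch=32, fill=0.5):
    h, w = image.shape[:2]
    baseline = predict(image)[class_idx]
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = fill    # gray out one patch
            drop = baseline - predict(occluded)[class_idx]
            heat[i // patch, j // patch] = drop           # big drop = important region
    return heat

# If the hazy background patches show the biggest drops, the model was leaning
# on "dusty haze" rather than on the animal itself.
```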
Re: Factual or just word prediction? (Score:2)
It is definitely possible, you can assign labels to your sources and make sure the labels come through, that's how we debug our neural nets. Obviously, as you say, the labels will end up being a long list for anything complex but you could limit to the first 5 highest scoring hits and then give the option to scroll down. It will at least give some transparency.
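Something like the toy sketch below is all that's meant by "keep the labels attached": score the output against labelled source documents and surface the top few as citations. It's plain word overlap over made-up labels, and has nothing to do with how ChatGPT is actually built.

```python
# Toy sketch of "keep the source labels attached": score an answer against the
# labelled documents it could have drawn on and surface the top k as citations.
# Plain bag-of-words overlap here; a real system would use embeddings.
from collections import Counter

documents = {
    "wsj-2023-02-17":  "openai trains chatgpt on news articles without paying",
    "nyt-2023-01-05":  "regulators weigh new rules for large language models",
    "reuters-markets": "stocks fall as interest rates rise again",
    # one entry per licensed/labelled source
}

def top_sources(answer, documents, k=5):
    answer_words = Counter(answer.lower().split())
    scores = {}
    for label, text in documents.items():
        doc_words = Counter(text.lower().split())
        # score = number of overlapping word occurrences
        scores[label] = sum((answer_words & doc_words).values())
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    return ranked[:k]

print(top_sources("chatgpt was trained on news articles", documents))
# e.g. [('wsj-2023-02-17', 4), ('nyt-2023-01-05', 0), ('reuters-markets', 0)]
```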
I think I see the problem (Score:4, Insightful)
They're using mainstream news media to train chatbots? No wonder they're turning into something like David 8 from Prometheus.
How is this different than search engines? (Score:4, Interesting)
Re: (Score:1)
Search engines index their data and link to them. ChatGPT takes their copyrighted data and produces its own derivative of it without any reference to the original.
What happens if you train it on copyrighted code, which is publicly available under a specific license, and then have the "AI" produce identical code without any understanding of why the code is doing specific actions?
Another cash grab by the failing news companies (Score:1, Troll)
How is this different than humans? (Score:3)
Humans, like WSJ journalists, do the same as chatbots, sometimes even with better skills.
These companies don't own the news... (Score:3, Insightful)
These companies don't own the news... they never have. They used to control the distribution of the news content in the days when owning a printing press, controlling an efficient physical delivery mechanism and having access to prime space on street poles, grocery stores and busy sidewalks was a prerequisite to being able to operate a successful newspaper or magazine business. Of course I am not forgetting the importance of the people who ran around and collected and bundled all this content into interesting news stories that we would want to read... but you know what I mean.
I thought that this had been settled in the late nineties (or was it the early noughts) when these businesses realised that the internet was a way more efficient and faster distributor of the same time-sensitive content (ahem... news) that they were peddling a day later. I remember almost every news outlet (even CNN) experimented with trying to turn ordinary joe (I should also say "and jane" just to be politically correct) into a citizen journalist - they were usually the source of the story to begin with. But soon hundreds of thousands of people realised that they could tell their own story... skip the middleman. Ahhhh... I miss the internet chaos of the nineties, when blogs were cool and awesome.
I think that the few news businesses that hire passionate journalists (who need to be more than just content gatherers) who do real journalism (that stuff that is taught in journalism schools) - as opposed to just reporting and distributing the news - are thriving. Why... because... well... the internet thingie got out of hand... who knows what is real and what is fake these days - it's just your interpretation of what you think are the facts vs my interpretation of what I think are the facts.
Most of these companies are in the business of vacuuming up new interesting content that is already out there, curating it to fit whatever bias their audience holds, calling it news and shoveling it out. Just because the public is willing to pay for that curated delivery (because heaven forbid that we should inadvertently come across a viewpoint from a source that is contradictory to our own bias) does not give them ownership of the content... just its curation and delivery.
Just my 2 cents worth.
Re: (Score:2)
They don't own the news, but they do own the stories written about the news. Under copyright, any written work, regardless of topic, is copyrighted unless the author specifically states that it is public domain.
Death Throes (Score:2)
There's no difference from a human reading news to gain knowledge - which is not something that's additionally paid for.
If OpenAI is using NYT to learn, they absolutely should pay the $14/mo for the net to have a subscription. That's obviously fair.
They should also mark NYT as adversarial in their GAN and teach the AI how Fake News is lies and propaganda.
If you tell it that NYT got a Pulitzer for covering up the Holodomor perhaps it can find patterns in other coverups over time that we've never discovered.
That's the sys
How else is it going to link to your articles? (Score:3)
Me too! (Score:2)
"Major news outlets have begun criticizing OpenAI and its ChatGPT software, saying the lab is using their articles to train its artificial intelligence tool without paying them."
I train my brain the very same way!
Fair Use (Score:2)
Section 107 calls for consideration of the following four factors in evaluating a question of fair use:
Purpose and character of the use, including whether the use is of a commercial nature or is for nonprofit educational purposes: Courts look at how the party claiming fair use is using the copyrighted work, and are more likely to find that nonprofit educational and noncommercial uses are fair. This does not mean, however, that all nonprofit education and noncommercial uses are fair and all commercial uses are not fair
Schools use newspapers to train *humans*. (Score:4, Informative)
Newspapers are used to train humans all the time. Schools use them; always have. Using newspaper contents to train AIs is no different, legally.
Once I buy their paper (or get access to it online), I'm free to do anything I want with it, excepting only copying it and giving it to others (because there's copyright law about that). There are no other legal restrictions on use.
ChatGPT trained by ChatGPT? (Score:2)
With all the hype about various media bodies using ChatGPT to write articles, are we now going to see ChatGPT being trained by ChatGPT output?
What horrors will be produced by this incestuous vicious circle?
Facts are not copyrightable (Score:3)
It's pretty clear that ChatGPT is not outputting these news articles any more than you or I do when describing what we heard recently -- using _completely_ different rhetoric.
Facts are not copyrightable. The rules of a game are not copyrightable. The presentation and representation are not copyrightable.
It would be a hard sell to say that the transformation of the facts into another new output is copyrightable, otherwise everything that you or I say or think or put out is a derivation of something that someone, sometime had a copyright to.
Re: (Score:2)
No, but text written about facts certainly is copyrightable. If ChatGPT is regurgitating copyrighted text written by news or other publishers, they might be in violation of copyright.
What happened to the paywalls? (Score:2)
Some of the companies complaining have paywalls, so why aren't they already covered? Seems like more of a problem for advertising-supported content, since the AIs probably don't have any spending money.
First come the beggars, then will the luddites (Score:1)
Apparently AIs Need Citations (Score:2)
Re: (Score:2)
That is a challenge to be resolved.
Right now it seems quite difficult. Maybe the back propagation would need to keep track of what % each article added to each weight... that would be very hard and space-intensive.
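At toy scale the bookkeeping is easy to picture (sketch below, with a made-up linear model and fake data): every SGD update comes from one example, so you can bank that update against the example's source article. The hard part is exactly the one noted above: for a model with billions of weights and millions of sources, storing those traces explodes.

```python
# Toy sketch of "track what % each article contributed to each weight", on a
# linear model small enough that the bookkeeping is actually feasible. Every
# SGD update comes from one example, so we credit |delta_w| to its article.
import numpy as np

rng = np.random.default_rng(0)
n_features = 4

# (source_article, features, target) triples -- all made up.
data = [
    ("wsj",     rng.normal(size=n_features),  1.0),
    ("nyt",     rng.normal(size=n_features), -1.0),
    ("reuters", rng.normal(size=n_features),  0.5),
] * 50

w = np.zeros(n_features)
credit = {src: np.zeros(n_features) for src, _, _ in data}
lr = 0.01

for epoch in range(20):
    for src, x, y in data:
        grad = 2 * (w @ x - y) * x        # squared-error gradient for this example
        delta = -lr * grad
        w += delta
        credit[src] += np.abs(delta)       # attribute this update to its article

total = sum(credit.values())
for src, c in credit.items():
    share = 100 * c / np.maximum(total, 1e-12)
    print(src, np.round(share, 1))         # % of each weight's movement per article
```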
Re: (Score:2)
That's exactly the problem.. in some cases it will reproduce or paraphrase the source without attribution.
the internet is lost (Score:1)
Did they pay for a subscription? (Score:2)