Comment Re:Is paraphrasing copyright infringement? (Score 1) 33

If I read copyrighted material and then write my own story using information in what I read, is that infringement?

It depends. If you wrote the same story with just the names of characters and places changed, then yes. In general, the question is how similar your story is to the original.

Do Cliffs Notes have to pay for the rights to publish notes on other authors' books?

No. Their notes were written by them and so are copyrighted by them. If they give excerpts, that falls under fair use. They can't quote the entire book, however, even if they give notes on every paragraph.

Isn't this what ChatGPT does?

There are two parts to this: whether the copy ChatGPT was trained on was a legally obtained copy in the first place, and whether ChatGPT can be induced to regurgitate verbatim copies. The NYT demonstrated that, with the right prompting, ChatGPT can be induced to regurgitate copies of NYT articles.

Once I learn about something I am able to tell people what I have learned. Am I breaking the law if I tell my grandkids about a story I read?

It depends how you tell them. If you give them, say, the highlights, then no. If you write out a copy and give them the copy, then yes. Also note that, assuming you either paid for your copy of the book or borrowed it from a library, the copy you used was legally authorized to begin with.

I just don't see a lot of difference between asking an AI a question and having it tell me what it has learned, and asking a human and having them tell me what they learned, assuming they were trained on the same materials.

In many cases, ChatGPT used unauthorized copies to begin with. At that point, they're already guilty of copyright infringement. If the book the human read was an unauthorized copy, then that human is guilty of copyright infringement. Whether they tell you anything is irrelevant: they're already guilty.

Assuming both legally obtained the info they were trained on.

Bingo.

Comment Re:Is paraphrasing copyright infringement? (Score 1) 33

ChatGPT doesn't copy most works

It copies every work verbatim. It has to have a copy in the first place to train from. If it made its copy from an unauthorized copy to begin with, then it's already guilty of copyright infringement. At this point, the training is irrelevant.

For example, if its web crawler came upon an unauthorized PDF of a book that somebody uploaded to some web site, then the uploader and every downloader has committed copyright infringement. What they do with the copies is irrelevant: they're already guilty.
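To make that concrete, here is a minimal sketch in Python of what any fetch-for-training step boils down to (the URL and filenames are made up for illustration, not anyone's actual crawler): the bytes land on disk as a copy before training ever sees them.

    import os
    import urllib.request

    # Hypothetical URL standing in for any unauthorized PDF a crawler might stumble on.
    url = "http://example.com/uploads/some-book.pdf"

    # Fetching the file necessarily pulls a copy into memory...
    with urllib.request.urlopen(url) as resp:
        data = resp.read()

    # ...and saving it for a training pipeline writes a second copy to disk.
    # Whatever happens later (training or not), the copies have already been made.
    os.makedirs("training_corpus", exist_ok=True)
    with open("training_corpus/some-book.pdf", "wb") as f:
        f.write(data)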

Comment Re:Violation of civil liberties (Score 1) 15

I don't believe the government has the right to ban me from harming myself in their eyes.

If you signed a contract agreeing either to forgo any treatment for health issues arising from your choice to smoke, or to pay for all such treatment out of your own pocket, then fine. Otherwise, and especially if you live in a country with socialized medicine, everyone else is paying for your treatment. So to minimize everyone's expenses for your completely avoidable health issues, the government is doing what it's doing. And it hasn't stopped you from harming yourself; it just put images and words on the package.

Comment Depends on the discipline (Score 1) 78

Good old-fashioned AI used to be hands-on - your dissertation code had to at least work for the examples in your thesis, and it was under development long enough that it had to survive OS and language updates.

Being wary of code by theoreticians is definitely valid - I believe it was Knuth who said something like "I have only proven this code correct, not tested it".

Comment Systems like LLMs are amplifiers (Score 1) 52

I first heard this comparison back when IDEs were young (kudos to Larry Masinter, at Xerox PARC at the time).

Amplifiers don't really know or care what they are amplifying.
If you tell them to create good, bad, immoral, or dangerous code, they'll try to comply.
Laws against bad uses of LLMs just make them illegal - they don't make them impossible.

Mediocre programmers with IDE/LLM support will create reams of mediocre code, at best.

Comment You pays your $$, you takes your choice (Score 0) 169

I have a paid subscription to the Washington Post (I live in the DC suburbs), so I get their content sans paywall. They let me create a few non-paywall links per month, and I share them when I see something the rest of the net should see without the paywall.

I pay Reddit annually, and I get their content sans ads. Whenever I see Reddit before I log in, I want to go wash my eyes out.

The real problem is I don't want to spend the money for a full subscription to every news source I read occasionally.

If there were a way to pay, say, $10/month for 30 links from a basket of paywalled news sources, I'd be on it in a heartbeat.

Comment They all suck at some important things (Score 4, Insightful) 100

My wife is a sign language interpreter and does a lot of remote work, especially since COVID.

To handle a meeting on Teams, sign language interpreters need to pin two video streams - the current speaker, and the deaf client(s).

It is essentially impossible to do this in Teams - interpreters routinely open up a separate Zoom session just for the interpretation.

You'd think the inability to do this would be an ADA violation...
