
Comment What is the purpose of a journalist? (Score 1) 21

It depends on the audience who will read the articles that s/he writes.

If it is clickbait-chasing nonsense about pop singers' or film stars' latest affairs or wardrobe 'malfunctions', then the readers are unlikely to be too critical unless you do not have enough pictures of naked flesh. Your editor & publisher will be happiest if you write lots of articles and will care little if it is slop.

If you are writing about something supposed to be factual, e.g. science, finance or politics, then the articles should be well researched & checked, and any uncertainties noted in the article. You will be rewarded and applauded by your readers for insight, good context & few errors. Your editor/publisher will want many articles but accept that quality takes time. This is not quite true: if your publisher has a strong political bias then you will be expected to follow that bias and invent facts to support it, i.e. lie and produce fake news. So as long as AI slop has the correct bias, many will just publish it. Journalists of integrity will not want to work at such a publisher... however, journalists do have mortgages & kids, so some 'bend' their professionalism.

Comment Prior art from expired patents (Score 1) 44

U.S. Patent No. 10,855,990
This is old technology that was used extensively in JPEG and JPEG2000. All of those patents have long since expired. There is no novel approach in U.S. Patent No. 10,855,990. More specifically, all the claims they're making in terms of the specific violations of this patent were covered in ISO/IEC 13818-2 (MPEG-2 Video), and ITU-T H.263 (https://www.itu.int/rec/T-REC-H.263-200501-I/en) hammers the last nails in the coffin.

I have read and reviewed the claim and the patent, and the technologies presented in 10,855,990 are just reiterations of earlier work with scrambled wording to try to give a new name to variable-sized macroblocks. The novel approach implemented in H.264, H.265 and H.266 was the method of selecting which specific pattern of "coding units" to apply. I have not checked for reuse of this, but it appears in neither 10,855,990 nor the claim, so I believe they checked and found that there was no violation.

Oh, and to be clear, they're completely fixated on sharing coding parameters between blocks. Their approach is almost, barely, kinda novel, but I'd make a strong argument that it is obvious: it's basically just macroblock grouping, which has been part of standard video coding as far back as MPEG-1 and ASF. And the method applied could easily be argued to be an almost direct copy of LZW compression.
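For readers unfamiliar with the LZW comparison: the defining trick of LZW is that the decoder rebuilds the same string dictionary the encoder built, so later symbols reuse coding decisions made for earlier data, which is the "sharing parameters learned from earlier blocks" pattern being likened to it. A minimal illustrative sketch (not taken from the patent or any codec):

```python
def lzw_compress(data: str) -> list[int]:
    """Toy LZW encoder: the dictionary of previously seen strings grows
    as input is read, so repeated material is coded by reference to
    earlier coding decisions."""
    table = {chr(i): i for i in range(256)}  # seed with single-byte codes
    next_code = 256
    current = ""
    output = []
    for ch in data:
        candidate = current + ch
        if candidate in table:
            current = candidate              # keep extending the match
        else:
            output.append(table[current])    # emit code for longest match
            table[candidate] = next_code     # learn a new string
            next_code += 1
            current = ch
    if current:
        output.append(table[current])
    return output

# "ABABABA" compresses to [65, 66, 256, 258]: the codes 256 and 258
# refer back to strings the encoder learned earlier in the same stream.
```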

U.S. Patent No. 9,924,193
I couldn't find a copy of the original text (not wearing my glasses), and frankly their description was so TL;DR that they just started making things up. OK, here's the argument against this: this has been a core feature of all DWT-based compression methods since the start. It was even the reason we used DWT. JPEG2000 is almost entirely based on what they're claiming here. If I spent an hour on this one, I could tear it to pieces without even trying. And skip mode... what in the world do you think something like Google Earth is?
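The reason resolution scalability is "free" in DWT codecs is visible in even the simplest wavelet, the Haar transform: each level splits the signal into a half-size approximation band plus detail bands, so decoding only the approximation bands yields a coarser image without touching the rest of the bitstream. A toy 1-D sketch (illustrative, not JPEG2000's actual 9/7 or 5/3 filters):

```python
def haar_1d(signal: list[float]) -> tuple[list[float], list[float]]:
    """One level of a 1-D Haar wavelet transform.
    Returns (approx, detail): the approximation band is a half-resolution
    version of the input, which is why DWT codecs can serve coarser
    resolutions by simply skipping the detail bands."""
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return approx, detail

# [2, 4, 6, 8] -> approx [3.0, 7.0] (a valid half-size image on its own),
# detail [-1.0, -1.0] (only needed to reconstruct full resolution).
```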

U.S. Patent No. 9,596,469
Encoding data in a way that would allow independent parallel decoding of different portions, bands, blocks, whatever of the image... blah blah. Back to JPEG2000 and Google Earth and things like that. The first time I saw this personally was at Disney's Epcot Center, where a Cray supercomputer was on display showing off a Google Earth-like experience. The computer was streaming data at different spatial resolutions in parallel to hundreds of CPU cores, which were all decoding and texturing. The number of patents filed and expired on this one technology is immense. I haven't dug up specifics, but I can guarantee that the JPEG2000 patent pool clearly invalidates this.
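The core idea being claimed is simply that if each tile's bitstream is self-contained (no decoder state shared between tiles), the tiles can be decoded in any order on any number of workers. A minimal sketch with a stand-in "decoder" (hypothetical function names, not from any real codec API):

```python
from concurrent.futures import ThreadPoolExecutor

def decode_tile(tile_bytes: bytes) -> list[int]:
    # Stand-in for a real entropy decoder. The key property is that each
    # tile's bitstream is self-contained: decoding it needs no state from
    # any other tile.
    return list(tile_bytes)

def decode_image(tiles: list[bytes]) -> list[list[int]]:
    # Because the tiles are independent, they can be handed to any number
    # of workers in any order; results come back in tile order.
    with ThreadPoolExecutor() as pool:
        return list(pool.map(decode_tile, tiles))
```

This is the same pattern whether the "tiles" are JPEG2000 tiles, slices in a video codec, or map chunks streamed to a renderer.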

I just doom-scrolled through the rest of this. I highly doubt I'm the only signal-processing and video/image-compression historian out there, and I'm guessing that LLMs could easily tear this crap apart too. But I'd be willing to make a few bucks as an advisor on this. I've worked with, against, for, or on every technology being claimed here, and I did so 15-20 years ago, when the tech was already old.

Submission + - Nissan Leaf drivers voice anger over app shutdown (theguardian.com)

Alain Williams writes: Owners of some Nissan Leaf electric vehicles are angry after the carmaker announced it would shut down an app that lets them remotely control battery charging and other functions.

Drivers of Leaf cars made before May 2019 and the e-NV200 van (produced until 2022) have been told that the NissanConnect EV app linked to their vehicles will “cease operation” from 30 March. This means they will lose remote services, including turning on the heating, and some map features.

Experts said they expected other drivers to experience similar problems in future as “connected cars” – vehicles that can connect to the internet – get older.

Submission + - Grandma put in jail because of "AI" hallucinations is "trying to rebuild her life" (theguardian.com)

Mr. Dollar Ton writes: Angela Lipps, 50, spent nearly six months in jail after Fargo police identified her as a suspect in an organized bank fraud case using facial recognition software, according to south-east North Dakota news outlet InForum. Lipps told the outlet she had never been to North Dakota and did not commit the crimes.

Lipps is now back home but says the experience has had lasting consequences. While jailed and unable to pay bills, Lipps lost her home, her car and her dog, she said. She also told WDAY News no one from the Fargo police department had apologized.

This isn't the first time "AI" and lazy police together have put innocent people away, concludes the article.

Comment Re:Good. (Score 1) 36

If you visit a web site then you expect to pay for the bandwidth to download the HTML, videos, etc.

If you have a PC you expect to pay for the bandwidth to download system updates; you do not expect/want it to download adverts to be pushed onto your screen, stopping you from doing what you bought the PC for -- this is what Microsoft does.

Comment LLM is a programming language (Score 1) 47

The prompts provided to the LLM should be copyrightable as code and the code generated should be protected the same way compiled or intermediate code is protected.

The issue at hand is how the model was trained, and the model trainer should make clear to users whether the user or the trainer bears the liability for using other people's code to train the model.

That said, we should soon be seeing models that are trained using training courses rather than massive amounts of code. Once that happens, the models will use agents to search the web and learn from Stack Exchange or other sources how to solve problems the same way a human would. Of course, when a human learns how to do something by searching the web, simply copy/pasting other people's code can be an issue, and we have to read license restrictions. But learning how someone did something and doing it ourselves is generally safe. If a model reads an article while searching and then learns how to do something, it should also be protected, as long as it does not copy the code verbatim.

We have a lot of legalities to deal with.

1) Massive models are going to die. I don't know whether it's worth wasting time on nonsense like OpenAI and Anthropic; they won't even be in business by the time the lawsuits come through.

2) Agentic models will be the focus of the future because they work more like humans: we give them the base information needed to learn, and they find the answers themselves. Using cloud-based solutions where the hosting companies keep massive amounts of data locally cached so the model can research faster could be an issue. But these will cost money and really won't do more than local AI will; they'll just be faster. For legality's sake you'd want to avoid the cloud models, since caching can be seen as theft. Local AI is much different. With agentic solutions, I think most legal issues come back to the same copyright issues we have always had. We just have to make sure the models we use follow the copyright rules: if it's allowed, copy/paste; if it's not, then learn how it's done and make your own solution. In a perfect world, we'd then have a Stack Exchange or alternative GitHub for AI-generated code that different LLMs could use to learn from each other. The problem then becomes whether that would be seen as training a large model and whether they are in violation of copyright again.
