
Comment Prior art from expired patents (Score 1) 44

U.S. Patent No. 10,855,990
This is old technology that was used extensively in JPEG and JPEG 2000, and all of those patents have long since expired. There is no novel approach in U.S. Patent No. 10,855,990. More specifically, all of the claims they're making in terms of specific violations of this patent were covered by ISO/IEC 13818-2 (ITU-T H.262 / MPEG-2 video), and H.263 (https://www.itu.int/rec/T-REC-H.263-200501-I/en) hammers the last nails in the coffin.

I have read and reviewed the claim and the patent, and the technologies presented in 10,855,990 are just reiterations of earlier work with scrambled wording that tries to give a new name to variable-sized macroblocks. The genuinely novel approach in H.264, H.265, and H.266 was the method of selecting which specific pattern of "coding units" to apply. I have not checked for reuse of that, but it appears in neither 10,855,990 nor the claim, so I believe they checked and found there was no violation.

Oh, and to be clear, they're completely fixated on sharing coding parameters between blocks. Their approach is almost, barely, kinda novel, but I'd make a strong argument that it's obvious: it's basically just macroblock grouping, which has been part of standard video coding as far back as MPEG-1 and ASF. And the method applied could easily be argued to be an almost direct copy of LZW compression.
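For readers who haven't looked inside a modern codec: the "variable-sized macroblock" idea is just recursive quadtree partitioning of a frame into coding units. Here's a toy sketch of the concept (my own illustration, not anything from the patent or a real encoder), splitting wherever local variance is high:

```python
import numpy as np

def split_cu(frame, x, y, size, min_size=8, var_thresh=100.0):
    """Recursively quadtree-split a region into coding-unit leaves.

    A region splits into four quadrants whenever its pixel variance
    exceeds var_thresh, down to min_size; returns (x, y, size) tuples.
    """
    region = frame[y:y + size, x:x + size]
    if size <= min_size or np.var(region) <= var_thresh:
        return [(x, y, size)]
    half = size // 2
    leaves = []
    for dy in (0, half):
        for dx in (0, half):
            leaves += split_cu(frame, x + dx, y + dy, half,
                               min_size, var_thresh)
    return leaves

flat = np.zeros((64, 64))                                 # smooth area: one big CU
noisy = np.random.default_rng(0).normal(0, 50, (64, 64))  # busy area: many small CUs
print(len(split_cu(flat, 0, 0, 64)), len(split_cu(noisy, 0, 0, 64)))
```

Smooth regions stay as one large unit and detailed regions shatter into small ones; a real encoder picks the split by rate-distortion cost rather than raw variance, but the tree structure is the same.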

U.S. Patent No. 9,924,193
I couldn't find a copy of the original text (not wearing my glasses), and frankly their description was so TL;DR that it reads like they just started making things up. OK, here's the argument against this one: this has been a core feature of all DWT-based compression methods since the start. It was even the reason we used the DWT in the first place. JPEG 2000 is almost entirely based on what they're claiming here. If I spent an hour on this one, I could tear it to pieces without even trying. And skip mode... what in the world do you think something like Google Earth is?
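To make the DWT point concrete, here's a minimal one-level 2D Haar transform (my own sketch; JPEG 2000 actually uses the 5/3 and 9/7 filter banks, but the multiresolution structure is identical). The LL band alone is already a quarter-resolution preview, and near-zero detail subbands can simply be skipped:

```python
import numpy as np

def haar2d(img):
    """One level of a 2D Haar wavelet transform.

    Returns the LL (coarse) approximation plus the LH/HL/HH detail
    subbands: the multiresolution pyramid DWT codecs are built on.
    """
    a = img[0::2, 0::2].astype(float)  # top-left pixel of each 2x2 block
    b = img[0::2, 1::2].astype(float)  # top-right
    c = img[1::2, 0::2].astype(float)  # bottom-left
    d = img[1::2, 1::2].astype(float)  # bottom-right
    ll = (a + b + c + d) / 4           # quarter-resolution preview
    lh = (a + b - c - d) / 4           # horizontal detail
    hl = (a - b + c - d) / 4           # vertical detail
    hh = (a - b - c + d) / 4           # diagonal detail
    return ll, lh, hl, hh

ramp = np.arange(64, dtype=float).reshape(8, 8)
ll, lh, hl, hh = haar2d(ramp)
# A smooth gradient has zero diagonal detail, so that whole subband
# could be flagged "skipped" instead of coded.
```

Iterating the transform on LL gives the coarse-to-fine pyramid that things like Google Earth stream: send LL first, refine with detail bands only where they're non-trivial.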

U.S. Patent No. 9,596,469
Encoding data in a way that allows independent parallel decoding of different portions, bands, blocks, whatever of the image... blah blah. Back to JPEG 2000, Google Earth, and things like that. The first time I saw this personally was at Disney's Epcot Center, where a Cray supercomputer was on display showing off a Google Earth-like experience. The computer was streaming data at different spatial resolutions in parallel to hundreds of CPU cores, all of which were decoding and texturing. The number of patents filed and expired on this one technique is immense. I haven't dug up specifics, but I can guarantee that the JPEG 2000 patent pool clearly invalidates this.
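The property being claimed, tiles compressed independently so they can be decoded in parallel, is trivial to demonstrate. A hypothetical sketch using zlib as a stand-in codec:

```python
import zlib
from concurrent.futures import ThreadPoolExecutor

# Hypothetical tiled bitstream: each tile is compressed on its own,
# so any tile can be decoded without touching the others.
tiles = [bytes([i]) * 4096 for i in range(16)]
stream = [zlib.compress(t) for t in tiles]

def decode_tile(chunk):
    return zlib.decompress(chunk)

# Hand the independent tiles to a pool of workers; map preserves order.
with ThreadPoolExecutor(max_workers=8) as pool:
    decoded = list(pool.map(decode_tile, stream))

assert decoded == tiles
```

The only design requirement is that no tile's entropy-coding state depends on a neighbor, which is exactly what JPEG 2000 tiles (and later HEVC tiles/slices) provide.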

I just doom-scrolled through the rest of this. I highly doubt I'm the only signal processing and video/image compression historian out there, and I'm guessing that LLMs could easily tear this crap apart too. But I'd be willing to make a few bucks as an advisor on this. I've worked with, against, for, or on every technology being claimed here, and I did it 15-20 years ago... when the tech was already old.

Comment Anonymity (Score 1) 54

Lying to yourself is the biggest danger when trying to stay anonymous. With enough patterns to recognize, the idea that one can hide is a delusional take.

The only way to win is to run EVERYTHING you post through an AI that changes the tone and wording of all your online activity. But even that may itself be a lie.

Comment LLM is a programming language (Score 1) 47

The prompts provided to the LLM should be copyrightable as code and the code generated should be protected the same way compiled or intermediate code is protected.

The issue at hand is how the model was trained. The model trainer should make clear to the users of the LLM whether the user or the trainer bears the liability for using other people's code to train the model.

That said, we should soon be seeing models that are trained using training courses rather than massive amounts of code. Once that happens, the models will use agents to search the web and learn how to solve problems from Stack Exchange or other sources, the same way a human would. Of course, when a human learns how to do something by searching the web, simply copy/pasting other people's code can be an issue and we have to read license restrictions. But learning how someone did something and then doing it ourselves is generally safe. If a model reads an article while searching and then learns how to do something, that should also be protected, so long as nothing is copied verbatim.

We have a lot of legalities to deal with.

1) Massive models are going to die. I don't know whether it's worth wasting time on nonsense like OpenAI and Anthropic; they won't even be in business by the time the lawsuits come through.

2) Agentic models will be the focus of the future because they work more like humans: we give them the base information needed to learn, and they find the answers themselves. Using cloud-based solutions where the hosting companies keep massive amounts of data locally cached so the model can research faster could be an issue. These will cost money and really won't do more than local AI will; they'll just be faster. For legality's sake you'd want to avoid the cloud models, since caching can be seen as theft. Local AI is much different. With agentic solutions, I think most legal issues come back to the same copyright issues we've always had. We just have to make sure the models we use follow the copyright rules: if it's allowed, copy/paste; if it's not, learn how it's done and make your own solution. In a perfect world, we'd then have a Stack Exchange or alternate GitHub for AI-generated code that different LLMs could use to learn from each other. The problem then becomes whether that would be seen as training a large model, and whether they'd be in violation of copyright again.

Comment Re:Why do they do this? (Score 1) 13

I read that and was simultaneously laughing and angry. I'd call it a load of horseshit, but that would be insulting to horseshit.

What a bunch of windbaggery. Meaningless, feckless corporate speak.

We know. They know we know. We know they know we know. They don't care.

Nothing says "fuck you" like a "well-worded" press release. It was only missing the AI EM-DASH.

Comment Re: Sure Jan (Score 2) 113

That's kind of hitting on the point but not quite.

IBM mainframes are online transaction processing systems. The language hasn't been an issue for a long time, and it really doesn't take more than a few days for a programmer to learn COBOL. The problem is that JCL, RPG, CICS, DB2, and all the surrounding infrastructure are very confusing.

The uptime you're talking about comes from the fact that a mainframe is basically a special-purpose computer built specifically so you can suffer loss upon loss upon loss and it will still keep processing transactions. It has a specific workflow designed to support exactly this. I have implemented the exact same topologies and workflows using more modern tools on small computers, because frankly you don't need big computers for this. And it works: I can scale almost infinitely, lose all but 2 nodes, and it will keep chugging. But I'm not going to provide support on it, and I'm not going to back my project with insurance company support.
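A toy model of that property (my own sketch, nothing IBM-specific): replicate every transaction synchronously to all live nodes, keep processing as nodes randomly die, and any survivor still holds the full history.

```python
import random

rng = random.Random(7)
nodes = [[] for _ in range(10)]   # each node keeps a full replica of the log
alive = list(range(10))

for txn in range(100):
    for n in alive:
        nodes[n].append(txn)      # synchronous replication to every live node
    # Randomly kill a node between transactions, but never drop below 2.
    if len(alive) > 2 and rng.random() < 0.05:
        alive.remove(rng.choice(alive))

# Every surviving node still holds the complete transaction history.
assert all(nodes[n] == list(range(100)) for n in alive)
```

Real systems obviously add quorums, failover, and recovery for rejoining nodes; the point is just that the topology, not exotic hardware, is what buys the uptime.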

IBM is offering a lot more than computing in the price they charge. That's the real issue. And no other company has built that up.

Comment Metal etching is and always will be better. (Score 1) 51

It's a great experiment, but of course glass has some pretty obvious downsides. We tried something similar some years ago using a DVD laser and coated glass. It was extremely reliable and thankfully cheap. The problem is... how do you read it? There are no instructions.

The worst movie ever to hit a screen... Contact... should have taught everyone in this business the rules here.

"The medium is irrelevant if no one can read it"

and

"You have to leave Jodie Foster a key so she can build the machine"

So, the beauty of metal is that you can etch it cleanly at many levels. In fact, it makes an amazing analog medium. What you do is calculate the analog image to etch, then, using a laser and the scanning head from a laser printer, pass the scanner across the metal and print. A4 paper size, or a nice 210x210 mm square, is great for this.

What will you print? This is the interesting part: you'll layer many images on top of one another at different resolutions.

The first layer shows clearly, mostly in pictures readable with the naked eye, how the second layer can be read. It should be simple enough that the reader can build the reading machine by hand using simple tools, or read the layer directly by measuring. See, you're explaining binary and a simple table such as ASCII or a 5-bit subset of it. It should also contain a Rosetta stone to allow linguists to decipher the language.

The second layer explains that we're storing information that should last forever and is a history of our world, and it describes how to read the third layer, which is much denser and requires a more advanced machine. Because the materials required to build that machine may not be available, it also describes the method of storage and details the more advanced character set. It should explain that the card with all jagged edges is a dictionary containing 10,000 commonly used words and their definitions, and that each layer and card will carry partial dictionaries around its edges at third-layer density.

Density-wise, we've managed to simulate six layers of increasing density and complexity, allowing for about 1 TB per A4 sheet of 1 mm platinum (I'm sure other metals would work; it was just a good starting point). The layers, of course, last more or less time based on the frequency and detail of the layer, and I included extensive error detection and correction on the more detailed layers. The simulation suggested that we could store data for about a million years (no real way to test) and that most of it would remain readable.
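To give a flavor of the error correction involved, here's a Hamming(7,4) codec: each 4-bit nibble gains three parity bits and survives any single flipped bit. This is a minimal stand-in of my own; a real archival layer would use something far stronger, like Reed-Solomon with interleaving.

```python
def encode(nib):
    """Hamming(7,4): codeword layout [p1, p2, d1, p3, d2, d3, d4]."""
    d1, d2, d3, d4 = nib
    p1 = d1 ^ d2 ^ d4   # covers codeword positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4   # covers positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4   # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def decode(cw):
    """Correct up to one flipped bit, then return the 4 data bits."""
    p1, p2, d1, p3, d2, d3, d4 = cw
    s1 = p1 ^ d1 ^ d2 ^ d4
    s2 = p2 ^ d1 ^ d3 ^ d4
    s3 = p3 ^ d2 ^ d3 ^ d4
    pos = s1 + 2 * s2 + 4 * s3   # 1-based position of the bad bit, 0 if clean
    if pos:
        cw = cw[:]
        cw[pos - 1] ^= 1
    return [cw[2], cw[4], cw[5], cw[6]]

nib = [1, 0, 1, 1]
cw = encode(nib)
cw[3] ^= 1                # simulate one etching defect
assert decode(cw) == nib  # the original nibble is recovered
```

At 7 etched marks per 4 data bits the overhead is 75%, which is why the denser, more damage-prone layers are the ones that earn the heavier codes.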

Oh, and you could build the devices fairly inexpensively.

We've been testing this in three different nations' archival departments. It was really funny: when I met the people from the archives at a symposium, I asked why they weren't doing this. They were like "what?" and I just blurted out the design while chewing my lunch. I mean, I thought the idea obvious, and they had been wasting time on all kinds of silliness like what Microsoft was doing.

People need to watch more bad movies.

Comment Re:Fine (Score 1) 123

Dude, I like going to the shooting range. I think it's fun. I don't imagine I'll ever own a gun and I'm not particularly interested in making guns.
That said, I think the Second Amendment is definitely taken greatly out of context. We are honestly using the thoughts and ideas of people who lived 250 years ago to figure out what makes sense in terms of things like state-run militias. And we're also using their perspective on what makes sense when the entire population of the US was 2.5 million in 1776. 2.5 million people barely counts as a single city in 2026.
3 out of 10 people I know who own guns are precisely the people I would not like owning guns. These are the people who own guns because they're really into guns and they're really into gun ownership rights. I'm pretty ok with the other 7 out of 10. So, I think gun ownership should simply be highly regulated.
