Comment Re:From TFA: bit-exact or not? (Score 1) 174

Format converters usually modify the metadata of the file, often inadvertently.
The goal here is doing something lossless, bit exact, and reversible.
I haven't found any format converters that can do *that*.
If you're willing to settle for an approximately identical file, or only for identical pixels in the decoded video, that's one thing. But actually keeping every bit the same after a round trip is not something I know of, other than the losslessh264 Pied Piper project.
If you know of any, please do link them here--another tool might support a wider range of container formats, etc.
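To make the "bit exact and reversible" claim concrete, here's a minimal sketch of the round-trip test implied above. The `compress`/`decompress` functions are placeholders (not the project's real API) standing in for whatever recompressor is under test:

```python
def compress(data: bytes) -> bytes:
    # Placeholder: stands in for the real recompressor under test.
    return data

def decompress(data: bytes) -> bytes:
    # Placeholder: stands in for the inverse transform.
    return data

def round_trips_exactly(original: bytes) -> bool:
    # "Bit exact" means every byte survives the round trip,
    # metadata and container structure included.
    return decompress(compress(original)) == original

sample = bytes(range(256))
assert round_trips_exactly(sample)
```

A format converter that rewrites metadata would fail this check even if every decoded pixel matched.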

Comment Re:From TFA: bit-exact or not? (Score 2) 174

Fair, but not all streaming use cases require seeking within a 4 MB block (it depends on the application). For applications that do require sub-4 MB seeking, this won't be a good algorithm.

Also, there is a branch off the master repo that is exactly "H.264 extended to use similar resources" (branch name h264priors). So yes--great idea.
h264priors does pretty well, but not quite as well as the master branch--we're still getting to the bottom of how it does on a representative set of videos. This is a week's work so far, not a dissertation :-)

This won't work on text data, since it uses state deep within a video decoder that doesn't apply to text streams (such as which above-neighbor color presence bits are set).

Comment Re:To the devs (Score 1) 174

If you can find something that does better compression for H.264 videos please point me to it so the ideas can be merged--maybe something even better will come out of it!

I haven't even located a program that undoes the entropy coding of H.264 videos and redoes it exactly--our code does that, and it wasn't entirely trivial--but clearly not rocket science either.

The idea was to keep the algorithm fast by consulting only the decoder state, so it can compress (and decompress) in a single pass.
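The single-pass trick is that the model consults only state the decoder has already produced, so encoder and decoder can maintain identical models in lockstep. A toy illustration (not the project's actual model, which is keyed on H.264 decoder state rather than the previous bit):

```python
from collections import defaultdict

class AdaptiveModel:
    """Order-1 bit model: estimates P(bit = 1 | previous bit)."""
    def __init__(self):
        # Laplace-smoothed counts: [zeros_seen, ones_seen] per context.
        self.counts = defaultdict(lambda: [1, 1])

    def prob_one(self, context: int) -> float:
        zeros, ones = self.counts[context]
        return ones / (zeros + ones)

    def update(self, context: int, bit: int) -> None:
        self.counts[context][bit] += 1

# Encoder and decoder each run this same loop; because every update
# depends only on already-decoded symbols, no second pass is needed.
model = AdaptiveModel()
stream = [0, 1, 1, 0, 1, 1, 1, 0]
prev = 0
for bit in stream:
    p = model.prob_one(prev)  # probability handed to the arithmetic coder
    model.update(prev, bit)   # the decoder makes the identical update
    prev = bit
```

Swap "previous bit" for "decoder state" (neighboring macroblock coefficients, presence bits, etc.) and you have the shape of the approach.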

Again, if you have a link to a better algorithm, I'd love to hear about it. I learned a lot by reading through the packjpg source and conversing with its brilliant author, who hangs out on encode.ru, but I'm sure there are other valuable insights to be had--I just haven't located them yet. Feedback is useful, but I'd like constructive feedback with links to real, open resources.

Comment Re: No description (Score 3, Interesting) 174

It depends on whether the goal is a) marketing a hip algorithm or b) storing movies more efficiently.

Open source makes it easy for anyone to contribute to the algorithm.
The more people contribute, the better the code will be at compressing movies.

The better it is at compressing movies, the fewer resources it will take to store them.
This isn't a zero-sum game we're talking about: it's about making the world a more efficient place, one bit at a time.

But the bottom line is that it's a lot easier for many organizations to contribute to a code base when there are no strings attached.

Interest from an article like this can get people playing around with compression.
Maybe another 10% gain is right around the corner.

Comment Re:From TFA: bit-exact or not? (Score 5, Interesting) 174

No one has tried to undo and redo the compression of video files before. There are doom9 forum posts from 12 years ago still asking for this feature. I would say that losslessly shaving percentage points off real-world files is novel and important. And since it's open source, if someone else gets a bigger percentage improvement than we have, it could become as transformative as you describe.
But the point is that we have something that's currently useful. It's out there and ready to be improved. It's lossless. And it has never before been tried.
Also we did the entire algorithm in a week and aren't out of ideas!
Besides we never claimed it was a revolution--leave that sort of spin to the marketeers...
we're engineers trying to make things more efficient, a few percentage points at a time :-)

Comment Re:From TFA: bit-exact or not? (Score 5, Informative) 174

We also use arithmetic coding... but the gist of the improvement is that we have a much better model and a much better arithmetic coder (the one VP8 uses) than JPEG did back then. I tried putting the JPEG arithmetic coder into the algorithm and compression got several percent worse, because that table-driven arithmetic coder just isn't as accurate as the VP8 one, which keeps actual counts.
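Why count-keeping beats a coarse table: the coded length of a symbol is -log2(p) bits, so an estimate that tracks the true symbol distribution codes the stream shorter. A toy comparison (not either project's real coder--just a fixed probability versus a Laplace-smoothed count-based estimate, measured by ideal arithmetic-coded length):

```python
import math

def code_length(bits, prob_fn, update_fn=None):
    """Ideal arithmetic-coded length, in bits, of a binary stream."""
    total = 0.0
    for b in bits:
        p1 = prob_fn()  # model's current P(bit = 1)
        total += -math.log2(p1 if b else 1.0 - p1)
        if update_fn is not None:
            update_fn(b)
    return total

# A skewed source: mostly ones.
stream = [1] * 90 + [0] * 10

# Fixed probability: a stand-in for a coder stuck with an
# inaccurate table entry.
fixed_bits = code_length(stream, lambda: 0.5)

# Count-based estimate that adapts as symbols arrive.
counts = [1, 1]  # [zeros, ones], Laplace-smoothed

def prob_one():
    return counts[1] / (counts[0] + counts[1])

def observe(bit):
    counts[bit] += 1

adaptive_bits = code_length(stream, prob_one, observe)

# The adaptive estimate tracks the skew and codes the stream shorter.
assert adaptive_bits < fixed_bits
```

The real coders quantize and renormalize rather than carry floats, but the accuracy argument is the same: a few percent of coding efficiency rides on how closely the probability estimate follows the data.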

Comment Re:From TFA: bit-exact or not? (Score 5, Interesting) 174

Very insightful comments... let me go into detail.
I would say we have several advantages over H.264:
a) Pied Piper has more memory to work with than an embedded device (bigger model)
b) Pied Piper does not need to seek within a 4 Megabyte block (though it must be able to stream through that block on decode) whereas H.264 requires second-by-second seekability (more samples in model).
c) Pied Piper does not need to reset the decoder state on every few macroblocks (known as a slice), whereas H.264 requires this for hardware encoders (again, more samples per model).
d) Whereas H.264 was designed by a committee, Pied Piper had a team of 10 creative Dropboxers and guests spending a whole week identifying correlations between the decoder state and the future stream. That is a big source of creativity! (design by commit, not committee)
Our algorithm is, however, streaming--and it's happiest working with videos of 4 MB or bigger.
Our decode window is a single previous frame--so we can pull in past information about the same macroblock--but we only work in frequency space right now (there are some pixel-space branches just to play with, but none has yielded fruit so far), so the memory requirements are quite small.
We are doing this streaming with just the previous frame as our state--and that may matter--but we have a lot of work to do to get very big wins over CABAC. Still, given that we're not limited by the very small window and the encoding-parallelization requirements that CABAC is tied to, Pied Piper could well be useful soon!

Comment Re:From TFA: bit-exact or not? (Score 3, Informative) 174

This is an excellent summary and spot on! Our movie-reduction claims are still early. We'll need to find a more comprehensive set of H.264 movies to test on--and that requires the algorithm to understand B-slices and CABAC. Both are very close, but the code was only recently developed. We're confident about the JPEG size reduction, however. If you want to learn more about how the JPEG side works, you can start with Matthias Stirner's open source repository here: http://www.matthiasstirner.com... Our work on JPEGs is very similarly inspired, but is completely streaming and works on partial JPEGs as well.

Comment Re:No description (Score 3, Informative) 174

A layman's description of the algorithm is here: https://raw.githubusercontent.... It's bit exact and lossless. We haven't done comprehensive studies, but on the included test files it gets a 13% size reduction on H.264 movies. Similarly, the not-yet-committed but similar JPEG algorithm gets 22% on a comprehensive sample set of photos from a variety of devices.
