Comment Re:Anonymous cannot be destroyed (Score 1) 221
You should've wrote "raises the question"
You should have written "should've written."
Fortunately, ARM is little endian too
ARM can be configured to run as either little or big endian. Instructions are always stored in memory in little endian, but not necessarily so for data. Little endian is more common, but the ARM processor in the Wii, for example, is big endian.
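For anyone who wants to see what byte order actually means in memory, here's a quick Python sketch. It just shows the same 32-bit value laid out both ways, plus the byte order of whatever host you run it on:

```python
import struct
import sys

# The same 32-bit integer packed in both byte orders.
value = 0x12345678
little = struct.pack('<I', value)  # least significant byte first
big = struct.pack('>I', value)     # most significant byte first

print(little.hex())   # 78563412
print(big.hex())      # 12345678
print(sys.byteorder)  # byte order of the host CPU
```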
"Freedom of speech" only applies to Government's interference in forms of speech. [...]
No. I keep seeing this repeated, but it's absolutely not true. Constitutionally-protected free speech only applies to the government's interference in forms of speech. [...]
Look at the post he was replying to:
I'm usually against listening to any far winged nut job, but this is freedom of expression which falls under the first amendment. [...]
Sentex wasn't explicit about it, but in context it's obvious that he was talking about constitutionally-protected free speech.
> But video games just let the market decide what will happen if it is too expensive
Part of the way the market decides on the price is that people complain when it is too expensive.
I haven't looked at the official changelog or the code yet, but I'm just as confused as you about that item. More so, perhaps, as I have used IPv4 over FireWire with two Linux machines before. That was probably five years ago or so.
It's still missing an infrared port for transmitting phone numbers and such too, isn't it?
Merely resaving an image with a higher quality setting won't magically repopulate the high frequency coefficients, so I think this will still work in that case. Of course, if the image has gone through any sort of filtering or modification before being resaved, then all bets are off. But that's also beyond the scope of what the original poster asked for - he wanted a way to detect JPEG compression artifacts, not any extra filtering that might have also been applied.
JPEG works by breaking the image into 8x8 blocks and doing a two-dimensional discrete cosine transform on each of the color planes for each block. At this point, no information is lost (except possibly some slight inaccuracies converting from RGB to YCbCr, the color space JPEG uses). The step where the artifacts are introduced is the quantization of the coefficients. High frequency coefficients are considered less important and are quantized more than low frequency coefficients. The level of quantization is raised across the board to increase the level of compression.
Now, how is this useful? The reason heavy quantization results in higher compression is that the coefficients get smaller. In fact, many become zero, which is particularly good for compression - and the high frequency coefficients in particular tend towards zero. So partially decode the images and look at the DCT coefficients. The image with more zeroed high frequency coefficients is likely the lower quality one.
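A toy sketch of the idea in Python, using scipy's DCT on a synthetic image (a real check would pull the quantized coefficients out of the JPEG stream itself; the uniform quantization step and the "outside the top-left 4x4 = high frequency" cutoff here are my simplifications):

```python
import numpy as np
from scipy.fftpack import dct

def dct2(block):
    """Two-dimensional type-II DCT of an 8x8 block, as JPEG uses."""
    return dct(dct(block.T, norm='ortho').T, norm='ortho')

def zero_high_freq_fraction(image, quant_step):
    """Quantize each 8x8 block's DCT coefficients with a uniform step and
    return the fraction of high-frequency coefficients that become zero."""
    h, w = image.shape
    zeros = total = 0
    for y in range(0, h - 7, 8):
        for x in range(0, w - 7, 8):
            coeffs = dct2(image[y:y + 8, x:x + 8])
            quantized = np.round(coeffs / quant_step)
            # Treat everything outside the top-left 4x4 as "high frequency".
            hf = quantized.copy()
            hf[:4, :4] = np.nan  # exclude the low-frequency corner
            hf_vals = hf[~np.isnan(hf)]
            zeros += np.count_nonzero(hf_vals == 0)
            total += hf_vals.size
    return zeros / total

rng = np.random.default_rng(0)
img = rng.normal(128, 40, size=(64, 64))

light = zero_high_freq_fraction(img, quant_step=5)   # mild quantization
heavy = zero_high_freq_fraction(img, quant_step=50)  # aggressive quantization
print(light, heavy)  # heavier quantization zeroes out more coefficients
```

The same comparison on two real JPEGs would count zeros in the entropy-decoded coefficients directly, with no inverse DCT needed.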
So, that will show you which parts differ. How do you tell which is higher quality? Sure, you can probably do it by eye. But it sounds like the poster wants a fully automated method.
No it doesn't. This method has another problem (see my replies to it), but other than that, it could work. He's suggesting that, for each copy of the image, you look at the difference between that copy and a blurred version of it. This gives you an idea of how sharp that copy is. And since JPEG throws out high frequency information first, resulting in blurring, it would appear at first glance that the sharper image should be the higher quality one.
As I said in another comment though, JPEG operates on blocks, and especially at very low qualities, you get sharp edges between each block. So the assumption that sharp image == high quality is not really valid.
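For what it's worth, the blur-comparison idea is only a few lines. This is a minimal sketch on a synthetic checkerboard (the `sharpness` helper and the filter sizes are my own choices, not the original poster's):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def sharpness(image, radius=3):
    """Mean absolute difference between an image and a blurred copy of
    itself. Sharper images differ more from their own blurred versions."""
    blurred = uniform_filter(image, size=2 * radius + 1)
    return np.mean(np.abs(image - blurred))

# Synthetic test pattern: a sharp checkerboard vs. a pre-blurred copy
# standing in for a lower-quality version of the same image.
tile = np.kron([[0, 1] * 4, [1, 0] * 4] * 4, np.ones((8, 8)))
sharp = tile * 255.0
soft = uniform_filter(sharp, size=9)

print(sharpness(sharp) > sharpness(soft))  # True: the crisp copy wins
```

And as noted above, a heavily compressed JPEG would fool this metric: its blocking edges register as "sharpness" even though the image quality is worse.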
Also, JPEG works on blocks. While it's true that JPEG gets rid of high frequency details first (and thus results in blurring), this is only useful within each block. You can have high contrast areas at the edge of each block, and this is actually often some of the most annoying artifacting in images compressed at very low quality. So just because it has sharp edges doesn't mean it's high quality.
Even faster is to look at the DCT coefficients in the file itself. It doesn't even require a full decode - JPEG compression works by quantizing the coefficients more heavily for higher compression rates, particularly the high frequency coefficients. If more high frequency coefficients are zero, the image has been quantized more heavily and is lower quality.
Now, it's not foolproof. If one copy went through some intermediate processing (color dithering or something) before the final JPEG version was saved, it may have lost quality in places not accounted for by this method. Comparing the quality of two differently-sized images isn't straightforward either.
Off by one. Linux deprecated OSS3, and OSS4 is now open source.
And not only does it work better (in my admittedly limited experience with it), it's also more in keeping with the UNIX philosophy of treating devices just like any other file. Sure, with ALSA you do have device files, but you pretty much have to use alsa-lib to use them, AFAIK. With OSS, you get to use the standard UNIX file APIs.
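To illustrate what "standard UNIX file APIs" means in practice: with OSS, playing a tone is literally just a write() to /dev/dsp. This sketch assumes the device's traditional defaults (8000 Hz, 8-bit unsigned, mono) and guards the write since /dev/dsp won't exist on most systems today:

```python
import math
import os

# One second of a 440 Hz tone as 8-bit unsigned mono samples, the
# traditional default format of the OSS /dev/dsp device (8000 Hz).
RATE = 8000
samples = bytes(
    int(128 + 127 * math.sin(2 * math.pi * 440 * n / RATE))
    for n in range(RATE)
)

# With OSS, playback is just writing to a device file - no library needed.
if os.path.exists('/dev/dsp'):
    with open('/dev/dsp', 'wb') as dsp:
        dsp.write(samples)
```

Compare that to ALSA, where the same thing takes opening a PCM handle, setting hardware parameters, and so on through alsa-lib.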
I had heard that V8 was 32-bit only right now, which is why I was surprised that there was an amd64 package. But everything I've seen online (in the admittedly small amount of searching I've done) indicates that 64-bit support is a low priority. I even saw it mentioned somewhere that the code makes various non-portable assumptions such as sizeof(int)==sizeof(void*), which, if true, means they really weren't planning for 64-bit support when they started. I hope they add proper 64-bit support soon, but I'm not holding my breath.
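You can check that assumption on your own machine without writing any C, via ctypes (this only demonstrates the portability issue in general; I have no idea what V8's code actually looks like):

```python
import ctypes

# The alleged non-portable assumption: sizeof(int) == sizeof(void*).
# True on most 32-bit platforms (4 == 4), false on LP64 systems (4 vs. 8),
# so code that stuffs pointers into ints breaks when built for 64-bit.
int_size = ctypes.sizeof(ctypes.c_int)
ptr_size = ctypes.sizeof(ctypes.c_void_p)

print(int_size, ptr_size)
print(int_size == ptr_size)
```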
Life is a whim of several billion cells to be you for a while.