Exactly. There is a whole lot more to IT than writing software. The unix admins, windows admins, other OS admins, DBAs, network people, telecoms people, hardware people, user support people, and so on must outnumber developers by at least 10 to 1.
In my view (and I have trained a dog to stay inside my parents' garden) this can't be a good thing: the dog gets punished without a clear reason (lazy people didn't take the time to make it clear to the dog that it can't go into their flowerbeds). This can wreak havoc on the dog's simple "psyche": I'd expect some to grow fearful of everything, some to grow extremely vicious, and some to go completely berserk.
You have absolutely no idea what you're talking about. I'm not a fan of invisible fences (or any training based on punishment), but some dogs are just naturally runners, and it's very hard to train it out of them because they escape during the one minute you're not watching them, and you can't punish them for coming back.
Invisible fences use simple operant conditioning; it doesn't matter whether the dog knows the "reason" or not, as long as the stimulus is consistent. If you think dogs need explanations to avoid getting screwed up, you're misapplying a piece of folk psychology that isn't even accurate for humans.
Many people lock their dogs up inside the house to prevent them from running away or messing up their garden. In my view that's much more cruel than making them learn that straying too far from the house is bad. Plus dogs are pretty smart, they get it very fast. I'm sure if you could ask them they'd prefer this to being locked in permanently or to being run over by a garbage truck.
mm, so what an oldie would call a 2x4", we hip young whippersnappers now call a 50x100, though it is in fact 50.8 mm x 101.6 mm.
No it's not. This may shock you, but a 2x4 is not actually a 2x4. It's a "1.5 x 3.5." And many have tried to convert Imperial Lumber Math to metric and were never heard from again, so tread carefully...
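Since the gag here is really just unit arithmetic (nominal name vs. dressed size vs. metric), it can be sketched in a few lines. The lookup table and the `actual_mm` helper below are my own illustration, not any standard API:

```python
# Nominal vs. actual dressed sizes for common North American softwood
# lumber, in inches. A "2x4" is dressed down to 1.5" x 3.5".
NOMINAL_TO_ACTUAL = {
    "2x4": (1.5, 3.5),
    "2x6": (1.5, 5.5),
}

MM_PER_INCH = 25.4

def actual_mm(nominal):
    """Dressed size of a nominal lumber name, in millimetres."""
    w, h = NOMINAL_TO_ACTUAL[nominal]
    return (round(w * MM_PER_INCH, 1), round(h * MM_PER_INCH, 1))

print(actual_mm("2x4"))  # (38.1, 88.9) -- not 50.8 x 101.6
```

So the honest metric name for a 2x4 would be something like a 38x89, which is exactly why Imperial Lumber Math resists conversion.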
I mean if a urinal can pass for art, and a folded paper sheet can pass for art, and some smear of one colour can sell for thousands, and a glass of water on a shelf can pass for art, I can't see why Hitler couldn't do that too. Teach him that instead of painting a house he could just swish the brush left and right a bit and title it "House", and you could end up learning about him next to Duchamp and Hinn instead of WW2.
Yes, very subtle, grasshopper...
A more fun question to ask is how long they'll be able to feign ignorance calling VP8 patent-free: analysis of it has shown that it shares many of the same algorithms with H.264.
If VP8 ever gets widely used, I suspect we'll find out very fast...
- They selected 1,000,000 random pics from the web, without any selection for compression quality. And seriously, are they trying to tell me that *Google* doesn't have access to a sufficient number of raw images?
- They compared the algorithms at a PSNR of around 40 dB, which is not that highly compressed.
- They make a big deal out of the fact that the advantage of using their algorithm is greater for small (low-res) pics... I would assume (without any data to back me up) that low-res pics on the web tend to be more highly compressed to begin with. I'm assuming this because small pics would tend to not be photographs, and because if you use low resolution, you're probably trying to save bandwidth and web space, so compressing more would be logical.
And anyway, these are by no means the only problems with what they're doing.
- As others have pointed out, where are the standard pictures everybody uses to compare compression quality?
- Why did they arbitrarily compare the algorithms at PSNR=40?
- Comparing with jpeg at this point is like kicking a puppy. The comparison with j2k is meaningless (see above).
- If they're just trying to create a better alternative to jpeg without the patent hassle, they should say so. But in that case, what's wrong with promoting any of the existing algorithms?
- The main problem with jpeg is that it's used blindly for all kinds of images, and it was simply not designed for that. Suggesting that one new algorithm should take over everything that jpeg does right now is idiotic. The right replacement at this point depends on what kind of image you're trying to compress. E.g. j2k is good for large photographs at relatively high bit rates. PNG is actually very good at things like line drawings. Etc...
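For context on the "PSNR = 40" figure above: PSNR is just mean squared error on a log scale, so 40 dB on 8-bit images means an RMS error of roughly 2.5 gray levels, i.e. very mild compression. A minimal sketch (the `psnr` helper is hand-rolled for illustration, not a library call):

```python
import numpy as np

def psnr(original, compressed, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two same-sized images."""
    mse = np.mean((original.astype(np.float64) -
                   compressed.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10 * np.log10(max_val ** 2 / mse)

# 40 dB <=> mse = 255^2 / 10^4 ~= 6.5, i.e. rms error ~= 2.55 gray levels.
orig = np.full((8, 8), 128, dtype=np.uint8)
noisy = orig.copy()
noisy[0, 0] += 20          # one pixel off by 20: mse = 400/64 = 6.25
print(round(psnr(orig, noisy), 1))  # -> 40.2
```

Which is to say: at that quality level most codecs look fine, and the differences that matter show up at much lower bit rates.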
They're comparing webP to jpeg by testing how well both algorithms can recompress a set of (almost entirely) jpeg images? Really? Really???
More to the point, jpeg compression artifacts (discontinuities) are a *nightmare* for wavelet coders, so this is in no way fair to jpeg2k.
Also, from TFA:
Predictive coding uses the values in neighboring blocks of pixels to predict the values in a block, and then encodes only the difference (residual) between the actual values and the prediction. The residuals typically contain many zero values, which can be compressed much more effectively.
WTF, this is exactly what a wavelet coder like jpeg2k does, only in a much more sophisticated way.
This whole thing is so far below any accepted standard of image compression research, it should just be silently ignored.
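For anyone curious what the quoted "predictive coding" boils down to, here's a toy sketch: a left-neighbor predictor of my own invention, far cruder than what any real codec uses, but it shows why residuals of smooth image data are mostly zeros:

```python
import numpy as np

# Predict each sample from its left neighbor and encode only the residual.
# A smooth row of pixels turns into mostly zeros and small values, which
# entropy-code far better than the raw samples.
row = np.array([100, 101, 101, 102, 102, 102, 103, 103], dtype=np.int16)
pred = np.concatenate(([0], row[:-1]))  # left-neighbor prediction
residual = row - pred                   # what actually gets encoded
print(residual.tolist())                # [100, 1, 0, 1, 0, 0, 1, 0]

# The decoder reverses it exactly (lossless reconstruction of the row):
decoded = np.cumsum(residual)
assert (decoded == row).all()
```

The point stands, though: wavelet coders like jpeg2k exploit exactly this kind of redundancy, just across scales rather than only between neighboring blocks.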
Do those offer route planning at all, let alone taking things like hills or noise levels into account?
Yes, I'm sure they take every single hill in Holland into account