Stack Overflow reputation indicates that you're a 1337 documentation writer, not necessarily that you know how to program.
SO reputation indicates a number of things -- that you can understand and dissect problems and code from others, that you have intimate knowledge of the platforms you're answering about, that you can code reasonably well, and that you can communicate well.
Basically, someone with a high rep is very likely to be enthusiastic, knowledgeable, and great to work with. Does this mean Jon Skeet can out-code an elite like John Carmack? No. Does it mean he's a good coder? Probably. One of the "top" programmers? Not enough data.
This whole article is a bit of a bonkers idea. What makes a good dev? Is it the ability to work quickly, elegantly, and robustly? Being able to pull innovative algorithms out of thin air? Is it the ability to hack together important, complicated projects even if the code itself is a mess? How about less direct things, like overall contribution to the dev community and enthusiasm for helping other people grow?
I own an HDHomeRun, and it was a bitch to set up because even Comcast customer support had never heard of it (at one point, they told me to call TiVo!)
When was the last time you did this?
I've had an HDHomeRun Prime for about three years now, and have never had an issue with Comcast's CableCard activation line. The other end of the call is staffed by someone following a script, but I've never been on the phone more than about five minutes before my card was working.
But some say he is less hands-on in developing products than his predecessor.
The best leaders will see their own shortcomings and delegate to trusted experts to pick up their slack. Perhaps this is Cook's strategy.
I think the best way we could handle it is to create a standard high-level bytecode and package format. Then any number of languages could be translated to it easily and efficiently.
Instead of rational articles with headlines like:
Insecure government process allows trivial unauthorized access to road infrastructure
We get ones focusing on how a game may have encouraged people to hack into the stuff. I don't think it'll ever end.
I'm hardly counting ARM out. I doubt Intel will ever try to apply themselves to all the areas ARM is in. For phones and tablets, though? There is no doubt that ARM will have some very serious competition in the near future.
I realize we like to root for the underdog here, but realistically, Intel's got a leg up in the long run.
ARM has already had its 15 minutes, just like AMD's Athlon did.
There's a good possibility that Intel will wipe the floor with all the ARM offerings. Maybe not with this generation of CPUs, maybe not the one following it, but they've got the best fab in the world and extremely smart people using it.
They've been actively focusing on increasing power efficiency for a number of years now, so I have no doubt they'll be able to bring strong competition.
It's time for the principal vendors to rebuild the list of assumptions of what gpus can and should be doing, design an api around that, and build hardware specific drivers accordingly.
For the most part, they've done that. In OpenGL 3.0, all the fixed-function stuff was deprecated. In 3.1, it was removed. That was a long, long time ago.
While AMD has introduced the Mantle API and Microsoft has announced vague plans for DX12, both aiming to reduce CPU overhead as much as possible, OpenGL already has significant low-overhead support.
Why don't we ever read about more useful metrics, such as the amount of (floating-point) operations per second per $ of a given CPU?
Because most people don't care about these things anymore. Take this from TFS:
Haswell may have delivered impressive gains in mobile, but it failed to impress on the desktop where it was only slightly faster than the chip it replaced.
In reality, Haswell had double the FLOPs thanks to the new FMA instructions, near double the integer throughput thanks to AVX2, and a significant boost to multithreaded code thanks to TSX.
In practice, people saw maybe a 10% speedup in what they actually do. A FLOPs/$ metric would significantly overstate the value people would actually get from these CPUs.
The thing is, these measurements are either synthetic (who has code consisting of nothing but FMA?), hard and uncommon to use (integer SIMD is rare, and AVX2 has a confusing notion of "lanes" that splits some 256-bit ops into two 128-bit halves), or not available on all CPUs (TSX is disabled on the unlocked K line for some reason).
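The "double the FLOPs" claim can be sanity-checked with back-of-envelope peak-throughput math. A rough sketch (the port counts below are the commonly cited figures for these microarchitectures, so treat them as assumptions):

```python
# Rough peak single-precision FLOPs per core per cycle (assumed port counts).
avx_lanes = 8  # a 256-bit AVX register holds 8 single-precision floats

# Pre-Haswell (e.g. Sandy/Ivy Bridge): one vector multiply port + one vector add port.
pre_haswell_flops = avx_lanes * 1 + avx_lanes * 1   # 16 FLOPs/cycle

# Haswell: two FMA ports, each retiring a fused multiply-add (2 FLOPs) per lane.
haswell_flops = avx_lanes * 2 * 2                   # 32 FLOPs/cycle

print(pre_haswell_flops, haswell_flops)  # 16 32
```

Of course, only code that is dense in fused multiply-adds gets anywhere near that peak, which is exactly the "synthetic" objection.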
That's a nice idea, but it won't work everywhere.
On x86, for instance, the majority of instructions write to a single global flags register. You can have two instructions that operate on entirely different memory locations and general-purpose registers, yet when you swap them the flags end up set differently.
You'll find very few instruction pairs that you can do this to without some ability to perform local analysis of the code.
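A toy model makes the hazard concrete. This is a hypothetical mini-ISA, not real x86 semantics, but it shows how two instructions touching disjoint registers still conflict through a shared flag:

```python
def run(program):
    """Execute a tiny two-register machine where every op writes a shared ZF flag."""
    regs = {"eax": 0, "ebx": 1}
    zf = 0
    for op, dst, imm in program:
        regs[dst] = regs[dst] + imm if op == "add" else regs[dst] - imm
        zf = 1 if regs[dst] == 0 else 0  # every instruction clobbers the shared flag
    return regs, zf

# Two instructions on disjoint registers, run in both orders.
a = run([("add", "eax", 5), ("sub", "ebx", 1)])  # ZF last set by "sub ebx, 1" (ebx == 0)
b = run([("sub", "ebx", 1), ("add", "eax", 5)])  # ZF last set by "add eax, 5" (eax == 5)

print(a, b)  # final register state matches, but the final ZF differs
```

The register contents are identical either way; only the leftover flag state reveals the reordering, which is why you need some local analysis before swapping even "independent" instructions.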
I thought it was roughly comparable to AVC baseline, though I admit not AVC main.
You're right, I'm not sure why I said ASP. The difference in quality between AVC Baseline and High is still pretty immense, though -- the lack of proper B-frames is significant all by itself.
Do you disagree with Google's course of action? If so, what should Google have done instead to keep webmasters from having to pay royalties on the videos they show to users of desktop computers and Android mobile devices?
As someone who loves technology, I'd have loved to see us come to some understanding with MPEG LA to just license their stuff for all HTML5 use. VP8 is going to result in either a lot more bandwidth usage (at YouTube levels, quite an impressive lot more) or a lot less quality.
As someone who loves freedom and creativity, I'm happy that we have anything, and I'm thankful for Google doing whatever undisclosed thing they did to release VP8 from patent shackles. I do disagree very much with their method of stirring up so much support with flat-out lies -- if they hadn't gone through with licensing it, there'd have been a lot of wasted work and potentially even people in legal trouble for using the format. It was very shady.
One worry I have is that the internet will become a bit like American broadcast TV. We're still sending the hugely inefficient MPEG2 over the air. A sort of "moving standard" agreement with MPEG LA would be pretty awesome -- do we want to still be using VP9 as anything but a legacy-supported format in ten years? twenty? The moment we want to jump on a new codec, or say we just continue to develop VP9 into VP10, it's all but guaranteed some of the next-gen techniques will already have been patented.
I've developed Windows drivers before and can say that while yes, it is just plain C or a subset of C++, the APIs are entirely new and come with various curveballs user-mode devs will never have dealt with, like keeping track of what IRQL you're running at.
A simple driver is... well, fairly simple. Once you try to do anything interesting though, there's a lot to learn before you can be useful. I'm curious if Linux is any different.
VP8 is a royalty-free video codec whose rate/distortion performance is in the same league as the royalty-bearing MPEG-4 AVC.
VP8 is not in the same league as AVC. Technologically it is largely a subset of AVC with quality somewhere between ASP and AVC. It is royalty-free now, but it wasn't always. When Google announced VP8 as a grand royalty-free codec, it was actually very obviously encumbered by patents that Google had no rights to and unfortunately thus offered literally no benefits.
It was only a year ago that Google and MPEG LA settled the issue, with Google getting a full license to those patents and the ability to sub-license to anyone they want.
What you have is Google very deliberately marketing VP8 to web devs as a Free/Open/Next-Gen codec to get them to jump on a bandwagon and "splinter the internet". It was only thanks to Google's mammoth weight in negotiating with MPEG LA that all the traction it gained wasn't for naught.