What made Cell a nightmare to program for was the SPU's local store. The local store is great for performance, but a pain to program, since the programmer had to explicitly move data back and forth between main memory and the local store (hardware designers back then all thought compilers could solve their problems for them--see Itanium). MIC is cache coherent: all memory references are snooped on the bus(es), so MIC programmers don't have to worry about what's cached and what is not. An instruction merely has to dereference a memory address, and the MIC hardware will happily go fetch the needed data for you, automagically. It was not so with Cell.
It's not entirely syntactical. On-chip shared memory is exposed directly to the CUDA programmer (e.g., the __shared__ qualifier and __syncthreads()). CUDA programmers also have to be mindful of register pressure and the L1 cache. These issues directly affect the algorithms used by CUDA programmers. CUDA programmers have control over very fast local memory---I believe this level of control is missing from MIC's available programming models. Being closer to the metal usually means a harder time programming, but higher performance potential. However, I believe NVIDIA has made CUDA pretty programmer friendly, given the architectural constraints. I'd like to hear the opinions of MIC programmers, since I have no direct experience with MIC.
I'm glad to see SteamOS has picked up PREEMPT_RT. I hope they stick with it. The PREEMPT_RT developers recently reported that they lacked the man-power to continue development (https://lwn.net/Articles/572740/). Maybe Valve can contribute money or man-power?
Also, since NVIDIA is keen to support SteamOS, NVIDIA must officially support PREEMPT_RT. NVIDIA's driver support for PREEMPT_RT has always been spotty. At best, hacks to the driver's GPL layer were required to make it work. I hope those days are over. NVIDIA has really improved their Linux driver over the years in order to better serve the Android and HPC markets. PREEMPT_RT support should make it even better (PREEMPT_RT can often uncover pre-existing bugs).
The choppiest site I've visited on my 4S with iOS 7 is slashdot's mobile site. The background of each story is "active" in the sense that when I thumb-down to scroll, the story's background dims to grey. The regular white background returns when I lift my thumb. Combining this dimming with scrolling really makes for a choppy experience!
I can imagine reddit threads where members crowdfund promoted tweets against the most despised companies, such as cable and telco providers.
Check out the Udacity class on parallel programming. It's mostly CUDA (I believe it's taught by NVIDIA engineers): https://www.udacity.com/course/cs344
CUDA is generally easier to program than OpenCL. Of course, CUDA only runs on NVIDIA GPUs.
I am very skeptical of the marketing claims of low-latency human input devices like gaming mice and keyboards. I understand the usefulness of special device configuration (e.g., macro buttons), but does a mouse really need to be polled every 1ms (like Razer mice)? In driving tests, the reaction time of a prepared driver is on the order of 750 to 1000ms (http://www.tandfonline.com/doi/abs/10.1207/STHF0203_1#.UeGmimR4a04 --- sorry for the paywall). Obviously, driving is not gaming, but let's suppose a gaming reaction time is half this: 375ms to 500ms. Let's compare two mice: one polls every 1ms and the other every 10ms. The worst-case difference is 9ms, which against a 375ms reaction time is about 2.4% --- and against 500ms, only 1.8%. Are low-latency input devices really where we should be optimizing a player's performance? Does it really matter all that much? Wouldn't it be better to focus on things such as network latency and possibly even OS schedulers?
I admit, I am not a serious gamer and I don't invest heavily in gaming equipment. I would be very interested in hearing an objective opinion from a gamer. Does an input latency of 10ms really matter? If so, do you have objective data that can rule out the placebo effect?
"The rule is that you use a before words that start with a consonant sound and an before words that start with a vowel sound."
It's all about sound. "N" is pronounced "En." Hence, "an."
Grammar Girl: http://tinyurl.com/nuj8h5a
I wonder if it's not so much a function of age, but rather that "older" programmers want to live in a place where they can own a home and raise a family. That is exceedingly hard in Silicon Valley, even for someone with a well-paid tech job. The cost of a rundown three-bedroom bungalow in Cupertino is in excess of one million dollars (Zillow link: http://tinyurl.com/lq2wpcq). A four- or five-bedroom home is closer to two million. Purchasing such a home is a challenge even for a family with two tech incomes, harder for a family with one tech income and one "normal" income, and damned near impossible for a family with a single breadwinner. Even if you manage to pull off purchasing a home, you've still got a rundown bungalow. Why not go somewhere where you can better enjoy the fruits of your labor?
As a tech worker in his early 30s in the Valley, guys my age talk constantly of moving to Austin, Raleigh, or some other non-Valley tech hub---some place where the idea of raising a family doesn't boggle the mind. I suggest that while age discrimination may be very real, we must also consider that "the old guys" are merely moving out of the Valley. Thus, the average employee age of any company that has the bulk of their operations in the Valley will skew towards the young side. I don't believe it's a coincidence that the average age is less than 30, since 30 is about the age many educated men start a family.
It strikes me that a 3D-printed gun doesn't need to actually look like a gun at all. Indeed, a 3D-printed gun could use the colors/markings and form of existing toy guns (a nerf gun that fires real bullets!), or perhaps it could look like a toy dinosaur that actually shoots bullets from its head. Perhaps I am stating the obvious, but it never occurred to me during all these discussions about 3D-printed guns. Something like this puts security/police/secret service officers facing people armed with "toys" in a terrible position.
Electric shock in a game is not new. Tekken Torture from 2001: http://www.eddostern.com/tekken_torture_tournament.html
A capitalist economy partly guards against oversupply. However, oversupply has resulted directly from Chinese policies: http://www.nytimes.com/2012/10/05/business/global/glut-of-solar-panels-is-a-new-test-for-china.html
Now both American and Chinese solar companies are failing. Further private investment in this oversupplied economy seems unwise; there is a distaste for subsidizing failed business models in the US (at least where green tech is concerned). Perhaps university research is the best alternative investment.
With respect to throughput and multitasking, your desktop OS may be better. Theoretically, a focused game OS may take steps to reduce worst-case latency (real-time OS techniques) and optimize operations for game-related workloads (possibly game-tuned memory allocators?). Unfortunately, console makers are very secretive about how their OSs are designed and implemented. I would be interested to hear from anyone who is familiar with modern game OS development. Is there any secret sauce?
Maybe this isn't a solution for FPS games, but I would love to be able to play Civilization V from the cloud with all the graphic bells and whistles.
Kind of makes me wonder why slashdot almost never links to the REAL articles and instead just links some fancy news sites with second-hand information.
Maybe because of the paywall?