This initiated a feedback loop that caused the rhizobia to start fixing more atmospheric nitrogen, which the plant then used to produce more seeds. "They are bigger, grow faster and generally look better than natural soybean plants," Tegeder said.
DRM is a means of limiting the distribution of a purchased (or licensed) digital file by the owner (or licensee). Exclusively locking a subscription service to a platform is not DRM. Rather, it is a means of boosting sales of the platform by offering additional platform-only services. We can discuss the harm and inconvenience that platform lock-in may cause, but we should not confuse the issue with DRM. That would just inflame old passions and keep people from approaching this new, distinct issue with a fresh perspective.
No doubt many people against DRM will also be against platform lock-in. Perhaps others may not be. For instance, I am generally against DRM: I purchased a digital file, and I would like to be free to make copies of it for my own use. However, with platform-based subscriptions, I just can't get all that upset about it. I don't own an Android device, so I won't subscribe to Google Play. Also, there is a wealth of quality subscription services out there that run on all of the popular platforms. So what's the big deal?
Have you spent any time in Blacksburg? It's a small town. They still remember. This student who had been there for nearly four years knew exactly what he was doing.
As long as you're all right with proprietary drivers, NVIDIA's Linux driver is quite solid. It needs to be, since it's the same driver used in the supercomputers built on their GPUs.
I greatly prefer open standards as well. However, CUDA is considerably less painful to work in than OpenCL. NVIDIA has also demonstrated more commitment to capturing GPGPU business than AMD. For example, the highest-ranked supercomputer on top500.org with AMD GPUs comes in at 94th, whereas NVIDIA GPUs are used in the 2nd-ranked system. Xeon Phi is gaining in popularity, but Intel wants you to work in Cilk Plus, not OpenCL.
That said, I believe the future is tight integration (i.e., cache coherence) between the GPU/accelerator and main memory. AMD's HSA is a step in the right direction. CUDA has some catching up to do in this regard.
I really appreciated the Cell BE too. I do hope that architecture becomes more common, albeit with cache coherence (local stores are a pain to manage). Have you taken a look at Texas Instruments' KeyStone II? It's ARM + crazy DSPs. It doesn't seem that anyone has really noticed it, though. http://www.ti.com/dsp/docs/dsp...
I agree. I like to use Python instead.
"Perl is an excellent candidate, especially considering how work on Perl6, framed as a complete revamp of the language, began work in 2000 and is still inching along in development."
This does not imply that Perl is on its way out. I don't use the language myself (I despise it, personally), but I know many who use it on a daily basis. It is still a go-to language for many programmers (albeit ones who may no longer be in their 20s) who need to quickly hack together a test harness for a larger system. It could merely be that Perl is "complete" for the applications where it is useful, and further revision is no longer necessary.
Also, I'd hardly say that C++ is on its way out, even though C++11 took so long to be ratified.
There is a new problem that comes with reliance on adjuncts. Departments rarely monitor the quality of instruction themselves; they make decisions about re-hiring or firing an adjunct based on student reviews and evaluations. Left without recourse, adjuncts are perversely incentivized to teach easy classes and give out high marks, since that helps ensure good reviews. (It also continues the trend of grade inflation.) Adjunct professors cannot challenge their students without risking being fired.
My girlfriend recently graduated with a PhD in history from a department ranked 11th by US News. She's won a number of nationally recognized awards. She still can't find a tenure-track job. She was hired as a visiting professor at a university for this past year. Pay was around $40k with benefits. She got great reviews from her students, so the university offered to re-hire her as an adjunct with the same workload (teaching four classes a semester)... but at *half* the pay and *without* benefits. Her pay and benefits were better as a graduate student! She politely declined the offer. Being valued so little by the same world that qualified you is hard to endure.
It seems like taint tracking and sanitization should be pervasive and explicit. This can be partially enforced through the type system, no?
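Here's a minimal sketch in C++ of what I mean; the Tainted wrapper, sanitize(), and run_query() are all made-up names for illustration, not any particular library. The point is just that a value from the outside world can't reach a sink without an explicit sanitization step, and the compiler enforces it:

```cpp
#include <cctype>
#include <iostream>
#include <string>
#include <utility>

// Hypothetical wrapper: anything from the outside world is "tainted" and the
// raw value can only escape through sanitize().
class Tainted {
public:
    explicit Tainted(std::string raw) : raw_(std::move(raw)) {}
    friend std::string sanitize(const Tainted& t);  // the only way out
private:
    std::string raw_;
};

// Example sanitizer: keep only a conservative whitelist of characters.
std::string sanitize(const Tainted& t) {
    std::string clean;
    for (char c : t.raw_) {
        if (std::isalnum(static_cast<unsigned char>(c)) || c == ' ' || c == '_')
            clean += c;
    }
    return clean;
}

// Sinks take plain std::string, so a Tainted value cannot reach them without
// an explicit sanitize() call; the type checker enforces the policy.
void run_query(const std::string& fragment) {
    std::cout << "SELECT * FROM users WHERE name = '" << fragment << "'\n";
}

int main() {
    Tainted user_input("Robert'); DROP TABLE users;--");
    // run_query(user_input);         // does not compile: Tainted != std::string
    run_query(sanitize(user_input));  // fine: taint explicitly removed
}
```

Perl's taint mode does something similar, though at runtime rather than at compile time.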
Recent CNN report on the prices of beef and dairy: http://money.cnn.com/2014/04/1...
This will increase costs for farmers too, and that gets passed on to consumers. But perhaps we're all just commenting on the obvious: if the production cost of X increases, the production cost of any product Y directly (or transitively) dependent upon X will also increase (or the value/quality of Y will decrease to compensate).
What made Cell a nightmare to program for was the SPU's local store. The local store is great for performance but a pain to program, since the programmer had to explicitly move data back and forth between main memory and the local store (hardware designers back then all assumed compilers could solve their problems for them; see Itanium). MIC is cache coherent: all memory references are snooped on the bus(es), so MIC programmers don't have to worry about what is loaded in memory and what is not. An instruction merely has to dereference a memory address, and the MIC hardware will happily go fetch the needed data for you, automagically. It was not so with Cell.
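To give a feel for the programming-model difference, here's a loose analogy in plain C++ (not actual Cell SDK or MIC code): with a local store you stage data through a small scratch buffer yourself, tile by tile, while with a coherent memory system you just dereference and let the hardware fetch what it needs:

```cpp
#include <algorithm>
#include <cstddef>
#include <cstring>
#include <vector>

constexpr std::size_t kLocalStoreBytes = 256 * 1024;  // an SPU local store was 256 KB

// "Local store" style: the compute loop may only touch a small scratch buffer,
// so data has to be staged in and out explicitly, tile by tile, and the
// programmer tracks what is resident.
double sum_local_store_style(const std::vector<double>& data) {
    static double scratch[kLocalStoreBytes / sizeof(double)];
    const std::size_t tile = kLocalStoreBytes / sizeof(double);
    double total = 0.0;
    for (std::size_t base = 0; base < data.size(); base += tile) {
        const std::size_t n = std::min(tile, data.size() - base);
        std::memcpy(scratch, data.data() + base, n * sizeof(double));  // "DMA in"
        for (std::size_t i = 0; i < n; ++i)
            total += scratch[i];
        // results would be staged back out the same way
    }
    return total;
}

// Cache-coherent style: just dereference and let the hardware fetch the data.
double sum_coherent_style(const std::vector<double>& data) {
    double total = 0.0;
    for (double x : data)
        total += x;
    return total;
}

int main() {
    std::vector<double> data(1000000, 1.0);
    return sum_local_store_style(data) == sum_coherent_style(data) ? 0 : 1;
}
```

And on real hardware the first style gets worse: to hide the transfer latency you end up double-buffering the tiles and juggling DMA completion by hand.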
A language that doesn't have everything is actually easier to program in than some that do. -- Dennis M. Ritchie