Open Source will thrive when regulations mandate that every device be accompanied by a document describing how it can be programmed. In other words, you buy an NVIDIA card or a smartphone and you get a PDF giving a detailed specification of how software can use the device (not how it is manufactured) to get the advertised functionality. This should extend to all sub-components that interact with the software. Using undocumented functionality should be illegal, since it is a means of gaining an unfair competitive advantage. No blobs.
So, you have proven you are lucky. Nothing else. By that logic: I smoke 20 cigarettes a day, I am 37, and I have no cancer. So smoke 20 cigarettes a day until you are 37. You are safe.
I wonder about all these papers claiming fantastic performance with neural nets. This paper makes me wonder about the perturbation invariance of neural networks. The question is how much perturbation is tolerated: yes, one can perturb a cat image, but how much perturbation leaves the concept of "cat" invariant is another question. This is a fundamental symmetry. In any case, this paper is very big news. Maybe the inherent problem is that neural networks approximate measurable/continuous functions rather than smooth ones.
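The perturbation sensitivity the comment worries about can be illustrated with a toy example (my own sketch, not from the paper): even for a simple linear classifier, a perturbation bounded by a small epsilon per coordinate, aimed along the weight signs, can flip the decision while barely changing the input.

```python
# Toy illustration (hypothetical weights and input, not from the paper):
# a tiny, targeted perturbation flips a linear classifier's decision.

def classify(w, x, b):
    """Linear classifier: returns 1 if w.x + b > 0, else 0."""
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if score > 0 else 0

w = [0.5, -0.3, 0.8]   # weights of a hypothetical trained model
b = 0.01
x = [0.2, 0.4, -0.1]   # input classified as class 0

eps = 0.1              # small per-coordinate perturbation budget
# Nudge each coordinate by eps in the direction of the weight's sign
# (the one-layer analogue of a gradient-sign attack).
x_adv = [xi + eps * (1 if wi > 0 else -1) for wi, xi in zip(w, x)]

print(classify(w, x, b))      # -> 0 (original decision)
print(classify(w, x_adv, b))  # -> 1 (flipped by a 0.1-sized nudge)
```

Deep networks are of course not linear, but the same mechanism, many small coordinate-wise nudges that add up through the weights, is what makes the adversarial examples in the paper so cheap to construct.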
Replace the proprietary junk with an octa-core DSP as a co-processor and do software rendering assisted by extra instructions. If it works with ARM, it should work with Intel.
They could pair the cores with an 8-core TI DSP in order to minimize blobs (then a simple framebuffer and an A/D would be enough to have video/audio).
With Wayland/Mir, people should consider pushing X entirely into user space, like Xming, VcXsrv, XDarwin (and XPhoton, R.I.P.), with an SDL fallback.
We need a stack and a common low-level interface for GFX, like USB Mass Storage: only one driver common to all graphics hardware. Implement OpenGL in hardware and interface with it, nothing else. Open-source the GFX interface or create a spec; we need one graphics driver and nothing else.
"highly interactive hardware" -> you mean NVIDIA/ATI hardware? They do not comply with any specification, unlike USB Mass Storage, which can be presented as a file. A graphics card !IS! a file; the crap from the vendors is not a GFX card. The systems you talk about are co-processors, so they are not files per se. But with some engineering on the part of vendors, you could view them generally as files, one for each co-processor. Then you upload VM images through this file, and it creates dynamic files (endpoints) for RPC between the host and each VM. The problem is that you cannot buy a plain graphics card (like a PCI/USB adapter); you buy a GFX core fused with a co-processor, integrated in a very strange proprietary manner. I do not defend the paradigm, but like pthreads it has stood the test of time. Vendors are morons.
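The "co-processor as a file" idea above can be sketched in a few lines. This is purely illustrative: all paths, the blob format, and the JSON message shape are hypothetical, and ordinary files in a temp directory stand in for the device nodes a kernel would actually expose.

```python
# Sketch of the "co-processor as a file" paradigm (all names hypothetical).
# A real design would use kernel-exposed device nodes, not files in a tmpdir.
import json
import os
import tempfile

workdir = tempfile.mkdtemp()

# 1. "Upload" a VM image to the co-processor by writing to its control file.
control = os.path.join(workdir, "coproc0")       # hypothetical device file
with open(control, "wb") as f:
    f.write(b"\x7fVMIMAGE...")                   # opaque image blob

# 2. The upload yields a dynamic endpoint file for host<->VM RPC.
endpoint = os.path.join(workdir, "coproc0.ep0")  # hypothetical endpoint

def rpc_call(path, method, params):
    """Write a JSON request to the endpoint file; the reply is faked here."""
    with open(path, "w") as f:
        json.dump({"method": method, "params": params}, f)
    # A real co-processor would answer through the same descriptor; we
    # simulate it by reading the request back and "executing" it in-process.
    with open(path) as f:
        req = json.load(f)
    return {"result": sum(req["params"])}        # stand-in computation

reply = rpc_call(endpoint, "add", [1, 2, 3])
print(reply["result"])
```

The point of the sketch is the shape of the interface, one control file per co-processor plus per-VM endpoint files, not the fake arithmetic standing in for the accelerator.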
They seem to have stopped MinGW builds and to be focusing on clang-cl on Windows. The problem is that you need the Windows SDK; with most other open-source/free compilers this is not necessary. Personally, I use MinGW in both of its incarnations, and the problem is that I cannot download a ready-made binary. clang-cl is unusable without the Windows SDK, and it is not compatible with the lcc, Pelles C, Open Watcom, or Digital Mars C SDKs.
I am on a national project that requires streaming OpenGL framebuffers. I had to replace the old, slow system with a new one that does not need a special decoder. Guess what I used: VP9 through GStreamer, and it plays nicely with Firefox/Chrome. However, if I have time, I will port it to the SDK provided by Google, mainly as a personal project.
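The commenter's actual pipeline is not given, but a setup in that spirit can be sketched with stock GStreamer elements: encode to VP9, mux into streamable WebM, and serve over TCP. The source here is a test pattern; captured OpenGL framebuffers would be fed in instead (e.g. via `appsrc`). The host/port values are placeholders.

```shell
# Hypothetical gst-launch pipeline (not the author's): VP9 in streamable
# WebM over TCP, playable by Firefox/Chrome. Replace videotestsrc with an
# appsrc that is fed the captured OpenGL framebuffers.
gst-launch-1.0 videotestsrc is-live=true \
  ! videoconvert \
  ! vp9enc deadline=1 \
  ! webmmux streamable=true \
  ! tcpserversink host=127.0.0.1 port=8080
```

`deadline=1` puts the vpx encoder in its fastest (real-time) mode, and `streamable=true` makes the muxer omit the seek index so the stream can be consumed live.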
This is the reason. We need a standard, vendor-independent GPU access API. If the DBs need a GPU to speed up, please use Parallella or support OpenGraphics.
A native graphics-card decoder is an inflexible design. OpenCL is a flexible standard: you can accelerate new codecs on it, you don't have to wait for a new card, and in principle it means one less proprietary driver. Please stop this graphics-card argument. The GFX thing is anti-competitive. Actually, it should be a math co-processor. The display should be handled by another device, e.g. a framebuffer card that interfaces to the monitors and reports their capabilities, something like a USB/FireWire/Sound Blaster extension card. The co-processor is just another device sitting on the bus, which could accelerate OpenGL, OpenAL, or OpenVG, or you could skip buying it and rely on the CPU to do the hard work. But you could still interface to the monitor in a standards-compliant manner.
I would buy one if it came with a Vortex86MX, even if it were pricier. It would be a competitor to the BeagleBoard, but hopefully with more open VGA.
And no gdc integration. These are the items I am most interested in.
The best sci-fi films I have seen. I have watched all the parts, and I would like to see a new one, or the end of the bugs.