They could pair the cores with an 8-core TI DSP in order to minimize blobs (a simple framebuffer and an A/D converter would be enough for video/audio).
With Wayland/Mir, people should consider pushing X entirely into user space, like Xming, VcXsrv, and XDarwin (and XPhoton, R.I.P.), with an SDL fallback.
We need a stack and a common low-level interface for GFX, like USB Mass Storage: a single driver common to all graphics hardware. Implement OpenGL in hardware and interface with that, nothing else. Open-source the GFX interface or create a spec; we need one graphics driver and nothing else.
highly interactive hardware -> You mean NVIDIA/ATI hardware? They do not comply with any specification like USB Mass Storage, which can be presented as a file. A graphics card !IS! a file; the crap from vendors is not a GFX card. The systems you talk about are co-processors, so they are not per se files. But with some engineering on the part of vendors you could view them generally as files, one per co-processor. You would then upload VM images through this file and create dynamic files (endpoints) for RPC between the host and each VM. The problem is that you cannot buy a graphics card (like a PCI/USB adapter); you buy a GFX core fused with a co-processor, integrated in a very strange proprietary manner. I do not defend the paradigm, but like pthreads it has stood the test of time. Vendors are morons.
They seem to have stopped the mingw builds and to be focusing on clang-cl on Windows. The problem is that you need the Windows SDK; with most other open-source/free compilers this is not necessary. Personally I use mingw in both of its incarnations, and the problem is that I cannot download a ready-made binary. clang-cl is unusable without the Windows SDK, and it is not compatible with the lcc, Pelles C, Open Watcom, or Digital Mars C SDKs.
I am on a national project that requires streaming OpenGL framebuffers. I had to replace the old, slow system with a new one that does not need a special decoder. Guess what I used: VP9 through GStreamer, and it plays nicely with Firefox/Chrome. However, if I have time I will port it to the SDK provided by Google, mainly as a personal project.
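For the curious, a pipeline of this general shape can be built with stock GStreamer elements. This is a minimal sketch from memory, not the commenter's actual setup: the element names (ximagesrc, vp9enc, webmmux, tcpserversink) and the property values are assumptions you would tune for a real deployment.

```shell
# Hypothetical sketch: grab a window/framebuffer, encode it to VP9, and
# serve it as streamable WebM over TCP, which Firefox/Chrome decode natively.
PIPELINE="ximagesrc use-damage=0 ! videoconvert"
PIPELINE="$PIPELINE ! vp9enc deadline=1 cpu-used=4 target-bitrate=2000000"
PIPELINE="$PIPELINE ! webmmux streamable=true"
PIPELINE="$PIPELINE ! tcpserversink host=0.0.0.0 port=9090"
# Print the full command rather than running it, so the sketch works
# even on a machine without GStreamer installed.
echo "gst-launch-1.0 $PIPELINE"
```

Swapping vp9enc for vp8enc (or another encoder) is a one-element change, which is part of why a generic pipeline framework beats a hard-wired decoder.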
This is the reason. We need a standard, vendor-independent GPU access API. If the DBs need a GPU for speed-ups, please use Parallella or support OpenGraphics.
A native graphics-card decoder is an inflexible design. OpenCL is a flexible standard: you can accelerate new codecs on it, you don't have to wait for a new card, and in principle it means one less proprietary driver. Please stop this graphics-card argument. The GFX thing is anti-competitive. Actually it should be a math co-processor; the display should be handled by another device, e.g. a framebuffer card that interfaces to the monitors and reports their capabilities, something like a USB/FireWire/Sound Blaster extension card. The co-processor would then be just another device sitting on the bus that could accelerate OpenGL, OpenAL, or OpenVG, or you could skip buying it and rely on the CPU to do the hard work, while still interfacing with the monitor in a standards-compliant manner.
I would buy one if it came with a Vortex86MX, even if it's pricier. It would be a competitor to the BeagleBoard, but hopefully with a more open VGA.
and no gdc integration. These are the items I am most interested in.
The best sci-fi films I have seen. I have watched all the parts and would like to see a new one, or the end of the bugs.
Normally one could write the USB spec (let's say spec X) in a high-level language that specifies behavior (say, HLL) as a reference implementation, which any OS could take to derive the driver. This spec should be provided by the X standards body (which could even be a company). Moreover, it could interface with spec Y in the same high-level language and still generate an X:Y composite driver, or even two kernel-level drivers that interface in some OS-specific way. The standards body could verify X or X:Y for correct operation and publish it along with the PDF spec. The problem is that many people believe knowledge of C or of kernel internals is a form of qualification, when we live in 2013. The real qualification is to create the HLL-to-kernel-interface translator. Science is lacking from software in many, many circumstances; companies and selfish programmers chasing f@^(ing money are to blame. And so OSes far better than commercial ones fail for lack of drivers, or equivalently because mapping the spec to their kernel interface is hand-written, time-consuming, and error-prone due to complexity. The big thing is how to write the kernel (or the microkernel) to support the apps known as drivers
.... among others.
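The spec-to-driver idea above can be sketched very loosely in code. Everything here is invented for illustration: the toy "mass storage"-style spec, the endpoint names, and the translator are all hypothetical, and no real kernel interface is modeled.

```python
# Hypothetical sketch: a device class described declaratively, plus a
# generic translator that derives per-OS driver stubs from the spec.
SPEC = {
    "name": "toy-mass-storage",
    "endpoints": {
        "read_block": {"args": ["lba", "count"], "returns": "bytes"},
        "write_block": {"args": ["lba", "data"], "returns": "status"},
    },
}

def derive_driver(spec, os_name):
    """Return a dict of callable stubs, one per endpoint in the spec."""
    def make_stub(ep):
        def stub(*args):
            # A real translator would emit kernel-interface glue here;
            # this sketch just records what would be invoked.
            return f"{os_name}:{spec['name']}.{ep}({', '.join(map(str, args))})"
        return stub
    return {ep: make_stub(ep) for ep in spec["endpoints"]}

driver = derive_driver(SPEC, "linux")
print(driver["read_block"](0, 8))  # -> linux:toy-mass-storage.read_block(0, 8)
```

The point of the sketch is that the spec is data: a standards body could publish and verify it once, and each OS would supply only the translator.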
We have functional, logic, and functional-logic languages, but we still live in caves imposed by companies that cheat their customers and by incompetent programmers who act like priests in ancient Egypt. The above article shows a big symptom of a deep problem in CS: lack of professionalism, lack of scientific method, and lack of freedom. The HW companies should only sell hardware accompanied by documentation. They should not be allowed to ship drivers for an OS, which gives a competitive advantage to one company; it should be illegal. This is not about OSS, it's about common sense. However, UEFI Secure Boot and the recent NSA story say something different: there are no government entities willing to put limits on the power of companies. Instead, they fuel them.
The paradox is that in 2013 many still develop like in the 70s. It is a form of autism, a kind of slavery, a sign of decay.
and reduce bugs.
I really miss the Wine-On-Windows mingw builds. The SF builds are outdated.
It makes sense. The FPU is dead; long live the multi-core FPUs exposed by OpenCL.