You need at least:
1. Kernel driver for hardware init, power management, mode setting, GPU buffer management and command submission
2. Userland library for GPU buffer management and command submission
3. OpenGL implementation
In the open source graphics stack, the kernel driver exposes KMS and DRM interfaces, and potentially others. Parts 2 and 3 are provided by libdrm and Mesa respectively. The display server can (I think) be built on top of KMS, libdrm and OpenGL and be independent of the hardware. However, it will need a companion API to OpenGL called EGL, whose window-system binding is specific to each display server protocol.
Currently X doesn't usually work that way, for historical reasons: it used 2D acceleration first and still supports hardware that has only 2D acceleration, so it has hardware-specific drivers for each family of GPUs. However, there is the 'Glamor' library that implements 2D acceleration generically on top of OpenGL, and I would expect to see a gradual move to that, not least because it's the only option for 2D acceleration in XWayland.
Getting back to Nvidia, their problem currently is that they don't implement the same interfaces as the open source stack, and therefore don't work with the new display servers that depend on those interfaces. Implementing KMS gets them a long way there. However, it sounds like they still need to reimplement EGL, not because it's hardware-specific but because their OpenGL implementation is entirely independent of Mesa.
1. The allegations against Quinn are insinuations with no evidence behind them.
2. Sarkeesian has been loudly contradicted and called a con-woman by people who can't take criticism and are annoyed by the success of her Kickstarter.
3. This is being called "misogyny" in gaming because it is directed specifically at women.
4. The Social Justice Warriors have all supported these women because they oppose misogyny.
5. It's cheaper and easier to brand gamers as basement-dwelling virgin man-children than it is to look at the facts. This is stereotyping, but it is nothing like the harassment, online bullying, doxxing or death threats made by some gamers against feminist critics.
Fixed that for you.
ASICs generally aren't flexible enough that you could simply emulate another controller in firmware, while FPGAs suck too much power to use on commodity network adapters. Writing a new driver (or bringing an existing neglected driver up to scratch) is going to be quicker than trying to make hardware that's compatible enough to work with a driver written for another vendor's controller.
(Besides which, as that other driver is probably maintained by your competitor, do you really think they're going to make an effort to ensure that their later updates are compatible with your clone controller? You'll still have to maintain your own fork.)
I have often wondered why there isn't a vendor-neutral register-level standard for Ethernet controllers, along the lines of AHCI and xHCI. There is the virtio networking standard, but as it's designed for VMs I assume it does not cover Ethernet link management. I seem to remember that VMware tried to promote a common interface for SR-IOV virtual functions at one time, but that didn't get very far. Again that would not have included link management.
Qt by default uses native widgets wherever possible
I believe it imitates the look of native widgets but doesn't actually use them. This should allow for consistent behaviour on all platforms (unlike, say, WxWidgets).
- Make the binnmu regexp also recognize our build suffixes
- New XBox controller driver
- Disable Intel P-State driver as it causes issues with sound being choppy during BigPicture trailer video playback.
- Hard-code parallel build for now since our OBS infrastructure doesn't know how to set these options yet.
- Add postinst step to touch
There's not 64k of assembly pumping bytes into a framebuffer and twiddling the PC speaker port to synthesize digital audio.
Of course. But all the creative work is squeezed into 64K.
One thing I couldn't find in there (and I've been out of the scene for a LONG time, so I don't know how this works on new-fangled fancy computers...) -- do these write directly to the video hardware? Or do they use OS services like DirectX11, etc?
They use DirectX, because that is the only way to support a reasonable range of hardware. (Also, you can't hit the hardware without installing a new driver or exploiting a kernel bug. Neither of which is very friendly.)
But are people still getting down and counting clock cycles?
Cycle counts aren't even documented today; with out-of-order execution they wouldn't mean much anyway. Now it's all about avoiding cache misses and cache invalidation.