Note the article says ARM *server* processors. In that market, GPUs are totally irrelevant, power usage is secondary to performance, and price of the CPU is a distant third.
Power usage translates directly into heat, so a CPU that draws one-tenth the power lets you pack roughly ten times as many of them into the same rack's power and cooling budget.
Unfortunately this means every write to the display now has to go via the kernel instead of going direct.
No: if you mmap() the framebuffer, you (as in the application) write directly to video memory. What you say is true if the application uses lseek() and write() instead, but why would it? The reason it's slower is that the ordinary drivers make the GPU do all the heavy lifting. The only abstraction the Linux framebuffer adds is that you don't have to map the memory yourself; the kernel drivers do it for you.
if it isn't ZFS, it's only temporary.
Larry Ellison would be proud.
If this board had a few spare GPIO pins brought out to a header there really wouldn't be a reason for Pi to exist at this point.
Except that it costs twice as much and draws more power, so if you don't need all the extra functionality it's a waste of money and electricity.
Not that I'm saying it isn't needed. There probably will be a market for a larger, more expensive version of the Raspberry Pi.
- Jesse has been playing around a bit with some remote Wayland support using libvncserver. He's apparently had some success with this and expects to push some code upstream soon.
So there are already people working on it.
unless you create an infrastructure that regulates how many vehicles can charge at any one time
Like a limited number of refueling stations? If recharging only takes six minutes and needs special equipment (thick cables, giant power transformers, etc.), people probably won't charge their cars at home.
Long computations which yield zero are probably all for naught.