That is not what I read. Sounds like PCIe power draw is reduced with the new driver. In addition, an option was added to further reduce total power consumption. This option is "separate" from the PCIe power issue.
In this driver we've implemented a change to address power distribution on the Radeon RX 480 -- this change will lower current drawn from the PCIe bus. Separately, we've also included an option to reduce total power with minimal performance impact.
So they were working on the drivers and decided to add a feature. This feature is off by default but could be useful for those with limited cooling in their cases. However, the PCIe power issue is fixed in all cases.
Branch prediction is integrated with the pipeline. Most CPUs do not execute both branches so much as they perform all the work required to quickly switch to the alternate branch should a branch not go as predicted. This implies an alternate pipeline into which the instructions for the alternate branch are queued. This might not sound like much, but it actually constitutes >90% of the work a CPU must perform. The ALU is fast and simple; getting the correct data to and from the ALU is the challenging part.
CPUs can also support multiple ALUs - but this is not to speed up branches. Multiple ALUs are used when the CPU detects that incoming instructions are not dependent on one another and can be executed concurrently. When detected, the instructions are executed in parallel. The benefit is limited and comes at the cost of extra transistors. However, because you have less movement of data, power requirements are reduced.
Look at the Apple A9 CPU compared to the alternative multi-core ARM chips that are available. The A9 is just as fast while running fewer cores at a lower clock rate and consuming less power. It is able to do so by using the previously mentioned techniques. It uses billions of transistors and costs more to produce than other chips that are just as fast. Not a good choice for making devices with low profit margins, but an excellent choice if you can afford it.
While I agree this is more flash than substance, it hardly deviates from the laws of physics. Unlike the nVidia example you provided, this CPU does not have much in the way of IO bandwidth. So we are talking about minimal movement of data, which in turn results in impressively low power consumption. For certain applications this could be great (a previous post mentions neural networks). For the other 99% it is worthless.
One should not compare this CPU to a GPU because the underlying design goals are very different. It is possible that certain tasks would be much better serviced by this CPU. Designing appropriate algorithms will take some time so I suppose we will have to wait to see if it is actually useful.
I would assume the reasons were more technical. Apple was fully capable of working out a deal if they thought it would be of value. The problem with ZFS is that it consumes more hardware resources. This is fine for a server, because with additional hardware it performs quite well. People buying a server generally do not care about a couple gigs of RAM. But considering that Apple was selling laptops outfitted with 512 MB of RAM, it was not a good fit. Any filesystem supported by Apple would also have to operate well over USB. If FreeBSD's support for ZFS over USB is any indication, it is a bad idea (as I experienced with FreeNAS).
If there were no legal problems then it is possible Apple would have continued to integrate ZFS with the plan of eventually switching over. But regardless of the legal problems, that switch would not have occurred right away. Looks like Apple supported ZFS just long enough to come to the conclusion that it was not a good fit.
I love ZFS on my fileserver. I am tempted to run ZFS on my workstation. But for the majority of computers Apple sells today, it would cause users more pain than it should.
How the sensor got that reading could still be a manufacturing fault, cable fatigue, or a million and one other things that are not the fault of the driver.
Designing a pedal sensor that errors to 0% is expected. So when one of those million things goes wrong, you do not get the 100% acceleration experienced in this situation. A far more likely scenario is that something dropped onto the accelerator pedal. Alternatively, when in a state of shock, the driver mistook the accelerator pedal for the brake.
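To make the "errors to 0%" idea concrete, here is a minimal sketch of that fail-safe convention. All names and calibration values are hypothetical, not from any actual ECU code; the point is just that any implausible reading maps to zero throttle rather than full throttle.

```python
def read_throttle(raw_reading):
    """Map a raw pedal sensor value to a throttle percentage, failing safe.

    Any reading outside the plausible range (open circuit, short, cable
    fatigue, etc.) is treated as a fault and mapped to 0% throttle, so a
    broken sensor cannot command full acceleration.
    """
    RAW_MIN, RAW_MAX = 50, 950  # plausible ADC counts (hypothetical calibration)
    if raw_reading is None or not (RAW_MIN <= raw_reading <= RAW_MAX):
        return 0.0  # fault detected: error toward "no throttle"
    return 100.0 * (raw_reading - RAW_MIN) / (RAW_MAX - RAW_MIN)

print(read_throttle(500))   # mid-pedal reading -> 50.0
print(read_throttle(2000))  # out-of-range fault -> 0.0
```

Real systems add redundancy (two sensors with different slopes, cross-checked), but the design goal is the same: a fault must degrade toward zero acceleration.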
Apply something specific to you - such as the first 3 letters of 4 pets you have or grew up with. Take "Rufus, Hobbs, Chipper, Stinky" and turn it into "RufHobChiSti". Or how about the different street names you have to walk along to go from home to school? Lots of combinations are possible; the point is to figure out something you can remember. In order to remember it, it has to have some personal meaning - otherwise you would just use random characters.
What I do is have a common password which is then tweaked for each specific website. I use the website URL to prefix or postfix the password. For example, www.slashdot.org would turn into "stog" and be prefixed onto my common password to become "stogRufHobChiSti". Easy to remember yet hard to guess.
It is very important to use a different password for each website, because the risk of one being stolen and then applied elsewhere is very high. Far too many people share passwords between websites, email, etc. Very bad - apply a simple algorithm of your own design using the URL to prevent this.
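The scheme above can be sketched in a few lines. The exact site-tag rule is my assumption - the post doesn't spell out how "stog" is derived - so this sketch uses an illustrative rule (first two plus last two letters of the domain name), with the pet-initials string from the post as the common password.

```python
def site_tag(url):
    """Derive a short per-site tag from a URL.

    Illustrative rule (an assumption, not the poster's exact one):
    first two and last two letters of the site's domain name.
    """
    host = url.split("//")[-1].split("/")[0]           # strip scheme and path
    parts = host.split(".")
    name = parts[-2] if len(parts) >= 2 else parts[0]  # e.g. "slashdot"
    return name[:2] + name[-2:]                        # "slashdot" -> "slot"

def site_password(url, common="RufHobChiSti"):
    """Prefix the site tag onto the memorable common password."""
    return site_tag(url) + common

print(site_password("www.slashdot.org"))  # -> slotRufHobChiSti
```

The weakness of any such scheme is that one leaked password reveals the pattern; a password manager with random per-site passwords avoids that, at the cost of the memorability the poster is after.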
Parallels is a pain in the ass. Every time Mac OS updates, you have to update to a new version of Parallels. And those updates typically cost money. I believe they have sorted out most of their driver problems now, but it used to be that installing Parallels would cause nothing but problems for me.
Enter VirtualBox. I also do embedded development (Linux host), and VirtualBox saves me when I need a Windows app. The GPU drivers suck, but this is typically not a big deal when doing embedded development. Overall, I actually prefer it. Even if it cost the same as Parallels, I would still use VirtualBox.
it just has to be a 24 bit DAC with the analog section nicely filtered and the device shielded against interference
You mean a 1-bit DAC. This implies there is 1 least significant bit of accuracy in the resulting analog output signal. In other words, as accurate as you can get with the given input signal. You will note that the expensive CD players of old all advertised themselves as "1-bit DAC".
But even then it is not as simple as you might think. The digital data is the derivative of the original analog signal. First you have to integrate the digital data to generate the 24-bit signal that is then sent to the DAC. This can be done using analog or digital techniques. Remember, the digital signal is only 16 bits - typically. By integrating, you can generate a signal containing ~24 bits of data. This is why expensive CD players were expensive - they are more than just a DAC.
For what it is worth, this technique was originally designed for records. It has the effect of preserving the high frequencies while toning down the low frequencies. This is good because the vast majority of the energy is in the low frequencies, and this low-frequency data overwhelms the high-frequency data. If you don't do something, the low-frequency noise becomes so significant that it sounds like you are putting your music through a telephone line - but with good bass.
It should be noted that our ears are not linear. As such, digital data recorded linearly does not sound very good. It'll look good in the time domain (oscilloscope) but like crap in the frequency domain (spectrum analyzer). The latter is far more important.
Decaffeinated coffee? Just Say No.