Once the DIMM-packaged versions become available, that's when Optane will really start to take off - slightly slower than DRAM, but considerably cheaper for the same capacity
A latency roughly 35 times higher than DRAM's, which is what current Optane memory shows, is hardly "slightly" slower.
Look at the latency comparison table in the middle of this article.
We should deal with crises in the order in which each threatens our extinction. Climate change is first on the list in terms of immediacy. Later we can deal with other problems that are further out in time.
Are you kidding? What about nuclear weapons, and starting wars here and there willy-nilly?
because of a political regulatory process and has little relationship to reality.
In other words, if sweeping claims like the one you posted are allowed, nobody really knows what is actually cheaper.
For reference, the max bandwidth of USB 2.0 is 480 Mbit/s, or about 60 MB/s.
60 MB/s includes the low-level protocol overhead, so you cannot actually achieve that number. The maximum theoretical transfer speed for useful data is about 53 MB/s, and it can be further limited by the HW/SW used. With really good HW/SW you should achieve at least 43 MB/s.
But tabs I am not looking at should (by default) use ZERO CPU. I get that I might launch several tabs quickly and want to allow them to load. So, allow some time for them to load, but then cut it to zero.
Give me the OPTION to change this behavior. Give me the OPTION to play music in background (either globally, or on a specific tab). And for gosh sake, SHOW ME how much CPU each tab is using (optionally). Then I will know to avoid the sites that are using my browser to bitcoin-mine.
This! I was reading this discussion to find out whether I need to post myself. You did it for me. Thanks
No freaking way should background tabs run scripts by default.
The main difference between a concept and an interface is the time at which dispatch to the specific called code is resolved.
Concepts resolve the call at compile time. This can lead to binary code bloat, since the calling code needs to be "cloned" for each called code.
Interfaces resolve the call at run time. They can reuse the caller code, but impose some call overhead (the run-time dispatch).
And sometimes you just need the resolution at run time. If it were not available, one would be forced to simulate it by hand (e.g., with function pointers or type tags).
The amount of time between slipping on the peel and landing on the pavement is precisely 1 bananosecond.