Companies doing grid storage don't care what size the cells are. They're going to buy pre-built modules or units, and Tesla considers grid storage to be one of the biggest potential markets for the output of their Gigafactory.
Speaking as a fellow non-American, I'm thrilled. The better SpaceX does, the lower their costs will be, and the more likely that the CSA (Canadian Space Agency) will be able to afford their services.
The CSA's annual budget is only around half a billion dollars, roughly 25% per capita of what the US spends on NASA. That wouldn't even have been enough to afford a single shuttle flight. But with SpaceX's pricing, Canada can afford to launch our own stuff via private industry. We've already used SpaceX to launch a satellite (CASSIOPE) much more cheaply than the alternatives, and if SpaceX hits its manned spaceflight target of $20 million a seat, Canada could actually afford to do its own manned launch with SpaceX. As in, a flight with only Canadian astronauts would actually be something our meagre budget could afford. And we can always use more Chris Hadfields.
Basically, the better SpaceX does, the more Canada can do with its limited space budget. Exciting times!
They both got essentially the same contract; the dollar value represents what each company bid for it, rather than establishing a first and second place.
Basically, they both won an equal contract. On the one hand, it sucks for SpaceX that they get less money to do the same thing, but on the other hand, it will be quite a feather in their cap to demonstrate concretely that they can live up to their claims of doing it for less, which will give them a huge edge in the next round of contracts. Next time they can say "Look, we did everything just as well as Boeing, but we cost you a ton less. This time you should give us most of the flights."
I think that ship sailed when Microsoft started contributing code to the Linux kernel, although they had released lots of code under OSI-approved licenses way before that.
I once checked out the TV section of a Yodobashi Camera (and if you're ever in Japan, you really must visit a Yodobashi Camera; every floor is the size of one or two Best Buy stores, and there are half a dozen floors or more). The brands of TVs on offer were very different from what you'd see outside of Japan. In most of the world, Korean brands like Samsung and LG are quite popular, but in that TV section (of what are probably the largest electronics stores in Japan), there was not a single non-Japanese television brand to be seen. Not one Samsung or LG television was available.
- The difference is irrelevant: the apps are stored as platform-independent bytecode, which is compiled to machine code ahead of time by ART (as of the next Android release) or on the fly by Dalvik itself. As a result, so long as Dalvik or ART supports the processor architecture, the application doesn't need to.
- As long as the ARM app doesn't use NEON (which I believe Intel's Houdini emulator doesn't support), it shouldn't have any problems running the ARM code on the x86 devices. In fact, you're likely to have better compatibility running emulated on x86 than you are natively on some older ARM devices.
You're accounting only for full-drive failures. IIRC Backblaze's data indicates failure rates are higher than 5% per year, but that's not really relevant. The bigger problem is a read error during a resilver. Going by the drives' specified unrecoverable read error rate, you should expect at least one such error during any resilver, although in practice I find it less likely than that.
If you're using mirrored pairs, any resilver is (by spec) highly likely to hit an unrecoverable read error, and with no remaining redundancy that means corruption. Resilvering a single drive in a raidz2 array, however, still leaves you with enough redundancy to recover from any read errors.
Yes, but write throughput is still increased, and not everybody needs more write IOPS. Furthermore, even with 8 disks you can build two raidz2 vdevs and put them in one pool, at which point you've got the IOPS of two disks instead of one. And on top of that, you can use fast SSDs as dedicated ZIL (SLOG) devices.
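As a sketch of that 8-disk layout (device names are placeholders, and this assumes a Linux box with ZFS already installed):

```shell
# Two 4-disk raidz2 vdevs in one pool: ZFS stripes writes across
# the vdevs, so the pool gets roughly twice the IOPS of one vdev.
zpool create tank \
    raidz2 sda sdb sdc sdd \
    raidz2 sde sdf sdg sdh

# Optionally add a mirrored pair of fast SSDs as a dedicated
# ZIL (SLOG) device to absorb synchronous writes.
zpool add tank log mirror nvme0n1 nvme1n1
```

Each raidz vdev delivers roughly the random IOPS of a single disk, which is why striping two vdevs doubles it.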
Some points here:
- Most Android apps ship as Dalvik bytecode (compiled from Java), not native code, so the underlying processor architecture is irrelevant (for those apps)
- x86 is a supported Android platform, so many apps that do require native code have x86 binaries available
- Intel provides an ARM emulator for the x86 version of Android so that x86 Android devices can run ARM binaries
- Some ChromeOS devices use ARM processors to begin with.
Only if ZFS is communicating with that higher level. A simpler solution is to use ZFS's native RAID instead of treating a hardware RAID array as a single block device. I can't think of a single benefit to the latter, but I can think of lots of reasons why it's a bad idea.
Sorry, brain fart. I meant "DEDUPE consumes insane amounts of RAM", not "BSD".
You shouldn't have to reinstall ZFS after updates (apart from maybe a distro release upgrade, which on an Ubuntu file server is probably being done every two years, or every six months if you live on the edge), as it uses DKMS and will recompile the modules whenever you update the kernel or zfs.
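If you want to confirm DKMS did its job after a kernel update, a quick check (assuming the zfs packages were installed via DKMS) is:

```shell
# List DKMS-managed modules and their build state per kernel.
# Exact output format varies by DKMS version; look for the zfs
# (and, on older releases, spl) entries marked "installed".
dkms status

# If a module is missing for the new kernel, trigger a rebuild
# of everything DKMS knows about for the running kernel:
dkms autoinstall
```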
Intel's best-kept secret is that many of its cheapest processors support ECC (including most of the i3 series), which lets you build some surprisingly low-cost, low-power file servers.
Here's the list of Intel desktop CPUs that support ECC:
Looks like the MSRP starts at around $64. The downside is that you also need a chipset that supports ECC, and those are only server chipsets. Luckily, motherboards with one of those (like the Intel C222 chipset) start at ~$140 or so.
Slapping together a low-end server motherboard, an i3, an 8-drive HBA or two, and a bunch of ECC RAM is a popular way to make a low-end file server.
It works if and only if the target system is also using LSI RAID controllers.
Meanwhile, I created my storage pool on Solaris UNIX, used it for years, then switched to Linux without having to do anything to the pool except "zpool export tank" on the old OS and "zpool import tank" on the new one.
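The whole migration boils down to those two commands (pool name as in my example; run as root):

```shell
# On the old Solaris host: flush everything and cleanly detach the pool.
zpool export tank

# Physically move the disks, then on the new Linux host:
zpool import tank

# Sanity-check that all vdevs came up ONLINE.
zpool status tank
```

ZFS writes the pool configuration to the disks themselves, which is what makes the pool portable across operating systems.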
Why would you run ZFS on top of two raid6 arrays instead of building a storage pool consisting of two 10-drive raidz2 vdevs? The way you've set it up, ZFS thinks it's running a stripe across two big drives with no redundancy, so if it finds a checksum error it has no redundant copy to repair the data from, despite the raid6 underneath.
What you're doing is highly inadvisable.
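For comparison, the layout I'm suggesting would look something like this (hypothetical device names, 20 disks total):

```shell
# Let ZFS own the redundancy: two 10-drive raidz2 vdevs in one pool.
# Now a block that fails its checksum can be rebuilt from the vdev's
# own parity instead of being silently lost.
zpool create tank \
    raidz2 sda sdb sdc sdd sde sdf sdg sdh sdi sdj \
    raidz2 sdk sdl sdm sdn sdo sdp sdq sdr sds sdt
```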