IE hasn't been integrated with the shell for a decade. If you type a URL into an Explorer window in Win7 or 8, it just launches your default browser, which may not be IE.
Idiot drone pilot flying around other people's property a mere 10 feet off the ground? Damn straight you should have the right to take that thing out. But it should still be illegal to shoot it down with a gun. That's just a public safety hazard far worse than the drone. Saying that it should be safe because the shot is small and doesn't hurt when it falls is like saying that it's safe to point a gun at somebody and pull the trigger because you think the chamber is empty. Some idiot is going to eventually make a mistake and shoot at a drone with something he shouldn't, something that isn't going to be as harmless as birdshot.
My suggestion for dealing with low-flying drones: pool skimmer. If it's just hovering there 10 feet off the ground, just grab the thing out of the air (or smack it hard enough to down it). If it's flying low enough over your property for the pole to reach it, then it's flying low enough that you should be allowed to take it out.
No, you're right. Looking into it, there appears to be no reason why what I've described couldn't happen at the ZVOL layer, but none of the layers on top of it support it. An exception would probably be block-sized inserts in the middle of a file on a ZFS filesystem that has dedupe enabled. As the "new" blocks were written out for the rest of the file, the filesystem would see that they were identical to existing blocks on disk and just point to them instead.
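A toy illustration of why the insert has to be exactly block-sized for this to work (not ZFS code, just a hash-based dedupe table with a made-up 4-byte block size):

```python
import hashlib

BLOCK = 4  # toy block size; ZFS records are typically much larger (e.g. 128K)

def blocks(data):
    return [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]

def dedupe_hits(old, new):
    # count how many of the "rewritten" blocks already exist on disk
    table = {hashlib.sha256(b).digest() for b in blocks(old)}
    return sum(hashlib.sha256(b).digest() in table for b in blocks(new))

old = b"AAAABBBBCCCCDDDD"

# insert exactly one block at the front: every following block still lands
# on a block boundary, so dedupe just points at the existing copies
print(dedupe_hits(old, b"XXXX" + old))  # 4 of 5 blocks dedupe

# insert a single byte: every block boundary shifts, so nothing matches
print(dedupe_hits(old, b"X" + old))     # 0 of 5 blocks dedupe
```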
Of course, dedupe on ZFS is a terrible, memory-hungry monster that should be avoided unless you can afford to throw ridiculous amounts of RAM at it. Enterprise customers might have hardware that can feed the beast, but a home user sticking three 4TB drives in a raidz vdev would be hard pressed to feed it the 60GB of RAM that those few disks would require.
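As a rough back-of-the-envelope for where that 60GB comes from (assuming ~320 bytes per dedup-table entry and a 64K average block size, both ballpark figures, with all 12TB of raw capacity counted):

```python
# ballpark sizing of the ZFS dedup table (DDT) that has to live in RAM
drives = 3
drive_tb = 4
ddt_entry_bytes = 320        # rough per-block DDT entry size (assumption)
avg_block = 64 * 1024        # assumed average block size

data_bytes = drives * drive_tb * 10**12
entries = data_bytes // avg_block
ddt_gb = entries * ddt_entry_bytes / 10**9
print(f"{ddt_gb:.0f} GB of RAM")  # ~59 GB, roughly the 60GB figure above
```

With larger average blocks the table shrinks proportionally, but small-file-heavy datasets push it the other way.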
You're far from a typical user, however, and 100 petabytes of this new stuff would be just as cost prohibitive as 100 petabytes of NAND or DRAM...
Samsung started shipping 3D NAND in consumer SSDs in mid-2014 with the 850 Pro. It used a 32-layer 40nm process. It was their second-gen 3D NAND, with their first-gen 24-layer product released in 2013 for enterprise SSDs.
IMFT and Toshiba are both sampling 3D NAND, 32-layer for IMFT and 48-layer for Toshiba. Neither is saying what size, but some googling indicates the consensus is somewhere between 35nm and 50nm.
You're still going to need to do wear leveling. If you can write a byte every 2us, but you only have 1000x the endurance of NAND, you've only got 3 million write cycles, which you could burn through in around 6 seconds for a single byte. And since per-byte wear leveling is nuts, you're back to using the same sort of blocks as NAND does.
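The arithmetic, assuming a ballpark 3,000 program/erase cycles for MLC NAND:

```python
# how fast a single hot byte exhausts its write endurance
write_interval_s = 2e-6      # one write every 2 microseconds
nand_endurance = 3_000       # assumed MLC program/erase cycles
multiplier = 1_000           # "1000x the endurance of NAND"

cycles = nand_endurance * multiplier       # 3 million writes
seconds = cycles * write_interval_s
print(seconds)  # 6.0 seconds to wear out one byte
```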
In terms of quickly eroding the market for NAND, if this stuff is still significantly more expensive than NAND, I don't see how it would have much of an impact. They're saying the cost is "in between there somewhere" relative to DRAM and NAND. For consumer products, NAND is at ballpark $0.38 per gig, and DRAM is at around $5.00 per gig. So depending on where XPoint is in that range, it could either have a huge impact or a small impact.
Then again, $0.38 is MLC or TLC, and SLC is obviously going to cost a lot more than that. So maybe this new stuff might be more competitive with SLC and supplant that, but the SLC market is probably pretty small: Intel doesn't even use it in their enterprise drives anymore.
I would imagine where this stuff will have the biggest impact is in completely new use cases (places that needed non-volatile memory and so had to use flash, but were significantly performance bottlenecked), or in places where a combination of NAND/DRAM/supercapacitors are currently used. I can see enterprise use being a thing.
Sorry, I meant not an *nVidia* CPU.
The X1 uses a standard ARM Cortex A57 (specifically it's an A57/A53 big.LITTLE 4+4 config), so this says more about ARM's chip than anything nVidia did...
Now if you compared nVidia's Denver CPU, their in-house processor... The Denver is nearly twice as fast as the A57, but only comes in a dual-core config, so it's probably drawing a good deal more power. When you compare a quad-core A57 to a dual-core Denver, the A57 comes out slightly ahead in multicore benchmarks. Of course, single core performance is important too, so I'd be tempted to take a dual-core part over a quad-core if the dual-core had twice the performance per-core...
Why the X1 didn't use a variant of Denver isn't something that nVidia has said, but the assumption most make is that it wasn't ready for the die shrink to 20nm that the X1 entailed.
Why would most people need a terabyte of RAM? There is such a thing as diminishing returns, and you have to be doing pretty crazy stuff to need more than 32GB or so. Ultimately making RAM cheaper might save a few bucks, but that's not justifying the massive hype that non-volatile memory has had over the years.
In terms of 4K recording on a phone, video encoding is done in hardware, not software, and no amount of cache is going to solve that. If you're talking about high-speed capture, your 64GB cache would store around 11 seconds of raw 240FPS 4K video, and if you're encoding it first, then the speed of the storage isn't really the bottleneck.
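Where the ~11 seconds comes from, assuming uncompressed 8-bit RGB frames:

```python
# raw (unencoded) 4K @ 240 fps pouring into a 64 GB cache
width, height = 3840, 2160
bytes_per_pixel = 3          # assumed 8-bit RGB, uncompressed
fps = 240
cache_gb = 64

frame_bytes = width * height * bytes_per_pixel  # ~24.9 MB per frame
rate = frame_bytes * fps                        # ~6 GB/s
seconds = cache_gb * 10**9 / rate
print(f"{seconds:.1f} s")  # ~10.7 seconds of footage
```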
I'm proposing that the drones be equipped with this to keep them out of the buffer area, not the actual airplanes. Airplanes operating over cities are already required to operate 1,000 feet above the highest nearby obstacles, placing them far above any drones. Helicopters would be another story, but they are allowed to operate under your proposed 10 foot cap, so that's kind of already a thing.
Why would you expect it to be cheaper or denser than NAND? IMFT says it'll be more expensive than NAND, and even if the cost drops over time, so too will the cost of NAND.
The whole point of this, though, is integrating something new that works radically differently from existing aircraft. If other technological mechanisms can provide sufficient accuracy, why can't they be used?
It's not like they need to solely rely on AGPS either. Consumer IMUs have been advancing at a rapid pace (there are huge amounts of money being dumped into them due to gaming, VR, and mobile phones), and are capable of high accuracy when combined with an external reference. You can also use laser ranging, which is also very cheap these days (my robotic vacuum cleaner has a LIDAR turret on it, although the range would be less than the few hundred feet required here). If you know where you are, and you know the height above sea level at that location, and you know how far you are from the ground...
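...then the rest is addition. A minimal sketch of that fusion (the function names and inputs are hypothetical stand-ins, not any real drone API):

```python
# hypothetical altitude fix: known terrain elevation + laser ranging

def altitude_msl(terrain_elevation_m: float, laser_range_m: float) -> float:
    """Height above sea level = ground elevation at the GPS fix
    plus the laser-measured distance down to the ground."""
    return terrain_elevation_m + laser_range_m

def altitude_agl(laser_range_m: float) -> float:
    """For a 'stay above X feet' rule, the laser reading alone
    already gives height above ground level."""
    return laser_range_m

# e.g. terrain database says the ground here is 120 m MSL,
# and the downward laser reads 90 m to the ground
print(altitude_msl(120.0, 90.0))  # 210.0 m MSL (90 m AGL)
```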
There are many tough problems to solve to make what Amazon is proposing practical, but accurately figuring out your altitude a few hundred feet from the ground is certainly not insoluble (or even particularly difficult). There are many things you can do to determine altitude at 300 feet that aren't possible at 30,000 feet.
Good modern GPS implementations (which often include information from multiple constellations and other sources like wifi and cellular towers) can provide altitude with far better than 100' of accuracy. They are not particularly expensive.
> Not quite: Just open a 500MB word document and insert a single character at the top of the file.
COW filesystems would have no problem with that scenario, especially when they have dynamic block sizes. There might still be some nasty write amplification (such as writing kilobytes of data to insert the one character), but it wouldn't be any slower than appending one byte to the end of the file.
Would that really make much of a difference? Computers already act, for the most part, like they have non-volatile memory. When you shut the lid on a laptop, it writes RAM to disk and goes to sleep. If you wake it up without having cut the power, it wakes up quickly. If you pull the power/battery, it takes a few seconds longer. In either case, it wakes up where it left off.
There is also nothing stopping developers from doing what you describe right now. Storage is fast enough that changes to most files can be saved directly to disk as they're made. When working in the cloud, this sort of "every keystroke saved" thing is already the norm.
I'm not trying to say that really fast and durable non-volatile memory wouldn't make some improvements in some places, but generally the workarounds currently in use have gotten so good that the impact would be relatively minor.
But at that point you're basically just replacing the current approach of a supercapacitor and DRAM with some of this new stuff. You might save a few bucks, but that's a relatively small difference.