That's not actually true, though it could be if you hit all of the tax incentives. Even at $10k income you're still paying some tax (though again, there are probably a lot of ways to reduce the burden to $0).
Except that the vast majority of the population is NOT in dense population centers, but spread out across the land area.
Nukes also have an easier time leveling buildings than they do wiping out populations. The fireball is generally very small, the overpressure zone that will kill you is a bit bigger, but there's a wide zone of "buildings become unsound" where people suffer much lesser effects.
You're right that it would screw up civilization, but only for a while. WW2 wrecked a LOT of countries, but they bounced back because people don't tend to sit amidst rubble going "what do we do now"-- they tend to rebuild society.
I think his point is that an ASCII log is human readable with any text editor without needing an interpreter program.
And that is 100% wrong. Text files in ASCII are still binary; they just conform to a standard that uses 7 bits per character and maps those codes to written text. At the end of the day it is a stream of 1s and 0s like any other binary file. Executable code, too, is a stream of 1s and 0s corresponding to certain symbols-- they just happen to be symbols that are meaningful to a processor rather than to a human.
What's crucial to really understand here is that there isn't a difference of kind; everyone seems to be over-abstracting things to the point where there's some hard barrier between what makes something "binary" vs "text". The only reason we are "comfortable" with text is that there is a wide variety of interpreters that can display ASCII in human-readable form on screen; but when you boil it down, there isn't an actual qualitative difference between that and a particular record in a SQL table, other than the programs that interact with the data.
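The point is easy to demonstrate. Here's a minimal sketch (the sample string is invented for illustration) showing that "text" is just a byte stream whose values happen to fall in the ASCII range:

```python
# "Text" is just bytes that a convention maps to glyphs; nothing more.
line = "ERROR: disk full\n"
raw = line.encode("ascii")

# The same data as plain integers -- a byte stream like any other.
print(list(raw[:5]))          # [69, 82, 82, 79, 82] -- 'E', 'R', 'R', 'O', 'R'

# ASCII only uses 7 bits, so every byte's high bit is 0.
assert all(b < 0x80 for b in raw)

# "Reading text" is just applying one particular byte-to-glyph mapping.
print(raw.decode("ascii"), end="")
```

Swap the `.decode("ascii")` for a disassembler and you've done the same thing to machine code: applied an interpreter to a stream of bytes.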
Perhaps you could clarify. Was I wrong in saying that ASCII is binary? Or that boot sectors / bootloaders / partition tables are?
AFAIK even hex-based structures are fundamentally binary, but perhaps you're using a different sort of processor than the rest of us.
That's not just wrong, it's hilariously wrong. "Destroying the earth" would require several hundred thousand very high-yield nukes (1MT); there aren't more than a bit over 10k in the world, and the info I was able to find indicates they're generally much smaller than 1MT (so, perhaps a million nukes to be sure).
I'm not sure exactly how much uranium would be required for "several million 500kt nuclear warheads", but I'm quite certain no one has that much.
~1500 multiple warhead weapons is still enough to blow up the world several times over
No, it's not, not even remotely close.
(figures taken from http://en.wikipedia.org/wiki/E...)
* Nuclear warheads have an area of destruction of some 180mi^2 (1MT, "destruction of buildings" = 6 miles).
* The US is 3,717,813 miles^2
* 3,717,813 / 180 ≈ 20,650, so call it 20,000 1MT warheads to cover the US in "moderate destruction".
It gets better.
The world's land area is 57.53 million square miles. That means you need a hefty 320,000 1MT warheads (quite a large warhead, MUCH bigger than the one we used at Nagasaki) to "destroy the world". And you say we have that, several times over? My goodness, which countries are you supposing have that many? I had understood the US to have the most at some 6000, and other than western Europe and Russia I didn't think anyone else had any. Dr. Evil, perhaps?
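The arithmetic above is trivial to check. A quick sketch using only the figures already quoted (180 mi^2 of destruction per 1MT warhead):

```python
# Back-of-the-envelope check using the figures quoted above (square miles).
DESTRUCTION_AREA = 180           # per 1MT warhead, "destruction of buildings"
US_LAND_AREA = 3_717_813
WORLD_LAND_AREA = 57_530_000     # 57.53 million mi^2

print(round(US_LAND_AREA / DESTRUCTION_AREA))     # roughly 20,000 warheads
print(round(WORLD_LAND_AREA / DESTRUCTION_AREA))  # roughly 320,000 warheads
```

And that's assuming perfect, non-overlapping spacing of every warhead-- the real requirement would be higher still.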
Maybe you're talking about fallout, but that's not really what "destroy" means; a word like "contaminate" would be more accurate, if also much vaguer.
Nukes don't actually scorch that big a patch of earth. I think there are some ~12k nukes on the planet, and if all of them were aimed with perfect spacing at the US, I think you could take out most of the buildings in the country. That's a far sight from destroying the world.
Don't know about the fallout, though; that'd probably be pretty nasty.
Exactly-- those log files you can parse in disaster scenarios are already binary; you just have a plethora of editors that understand a particular breed of binary called ASCII. Heck, a lot of the recovery process involves manipulating binary structures-- bootloader, boot sector, partition table, etc. We just have a lot of tools to handle these scenarios, and no one pays it much mind.
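For instance, the MBR partition table is just a fixed binary layout that any tool can decode once the format is documented. A minimal sketch using Python's struct module (the sector contents below are fabricated for illustration):

```python
import struct

# Build a fake 512-byte MBR sector: one partition entry plus the boot signature.
mbr = bytearray(512)
mbr[510:512] = b"\x55\xaa"                      # boot signature at offset 510
# First partition entry lives at offset 446, 16 bytes:
# status, CHS start (3 bytes), type, CHS end (3 bytes), LBA start, sector count
mbr[446:462] = struct.pack("<B3sB3sII",
                           0x80, b"\x00\x00\x00",   # bootable, CHS ignored here
                           0x83, b"\x00\x00\x00",   # 0x83 = Linux partition type
                           2048, 1_000_000)

# "Recovery tooling" is just the reverse: check the signature, decode the entry.
assert mbr[510:512] == b"\x55\xaa", "not a valid MBR"
status, _, ptype, _, lba_start, sectors = struct.unpack("<B3sB3sII", mbr[446:462])
print(hex(ptype), lba_start, sectors)   # 0x83 2048 1000000
```

Nothing about this is harder than parsing a text log-- it's just a different, equally well-documented convention.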
There's no reason to think the core *nix toolset wouldn't come to include tools for this new format.
Not being a primary *nix user I don't really have a horse in this race, but it gets tiring hearing otherwise intelligent sysadmins complain about something that, in technical terms, is already the case. The only difference between "binary logs" and what you have now is semantics and a thing called ASCII. There is NO REASON a binary log format could not be as well documented and supported, particularly if it were a standard across all Linux distros.
Good lord, MySQL uses "binary formats", but somehow parsing them isn't an issue. Why do you suppose that is?
Luckily Moto G / Moto X / Nexus devices are all quite nice, cheap, unlocked, and will have support for a very long time.
With an Android device, the manufacturer outright abandons updating the phone the moment their next handset is on sale.
And then you say "screw it", grab AOSP, and wonder why you didn't get an unlocked non-contract phone like a Moto G or Nexus to begin with.
The issue is that 71% of RESPONDENTS said it was an issue.
You can't form any conclusion from that without more information-- the size of the poll, whether people responded to other questions but not this one, etc.
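To put a number on why the sample size matters: the margin of error on a poll percentage shrinks with the square root of the number of respondents, which is exactly the missing information. A quick sketch using the standard 95% normal-approximation formula:

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% confidence margin of error for a proportion p from n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

# The same headline "71%" means very different things at different sample sizes.
for n in (10, 100, 1000):
    print(n, round(margin_of_error(0.71, n), 3))
# 10 respondents: +/- ~28 points; 1000 respondents: +/- ~3 points.
```

With 10 respondents, "71%" could plausibly be anywhere from the mid-40s to nearly 100%-- which is why the raw percentage alone tells you almost nothing.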
I guess you now realize that's wrong. The main purpose of trim is to avoid reading and writing pages that are unused anyway. The SSD doesn't need to reallocate trimmed blocks, because the OS isn't using that data anyway. Less physical reading and writing == more endurance.
It's not wrong.
1) TRIM simply alerts the drive when a block is ready for erasure; it's right there in the article I linked. Its primary purpose is not reallocation or anything else; it's just garbage collection for performance reasons.
2) The endurance effect appears ONLY if the firmware uses a hack to implement its own garbage collection, which can induce write amplification. TRIM does not, in itself, affect endurance if the SSD isn't doing anything fancy / out-of-spec.
3) Reads have no impact whatsoever on endurance. Only write/erase cycles do-- hence why they quote 1000 P/E cycles (where P = program and E = erase).
Now that you've agreed with what I said (TRIM affects endurance only in an application-dependent way), are you ready to admit YOU had forgotten exactly what the tech does?
From the wikipedia article's opening paragraph:
A Trim command (commonly typeset as TRIM) allows an operating system to inform a solid-state drive (SSD) which blocks of data are no longer considered in use and can be wiped internally.
The purpose of TRIM is performance-- NOT ENDURANCE. It has NOTHING TO DO WITH ENDURANCE except insofar as it replaces a manufacturer's proprietary, amplification-causing garbage collection. Older drives don't HAVE garbage collection, and TRIM does NOTHING for their endurance; all it does is eliminate the eventual performance crash.
You REALLY need to read up on TRIM, as you don't seem to understand what it does. To repeat: it has no effect on reallocations; it does scheduled erasures. If an erasure would cause a reallocation, that would happen regardless of whether it occurred during a scheduled TRIM or during an on-the-fly erase/write.
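The performance (not endurance) role of TRIM comes down to flash's erase-before-write constraint. Here's a toy cost model (the names and cost units are invented for illustration) of why erasing stale blocks in the background speeds up later writes without changing the total erase count:

```python
# Flash can only be programmed after an erase, and erases are much slower
# than writes. TRIM tells the drive which blocks hold deleted data so it
# can erase them in the background, keeping the slow erase out of the
# write path. The erase happens either way -- only its timing changes.
ERASE_COST, WRITE_COST = 10, 1   # arbitrary illustrative units

def write_cost(pre_erased: bool) -> int:
    """In-path cost of writing a block that previously held deleted data."""
    if pre_erased:
        return WRITE_COST                # TRIM'd earlier: erase already done
    return ERASE_COST + WRITE_COST       # no TRIM: inline erase, the slow path

print(write_cost(True))    # with TRIM: fast write
print(write_cost(False))   # without TRIM: same total erases, but a slow write
```

Note that both paths incur exactly one erase per rewrite-- which is the point above: TRIM moves the erase out of the write path for speed, it doesn't eliminate erases, so endurance is untouched.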