You could ban all of those drugs, and some other drug would become the first one users try.
Would that be causation, or just correlation?
Pushing this even further --- I have inherited a (mostly empty) 3,000 square foot data center (almost Tier III - but it shares a wall with the outside or so I'm told). I'm using (maybe) two racks.
Are you a Nigerian prince?
Yeah, CPU-only coins last about 48 hours before a GPU miner is released. When it comes to crypto-coins, the fact is that a modern graphics card is faster at these workloads than almost any CPU.
This applies mainly to those that simply choose a semi-standard hash algorithm, such as one of the SHA3 contestants or a combination thereof. Often there is GPU code already available, and building the miner is all about reading some specs and writing some glue code*. Also, most of these coins are based on Bitcoin and simply change the hash algo.
However, most Cryptonote coins (using the Cryptonight algo) have lasted for ages without an open GPU miner. For starters, they are not forked off Bitcoin. Boolberry is a Cryptonote coin with a different algo, which makes it faster to sync, while still aiming for GPU resistance. An open GPU mining codebase was released just a few days ago, and there's still work to do for general distribution. Besides, Boolberry's algorithm needs several MB of fast cache, which is OK with GPU texture cache at the moment, but it will grow over time, possibly making GPU mining unfeasible again.
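To make the "several MB of fast cache" point concrete, here is a toy sketch (in Python, and emphatically not the real Cryptonight algorithm) of a memory-hard hash: it fills a multi-megabyte scratchpad and then does data-dependent reads over it, so hardware with small fast caches per core loses its advantage. All names here (`memory_hard_hash`, `pad_mib`, `rounds`) are made up for illustration.

```python
import hashlib

def memory_hard_hash(data: bytes, pad_mib: int = 2, rounds: int = 4096) -> bytes:
    """Toy memory-hard hash: scratchpad fill + data-dependent walk."""
    block = hashlib.sha256(data).digest()

    # Phase 1: fill a multi-megabyte scratchpad from the input.
    pad = bytearray()
    while len(pad) < pad_mib << 20:
        block = hashlib.sha256(block).digest()
        pad += block

    # Phase 2: random-looking, data-dependent reads over the pad.
    # Each read location depends on the evolving state, so the whole
    # scratchpad must sit in fast memory to compute this quickly.
    state = block
    for _ in range(rounds):
        idx = int.from_bytes(state[:4], "little") % (len(pad) - 32)
        state = hashlib.sha256(bytes(pad[idx:idx + 32]) + state).digest()
    return state
```

The design choice is that the unpredictable reads in phase 2 are the expensive part: a GPU core with a few KB of cache per thread keeps stalling on them, which is exactly the kind of GPU resistance these coins aim for.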
*(I wrote a GPU miner for JH-256 coins in a few days with no prior GPU/OpenCL experience. Endianness is a bitch.)
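Why endianness bites: SHA-family specs define message words as big-endian integers, while x86 CPUs and typical GPU buffers are little-endian, so miner glue code is full of byte swaps. A minimal Python illustration of the failure mode:

```python
import struct

word = 0x01020304

le = struct.pack("<I", word)  # little-endian byte layout (x86/GPU memory)
be = struct.pack(">I", word)  # big-endian byte layout (what the hash spec wants)

# Reading the big-endian bytes back in the wrong byte order yields a
# completely different integer -- and the hash silently comes out wrong.
wrong = struct.unpack("<I", be)[0]

print(le.hex())    # 04030201
print(be.hex())    # 01020304
print(hex(wrong))  # 0x4030201
```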
If only there were a dedicated community for every sad sloth, or at least an anagram thereof.
(If you either feel for the sloths, or just appreciate the pun, please send a random amount of slothcoins to SML12GaoebyneT7ctYuj9PFicptetjPUct. Thank you.)
Or if you're into math, you invoke the pigeonhole principle. So the limit of useful compression (Shannon aside) comes down to how well we can model the data. As a simple example, I can give you two 64-bit floats as parameters to a quadratic iterator, and you can fill your latest 6TB HDD with conventionally "incompressible" data as the output. If, however, you know the right model, you can recreate that data from a mere 16 bytes of input. Now extend that to more complex functions --- our entire understanding of "random" means nothing more than "more complex than we know how to model". As another example, take the delay between decays in a sample of radioactive material --- we currently consider that "random", but someday we may discover that god doesn't play dice with the universe, and that an entirely deterministic process underlies every blip on the ol' Geiger counter.
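The quadratic-iterator idea above can be sketched in a few lines of Python. This uses the logistic map (x → r·x·(1−x)), a chaotic quadratic iteration, seeded by two doubles packed into 16 bytes; the function name and parameters are made up for illustration, and this is a thought experiment, not a real codec:

```python
import struct

def quadratic_stream(seed: bytes, n: int) -> bytes:
    """Expand a 16-byte seed (two doubles) into n bytes via a quadratic map."""
    r, x = struct.unpack("dd", seed)  # 16 bytes -> two 64-bit floats
    out = bytearray()
    for _ in range(n):
        x = r * x * (1.0 - x)            # the quadratic iteration
        out.append(int(x * 256) & 0xFF)  # keep the low byte of the state
    return bytes(out)

seed = struct.pack("dd", 3.99, 0.5)      # the entire "compressed file"
data = quadratic_stream(seed, 1 << 20)   # 1 MiB of statistically messy output

# Anyone who knows the model regenerates it bit-for-bit from 16 bytes.
assert data == quadratic_stream(seed, 1 << 20)
```

A general-purpose compressor sees `data` as near-random noise, yet with the right model it compresses a million-to-sixteen --- which is the whole point: "incompressible" just means "we don't have the model".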
IOW, Kolmogorov complexity. For example, tracker and MIDI files are a great way to "compress" music, as they contain the actual notation/composition rather than the resulting sound. Of course, that doesn't account for all the redundancy in instruments/samples.
So while I agree with you technically, for the purposes of a TV show? Lighten up.
IMHO, half the fun of such TV shows is exactly in discussions like this -- what it got right, where it went wrong, how could we use the ideas in some real-world innovation. I find that deeper understanding only makes me enjoy things more, not less, and I enjoy "lightening up" my brain cells.
In the past few years, USB has gotten much faster
I agree with most of your post, but this is simply false. USB 3.0 is a completely new interface, bolted onto USB 1/2 to make the transition seem seamless.
I used to think USB was all about selling a new interface under an old name. For example, in a few years we'd have a CPU socket called USB 14.0, but hey, at least it's USB. Now I have a USB 3.0 hard drive, and the mini plug/socket in particular shows how it's just USB 1/2 + 3.0 bolted together. So my new prediction is USB 17.0, where you have a fist-sized lump of connectors from different ages, all tied into one bunch to ensure backwards compatibility.
BTW, I have two Intel Core CPUs here, Core 2 Duo T7200 (released 2006) and Core i5 520M (2010), both "mobile" CPUs. The former is a lot faster under certain workloads. In practice, they are roughly equal, and the new one probably has better power efficiency, but it's not exactly the level of progress I'd expect.
I could have let this one slide, but I have a few things to say:
1. Darl, Darl McBride, is that you? When will you be testifying against Mark Shurtleff and John Swallow?
Is that an African or a European Swallow?