If you absolutely need the computing power of a high-end CPU, then you probably want to figure out how to do the same calculation on a GPU or an FPGA, because those can be much more powerful in many cases, and more power efficient too.
The theory is that the exhaust air from the CPU heatsink spreads out to parts that are more heat-tolerant but still need active cooling, such as the voltage regulators. A VRM that can operate at 100C without trouble can be cooled just fine with a slow flow of 50C exhaust air from the CPU cooling system.
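A rough back-of-envelope check of that claim, using Newton's law of cooling P = h*A*dT. All the numbers here (convection coefficient, heatsink area) are my own illustrative guesses, not from any datasheet:

```python
# Convective cooling: P = h * A * dT. A VRM rated for 100C sitting in
# 50C exhaust air still has a 50 K temperature margin to dump heat into.
h = 25.0       # W/(m^2*K), gentle forced convection (assumed)
A = 0.002      # m^2, small VRM heatsink surface (assumed)
dT = 100 - 50  # K, VRM temperature minus exhaust-air temperature

power = h * A * dT
print(power)  # -> 2.5 W removable per phase, in the ballpark of VRM losses
```

So even warm, slow exhaust air carries away watts per phase, which is the whole point of the ducted layout.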
Thanks for pointing this out; I haven't always considered it, though I've noticed the idea in various places, such as GPUs. I also recall a passive chipset heatsink whose instructions said it needs a CPU fan nearby to work properly.
In practice, people have found that a front-to-back airflow, preferably ducted, is quieter and more effective than a mix of back-to-front, blow-down, and turbulent airflows. It does, however, require actual engineering work, rather than just attaching a bunch of fans to everything.
Agreed. The "bunch of fans" approach is really annoying, and you still see it in quite high-end applications such as Bitcoin mining ASICs. People should remember that it's the airflow that cools things, not the fans themselves -- it's not how many fans you have, it's how you use them.
In other words, if you use the same volume of copper and the thickness of the fin is half the diameter of the sponge cylinders, you have the exact same surface area. The thinner fins may be weaker, but since the additional fin material on the sides reinforces the structural strength, I assume that's not too big a deal.
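The equal-area claim checks out arithmetically; here's a quick sanity check, comparing the lateral surface of thin cylinders against flat fins of half the thickness, at the same copper volume (the dimensions below are arbitrary):

```python
import math

d = 1e-3   # cylinder diameter, 1 mm (arbitrary)
L = 30e-3  # cylinder length
n = 1000   # number of cylinders

vol = n * math.pi * (d / 2) ** 2 * L  # total copper volume
area_cyl = n * math.pi * d * L        # lateral area of all the cylinders

# A flat fin of thickness t exposes two faces, so it gives 2/t of area
# per unit volume; the cylinders give (pi*d*L)/(pi*(d/2)^2*L) = 4/d.
t = d / 2                   # fin thickness = half the cylinder diameter
area_fin = 2 * vol / t      # same copper, spread into fins

print(area_cyl, area_fin)   # equal: 4/d == 2/t exactly when t == d/2
```

In other words, area per volume is 4/d for cylinders and 2/t for plates, so t = d/2 makes them identical.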
I agree with the rest of your argument, especially the fluid flow part. However, I'm not sure this part works out; it really depends on other assumptions. For a point load I agree -- the load is spread across the width, at least to some extent. But with a wider surface you generally experience more load, proportional to the size, and there's no benefit in connecting the fin segment to neighbouring segments. So the cylinder would be stronger in this sense. It's the intuitive idea that increasing the width in the direction of the load helps, while increasing it in the other direction won't.
Simply by looking at the reactivity series, you can tell that copper is considerably less flammable than iron. OTOH, powdered copper burns with a nice green colour when tossed into a Bunsen flame.
From a practical standpoint, you could ask whether steel wool burns at the temperatures of a CPU heatsink. Probably not, and this copper sponge is much less of a risk. Of course, if you like living on the edge, and tweaking CFLAGS is not enough, try an entire case made of a notoriously reactive metal.
basically means that for slower airflow, you need larger gaps for the air to flow through. This is why the sponge is bad for heat dissipation and great for insulation. It's kind of intuitive, but it's nice to have some science backing it. Having a large surface is good, but it doesn't help if the airflow across the surface is limited.
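One way to put some numbers on this is plane Poiseuille flow between parallel plates, where the volumetric flow per unit width is Q = h^3 * dp / (12 * mu * L). The geometry and pressure figures below are made-up illustrative values, not measurements of any real heatsink:

```python
# Plane Poiseuille flow: halving the gap h costs 8x in flow at the same
# pressure difference, while the extra fins you could pack into the freed
# space only double the surface area. Narrow pores lose badly.
mu = 1.8e-5  # dynamic viscosity of air, Pa*s
L = 0.05     # flow path length through the fins, m (assumed)
dp = 10.0    # pressure drop, Pa -- roughly a quiet fan's worth (assumed)

def flow_per_width(h):
    """Volumetric flow per unit channel width, m^2/s, for gap h."""
    return h ** 3 * dp / (12 * mu * L)

print(flow_per_width(2e-3) / flow_per_width(1e-3))  # -> 8.0
```

The cubic dependence on gap size is exactly why a sponge's huge surface area buys you nothing at low pressure: almost no air actually moves through it.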
On a side note, I've been on a quest for quiet cooling since the very early 2000s, incidentally right after getting a physics degree. It's mostly in the last couple of years that I've started to see really sensible coolers in the general market. For example, the usual CPU cooler in the olden days had a fan pushing right against the CPU with minimal fins in between, which creates a considerable high-pressure centre with no airflow. No one who has passed fluid mechanics 101 would design crap like that. OTOH, the traditional CPU/mobo layout is a little problematic in itself: first you put the most heat-concentrating element in the middle of everything, and only later realize it needs cooling. (I'd put the CPU socket on the reverse side of the board and use the case as a huge heatsink...) Now the designers have finally had the sense to use a straight sideways airflow, combined with heat pipes. Why the fsck did this take so long?
I used to strive for pure passive cooling, but in the end I don't mind a large, slow fan -- it's enormously better than no fan, and still indistinguishable from other background noises. This is another nice thing to see in cooler designs, from the 1-inch whiner in my first Linux laptop to the 140-mm quiet giants that can easily manage a couple of hundred watts of GPU.
BTW, if you ever need to explain to somebody how a heat pipe works, take them to a sauna.
You could ban all of those drugs, and some other drug would become the first one users try.
Would that be causation, or just correlation?
Pushing this even further --- I have inherited a (mostly empty) 3,000 square foot data center (almost Tier III - but it shares a wall with the outside or so I'm told). I'm using (maybe) two racks.
Are you a Nigerian prince?
Yeah, CPU-only coins last for about 48 hours before a GPU miner is released. As far as crypto-coins go, the fact is that a modern graphics card can do this kind of work faster than almost any CPU.
This applies mainly to those that simply choose a semi-standard hash algorithm, such as one of the SHA3 contestants or a combination thereof. Often there is GPU code already available, and building the miner is all about reading some specs and writing some glue code*. Also, most of these coins are based on Bitcoin and simply change the hash algo.
However, most Cryptonote coins (using the Cryptonight algo) have lasted for ages without an open GPU miner. For starters, they are not forked off Bitcoin. Boolberry is a Cryptonote coin with a different algo, which makes it faster to sync, while still aiming for GPU resistance. An open GPU mining codebase was released just a few days ago, and there's still work to do for general distribution. Besides, Boolberry's algorithm needs several MB of fast cache, which is OK with GPU texture cache at the moment, but it will grow over time, possibly making GPU mining unfeasible again.
*(I wrote a GPU miner for JH-256 coins in a few days with no prior GPU/OpenCL experience. Endianness is a bitch.)
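To illustrate the endianness footnote: hash specs typically define their state as big-endian 32-bit words, while x86 hosts and GPUs are little-endian, so the glue code ends up byte-swapping words at the boundary. A toy sketch with a made-up word value:

```python
import struct

word = 0x01020304                 # hypothetical 32-bit hash-state word

le = struct.pack('<I', word)      # little-endian: b'\x04\x03\x02\x01'
be = struct.pack('>I', word)      # big-endian:    b'\x01\x02\x03\x04'

# Reading big-endian spec bytes with little-endian eyes -- the classic
# miner bug, and the swap the OpenCL glue code has to undo:
swapped = struct.unpack('<I', be)[0]
print(hex(swapped))  # -> 0x4030201
```

Get one of these swaps wrong anywhere in the pipeline and every hash comes out "valid-looking" but wrong, which is exactly why it eats days of debugging.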
If only there were a dedicated community for every sad sloth, or at least an anagram thereof.
(If you either feel for the sloths, or just appreciate the pun, please send a random amount of slothcoins to SML12GaoebyneT7ctYuj9PFicptetjPUct. Thank you.)
Or if you're into math, you invoke the pigeonhole principle. So the limit of useful compression (Shannon aside) comes down to how well we can model the data. As a simple example, I can give you two 64-bit floats as parameters to a quadratic iterator, and you can fill your latest 6TB HDD with conventionally "incompressible" data as the output. If, however, you know the right model, you can recreate that data with a mere 16 bytes of input. Now extend that to more complex functions -- our entire understanding of "random" means nothing more than "more complex than we know how to model". As another example, take the delay between decays in a sample of radioactive material -- we currently consider that "random", but someday we may discover that god doesn't play dice with the universe, and an entirely deterministic process underlies every blip on the ol' Geiger counter.
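The quadratic-iterator idea can be sketched concretely; here the logistic map stands in for the iterator, and the parameter and seed values are arbitrary choices of mine:

```python
import struct

def quadratic_stream(n, r, x):
    """Generate n bytes from the quadratic iteration x -> r*x*(1-x)."""
    out = bytearray()
    for _ in range(n):
        x = r * x * (1 - x)            # chaotic for r near 4
        out.append(int(x * 256) % 256) # quantize (0,1) to a byte
    return bytes(out)

# The entire "compressed file": two 64-bit floats, 16 bytes total.
params = struct.pack('<dd', 3.99, 0.123)
r, x = struct.unpack('<dd', params)

data = quadratic_stream(1 << 16, r, x)  # 64 KiB out; scale at will
assert data == quadratic_stream(1 << 16, r, x)  # fully reproducible
```

Knowing the model, those 16 bytes regenerate the stream exactly; without it, the output just looks like noise to a conventional compressor.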
IOW, Kolmogorov complexity. For example, tracker and MIDI files are a great way to "compress" music, as they contain the actual notation/composition rather than the resulting sound. Of course, that doesn't account for all the redundancy in instruments/samples.
So while I agree with you technically, for the purposes of a TV show? Lighten up.
IMHO, half the fun of such TV shows is exactly in discussions like this -- what it got right, where it went wrong, how could we use the ideas in some real-world innovation. I find that deeper understanding only makes me enjoy things more, not less, and I enjoy "lightening up" my brain cells.
"Conversion, fastidious Goddess, loves blood better than brick, and feasts most subtly on the human will." -- Virginia Woolf, "Mrs. Dalloway"