This isn't exactly unique or special. Most of downtown Toronto is covered by the cooling grid from one such deep-lake water cooling system, and I know of at least one datacenter (one of the most critical in the country, if not the most critical) that uses the service.
According to the letter, DEFCAD was instructed to review everything else on their site for ITAR violations. If this is all legitimate, they probably just decided to take the whole site down to be safe, rather than risk leaving something up that might get them into deeper trouble.
That said, the DoS (State Department) is crazy if they think that this will stop distribution of the files... That cat is way the heck out of the bag.
The government trying to ban 3D-printed guns by legislation will be about as effective as the record industry trying to prevent MP3 downloads by suing people. It won't work.
The lawmakers are saying things like "a terrorist could 3D print a gun and take it past airport security". Well, how exactly will banning 3D-printed guns prevent a terrorist from doing it? Short of regulating the sale and use of 3D printers, the law will be unenforceable.
In many countries, including Canada (mine) and the US, coax is ubiquitous; penetration is virtually universal. VDSL2 is a nice upgrade over ADSL (I personally have 50 Mbps VDSL2), but it'll never get that much faster. Bonding and vectoring might hit 150 Mbps or so, and that's about it. Coax, which as I said is near universal, can hit much, much higher speeds, and with DOCSIS 3.1 it competes directly with fiber. This is why phone companies like Bell are using fiber to compete with coax. Yes, they still deploy VDSL2, but only where they haven't got around to deploying fiber yet.
The big difference is that the coax is already there, and has been for decades, while the fiber is only just starting to see decent deployment.
VDSL2 with bonding will likely cap out at practical speeds of 100 Mbps or so. If you push the remotes even closer to the customer, perhaps 150 or even 200. Cable with DOCSIS 3.1 fully dedicated to IP can push 10 Gbps over node sizes that are similar to GPON's. My cableco is already using GPON-style node sizes.
VDSL2 has no long-term future, while coax can compete with the best PON fiber we've got today.
When Mozilla proves they've solidified a piece of web tech that many people already rely on, "who cares?"
Define "many". Before today, as a geek and a software developer, I've never heard of MathML before. Now that I have heard about it, and know what it's for, I find it completely useless to me, and I suspect the vast majority of people. I think that a tiny fraction of a percent of people will find MathML useful.
RIM's attempts to enter the tablet market were laughable and flopped miserably, so I would argue that they have very little understanding of that market. For them to proclaim the death of the market that they completely failed to penetrate is bizarre.
Battlestar Galactica also posited that connecting two computers together with an ethernet cable instantly makes them completely vulnerable to long-distance wireless hacking because "now it's a network and the cylons can hack networks", so I'd take the whole thing with a grain of salt.
The Iris Pro has similar performance to a GeForce GTX 650. Please let me know where you can get a GTX 650 for under $20.
You'd have to go back more than one generation to find an Intel iGPU that is slower than a $20 discrete card.
The 680 isn't mainstream, by any means. Haswell brings the higher-end iGPU up to the performance levels of a GeForce GTX 650, which definitely is more mainstream.
Sure they are (making money). It's estimated that Satoshi Nakamoto (the anonymous inventor of Bitcoin) got somewhere between one and one and a half million bitcoins in the early days, when they were very easy to generate (see the "total bitcoins" graph on Wikipedia). Assuming he hasn't sold them off at some point in the past, they're currently worth somewhere between $120 million and $180 million USD. That's a pretty tidy profit for one person.
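For the curious, here's the back-of-the-envelope math as a quick sketch; the ~$120 USD/BTC price is an assumption inferred from the totals above, not a quoted rate:

```python
# Rough valuation of Satoshi's estimated early-mining haul.
# The price per BTC is an assumption implied by the figures above.
btc_price_usd = 120
holdings_low, holdings_high = 1_000_000, 1_500_000

low_usd = holdings_low * btc_price_usd
high_usd = holdings_high * btc_price_usd
print(f"${low_usd:,} to ${high_usd:,}")  # $120,000,000 to $180,000,000
```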
Lack of support for modern wireless networking (no 802.11n, on either 2.4 GHz or 5 GHz). Inability to perform any sort of processing on faster connections: hitting those 80 Mbps speeds requires disabling anything that might hit the CPU, so no stateful firewall, no QoS, no WiFi encryption, nothing. Limited wired performance: the 100 megabit switch is a bottleneck for LAN use. Limited conntrack capacity, due to the tiny amounts of RAM and CPU power available. No USB ports for external connectivity (no hard disks, no 3G/4G data sticks, etc.). And it's enormously overpriced when sold new: $50 is enough to get you a simultaneous dual-band 802.11n router today.
For modern internet connections, the thing is nearly useless. I've got a 50/10 VDSL2 line, and my WRT54GL is incapable of routing that at full speed without seriously stripping it down to disable all the useful stuff; even then, its ancient 802.11g radio won't do even half the speed of my connection. On top of that, the lack of a gigabit switch would bottleneck access to my file server (even my gigabit switch is a bottleneck there).
If you've got an old 10 meg internet connection and don't have much of a LAN, it might still be suitable. For people with modern connections, it's useless.
Tomato RAF for the WRT54G uses the 2.6.22 kernel and can push 80 Mbps of routed throughput (I'm not sure why; optimizations, perhaps, or performance improvements in 2.6 versus the 2.4 kernel used by most other WRT54G firmwares). The things are still ancient, though, and should be retired.
I agree. I've got Ubuntu Server on my box at home (I like Debian, but I also like a fixed release/support schedule with LTS releases), and I'd be happy to see Debian or Ubuntu get ZFS into their official repos.
Debian can be... particular. The CDDL, however, does seem to be considered DFSG-compatible, and there seems to be CDDL-licensed code in the main Debian repo. It looks like all the debate on the subject happened in 2005-2006, and then nothing; the fact that there appear to be CDDL license notices in the main repo suggests to me that the matter was settled in favour of the CDDL being compliant.
Well, you could argue that going from "infrequent big releases once or twice a decade" to "frequent small releases every year" is moving in the direction of rolling releases, but I don't think it means they're moving TO rolling releases. There are advantages and disadvantages to each approach (yearly or occasional), and I think Apple has had pretty good success doing frequent releases with OS X (nine releases since 2001). It's easier to convince people to upgrade in small chunks when it's, say, $20 for an upgrade once a year rather than $100 for an upgrade once every five years: the same total, just spread out.

With Microsoft's current approach, they get really hurt by a failure like Vista. It left Microsoft with no viable new Windows release to sell between XP (2001) and 7 (2009), nothing except XP itself. That's a big gap. All indications are that Windows 8 is a much bigger bomb than Vista was (8's sales are much lower than Vista's at the same number of days after release), so it makes sense that Microsoft wants to avoid this situation in the future by moving to frequent releases, where a misstep doesn't cause nearly so much damage. They can also iterate a lot more effectively.
Office 365, on the other hand, that's a web app. I'm not sure you can call a web app a rolling release, because it's a single piece of software rather than a collection of software. Web apps definitely tend to be in the same style, though, in that they get small, frequent updates.