


Comment Re:Bah, fake posturing. (Score 1) 401

What about economies of scale, and the more appliance-like, smaller-scale fission reactor developed by Taylor Wilson? Wikipedia repeatedly lists nuclear power as one of the cheapest sources available in different locations, if not the cheapest. For instance, in the UK, the median price for onshore wind looks to be about 30% higher than nuclear, and offshore wind is 2-3x the price (per the original reports).

Based on the first numbers I found on Google, it would cost about £120+ billion to install panels on the 25 million homes in the UK (assuming economies of scale offset the huge change in demand). The HS2 rail project is reported to cost anywhere from £28 billion (with less tunneling) to £80 billion. Did you leave out the price of labor to have the panels installed?

Comment Re:Given the mass extinctions... (Score 1) 401

Tropical storms are a risk that people take to live in beautiful, warm, coastal areas. If the inhabitants are genuinely concerned, then they should build more resistant buildings. I know a small volunteer organization that built a number of houses in Jamaica that withstood Hurricane Ivan. Those only took a week apiece to build, and the one and only seriously damaged house was in a very bad location.

Fortunately, the 2013 hurricane season was one of the least active ever recorded, though tropical storm counts have unfortunately been running at higher levels. According to Wikipedia, we have gone from 1 tropical storm in 1914 to 14 storms in 2013; but considering there were also 15 storms in 1916 and 20 storms in 1933, I don't think the data is good enough for any sort of long-term trend prediction, even though the numbers have been higher for a couple of decades.

The costs of forcing radical, rapid change on society are very high, especially in developing countries that have no money for green infrastructure, which is why I would consider it a weapon of mass destruction. We also have little idea what opportunities (e.g. energy inventions) it will cost us. However, if we can just figure out how to keep from being scorched by the impending heat wave of mass destruction, most would argue that it's worth it... Or are we facing a massive man-made ice age again? I don't keep up with the 30-50 year weather predictions anymore. The 3-5 day ones are wonky enough.

Comment Re:why not? (Score 3, Funny) 303

The knee-jerk reactions are probably because the article actually shows no notable uptick in Microsoft's market share of active sites; it's just a sensationalist summary of some poorly analyzed data. For actual web serving (not just parked-domain serving), they've fallen to 3rd, beaten by both Apache and nginx. According to the numbers, 93.0% of Microsoft's sites are inactive, and they account for 86.1% of the growth in inactive sites. Microsoft is now the leading web server for inactive sites. In other words, IIS does nothing better than the competition.

Comment Re:Uh? (Score 1) 734

I live in an area without brownouts, and have a house shaded by some large trees. What benefit do I get for buying you stuff? Sounds like someone trying to justify legalized theft. I hope you at least researched the manufacturer of the panels to make sure they're not dumping chemicals into the villages of developing countries. http://www.washingtonpost.com/...

Comment Re:Efficiency. (Score 1) 937

Volkswagen, with their 1-litre car project, produced a limited-edition car with a drag coefficient of 0.189; one of their prototypes reached 0.159. The production car is a diesel/electric hybrid estimated to get 260 mpg (US), or 120 mpg running on diesel alone.

Comment Re:meh (Score 1) 365

The hardware company may not have signed a contract yet. You don't want to just give something away to a customer who hasn't bought it yet. They're probably trying to establish design and build costs, so they will have an idea of profitability and feasibility before being locked into a contract to buy something they can't sell.

Comment Re:Why don't they know? (Score 1) 365

The electronics manufacturer probably hasn't seen the algorithm at this point. I assume they're still trying to figure out things like design cost, build cost, and feasibility before making a commitment to buy, and the software company doesn't want to give the algorithm away without a contract for payment. I would tally up the different operations of each type in your algorithm, along with some information about looping etc., and present that to the hardware company; but you would have to strike a careful balance between giving them enough information to help and giving them enough to build it themselves.

A hardware implementation can vary widely for a single algorithm. For example, there are many implementations for running x86 instructions: a Haswell chip should run the same code that a 286 does, but with more transistors, higher IPC and a modified algorithm. If you compare closer processor generations, you may even see the same algorithm repeated at some points.

Comment Re:Hard to believe (Score 1) 804

I can hit the $11,000 price point with a 1TB PCIe SSD, or beat it with 4x 240GB SATA SSDs. Although I did skimp a little (about $300) by using slower RAM, my system can be upgraded to 128GB or more if you fill the second CPU socket, and I went with a standard tower. How many people actually need a $10,000 workstation to be small and portable? If you really want a nice system you can move, bundle it with some sound equipment in a small rack.

The off-the-shelf FirePro W9000 it is being compared against is a more powerful card than the D700. Running two of them, the DIY PC has 8 TFLOPs (single precision) to the Mac Pro's 7 TFLOPs. I can get the PC parts for only an extra 15% cost, and AMD charges a HUGE premium ($3000 vs $1410) to go from 3.23 TFLOPs and 4GB RAM to 4 TFLOPs and 6GB RAM. That last bit of power is always expensive; I could hit the price of the Mac by using 3x W8000 cards instead, while providing 38% more GPU power.

As long as we're swapping top-end parts for multiple smaller parts, I can go with 2x 2.8GHz 10-core Xeons and 3x W8000 cards. I get 73% more CPU power, 38% more GPU power and a faster disk array for the same money. Plus, as a $1,000 add-on, I can get 18TB of RAID-6 storage. For another $6,000 I can also double the RAM and upgrade to dual NVIDIA K6000s (12GB RAM, 5.2 TFLOPs each). Given that the only benchmarks I found show the W9000 regularly losing to the older Quadro 6000, and that labor costs and potential returns will dwarf workstation prices, this is probably a good extra investment.
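To put the premium in perspective, here is the price-per-TFLOP arithmetic using the figures quoted above (these are the street prices from this comparison, not official list prices):

```python
# Single-precision compute per dollar, using the card prices and
# TFLOP figures quoted above (assumed street prices, not list prices).
w8000 = {"price": 1410, "tflops": 3.23}  # FirePro W8000
w9000 = {"price": 3000, "tflops": 4.00}  # FirePro W9000

per_tflop_w8000 = w8000["price"] / w8000["tflops"]  # ~$437 per TFLOP
per_tflop_w9000 = w9000["price"] / w9000["tflops"]  # $750 per TFLOP

# Marginal cost of the last 0.77 TFLOPs when stepping up to the W9000:
marginal = (w9000["price"] - w8000["price"]) / (w9000["tflops"] - w8000["tflops"])
print(round(marginal))  # ~$2065 per extra TFLOP
```

That marginal cost is roughly 4.7x the W8000's cost per TFLOP, which is why multiple mid-range cards beat one flagship on price/performance here.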

Comment Re:BTRFS filesystem (Score 1) 321

I have seen both Dell RAID-5 and Sun RAID-6 arrays fail with 3+ simultaneous disk failures each. Google ran a Petabyte Sort benchmark in 2008 (6 hours to sort 10 trillion 100-byte records) and was not at all surprised to have at least one hard drive failure on every attempt (4+ drive failures per day). I have seen enterprise tape systems fail to read their data (hopefully there was redundancy, but I don't know). I have seen backup systems hit major performance glitches and fail to restore within their needed time frame. Facebook, for example, only has a few seconds to recover from a failed server before customers might get angry, and has built systems to handle it because that's necessary to provide a good service. The major players who are succeeding and profiting at giving away free services to hundreds of millions assume that all data storage will fail regularly, and plan accordingly.

A little primer for those of us who haven't kept up with new storage technologies since the 90's.

Google deals with enough data that they cannot consider any of your technologies reliable enough. Five years ago they were already processing 20PB of data every single day with MapReduce, and when you have to buy enough systems, even the best RAID-6 SAN systems break regularly. Statistically, a small chance repeated often enough gives you a virtual guarantee of failure. Google generally doesn't bother with expensive technologies like SANs and RAID, or even with enterprise spinning disks (they may well use enterprise PCIe flash, though). You can make what you want of the enterprise-drive decision, but I'm pretty sure I've read from at least a couple of sources that enterprise drives are just as prone to failure as regular drives; the major differences are warranty and firmware (e.g. supporting RAID-friendly reads). Numerous sources have substantiated that the manufacturers' MTBF numbers are pure marketing fiction. Enterprise drives probably do boast a lower error rate, but I have not seen a comparison, only reports that the published numbers are off by several orders of magnitude.
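That "virtual guarantee" is easy to see with a little arithmetic (the 3% annual failure rate below is illustrative, not a measured figure):

```python
# Chance of at least one failure among n independent drives, each with
# annual failure probability p: 1 - (1 - p)**n.
def p_any_failure(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

# With an (illustrative) 3% annual failure rate per drive:
for n in (1, 100, 1000, 10000):
    print(f"{n:>6} drives: {p_any_failure(0.03, n):.4f}")
# At fleet scale (thousands of drives), failures every year are a certainty.
```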

What Google does instead is skip redundancy within the machine and take the "redundant array" to a whole new level: a Redundant Array of Inexpensive Servers. Multiple copies of the data are written to different servers in different cabinets, and a checksum is stored with each data block. Every time the data is read, the checksum is verified, so a single read tells you whether you have bitrot, and one good read of another copy corrects it; you no longer have to compare 3 copies of the data to vote out the bad one. The Hadoop project copied this with HDFS, and many other large-scale technologies have followed suit.
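The scheme fits in a few lines. This is a toy model in the spirit of that design, not anyone's actual code; the replica dicts and the choice of SHA-256 are my own assumptions for illustration:

```python
import hashlib

def checksum(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def write_block(replicas, data: bytes):
    """Store the block and its checksum on every replica 'server'."""
    for r in replicas:
        r["block"], r["sum"] = data, checksum(data)

def read_block(replicas) -> bytes:
    """One read detects rot; one good copy repairs the others."""
    good = next((r for r in replicas if checksum(r["block"]) == r["sum"]), None)
    if good is None:
        raise IOError("all replicas are corrupt")
    for r in replicas:  # heal any replica that no longer verifies
        if checksum(r["block"]) != r["sum"]:
            r["block"], r["sum"] = good["block"], good["sum"]
    return good["block"]

replicas = [{} for _ in range(3)]
write_block(replicas, b"important data")
replicas[0]["block"] = b"important d4ta"  # simulate bitrot on one copy
assert read_block(replicas) == b"important data"   # rot detected...
assert replicas[0]["block"] == b"important data"   # ...and repaired
```

The key point is that the checksum travels with the block, so a single read is self-validating instead of requiring a majority vote across copies.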

At a desktop level, ZFS, Btrfs and (I think) Windows Storage Spaces do something similar, combining RAID techniques (0/1/5/6, maybe 1E) with checksums inside the file system. If a drive fails, or even if a checksum merely fails to verify, the file system can use its redundancy to rebuild the data automatically, giving you a better data guarantee than any RAID card I have seen. If the journaling is done correctly, it shouldn't be susceptible to losing data from a power loss either, and home battery backups aren't too expensive anyway. The OP was asking specifically about bitrot. A lot of UREs (uncorrectable read errors) get labeled and treated as bitrot, but it sounds like data he had previously verified is now corrupt (actual rot); not that the cause matters much once the blocks are corrupt. Bitrot happens more frequently when you don't have the stringent environmental controls of a data center in your home, and I have personally seen it with only tens of GB of my data.

In my experience, data that is backed up and archived isn't a prime target for user error or gross negligence regarding backups; the user is almost certainly experiencing some sort of URE. In this case, a proper file system is quite important for protecting the data. I would recommend setting up a multi-drive NAS using ZFS (there is probably a distro built for this) and a second set of drives for archiving data, where you cannot delete the contents. Then you need to regularly scrub the data to correct errors, and you might want to check for SMART errors (scan errors and reallocation counts) as well. You should be able to get this information emailed to you, but it probably requires a little scripting. Storage Spaces on Windows might be an alternative, but I know little about it; if nothing else, scripting everything will be a pain on Windows if Microsoft hasn't built it in as a feature. If you want to protect against a disaster in your house, you may be able to synchronize the data to another ZFS array in another location, preferably with an automatic solution over the Internet.
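Even without ZFS, the "scrub" idea can be approximated with a small script that records a manifest of file checksums at archive time and later reports anything that no longer matches. This is a hypothetical sketch (the function names are mine), and it only detects rot; it cannot repair it the way a redundant ZFS pool can:

```python
import hashlib
import os

def file_sha256(path: str) -> str:
    """Checksum a file in 1MB chunks so large archives don't eat RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def build_manifest(root: str) -> dict:
    """Record a checksum for every file under root (run at archive time)."""
    manifest = {}
    for dirpath, _, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            manifest[os.path.relpath(path, root)] = file_sha256(path)
    return manifest

def scrub(root: str, manifest: dict) -> list:
    """Return the files whose contents no longer match the manifest."""
    return [rel for rel, digest in sorted(manifest.items())
            if file_sha256(os.path.join(root, rel)) != digest]
```

You would run build_manifest once when archiving, store the manifest alongside the backup, and put scrub in a cron job that emails you the returned list.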

An online solution will handle all of this for you, but I do not expect it to be cheap. With some retrieval trade-offs, Amazon Glacier would probably be the cheapest, at around $20/month. If you don't mind a maximum photo size of 2048x2048 (800x800 without Google+) and a maximum video length of 15 minutes, then photos and videos don't count towards your storage limits on Google Picasa.

Comment Re: Firechrome (Score 1) 381

I have a friend who never paid anything for many months of renewed free trials because he would always cancel in time. Eventually, AOL sent him a check with fine print that said by cashing it he agreed to another free trial. They were paying him to use the service.

Comment Re:Healthcare (Score 1) 356

The WHO ranks the US health care system #1 in responsiveness to the needs and choices of the individual patient (http://online.wsj.com/news/articles/SB10001424052748704130904574644230678102274). If that is nowhere near good, then what are you grading it on? Are you flunking it because Bill Gates, Warren Buffett, Larry Ellison, the Koch brothers and the Walton family aren't pooling their money to give you free access to healthcare?

Where the US famously ranked 37th, in overall performance, the score is based more on our funding and distribution than on how well the system responds to patients' needs (by a factor of 5). We tend to charge people for health care somewhat based on what goods and services they use, instead of making Mark Zuckerberg fund unlimited access for a 30-year-old bum living in his mother's basement, dividing his time among reddit, McD's and drinking himself sick (notice: no job). Actually, those factors only get the US downgraded to 15th in overall attainment; the remaining drop comes from the WHO thinking we should rank better given our resources. (http://www.politifact.com/truth-o-meter/statements/2009/sep/14/paul-hipp/rocker-viral-video-mocks-us-37th-best-health-care-/)

To summarize... The UK sets records for hospital-bed wait times. The US guarantees that anyone can walk into any emergency room with no money or resources and get excellent care (i.e. not sleeping in hospital hallways; http://thelongestlistofthelongeststuffatthelongestdomainnameatlonglast.com/long545.html). In the end, the WHO decides the UK has better healthcare because there, the harder you work for your money, the more you pay for the same pitiful access.
