Comment Re:fused off? Really?! (Score 4, Informative) 127

Probably referring to efuses that can be burned out on the die. These are common and allow CPUs/GPUs to have unit-specific information (like serial numbers, crypto keys, etc.) coded into otherwise identical parts from the fab. Video game systems like the 360 use them as an anti-hacking measure...disallowing older versions of firmware from running on systems that have certain efuses "blown." Likely, there is an efuse for each core or group of cores. Those can be burned out if the cores are found to be defective, or simply to cripple a portion of the part for down-binning. That is a practice at least as old as the Pentium II.
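If it helps to picture the mechanism, here is a toy sketch of that kind of downgrade check (nothing here is actual console code; the names, fuse counts, and numbers are made up):

    # Hypothetical efuse-based anti-downgrade check; purely illustrative.
    # Real consoles do this in boot ROM/hypervisor with one-time-programmable fuse banks.

    def blown_fuses(fuse_bank: int) -> int:
        """Count how many fuses have been permanently burned (bits set)."""
        return bin(fuse_bank).count("1")

    def firmware_allowed(console_fuse_bank: int, firmware_fuse_level: int) -> bool:
        """Each firmware release carries the fuse level it was built for.
        Updates burn another fuse, so older releases fail this check forever."""
        return firmware_fuse_level >= blown_fuses(console_fuse_bank)

    console = 0b0111                                          # three fuses blown by past updates
    print(firmware_allowed(console, firmware_fuse_level=3))   # True: current firmware boots
    print(firmware_allowed(console, firmware_fuse_level=2))   # False: older firmware is blocked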

Comment Re:Offsite backups (Score 1) 320

Truecrypting external USB/eSATA drives is by far the better option. We also use normal 3.5" drives with external USB/eSATA docks. There are NO cheap tape solutions anymore. I'd further argue that the tape solutions that do exist are trumped by hard drive backup even for on-site use: tape is far slower and no more reliable than hard drives. Tape is dead. Anyone still using it is either leveraging a 5-to-10-year-old investment in a tape robot or is being sold a bill of goods by a vendor.
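For what it's worth, the drive-based approach amounts to little more than this (paths are made up, and the TrueCrypt volume is assumed to already be mounted at the destination):

    # Minimal sketch: dated copy of a data tree onto a mounted, encrypted external drive.
    import shutil
    from datetime import date
    from pathlib import Path

    SOURCE = Path("/srv/data")        # hypothetical data to protect
    DEST_ROOT = Path("/mnt/backup")   # hypothetical mount point of the encrypted volume

    dest = DEST_ROOT / f"data-{date.today().isoformat()}"
    shutil.copytree(SOURCE, dest)     # destination directory must not already exist
    print(f"Backed up {SOURCE} to {dest}")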

Comment Re:It feels too heavy and old (Score 3, Insightful) 242

I have to agree about the heft. But I prefer the "old" style interface. I had to install Office 2007 to interact with some clients and I am completely lost. I've been using word processors since the C64 days, but this is the first time in decades that I have stared blankly at a program and had to click on every menu/button/active splotch trying to find out how to turn on Track Changes.

Of course, people can get used to the interface, and maybe following the mythical transition I will be enamored with its interface glory. But it just seems different for difference's sake...like .docx and .xlsx were.

To the LibreOffice folks, you really need to do a top-down performance/memory analysis. I like it and will continue to use it, but I don't see why it needs to be the resource hog it is.

Comment Re:How many commenters have BUILT a cluster!? (Score 1) 264

The difference is even bigger than you posted! You made a math error on the Sandy Bridge FLOP calculation:

64 Sandy Bridge Cores: 8 FLOPS/Hz * 2.8 GHz * 64 cores = 1433.6 GFLOPS

48 Magny Cours: 4 FLOPS/Hz * 2.1 GHz * 48 cores = 403.2 GFLOPS

So, Sandy Bridge is roughly 3.5 times faster than AMD.
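For anyone following along at home, the same arithmetic in code form (the figures are the ones quoted above):

    # Peak FLOPS = FLOPS-per-cycle * clock * core count
    sandy_bridge = 8 * 2.8e9 * 64    # 1.4336e12
    magny_cours  = 4 * 2.1e9 * 48    # 4.032e11

    print(sandy_bridge / 1e9)           # 1433.6 GFLOPS
    print(magny_cours / 1e9)            # 403.2 GFLOPS
    print(sandy_bridge / magny_cours)   # ~3.56x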

And the original poster commented that the application was parceling out data sets and crunching on them independently, so the application is embarrassingly parallel. This design would be rubbish for any *real* parallel application, but I think it is optimal for the OP's stated goal.

Comment How many commenters have BUILT a cluster!? (Score 1) 264

OK, I won't be too hard on the discussions above, but I read enough to try to give some real help to the OP. I get that this is basically an embarrassingly parallel application. So, that means a gigabit network is fine. That also means that single core performance is the ONLY indicator of the speed of the application. That means investing in anything AMD is a mistake. The best bang for the buck is quad-core Sandy Bridge CPUs. 4000 pounds is about $6300. I can build quad-core 2.8 GHz Sandy Bridge nodes (2GB/core, in desktop cases) for under $400 each. Cables, a Gbit switch, and 15-16 nodes (60-64 Sandy Bridge cores total) will fit in the budget without too much effort.
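A back-of-the-envelope version of that budget math (the node price is my rough estimate above; the exchange rate and switch/cable allowance are ballpark numbers, not quotes):

    budget_gbp = 4000
    usd_per_gbp = 1.58            # ballpark rate, works out to roughly $6300
    node_cost_usd = 400           # DIY quad-core 2.8 GHz Sandy Bridge node
    switch_and_cables_usd = 300   # ballpark allowance for a GbE switch and cabling

    budget_usd = budget_gbp * usd_per_gbp
    nodes = int((budget_usd - switch_and_cables_usd) // node_cost_usd)
    print(nodes, "nodes,", nodes * 4, "cores")   # 15 nodes, 60 cores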

OK...so, it isn't ECC memory. And it isn't general purpose. And it isn't going to run most parallel applications worth a crap due to the gigabit network, but the point of building a cluster is to design it to match the application. 64 Sandy Bridge cores will run rings around any Magny Cours solution you can build for the same price.

Comment Re:Supercomputer? Really? (Score 1) 240

Not enough RAM. The eight cores are also four or five years old at this point. And the most damning thing is the gigabit interface. That severely limits what real work can be done: embarrassingly parallel stuff like rendering, primes, SETI, or Folding will work, but not comp chem or CFD. We haven't built a cluster using GigE for interconnects in three or four years, and when we did, we used multiple GigE links per node to try to keep up.

Comment OK, I'll Say It (Score 1) 140

This is stupid.

I am a big proponent of open-source software. I like the idea of being able to build my own versions of software, fixing bugs and adding features. I use it as a key component of my business. It is great. Moreover, most of the code that I or my employees write is, or likely one day will be, open source.

However...open hardware is a fundamentally different thing. No one has a chip fab in their basement, so someone will have to pay big money for the masks, the tape-out, and testing of the hardware. Unless some major vendor picks up the design and mass produces it in lots of hundreds of thousands, the price per CPU is going to be stupidly more expensive than an off-the-shelf CPU/motherboard or embedded system. And even then, you are probably buying an overpriced, underpowered CPU just because it is "free."
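To see why volume is everything here, a toy amortization calculation; the NRE and per-die figures are pulled out of thin air purely to illustrate the shape of the problem:

    NRE_COST = 1_000_000    # hypothetical mask set + tape-out + test, one-time cost
    UNIT_COST = 20          # hypothetical per-die manufacturing cost

    def price_per_cpu(volume: int) -> float:
        """Amortize the one-time NRE across the whole production run."""
        return UNIT_COST + NRE_COST / volume

    print(price_per_cpu(1_000))      # $1020 per chip at hobbyist volumes
    print(price_per_cpu(500_000))    # $22 per chip in lots of hundreds of thousands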

This is Stallmanism at its worst--"freedom" for freedom's sake without regard to functionality or practicality. Stuff like this casts a shadow of crazy.

Comment Re:Evil commenting on evil (Score 5, Informative) 378

Large downloads are a potential impediment to piracy, but with the ability to run unsigned code, the PS3 can likely run a backup manager with an FTP server that can be used to move games directly onto the PS3 hard drive and run them from there, not unlike the current situation with JTAG'd 360 systems. Therefore, Blu-ray blank prices aren't going to be an issue.

Comment Backlash (Score 3, Insightful) 853

The ace in the hole for net neutrality is the latest crop of cheap TVs with built-in Netflix and other online services. My in-laws just purchased one a few months ago and they use Netflix constantly. These are dyed-in-the-wool, Ann Coulter-reading, Fox News-watching Republicans. I mentioned to my father-in-law that net neutrality was becoming a big issue. He had never heard of it. When I explained the ramifications for their Netflix usage, his response was to immediately support it. It will be interesting to see how this shakes out. It is another chance to see whether FOX and Rush can convince more people to act against their own self-interest in support of some bastardization of "freedom."

Comment Re:Huh? (Score 1) 220

Not rosy. But I do expect CIOs to show a bit more foresight than this. With all of the Chinese hacking and Wikileaks in the news, though, maybe it will fan the paranoia and knock some sense into them. I'd love to see "the cloud" go poof in a new Red cyber-scare.

Comment Re:Huh? (Score 1) 220

Grid computing achieved buzzword status too...among suits. People dumped money into it and it fizzled. Who is still doing grid computing...except for SETI or Folding? Eventually, this will go the same way.

I suspect that it will take one thorough breach of just one of these cloud platforms to make everyone realize that this is a bad, bad idea. Even one employee accessing "the cloud" from a home PC that happens to be rooted with a keylogger installed would be enough. Then that delicious "access from anywhere" feature becomes a wicked liability.
