
Submission + - The Saga of NiceBooby.Net and NBC's Hilarious Shadow-URL Empire

Recaply writes: Maybe one night, as you scuttled darkly across the fringes of the internet, you came across a site like NiceBooby.Net. But when you clicked, you were met not with the lurid promises of the URL, but rather with the smiling faces of SNL cast members. Had you discovered a wormhole in the web? Nah. Just another pervy-sounding NBC property.

Submission + - The Earth is a gravitational wave detector

b30w0lf writes: Gravitational wave detection—i.e. the detection of propagating ripples in spacetime—is a hot subject these days, with ground-based interferometer experiments like LIGO active and hopes for a space interferometer like LISA. But physicist Freeman Dyson proposed back in 1969 that the earth itself could be used as a gravitational wave detector. The idea behind the approach is that gravitational waves impacting the earth's crust cause potentially detectable seismic waves. Using Dyson's approach, physicists at Harvard and NINP, Florence were able to put an upper limit on the intensity of gravitational background radiation based on a year of observational seismic data. The upper limit they found improved current laboratory upper limits by nine orders of magnitude.

Submission + - Physicists propose "perpetual motion" time crystals

b30w0lf writes: It is commonly understood that crystals exist in a state of matter that is periodic in space. Meanwhile, relativistic physics tells us that we should think of time as a physical dimension, given similar status to the spatial dimensions. The combination of these two ideas has led researchers at the University of Kentucky and MIT to propose special manifestations of matter that would be periodic in both space and time, dubbed "time crystals." Time crystals would continually transition between a set of physical states in a kind of perpetual motion. Note: the articles stress that this kind of perpetual motion in no way violates the established laws of thermodynamics.

While time crystals remain theoretical, methods have been proposed for creating them. The most obvious application of time crystals is the creation of very precise clocks; however, other applications of time crystals have been proposed, ranging from quantum computing to helping us understand certain cosmological models.

A technical summary with links to the original publications can be found at:
For a less technical summary:

Comment Re:Yes, this is legit and no, we're not idiots (Score 2) 387

Speaking as one of the people you will (temporarily) be supplanting, it sounds like you have a tough spot to get through.

I also admin life sciences clusters for a major university on the east coast. I'm going to assume that our workloads are fairly similar (R, matlab, BLAST, HMMER, IDEA, maybe some mutual information codes, sequence alignment, etc.). If that's not the case, some of this advice may be off.

So, a couple of things:

- I think CentOS is a good idea for a cluster platform. I do not think Rocks will scale the way you want it to at that size, and it's really not terribly flexible either. Let's put it this way: I often find that I could have just built from scratch in the time it takes to get Rocks to do all the customization I need. We run Rocks on small clusters, but big ones we spin ourselves (e.g. CentOS, or sometimes Fedora + Kickstart + some utility scripts and a scheduler... we use SGE, now OGS). Finally, stay away from more fringe distributions. You'll find that commercial software vendors are pretty quick to let you know they just don't support running their software on distribution XX. There are other reasons too. I posted a bit of a rant on this a while ago at:
- InfiniBand vs. 10 Gbps. Well, InfiniBand is cool, and I've spent a lot of time working with it. I once had a project that involved writing some early-stage block-level storage protocols for InfiniBand... really, I like InfiniBand. That said, unless you plan to run a lot of MPI-enabled MD simulations like Desmond, skip the IB and get 10 Gbps. There are a couple of exceptions to that rule, but most life sciences applications do not use MPI, and most of your traffic is going to be storage I/O. Depending on your storage solution, it's probably not InfiniBand-enabled (on the front end, anyway, and you really don't want to be running IP over IB if you can help it). To say more I'd have to know a bit more about what you're going to be running.
- GPUs. One thing sticks out to me a lot here. If you don't know which GPUs to get, that probably means no one has ported anything to GPU yet. If someone has done some porting, you should ask them what they ported to. If they ported to CUDA, you should probably be looking at 2050s or 2070s. If they haven't ported anything, and they don't have (good!) GPU-ported applications... don't waste money on too many GPUs. We've run a couple of pilots where we tried to get people using GPUs, and here are a few observations: 1. most researchers can't/won't do the porting; 2. most pre-built applications, such as matlab and R, _still_ require you to port the matlab, R, etc. code, which researchers will probably also not do; 3. some life sciences algorithms just don't work well on GPUs (e.g. they are branch-heavy or memory-I/O-heavy algorithms); 4. many of the pre-built GPU applications for life science are terrible (I know a particular sequence alignment tool, for instance, that is proud of its 4x speedup over a single CPU... do the math... which costs more, a quad-core CPU or a Tesla?). GPUs can be great, but buy them sparingly at the beginning and integrate them as they are actually being used. If you're buying now, you should be buying CUDA (i.e. NVidia). It's the only actually mature development kit (though I don't like that it doesn't let you control the scheduling on the card... but I digress).
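To make the "do the math" point concrete, here's a back-of-envelope sketch. All the prices and the speedup figure below are made-up placeholders, not quotes; plug in whatever your vendor actually charges. The point is just to compare throughput per dollar:

```python
# Back-of-envelope: is an advertised 4x GPU speedup worth the card?
# All numbers are hypothetical placeholders -- substitute real quotes.
cpu_cost = 300.0      # assumed price of a quad-core CPU
gpu_cost = 2500.0     # assumed price of a Tesla-class card
gpu_speedup = 4.0     # advertised speedup vs. a single CPU core

# A quad-core CPU gives roughly 4 cores' worth of throughput on
# embarrassingly parallel life-sciences workloads, so compare
# throughput per dollar on each side:
cpu_throughput_per_dollar = 4.0 / cpu_cost
gpu_throughput_per_dollar = gpu_speedup / gpu_cost

print(cpu_throughput_per_dollar > gpu_throughput_per_dollar)  # True: the CPU wins
```

With these (assumed) numbers, a 4x-over-one-core GPU port is strictly worse per dollar than just buying more quad-core nodes, which is exactly why a tool bragging about 4x should raise an eyebrow.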
- Chargeback: The bottom line is that nothing is going to give you chargeback without some effort. You're going to have to manage that on your own. The best way to do it is to set up some basic accounting scripts that dig through your cluster logs (or database, depending on your configuration) and generate accounting reports. Note that it's the resource manager/policy manager (e.g. OGS, Torque/Maui, etc.) logs that you're going to do this with. You _could_ do it with Rocks as well as anything else (but again, I don't suggest Rocks for this project).
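A minimal sketch of what such an accounting script might look like, assuming an SGE/OGS-style colon-delimited accounting file (the field positions used here — owner in field 4, wallclock seconds in field 14 — and the flat per-CPU-hour rate are assumptions; check `man 5 accounting` for your scheduler's actual layout and use your own cost model):

```python
# Sketch: chargeback report from an SGE/OGS-style accounting log.
# Field indices are assumptions based on the classic colon-delimited
# "accounting" file format; verify against your scheduler's docs.
from collections import defaultdict

def chargeback_report(lines, rate_per_cpu_hour=0.05):
    """Sum wallclock usage per owner and price it at a flat rate."""
    usage = defaultdict(float)  # owner -> total wallclock seconds
    for line in lines:
        if line.startswith("#"):       # skip comment/header lines
            continue
        fields = line.rstrip("\n").split(":")
        if len(fields) < 14:           # skip malformed records
            continue
        owner = fields[3]              # assumed: owner is field 4
        wallclock = float(fields[13])  # assumed: ru_wallclock is field 14
        usage[owner] += wallclock
    return {owner: (secs / 3600.0) * rate_per_cpu_hour
            for owner, secs in usage.items()}

# Two fake records (hypothetical users/jobs, most fields zero-filled):
sample = [
    "all.q:node01:lab:alice:blastjob:101:sge:0:0:0:0:0:0:7200:",
    "all.q:node02:lab:bob:hmmer:102:sge:0:0:0:0:0:0:3600:",
]
print(chargeback_report(sample))  # {'alice': 0.1, 'bob': 0.05}
```

Run something like this nightly from cron against the rotated accounting file and you have the raw material for monthly per-lab invoices; the same approach works on Torque/Maui logs, just with different field positions.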

Sounds like you have a fun project ahead of you... good luck!

Comment Re:RHEL (Score 5, Informative) 264


A primary component of my job is the design and maintenance of high performance compute clusters, previously in computational physics, presently in biomedical computing. Over the last few years I have had the privilege of working with multiple Top500 clusters. Almost every cluster I have ever touched has run some RHEL-like platform, and every cluster I deploy does as well (usually CentOS).

Why? Unfortunately, the real reasons are not terribly exciting. While it's entirely true that many distros will give you much more up-to-date software with many more bells and whistles, at the end of the day what you really want is a stable system that works. Now, I'm not going to jump into a holy war by claiming RedHat is more stable than much of anything, but what it is, is tried and true in the HPC sector. The vast majority of compute clusters in existence run some RHEL variant. Chances are, if any distro is going to have hit and resolved a bug that surfaces when you have thousands of compute cores talking to each other, or manipulating large amounts of data, or running CPU/RAM-intensive jobs, or making zillions of NFS (or whatever you choose) network filesystem calls at once, or using that latest QDR InfiniBand fabric with OpenMPI version 1.5.whatever, it's going to be RHEL. That kind of exposure tends to pay off.

Additionally, you're probably going to be running some software on this cluster, and there's a good chance that software is going to be supplied by someone else. That kind of software tends to fall into one of two camps: 1) commercial (and commercially supported) software, and 2) open-source, small-community research software. Both of these benefit from the prevalence of RHEL (though #1 more than #2). If you're going to be running a lot of #1, you probably just don't have an option. There's a very good chance that the vendor is just not going to support anything other than RHEL, and when it comes down to it, if your analysis isn't getting run and you call the vendor for support, the last thing you want to hear is "sorry, we don't support that platform." If you run a lot of #2, you'll generally benefit from the fact that open-community software has, with high probability, primarily been tested on RHEL-like systems.

Finally, since so many compute clusters have been deployed with RHEL-like distros, there are oodles of documentation out there on how to do it. This can be a pretty big help, especially if you're not used to the process. Chances are your deployment will be complicated enough without trying to reinvent the wheel.
