Comment Re:I won't be buying one... (Score 1) 632

And they're going to retrofit all of our guns how?

Geez, comrade. What should we do until then? Surrender them for 'safekeeping' while they figure out how to retrofit an old M38 Mauser or Finnish Mosin-Nagant? How long will they keep my WWI era Luger, my 1952 Russian SKS or the AR15 I use to shoot in Service Rifle competitions? What about black powder rifles and handguns? What about knives? What about blunt instruments, broken glass or even gasoline?

One of the biggest mass killings in US history was committed with a gallon of gasoline. How are they going to track that? With GPS trackers and fingerprint locks on gas containers? Perhaps they should put rubber bumpers on all the sharp corners of the world so it's impossible to get hurt. Then we'll all be safe, sound and secure, right?

Oh wait. The criminals will be the ones that have the guns without the fingerprint readers.

How about you butt out of our country's business and tend to your own feeble socialistic existence, jackass!

Comment Re:I use it for linux distributions (Score 1) 302

We're currently using the ROCKS cluster distro to run our cluster, but are finding that it's beginning to limit our ability to patch and otherwise maintain our cluster infrastructure. We've adopted cobbler and puppet for some of our HPC assets and will likely switch from ROCKS to more of a home-grown approach to manage our nodes. One thing I dearly love about ROCKS is the Avalanche Installer which uses bittorrent to distribute the image to the nodes when they do their initial build. I've

Are you using that or a similar package to do your node builds?
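For reference, the Cobbler side of a node build looks roughly like this (a minimal sketch; the profile name, MAC and IP are made up, and flag names differ slightly between Cobbler versions):

    # Register a compute node so it PXE-boots and kickstarts itself.
    # Run on the provisioning server; names/addresses are hypothetical.
    cobbler profile list                      # confirm a CentOS profile exists
    cobbler system add --name=compute-0-0 \
        --profile=CentOS-6-x86_64 \
        --mac=00:11:22:33:44:55 \
        --ip-address=10.1.255.254 \
        --hostname=compute-0-0
    cobbler sync                              # regenerate DHCP/PXE configs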

Comment Re:We're running away from SPARC as fast... (Score 1) 175

A majority of our IT HW budget is for High Performance Computing. We have about 10000 x86 cores running CentOS, about 100 M2070 GPUs and close to a petabyte of Isilon cluster storage in production right now and will be expanding to over 15000 cores in the next year.

If we wanted to use SPARC systems, we couldn't afford anything nearly as powerful or as painless to manage. We don't need OS support other than patches and we're not tied to any particular vendor (other than Isilon). We may implement a Gluster storage cluster to gain independence from sole-source vendors entirely.

We have a couple of Solaris bigots on the team, but they're mostly relegated to running our handful of Solaris boxes and non-cluster storage/backups.

Comment Re:Dont discount SPARC just yet (Score 1) 175

With the advent of cluster file systems, you don't have to pay for unreasonably expensive "bulletproof" hardware anymore. You can set up a Gluster storage cluster on commodity-grade x86 hardware and get all the speed and redundancy you need (and are willing to pay for) at a fraction of the price. For those who don't want to roll their own, there are commercial storage clusters from Isilon, or storage virtualization devices such as the F5 Acopia, with pretty much any storage you like underneath.
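To give a feel for how little is involved, a two-node replicated volume looks roughly like this (hostnames and brick paths are made up; exact syntax varies a bit between Gluster versions):

    # On storage01: add the second node to the trusted pool, then build
    # a 2-way replicated volume from one brick on each server.
    gluster peer probe storage02
    gluster volume create scratch replica 2 \
        storage01:/export/brick1 storage02:/export/brick1
    gluster volume start scratch

    # On a client:
    mount -t glusterfs storage01:/scratch /mnt/scratch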

Comment We're running away from SPARC as fast... (Score 4, Informative) 175

We're running away from SPARC as fast as we can.

Our Unix shop used to be primarily SPARC-based, but with limited IT budgets, we're able to do far more with much less money using HP blades running CentOS.

For most purposes, SPARC hardware is far too expensive and Oracle seems to be doing all they can to kill Solaris.

We still run a handful of SPARC systems that host specialized applications and a few Solaris zones, but nearly all other services have been pushed to natively hosted Linux systems, or to virtual machines running Windows or Linux.

Comment Re:To learn Red Hat .... (Score 1) 573

The point of this particular thread-let is what to learn if you're after an IT career. I don't know of any respectable Unix admin who would choose Fedora over CentOS in the enterprise.

CentOS and Scientific Linux are both well-respected, stable OSs built from the RHEL sources. They're basically Red Hat without all of the licensing silliness.

As was mentioned in another thread, Unix is best learned in a VM that's regularly snapshotted. That way, if you hork things up, you can revert without a lot of pain. Having to set up a system from scratch because you broke it and keep breaking it will dissuade new users from learning essential skills.
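With VirtualBox, for instance, it's a one-liner either way (the VM name here is made up, and the VM should be powered off when you roll back):

    VBoxManage snapshot centos-lab take clean-install       # before you start experimenting
    VBoxManage snapshot centos-lab restore clean-install    # after you hork it up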

I also suggest that anyone who wants to learn to be a Unix admin learn the vi editor. Don't use a GUI-based crutch until you're proficient in vi. I know a lot of people like emacs, but vi is an essential tool.

Learning to write shell scripts is also an essential skill, but stick with a mainstream shell. Csh is godawful, and zsh is too obscure for the enterprise. Ksh implementations used to be very spotty, especially when moving scripts between Solaris and Linux.

Learn some of the other tools like awk, sed, grep, cut, sort and uniq.
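A couple of throwaway one-liners along those lines (log paths and formats vary by distro, so treat these as illustrations, not recipes):

    # Which login shells are actually in use on this box?
    cut -d: -f7 /etc/passwd | sort | uniq -c | sort -rn

    # Failed ssh logins per source address:
    grep 'Failed password' /var/log/secure \
        | sed 's/.*from \([0-9.]*\) port.*/\1/' \
        | sort | uniq -c | sort -rn | head

    # Just the load averages, courtesy of awk:
    uptime | awk -F'load average: ' '{print $2}'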

There's a huge shortage of decent Unix admins and a glut of Windows admins. Most of the Unix admins we interview can't script unless they're cribbing from something someone else wrote, and most don't understand the innards of how the OS even works.

Comment Re:Once you solve the hardware challenges..... (Score 1) 160

It comes with a pretty recent version of SGE and openmpi installed. It's fully capable of using NFS shares, and many people have used it with Infiniband. Cluster monitoring is done with ganglia. The kernel is customizable, you can add your own modules as "rolls", and packages can be managed either as a post-install step or built into the kickstart for each node. We use Isilon for our shared storage, but we're probably going to be setting up a gluster storage cluster too.
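As a rough illustration, submitting an MPI job through SGE on a Rocks cluster looks something like this (the parallel environment is usually called "orte" on Rocks, but check qconf -spl; the program name is made up):

    #!/bin/sh
    #$ -N mpi_test          # job name
    #$ -cwd                 # run from the submit directory
    #$ -j y                 # merge stdout and stderr
    #$ -pe orte 16          # ask for 16 slots from the Open MPI environment
    mpirun -np $NSLOTS ./my_mpi_program

Save that as run.sh, submit it with qsub run.sh, and watch it with qstat.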

Rocks is a great way for an organization to get its feet wet with high performance computing, but we're beginning to find some limitations, especially when it comes to security patching.

We're working on a next-gen cluster architecture that will provide the same user interface and resources as Rocks, but will use Cobbler or something similar for provisioning, Puppet for configuration management, and SGE, OGE or Univa Grid Engine for the scheduler. We plan on using ganglia and nagios for monitoring and will eventually extend our provisioning, patching and monitoring to cover the rest of the enterprise.
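The Puppet piece is what should make the patching story better; even a one-off apply gives the flavor (package names here are illustrative and vary by repo, and a real setup would use a puppetmaster and proper modules rather than one-liners):

    # Push the monitoring agents out by declaring them, not by hand-installing:
    puppet apply -e 'package { ["ganglia-gmond", "nrpe"]: ensure => installed }'

    # In steady state each node just checks in against the master:
    puppet agent --test --server puppet.example.com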

Comment Re:Don't do it (Score 2) 160

GPU-based computing's a great idea, but not appropriate for all problems. There's also significantly more work managing memory and all that with a GPU.

We have about 50 M2070 GPUs in production and virtually no one uses them. Users rely on our CPU resources instead, since those are easier to program for.

Comment Re:don't rule out (Score 4, Interesting) 160

Totally agree. We had a bunch of dual dual-core server blades that were freed up, and after looking at the power requirements per core for the old systems, we decided it would be cheaper in the long run to retire the old servers and buy a smaller number of higher-density servers.

The old blades drew 80 watts/core (320 watts per blade), and the new ones, which had dual sixteen-core Opterons, drew 10 watts/core for the same overall power. That's a no-brainer when you consider that these systems run 24/7 with all CPUs pegged. More cores in production means jobs finish faster, you can support more users and more running jobs, and you use much less power in the long run.
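Spelled out per blade:

    old:  2 sockets x  2 cores =  4 cores x 80 W/core = 320 W
    new:  2 sockets x 16 cores = 32 cores x 10 W/core = 320 W

So you get 8x the cores for the same power draw.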

Comment Once you solve the hardware challenges..... (Score 5, Informative) 160

You'll need to consider how you're going to provision and maintain a collection of systems.

Our company currently uses the ROCKS cluster distribution, a CentOS-based distro that provisions, monitors and manages all of the compute nodes. It's very easy to get a working cluster set up in a short amount of time, but it's somewhat quirky in that you can't fully patch all pieces of the software without breaking the cluster.

One thing that I really like about ROCKS is its provisioning tool, the "Avalanche Installer". It uses bittorrent to load the OS and other software onto each compute node as it comes online, and it's exceedingly fast.

I installed ROCKS on a head node, then was able to provision 208 HP BL480c blades within an hour and a half.
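From memory, the whole node-provisioning dance is about this involved (exact commands vary a little between Rocks versions):

    # On the frontend: capture MACs as the blades power up in order; each
    # node then kickstarts itself and pulls its packages via Avalanche.
    insert-ethers

    rocks list host                                   # see what got registered
    rocks set host boot compute action=install        # force a rebuild of the compute nodes
    rocks run host compute command="uptime"           # sanity-check them once they're back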

Check it out at www.rocksclusters.org
