
Comment Re:To learn Red Hat .... (Score 1) 573

The point of this particular thread-let is what to learn if you're after an IT career. I don't know of any respectable Unix admin who would choose Fedora over CentOS in the enterprise.

CentOS and Scientific Linux are both well-respected, stable OSes built from the RHEL source. They're basically Red Hat without all of the licensing silliness.

As was mentioned in another thread, Unix is best learned in a VM that's regularly snapshotted. That way, if you hork things up, you can revert without a lot of pain. Having to rebuild a system from scratch every time you break it will dissuade new users from learning essential skills.
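
For instance (assuming a libvirt/KVM guest; the domain name "centos-lab" is just a placeholder), snapshotting and rolling back is only a couple of commands:

    # Take a snapshot before experimenting
    virsh snapshot-create-as centos-lab pre-experiment "clean baseline"

    # ...hork things up, then roll back to the clean state
    virsh snapshot-revert centos-lab pre-experiment

    # List the snapshots you've saved
    virsh snapshot-list centos-lab

VirtualBox and VMware have their own equivalents; the point is that reverting takes seconds instead of a reinstall.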

I also suggest that anyone who wants to be a Unix admin learn the vi editor. Don't use a GUI-based crutch until you're proficient in vi. I know a lot of people like emacs, but vi is an essential tool.

Learning to write shell scripts is also an essential skill, but stick with a mainstream shell. Csh is godawful, and zsh is too obscure for the enterprise. Ksh implementations used to be very spotty, especially when moving scripts between Solaris and Linux.
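
To give a rough idea of what I mean, here's a minimal bash sketch of the kind of everyday script an admin should be able to knock out (the hostnames and default threshold are made up):

    #!/bin/bash
    # Report filesystems over a usage threshold on a handful of hosts.
    set -u

    threshold=${1:-90}

    for host in web01 web02 db01; do
        # df -P gives POSIX-format output that parses the same on Solaris and Linux
        ssh "$host" df -P | awk -v limit="$threshold" -v host="$host" '
            NR > 1 {
                sub(/%/, "", $5)
                if ($5 + 0 > limit + 0) printf "%s: %s is %s%% full\n", host, $6, $5
            }'
    done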

Learn some of the other tools like awk, sed, grep, cut, sort and uniq.
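
A few examples of the sort of one-liners I mean (the paths are illustrative):

    # Top 10 client IPs in a web server log, by request count
    cut -d' ' -f1 /var/log/httpd/access_log | sort | uniq -c | sort -rn | head -10

    # Which login shells are actually in use?
    cut -d: -f7 /etc/passwd | sort | uniq -c | sort -rn

    # Strip comments and blank lines from a config before diffing it
    sed -e 's/#.*//' -e '/^[[:space:]]*$/d' /etc/ssh/sshd_config

If you can read and write pipelines like these without thinking too hard, you're most of the way there.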

There's a huge shortage of decent Unix admins and a glut of Windows admins. Most of the Unix admins we interview can't script unless they're stealing from something someone else wrote, and most don't understand the innards of how the OS even works.

Comment Re:Once you solve the hardware challenges..... (Score 1) 160

It comes with a pretty recent version of SGE and Open MPI installed. It's fully capable of using NFS shares, and many people have used it with InfiniBand. Cluster monitoring is done with Ganglia. The kernel's customizable, you can add your own modules as "rolls", and packages can be managed either as a post-install step or built into the kickstart for each node. We use Isilon for our shared storage, but we're probably going to be setting up a Gluster storage cluster too.
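
For anyone who hasn't used SGE before, a minimal Open MPI submit script looks roughly like this ("orte" is the parallel environment the Rocks SGE roll typically provides and "all.q" the default queue, but check your own site; the program name is a placeholder):

    #!/bin/bash
    # Name the job, run it from the submission directory, request 32 slots
    # from the Open MPI parallel environment, and merge stdout/stderr.
    #$ -N mpi_test
    #$ -cwd
    #$ -pe orte 32
    #$ -q all.q
    #$ -j y

    # SGE sets $NSLOTS to the number of slots it actually granted
    mpirun -np $NSLOTS ./my_mpi_program

You'd submit it with "qsub mpi_test.sh" and keep an eye on it with "qstat".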

Rocks is a great way for an organization to get its feet wet with high-performance computing, but we're beginning to find some limitations, especially when it comes to security patching.

We're working on a next-gen cluster architecture where we'll provide the same user interface and resources as Rocks, but will use Cobbler or something similar for provisioning, Puppet for configuration management, and either SGE, OGE, or Univa Grid Engine for the scheduler. We plan on using Ganglia and Nagios for monitoring and will eventually extend our provisioning, patching, and monitoring to cover the rest of the enterprise.
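
To give a feel for the Cobbler side of that (all names, MACs, and addresses below are hypothetical; we haven't settled on the details yet), registering a compute node for PXE provisioning is roughly:

    # Define a profile tied to a distro and kickstart, then add a node to it
    cobbler profile add --name=compute-centos6 --distro=CentOS-6-x86_64 \
        --kickstart=/var/lib/cobbler/kickstarts/compute.ks

    cobbler system add --name=compute-0-0 --profile=compute-centos6 \
        --mac=00:11:22:33:44:55 --ip-address=10.1.1.10

    # Rebuild the PXE/DHCP configuration so the node installs on its next boot
    cobbler sync

Puppet then takes over once the node comes up with the base OS on it.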

Comment Re:Don't do it (Score 2) 160

GPU-based computing's a great idea, but it's not appropriate for all problems. There's also significantly more work involved in managing memory and moving data on and off the card.

We have about 50 M2070 GPUs in production and virtually no one uses them. Our users rely on the CPU resources instead, since they're easier to program for.

Comment Re:don't rule out (Score 4, Interesting) 160

Totally agree. We had a bunch of dual dual-core server blades that were freed up, and after looking at the power requirements per core for the old systems, we decided it would be cheaper in the long run to retire them and buy a smaller number of higher-density servers.

The old blades drew 80 watts/core (320 watts for four cores), while the new ones, with dual sixteen-core Opterons, drew about 10 watts/core: 32 cores for the same overall power. That's a no-brainer when you consider that these systems run 24/7 with all CPUs pegged. More cores in production means your jobs finish faster, you can support more users and more concurrent jobs, and you use much less power in the long run.
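
Spelled out, the back-of-the-envelope math is just:

    # Old blades: 2 sockets x 2 cores x 80 W/core
    echo $(( 2 * 2 * 80 ))    # 320 W for 4 cores

    # New blades: 2 sockets x 16 cores x 10 W/core
    echo $(( 2 * 16 * 10 ))   # 320 W for 32 cores, i.e. 8x the cores for the same power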

Comment Once you solve the hardware challenges..... (Score 5, Informative) 160

You'll need to consider how you're going to provision and maintain a collection of systems.

Our company currently uses the Rocks cluster distribution, which is CentOS-based and provisions, monitors, and manages all of the compute nodes. It's very easy to have a working cluster set up in a short amount of time, but it's somewhat quirky in that you can't fully patch all pieces of the software without breaking the cluster.

One thing I really like about Rocks is its provisioning tool, the "Avalanche Installer". It uses BitTorrent to load the OS and other software onto each compute node as it comes online, and it's exceedingly fast.

I installed Rocks on a head node, then was able to provision 208 HP BL480c blades within an hour and a half.
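
For anyone curious what that looks like in practice, bringing nodes online is basically one command on the head node (the appliance name and commands below follow the usual Rocks conventions, but check the docs for your release):

    # Run on the head node; it watches for DHCP requests from new nodes,
    # names them compute-0-0, compute-0-1, ... and kicks off the Avalanche install
    insert-ethers --cabinet=0

    # Afterwards, sanity-check what got provisioned
    rocks list host
    rocks run host compute command="uptime"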

Check it out at www.rocksclusters.org

Comment Re:But (Score 1) 255

The Mac Mini's more of a home/small company server, IMHO.

We have a couple of Xserves in production that we saved from the scrap heap and have another 4 in reserve in case the production ones crap out. They're nice systems, but they don't talk to SATA II or III drives unless the drives are jumpered down to 1.5Gb/s.

When the Xserves die or aren't supported by whatever OS we need, we'll have to reassess things.

Comment Re:But (Score 4, Interesting) 255

Re: Windows vs. Mac, I personally hate using Windows as a workstation, but I have one at home for gaming. In general, it's a crufty, clunky dog's breakfast of an OS that's a pain in the butt to configure and update. I've used nearly every version of DOS or Windows since the days of DOS 2.0 and Windows 2.0, so I'm familiar with its flaws and foibles. The only versions I've never used are Vista and Win 8.

MacOS used to be a crap OS. It was pretty, but didn't multitask at all and crashed far too often to trust. OS/2 was nice, but fragile and was never as popular as Windows. OS X is an awesome OS for workstations and is excellent to work with for day-to-day stuff. The only Linux I use for workstation stuff is Ubuntu. CentOS as a workstation OS is ok, but is too much of a pain to deal with for stuff like sound cards, etc.

Slashdot has a lot of different kinds of people on it. Many of them are hobbyists and people who work in small *nix shops. Many are also enterprise IT types, and the most popular enterprise *nix is Linux, hands down. Red Hat/CentOS flavors dominate, but there are a few Debian shops as well, such as Akamai.

A lot of that stuff is just holy wars, but if you look at which OSes vendors support, you don't typically see much for BSD. Our company recently retired a BSD cluster and is in the process of decommissioning our BSD-based servers for a myriad of reasons. Juniper may use BSD in its gear, but many more vendors use Linux as their embedded OS.

BSD is popular with some companies and in colleges, but when you get into the real world it's either Linux or Solaris, and Solaris is fading fast. Look at the job market: Linux is what most companies are looking for. I'm not dissing BSD, but I'd never recommend it for anything in the enterprise.

I used to run some SunOS (BSD-flavored) systems 'back in the day' and loved them, but when Solaris came out, pretty much everyone switched. I've used Solaris 2.5 through Solaris 10 on both SPARC and x86 and have watched its popularity decline over the years because of hardware costs and x86 compatibility issues. Oracle has made some really dumb moves over the years with the technology it acquired as part of Sun, and most admins I know have given up on their stuff.

Comment Re:But (Score 5, Interesting) 255

OS X is a capable OS, but it's best used as a workstation. Deploying large numbers of OS X servers is greatly complicated by the fact that even Apple acknowledges there's no market for its server-grade systems and has stopped selling them. Even if I put Mac Pros into production, they'd be so expensive and take up so much room that they'd fill the data center. If I stick a Mac Pro sideways in a rack, it takes at least 4 or 5U for 12 cores. I can put 4 dual hex- or octo-core Xeon rack-mount servers in the same space, or even some dual sixteen-core Opteron servers. If I choose to use blades, I can put 16 HP BL460c blades in 10U.

Don't even mention the Mac Mini as a viable server platform; it's an underpowered joke of a system if you want to do real work on it for sustained periods of time. It's not intended for, nor will it stand up to, the kind of loads you see in the enterprise.

I work in the IT industry running computational clusters and lots of other kinds of servers. My rock is pretty large, but I'm on the top of it.

I do have a couple of OS X servers in the enterprise, but they're only there to run Open Directory to manage our Mac workstations.

Your assertion that Windows 7 or OS X is better than a Linux server shows how out of touch you are with enterprise computing. We have some Windows 2003 and 2008 servers in production, but they're there to provide infrastructure for the Windows workstations. No one tries to do anything else with them, since it's far easier to deploy services on Linux.

As I mentioned, I love Apple's workstations and laptops, but they don't make an appropriate platform for running any meaningful services in the enterprise.

Comment Re:But (Score 4, Insightful) 255

Because as good as OS X is, it's not a particularly good server platform and it requires Mac hardware, while Linux has been around for ages, runs on commodity hardware, has a huge number of well-supported open source packages, and is considered mainstream by most Unix admins.

As a server platform, OS X suffers from the same problem as Solaris: you need the vendor-supplied hardware to get it to run well. Solaris is a dying OS because Sun- and Oracle-supplied hardware is too expensive and just isn't worth it when you can get three times the computing power for less money, and x86 Solaris is frankly crap, since it has such a small hardware compatibility list.

I don't mention BSD since it's not really mainstream any longer. It's a good OS, but lacks overall vendor support.

All that being said, I prefer OS X systems for my workstation and CentOS or Scientific Linux for servers. Red Hat's nice, but overpriced when you need to deploy a lot of systems.

Comment Re:Health and safety? (Score 1) 130

If a class alpha fire happened to break out somewhere enroute to the upstairs generator, they could likely have thrown the diesel fuel on the fire to put it out

A class A fire still puts out a lot of heat. Trying to put it out with diesel will give you a class B fire to boot, I think.

The flash point of diesel fuel is around 144F, so it's not exactly something I'd throw on a fire (gasoline's flash point is about -45F).

It's not something I'd want to handle around an open flame or anything, but it's pretty safe otherwise.
