There are plenty of arguments for using white boxes or boxes from big brands, but this wasn't one of them.
Red Hat had a bug open for it (bug 887006, if I recall correctly), and it was interesting to see how they responded to paying customers. They did provide special kernel packages to help fix/troubleshoot the issue, but it still dragged on for a long time. To make matters worse, even when the bug was visible to me (as a Red Hat customer), much of it was redacted, to the point where it was difficult to pick out key pieces of information. And while I don't have access to my RHN login right now, I don't believe that bug is accessible to anyone outside of Red Hat at this point (which is a problem in itself).
I suppose my point is that even in circumstances where you can hold the vendor responsible, and where they are taking action, there's no guarantee the problem will be fixed when The Business(TM) wants it to be. And for problems like this, which affect or will affect a large number of people, the issue will get the attention it needs, paid support or not.
I get paying for support from a CYA perspective, but that's really all it is, IMHO.
It takes a completely different set of skills to manage people and projects well. And it's not easy, even if you have the skills. Managing IT *well* (software development and IT operations alike) requires a fair amount of technical knowledge (you don't need to be the expert on everything, but you do need to know your stuff) and the ability to communicate well with those above you and beneath you.
A good coder or sys admin is hard to find. A good *manager* of those people is even harder to find, and is worth their weight in gold (both to the people who work for them and to the company itself).
Full disclosure: I have a wonderful manager who helps make my job (as an Ops team lead--so I'm still in the trenches, but "managing"/"mentoring" those on my team) much easier, and I've seen our best coders rise up to become very effective managers themselves.
Is there anything wrong with that? I'd say it depends on the cost. And I wholeheartedly agree with you that what you described is what college should be about, but it just isn't. That's the problem.
And, to be fair, this did vary a lot by major, so please don't take it as a blanket assumption about each and every group that attended.
Other than that, Cisco gear all the way. It's overpriced, and at a reasonable price--even on eBay--you're mostly going to be limited to 100-megabit hardware, but it's rock-solid gear.
And it really is solid--I keep all my networking gear at home on a UPS, and it's still far more reliable than any standalone Linksys router was (and uses far less power than its predecessor--a Celeron 366 MHz box that had ~1400 days of uptime before I killed it).
First off, a fairly high percentage of kids going to college are just throwing their money away to begin with. How many kids are going to college now who have no business going? How many graduate without being able to think or analyze anything? They graduate with a diploma that means next to nothing, and yet they're either in tons of debt, or mom and dad paid for it all; in any case, there's little to be had from it. The value of a degree has gone down, and the price has skyrocketed. And, more so than ever, kids are told, right from their freshman year of high school, that they need to go to college. This topic has been discussed endlessly here, and I don't want to rehash it more than is necessary to prove a point, but it's a big part of the problem that exists today with an entire generation.
Though I'm living at home, I'm more than able to cover my living expenses. I choose to do so, because as much as I want to move out, it would take quite a while to save up for a house between paying rent, utilities, and said student loans. I made some very foolish choices straight out of high school, and I'll be paying for my degree for a number of years, when it has proven entirely unnecessary in my line of work (IT systems engineering/administration). They're my own mistakes, and no one else's, but tons of people keep making these mistakes because of societal expectations and job "requirements" that are hardly requirements. And then they're surprised when they can't find a "real" job. The kids carry a good portion of the blame, but they can't carry all of it.
Want to be a doctor or lawyer? OK, go to college. Want to cure cancer? Go to college. Want to fix the horribly deficient infrastructure throughout the country? Go to college (but don't expect to find a job, since there's no funding for this). Want to party for 4 years and live at home for the rest of your life? Don't bother going to college; you can do that without a diploma. The distinction needs to be accepted by employers, but I doubt it ever will be.
For the record, since I'm sure the natural inference people will make is that I'm knocking the business majors, the humanities majors, etc., I'm not. Necessarily. I think they're well worth studying, and we're all well served by doing so, but going to college to do it just for the sake of having a degree is rather pointless. Also for the record, I graduated with a "BS" in business, and not once has it proven relevant on the job. Lastly, and again for the record, I am a bit bitter about it.
More than a laptop? Sure. Substantially more than a laptop? Not really. Especially if you were to add the screens and peripherals in. And while I'm sure you can find laptops with 32 GB of RAM, I doubt you'll find them as cheaply as I built this setup ($1500, roughly).
You're right, but the difference really isn't that large. And when I'm at my desk, I'll take this setup over the 13" MBA that also lives there any day.
I'm not the world's biggest fan of Steve Jobs, and I respect Woz's talents far more than I do the other Steve's, but there would have been no Apple if it weren't for both Steves.
So when Apple was looking to buy a company for the next generation Mac OS, Jobs had a very personal motive to get Apple to buy NeXT instead of Be (as Gassee was the president of Be, and in negotiations to sell Be to Apple). That, and he got Apple to buy NeXT at a time when he was considering investing his own (and Larry Ellison's) money to take over Apple. Instead, he got paid to do it, and got the guy who executed the move fired.
Jobs was great at many, many things... but he wasn't exactly a nice guy, or--from everything I've read--the kind of guy you'd want running anything when he was forced out of Apple. I think even Jobs would admit it was probably good for him (and Apple) in the long run.
Of course, we were sane and didn't blindly apply these updates to all our systems. We tested them out on one or two lab boxes first, and once we noticed the upgrade was problematic, we yelled and screamed at the vendor.
Point being, some firmware upgrades are bad, and some are good. Blindly applying all of them and blindly applying none at all are equally stupid system administration philosophies. If you're not testing these sorts of upgrades in a lab or test environment before touching your production gear, you're doing it wrong.*
* - Yes, I know not everyone has the luxury of a test/lab environment. But you almost always have critical and non-critical systems, and it should be pretty obvious where you'd want to try upgrades first.
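The staged approach above can be sketched roughly like this (the host names and the upgrade_firmware helper are hypothetical stand-ins for whatever your vendor's tooling actually does):

```python
# Sketch of a canary-first firmware rollout: upgrade a small set of
# non-critical boxes, validate, and only then touch the rest of the fleet.
# Host names are made up; upgrade_firmware is a placeholder.
CANARIES = ["lab-sw1", "lab-sw2"]             # lab / non-critical gear
FLEET = ["core-sw1", "core-sw2", "edge-sw1"]  # production gear

def upgrade_firmware(host: str) -> bool:
    # Placeholder: in reality, push the image, reboot, and verify the
    # running version. Returns True on success.
    print(f"upgrading {host}")
    return True

def staged_rollout(canaries: list, fleet: list) -> None:
    for host in canaries:
        if not upgrade_firmware(host):
            raise RuntimeError(f"canary {host} failed; aborting rollout")
    # ...in real life, soak time and validation would happen here...
    for host in fleet:
        upgrade_firmware(host)

staged_rollout(CANARIES, FLEET)
```

The point of the structure isn't the code itself--it's that the production loop is unreachable until every canary has succeeded.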
That said, this has led us down the path of constantly increasing availability requirements, for things as (relatively) insignificant as an internal company blog. We're currently doing work between two new data centers, and one of the goals is to provide near 100% availability of all systems. It becomes very easy to sell such an idea to the business at little incremental cost (compared to the cost of building out two DCs in the first place), but the actual work involved in making it happen can be tricky at times. Not to mention the real incremental benefit is questionable at best, at least for a lot of the applications in question (IMHO, and given that many systems aren't tied to money-making endeavors).
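Part of why the two-DC pitch sells so easily is the arithmetic: on paper, two sites at 99% availability each combine to 99.99%. A quick sketch (numbers are illustrative, and the math assumes independent failures and seamless failover--assumptions real outages rarely honor):

```python
# Paper availability of n redundant sites, each at availability a,
# assuming failures are independent and failover is instantaneous.
# Both assumptions are generous; treat this as the sales pitch, not reality.
def combined_availability(a: float, n: int = 2) -> float:
    return 1 - (1 - a) ** n

print(combined_availability(0.99))    # two 99% sites -> ~99.99%
print(combined_availability(0.999))   # two 99.9% sites -> ~99.9999%
```

Which is exactly the problem: the formula makes "near 100%" look nearly free, while all the real cost hides in making the independence and failover assumptions actually hold.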
Sure, it's theoretically possible to have two DCs, and when you want to do patching, you flip to your secondary site, patch your primary, flip back, and patch the secondary. It's a practice I'd certainly expect to see in an environment like NASDAQ. The business likes it, and the technical minutiae are workable (most of the time), but it is a substantial amount of added complexity (and work... and time) for little added benefit, in a lot of cases.
In short, I agree completely with what you said, but it can have the side effect of increasing the "required" availability numbers to the point where it becomes little different than simply looking at uptime (depending upon the environment).
That Linode VM is only about $30/month, and it comes in handy for lots of other things. If it's a hobby, it's well within the realm of affordability. Can't recommend them enough for something like this (their competitors are probably good too, but I only have personal experience with Linode).
All in all, if I spend 2 hours a month maintaining the setup (generally upgrading ClamAV), that'd be a lot. I use CentOS+Sendmail (been running Sendmail since the get-go, don't have much motivation to swap it out) out of the box, with custom compiled (latest-and-greatest) versions of SpamAssassin and ClamAV.
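Most of that recurring ClamAV upkeep boils down to noticing when the installed version has fallen behind the latest release. A minimal sketch of that check (version strings here are made up; in practice "installed" would come from `clamscan --version` and "latest" from ClamAV's release announcements):

```python
# Compare two dotted version strings numerically to decide whether an
# upgrade is due. Versions below are purely illustrative.
def parse_version(v: str) -> tuple:
    # "0.103.8" -> (0, 103, 8); tuples compare element-wise, so numeric
    # ordering works even where string ordering would not ("8" > "11").
    return tuple(int(part) for part in v.split("."))

installed = "0.103.8"    # hypothetical: from `clamscan --version`
latest = "0.103.11"      # hypothetical: from release announcements

if parse_version(installed) < parse_version(latest):
    print("upgrade needed")
else:
    print("up to date")
```

Note the string comparison pitfall this avoids: as plain strings, "0.103.8" sorts *after* "0.103.11", which is exactly backwards.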