
Comment Re:If it ain't broke... (Score 5, Informative) 100

TBH, I suspect this is just getting publicity since it's the first super-dodgy HP firmware patch since they adopted their "no updates for YOU!" mentality - HP's explanation for which was that they'd sunk a lot of money into their patching process and people shouldn't get to use it for free, I guess. This won't be the last time this happens either.

As a sysadmin who's dealt with dozens of these "killer firmwares", there's often an identified need. We make extensive use of the HP SPPs at work and they come with a list of fixes and known issues as long as your arm; it's part of my job to go through the advisories to see if we're at risk and, if we are, to analyse the risk of updating/not updating. Many of them aren't security vulns or emergency fixes and are often extremely obscure, but once in a while you'll encounter something like a NIC locking up on receiving a certain type of packet, or the BIOS repeatedly saying a DIMM has failed when it hasn't, or if you mix hard drives with firmware X and firmware Y on RAID controller Z running firmware... er.. A it might drop the whole array... lots of little issues that can severely impact running systems if left unchecked. And when you upgrade one component you'll frequently have to upgrade others to stay within the compatibility support matrix, until eventually you just run the damned SPP to make sure everything in that server is at a "known good compatible" level.

Sure, we don't just flash as if it were Patch Tuesday and no-one ever should - we wait for at least two months of testing on non-production boxes before we patch any prod kit with firmware unless it's an emergency fix - but lots of people use the HP SPP to automatically download the latest updates; we've had enough problems with them that we'd never do this (and in any case 97% of our servers have no net access). But the whole point of the SPP is that HP should have already done most of the regression testing for you.

That said, we've had nothing but trouble with Broadcom NICs for ages and I'm sure there are many admins here who have fond memories of the G6 blades, Broadcom NICs, ESX and Virtual Connect from a few years back. I think HP switched much of their kit to Emulex after that debacle. Also, the latest web-based HP SPP (as opposed to the last one, where you just ran a binary) is a complete train wreck on Windows for ad-hoc updates, largely due to the interface being handed over to people who seemed to want to make it a User eXperience rather than a tool.

Comment Backups. Backups! Backupsbackupsbackupsbackupsback (Score 1) 224

Assuming I had at least 100Mb/s up (preferably a gigabit), this would make online backups of anything more than a few GB feasible. A friend and I had been mulling this over for a decade before "cloud" became a thing and before even commercial online backups became viable, but it would be effective for those too.

I have a NAS. He has a NAS. We can both set up encrypted containers that the other doesn't have the key to. We both need offsite backups and have the nous to tunnel rsync over SSH or a VPN. Wouldn't it be great if I could just set up a cron job and do a weekly sync to each other?
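The weekly sync really is that simple. A sketch of what the cron job would run - hostname, key path and NAS volumes are all made-up placeholders here, and the command is echoed as a dry run rather than executed:

```shell
#!/bin/sh
# Weekly offsite sync of the encrypted container, rsync tunnelled over SSH.
# All paths, the hostname and the key file are hypothetical placeholders.
SRC="/volume1/backups/encrypted-container/"
DEST="friend-nas.example.com:/volume1/offsite/me/"
OPTS="-az --partial --delete"          # archive, compress, resume partial files, mirror deletions
SSH_CMD="ssh -i /root/.ssh/backup_key" # key-based auth so cron needs no password

# Echoed as a dry run here; drop the 'echo' to sync for real.
echo rsync $OPTS -e "$SSH_CMD" "$SRC" "$DEST"
```

Drop that in a script and add a weekly crontab entry along the lines of `0 3 * * 0 /usr/local/bin/offsite-sync.sh` and you're done.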

Of course it would, but even on a "decent" downstream connection (let's take my 24Mb/s ADSL2 connection with a ~2.5Mb/s upload) I wouldn't even get halfway through my deltas for the week (of which disc images of my Windows boxes are a major component, size-wise). Sure, I could periodically pop on the train with a hard drive once a month or so, but even then let's say I need to restore a 5GB file pronto when he's off on holiday - the restore would still take hours, possibly days.
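The arithmetic is sobering. A back-of-the-envelope calculation, using decimal units and ignoring protocol overhead (so if anything it's optimistic):

```shell
#!/bin/sh
# How long does 5 GB take to come down a friend's ~2.5 Mb/s ADSL uplink?
awk 'BEGIN {
  size_bits = 5 * 1000 * 1000 * 1000 * 8   # 5 GB expressed in bits
  rate_bps  = 2.5 * 1000 * 1000            # 2.5 Mb/s uplink in bits/sec
  printf "%.1f hours\n", (size_bits / rate_bps) / 3600
}'
# prints: 4.4 hours
```

And that's the best case, with the line saturated and nothing else using it.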

If we all had a full gigabit up/down, I'd build NASes for friends and family for free if I could use them as backup locations which would also have the nice side effect of finally letting me put the oblig. XKCD http://xkcd.com/949/ to rest.

Comment Re:openWRT runs, without wireless (Score 3, Informative) 113

From a few posts along in the thread https://lists.openwrt.org/pipe...:

Quick update on this subject: Linksys has now posted a GPL source for
the WRT1900AC, and it contains the wifi driver sources.
It appears to me, that this driver was properly licensed under GPL, with
proper license headers in all source files.

This means that work on supporting this device can theoretically
continue, although I expect it to take quite a bit of time. As I
anticipated, the code quality of the driver source code is abysmal.
This looks like rewrite (not cleanup) material, ugly enough to cause eye
cancer or frighten small children ;)

There are also still some pieces missing: Since this driver does not use
standard Linux Wireless APIs, it can only properly function with custom
hostapd/wpa_supplicant hacks. I don't see those in the release.

- Felix

Update 2: Those can be found in the OpenWrt SDK for this device on
GitHub. Same comments regarding code quality apply here.

- Felix

The link to the firmware appears to be here: http://support.linksys.com/en-... - it's one of those annoying JavaScript non-hyperlinks.

Can anyone more au fait with OpenWRT verify that this is correct?

Comment Re:Nonsense (Score 3, Insightful) 294

This. Absolutely, 100% this.

As I've alluded to in my other posts, as soon as I graduated from cowboy sysadmin to a "proper" sysadmin that files change requests and writes project documentation, I've come to love change managers for precisely the reasons above. Change managers are under continual bombardment from non-technical project managers and developers that might well have deep, deep insight into a certain area but can't see past the end of their nose. A good change manager will often trot up to us sysadmins and say "So-and-so has submitted this change but doesn't think it needs approvals from you guys, can you take a gander?" to be met with either a "yeah that's fine" or a "Holy crappingon what-the-fuck in a god-buggered handbasket NO!". Good sysadmins in a constructive environment see a bigger picture than the project managers and the developers and, as far as CAB is concerned, submit better change requests as a result - because risk analysis is such an innate part of our job that most of us don't even realise we're doing it. But change managers see a bigger picture still because they're exposed to the sysadmins, network admins, security admins, user admins, mail admins, storage admins, admin admins, admin users, sysadmin networks, bread, eggs and breaded eggs.

Change managers exist to protect the business. Sysadmins exist to run the business' IT. Change managers realise that sysadmins are often asked to do dangerous or even outright impossible things by powerful people with only an inkling of what consequences such an action might have; it's a change manager's job to communicate with and understand the sysadmin (and everyone else) in such respects, just as it's the sysadmins' responsibility to communicate to the business why change X is crucial or dangerous. In a properly functioning IT dept, sysadmins and change managers protect both each other and the business from stupidity, mis-co-ordination and lack of oversight. As a sysadmin, change managers are almost always on your side - either pushing for that change that's so essential, or holding you back where there's a risk. They're a highly valuable ally. When something goes to shit, they're the first people to step in and say "no, the sysadmins had nothing to do with this incident".

I'm MrNemesis and in the last three years I've learned to love my change managers.

Comment Re:RAID? (Score 1) 256

It depends how you measure "speed". If you measure speed by things like sequential read or write speed like so many people do, it's possible to match SSD speed with as few as two platter-based hard drives.

But in the real world (of servers at least) there's not really any such thing as sequential reads/writes any more, and when you throw VMs backed by a SAN into the mix it's safe to say that there is no such thing as a sequential transfer - all I/O, by the time it hits the SAN controller, will look random simply because it's the aggregated reads and writes of dozens or hundreds or thousands of different servers.

So going back to the original premise - if you in fact measure speed in IOPS rather than throughput, you'll need something approaching at least twenty spindles (probably with a bunch of expensive battery-backed RAM as cache sitting in front of them) in order to even get close - platter-based drives basically just suck at random IO and it's not unusual for them to be an order of magnitude slower in throughput when doing 4kB random as opposed to 4kB sequential; I've seen drives that can do >150MB/s sequential drop to less than 1MB/s random (something you can easily try out yourself with iometer if you so wish). It's why so many SAN technologies now use tiering, where incoming writes first get written to RAM, then the SAN controller does some IO coalescing before sending it down to the fifty or so spindles - or, increasingly these days, to an intermediate NAND layer. This way, whilst the data hitting the SAN is inherently random, the SAN controller has the smarts to write it out to the spindles in as sequential a manner as it can.
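The order-of-magnitude gap is easy to put numbers on. Taking the figures above (>150MB/s sequential, ~1MB/s random, 4kB blocks) and converting throughput into IOPS:

```shell
#!/bin/sh
# 4 kB IOPS implied by the sequential vs random throughput figures quoted above.
awk 'BEGIN {
  blk = 4096   # 4 kB block size
  printf "sequential: %d IOPS\n", 150 * 1024 * 1024 / blk
  printf "random:     %d IOPS\n",   1 * 1024 * 1024 / blk
}'
# prints:
# sequential: 38400 IOPS
# random:     256 IOPS
```

A couple of hundred random IOPS per spindle is why the spindle counts (and the cache in front of them) stack up so quickly once the workload stops being sequential.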

If it's IOPS you're after and you don't have a fancypants SAN, it's now frequently cheaper to shell out for a limited amount of NAND than to buy enough spindles to support peak IO load, even if you shell out big bucks for FusionIO or those ludicrously pricey SAS SSDs. If you need speed and capacity, you can now buy "application accelerators" or suchlike that will automatically promote hot blocks into a local NAND cache rather than going straight to the platters (although I don't know how well these work in practice). If you do have a fancypants SAN you can make it an even more fancypants SAN by plugging a layer of NAND in between the controller cache and the spindles themselves and still have oodles of relatively cheap platter capacity.

Of course at home I still use an SSD for the OS and programs and I keep my static media on platters, because that's one environment where I do know accesses will be mostly sequential and I need the capacity-per-quid that only platters can give at present. But I've just added an SSD writeback cache to my NAS and it's noticeably faster already.

TLDR: Throughput and capacity aren't the only measures of storage, and an SSD can improve performance massively whilst costing less than the equivalent platters as long as you're aware of your IO workload.

Comment Re:Open both eyes, and quit squinting! (Score 2) 312

Same here - standing gets very uncomfortable very quickly for me, but I can happily walk up hill and down dale until the cows come home.

I no longer smoke, but I still take fag breaks at work just to give me a reason to stretch my legs and have some mental downtime once every hour or two. Pacing around is great for thinking, but for doing I need to be sat down.

Comment Re:Patching.... (Score 1) 294

It depends very much on the makeup of the CAB and the company culture surrounding it. I've already mentioned in my other post the "fun" I had with a CAB at a previous employer, who were always obstructive about everything until we'd had a long string of changes that went exactly as planned (including some changes that were approved against our wishes and broke things exactly like we said they would) - if there's an adversarial relationship then even with excellent diligence it takes a long, long time to build up a sufficient amount of trust.

CAB at my current employer is brilliant. You submit your request along with your technical risk analysis and the people on the CAB are techy enough to understand those risks and how they relate to business risks. Submit, say, a zero-day and not only will they welcome it with open arms but they'll literally ask how fast-tracked you want it. Yes there's technically red tape as T's need to be dotted and I's need to be crossed, but it's not a hindrance. A good CAB should know when they need to be slow and especially when they need to be fast; anything else runs against the idea of having a CAB in the first place.

Comment Re:Nonsense (Score 3, Insightful) 294

So refusing to comply with an order that's in direct violation of your contract is acting like an arsehole now? And you're happy for the rule you're being routinely forced to violate in the course of your professional duties to be left on the books to trip up the next person who doesn't have the guts to stand up and say "no, I won't shoot myself in the foot"? Will HR even remember you have a signed waiver before marching you out of the building for knowingly violating company policy?

Sorry, but no. If you're stuck in a Kafkaesque situation like the GP was, the only professional thing you can do is give them exactly what they want. Especially when you've explained to them why giving them exactly what they want will be bad and they've given a written response that amounts to "we don't care, do it anyway".

If you act like something that badly needs fixing doesn't need fixing and you're happy to see people and companies ruined over it, by all means keep thrusting your cranium into the pulverised silicon dioxide. Some people might say you're the one acting like an arsehole, however.

Comment Change Management is good (Score 2) 294

...and necessary* but that doesn't stop some change management boards being needlessly obstructive.

Years back, I was working at a company where all of our servers got patched at build time and then never again, "in case it broke something". The rest of the ops team and I begged and pleaded with the business to allow us maintenance windows - permission to reboot the OS outside of business hours, to install patches... all to no avail.

Until the company lost a bid for a contract because it had no maintenance or patch-management policy in place, at which point the business came running at us, screaming about why we didn't patch our servers (they would listen to their potential clients about computer security and whatnot, but not to their own staff). Cue us showing them the dozen or so draft maintenance policies we'd submitted over the years, all of which had been rejected by the directors. Red faces all round in that meeting :)

So the latest draft gets pushed into force by a wheelbarrow full of cash and we go out and buy Shavlik, a really rather nice patch management solution... and then our change management board goes nuts when they see our report. Lots of W2K and W2K3 boxes had literally hundreds of service packs and patches outstanding, and, like the OP described, the board wanted an individual change raised for each patch going on each server. We then set up an email direct to the change board that gave them Shavlik's automated PDF report, which lists all the patches outstanding on a server along with a hyperlink to the MS KB or similar... but that wasn't good enough. They wanted a report on what each patch did, which files it altered, all the usual stuff. Now, as another poster has pointed out, under ITIL this should all have been a "standard change" without needing so much paperwork (seriously, they should at least be aware of ITIL even if they're not going to follow it to the letter), but we could sympathise with them that, even with our planned dependency-based staggered rollout over a four-week period, this was both a radical shift in company culture and posed a significant opportunity for breakage... but still. Filing about 20,000 change requests it was to be.

So obviously, since we were dealing with obstructive officials, we did exactly that. We knocked up a few dozen hacky shell scripts that took the PDFs Shavlik made, CURLed down the contents of the link to the KB page and then posted it off into the change management system - one request per patch per machine. After about twenty minutes of this we'd submitted about 400 requests and the change management system (an in-house pile o' shite that wasn't so much written as congealed out of various bits of SharePoint, and was universally hated) had slowed to a crawl, to the point that it took 10 minutes to open the page. It used funky whizz-bang AJAX to load *all* of the pending change requests in the background ("who needs a LIMIT in this SQL query?! We're never going to have more than fifty open change requests!" The developer in question also seemed to think that using a LIMIT clause was akin to taking the go-fasta stripes off your car. Wonder if he's doing webscale development now). After some brief arguing - during which they actually suggested we should open a change request to submit changes, at which point we cackled at the prospect of submitting another 20,000 pre-change-request changes - and after finding their ITIL manual down the back of the sofa, they finally agreed that, actually, they didn't need quite such a detailed report, and were prepared to accept our risk assessment report as a single change for the first weekend's rollout.
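For flavour, the scripts were roughly this shape. Names and tools are made up here: `submit-change` stands in for whatever posts into your change-management system, a canned two-line report stands in for the text extracted from Shavlik's PDF, and `DRY_RUN=echo` keeps it a demonstration rather than 400 live submissions:

```shell
#!/bin/sh
# One change request per patch per server, scraped from per-server reports.
DRY_RUN=echo   # remove to actually fetch KB pages and submit requests

# Canned stand-in for the text extracted from Shavlik's per-server PDF.
cat > /tmp/web01.txt <<'EOF'
MS08-067 http://support.microsoft.com/kb/958644
MS09-001 http://support.microsoft.com/kb/958687
EOF

for server in web01; do
  # one KB URL per outstanding patch in the per-server report
  grep -o 'http://support.microsoft.com/kb/[0-9]*' "/tmp/$server.txt" |
  while read kb_url; do
    # pull down the KB page for the gory details, then file one request
    $DRY_RUN curl -s "$kb_url"
    $DRY_RUN submit-change --server "$server" --patch "KB${kb_url##*/}"
  done
done
```

Scale that across a few hundred servers with hundreds of outstanding patches apiece and the 400-requests-in-twenty-minutes figure takes care of itself.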

So about 20,000 patches/service packs were staged and installed over the next two months, and luckily we didn't have a single failure due to the patches (yes, I also thought this was miraculous considering the crufty applications). From then on, every patch cycle needed just four changes, one for each week. That's how it should be done.

* Yes, necessary! I've done more than my fair share of JFDI, but that just doesn't scale when you're working in teams of more than a few people - and it completely falls apart when you scale up to multiple teams. Perhaps most important of all, aside from scheduling potentially conflicting changes ("what do you mean the routers are down for an hour's maintenance whilst we're uploading the new data?!"), is making sure we admins document our changes and document a rollback plan. Version control for config files and the like... once you're used to it, you wonder how you ever lived without it.
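Version control for configs needn't be fancy, either - a plain git repo over the config directory does the job (this is roughly what tools like etckeeper automate). A sketch against a scratch directory rather than a live /etc, with a hypothetical change-request number in the commit messages:

```shell
#!/bin/sh
# Before/after snapshots of a config change, with a one-command rollback.
set -e
CFG=$(mktemp -d) && cd "$CFG"
git init -q
git config user.email ops@example.com && git config user.name ops

echo "ServerName old-vhost" > httpd.conf
git add -A && git commit -qm "baseline before CR-1234"

echo "ServerName new-vhost" > httpd.conf     # the actual change
git add -A && git commit -qm "CR-1234: rename vhost"

# The rollback plan is now documented AND executable in one line:
git checkout -q HEAD~1 -- httpd.conf
cat httpd.conf
# prints: ServerName old-vhost
```

The commit log doubles as the "what changed and when" section of your change paperwork for free.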

FWIW, I'm still a sysadmin and I still hate the paperwork of doing change management - why do I need to do this? It's never going to go wrong! But I've seen (and perpetrated) so many changes going wrong that I can see its value; you never actually miss it until it's gone.

Comment Re:And the cry goes up from ten thousand admins, (Score 1) 151

As an atypically profane Brit, there's much to love even about the simple* word "fuck".

"You know, Minister, I believe that in the long view of history, the British Empire will be remembered only for two things... The game of soccer. And the expression 'fuck off'."
- The last Governor of South Yemen, in conversation with then-Defence Minister Denis Healey on the eve of South Yemen's independence.

Personally, members of my team are fond of variations along the lines of "Fuck, the fucking fucker's fucked!", since one gets to use the word fuck as an exclamation, an adjective, a noun and a verb; concise, immensely satisfying to say and yet still grammatically correct.

* Not a simple word at all really since, handled correctly, it can convey pretty much any meaning. It's frequently spotted as a metasyntactic variable in particularly hairy functions, wibblefuck being a common variation/combination. My favourite spot of this was "fuckwomble**" in some in-house LDAP code which entered company lexicon as an abbreviation for someone in compliance with Tucker's Law.

** A Womble is of course one of the inhabitants of Wimbledon Common who make a living by picking up rubbish. The coder in question had assigned variables named after all of the Wombles and, after running out of names once he hit Bulgaria, started using a variety of swearwords rather than proceeding somewhat logically through other countries of Eastern Europe. Given the nature of the code, it was a decision I could only applaud.
