Comment Re:Sysops writing unit tests? (Score 1) 394

I think the point is more to write unit tests of *your own* stuff.

I project managed a huge core network upgrade at our company. As we put together the plan, we had to take everything down in layers and then, once the network was swapped out, bring it all back up. The plans the various admin teams presented me with were "Storage team brings up the SAN and filers, then the UNIX team brings up all their systems, then the Windows team brings theirs up, then app admin teams bring up app servers and whatnot, and then after we've done all that for three hours, we'll have end users test. If their apps don't work, then something went wrong."

I said, "Shouldn't you test your own layer yourself? You know, before inflicting it upon someone else?"

Their response: "Huh?"

So we had to work together for a while, and finally all the ops teams had the equivalent of "unit tests." All the UNIX boxes would do a many-to-many ping sweep to make sure every box had connectivity to every other box, for example. The storage team tested that you could connect to the storage and looked at NFS error rates. It took a while to get to this point because of an oddly entrenched attitude that "testing is for other people; admins don't test." Which I think we might recognize as not being all that helpful.
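
In spirit, the connectivity check was something like this (a rough Python sketch with made-up host names, not the actual script we used):

    import subprocess

    HOSTS = ["db01", "app01", "app02", "nfs01"]  # made-up host names

    def can_reach(src, dst):
        # ssh into src and ping dst once with a short timeout
        cmd = ["ssh", src, "ping", "-c", "1", "-W", "2", dst]
        return subprocess.run(cmd, capture_output=True).returncode == 0

    failures = [(s, d) for s in HOSTS for d in HOSTS if s != d and not can_reach(s, d)]
    for src, dst in failures:
        print("FAIL: %s cannot reach %s" % (src, dst))
    print("all paths up" if not failures else "%d path(s) down" % len(failures))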

So yes, operations folks should write unit tests. By the DevOps "infrastructure as code" view, your new UNIX build is just like some guy's Java code: it should have unit tests (and live in source control, and follow other such best practices).
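
If you take that seriously, the "unit test" for a freshly built UNIX box can be as mundane as the following (a sketch runnable under pytest; the service and mount lists are hypothetical, not from any particular build):

    import os
    import subprocess

    REQUIRED_SERVICES = ["sshd", "crond"]    # hypothetical list for this build
    REQUIRED_MOUNTS = ["/home", "/opt/app"]  # hypothetical mount points

    def test_services_running():
        for svc in REQUIRED_SERVICES:
            # pgrep exits 0 when at least one exactly-named process exists
            assert subprocess.run(["pgrep", "-x", svc],
                                  capture_output=True).returncode == 0, svc + " not running"

    def test_mounts_present():
        for path in REQUIRED_MOUNTS:
            assert os.path.ismount(path), path + " is not mounted"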

Comment Yay UNIX, but remarkably uninformed about DevOps (Score 1) 394

I agree with the general sentiment of "you can solve a lot of problems just using UNIX and not big fancy things." But that has nothing to do with the random swipe at DevOps, which shows a fundamental lack of understanding of what DevOps actually is.

DevOps grew out of a couple of major needs. The first is for developers and sysadmins to collaborate more. Do you like the old practice of "developers make it all happen, then toss their demented creation over to you a week before go-live and say 'figure out how to make this run in production'"? As developers have largely adopted agile development methods, it's gotten even harder for traditional sysadmin teams to work with them to make sure that the end system is going to have good availability, performance, security, etc. I don't consider wanting operations folks engaged with products/projects from conception to be "sucking up to the developers."

The second is to advance the state of the sysadmin practice, which has stagnated somewhat in recent years. It's not old tools that are the problem (again, yay UNIX) but the processes and practices: structured process turned into the huge unusable beast called ITIL, and too many admins have apparently decided that the way they do business really shouldn't change from the 1970s. But Visible Ops, the agile systems administration movement, the growth of automation tools like Chef/Puppet/cfengine, etc. mean that we need to bring system administration up to the same level of professionalism that programming can achieve. Why exactly should your sysadmin scripts not be source controlled? Why should you not write tests - not for others' code, but for yours? Why shouldn't you automate system builds like code builds, even moving to continuous build cycles? In the increasingly virtualized/cloud world, "infrastructure as code" is less a slogan than a description of how things actually have to work.
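
To make the "automate your builds" point concrete, even a single provisioning step reads differently when it's an idempotent, source-controlled script instead of a line in someone's shell history. A rough Python sketch (the package name and tool calls are illustrative; in real life this is what Chef/Puppet/cfengine are for):

    import subprocess

    def installed(pkg):
        # Debian-style check; use "rpm -q" on Red Hat-style systems
        return subprocess.run(["dpkg", "-s", pkg], capture_output=True).returncode == 0

    def ensure_package(pkg):
        # idempotent: running it a second time changes nothing
        if not installed(pkg):
            subprocess.run(["apt-get", "install", "-y", pkg], check=True)

    ensure_package("openssh-server")  # illustrative package name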

Here at National Instruments we have a DevOps implementation to develop some new SaaS products. It's a single team with developers and sysadmins on it. Provisioning and monitoring are built into the apps from the start. All our sysadmin stuff is kept in source control. We script things rather than doing them by hand. Developers write unit and integration tests for their code; admins write unit and integration tests (we call them "monitors") for our assets. We all work in iterations and work off tasks on a burndown chart. Bugs in the systems are tracked in the same system the developers use. All of our systems get built, booted, and loaded with software and apps completely automatically. And that's awesome! Sure, it's not the good old "lurk like a troll in the back room and make developers cry when they come around" model, but you know, different strokes I guess.
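
For a flavor of what one of those "monitors" amounts to: it's basically an integration test you can run from cron or a CI job. Something like this sketch (the URL and thresholds are placeholders, not our real endpoints):

    import time
    import urllib.request

    URL = "https://example.com/healthcheck"  # placeholder, not a real endpoint

    def check():
        start = time.time()
        with urllib.request.urlopen(URL, timeout=5) as resp:
            body = resp.read()
            status = resp.status
        elapsed = time.time() - start
        assert status == 200, "bad status %d" % status
        assert elapsed < 2.0, "too slow: %.2fs" % elapsed
        assert b"ok" in body.lower(), "unexpected response body"

    if __name__ == "__main__":
        check()
        print("monitor passed")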
The Internet

A New Kind of Science Collaboration 96

Scientific American is running a major article on Science 2.0, or the use of Web 2.0 applications and techniques by scientists to collaborate and publish in new ways. "Under [the] radically transparent 'open notebook' approach, everything goes online: experimental protocols, successful outcomes, failed attempts, even discussions of papers being prepared for publication... The time stamps on every entry not only establish priority but allow anyone to track the contributions of every person, even in a large collaboration." One project profiled is MIT's OpenWetWare, launched in 2005. The wiki-based project now encompasses more than 6,100 Web pages edited by 3,000 registered users. Last year the NSF awarded OpenWetWare a 5-year grant to "transform the platform into a self-sustaining community independent of its current base at MIT... the grant will also support creation of a generic version of OpenWetWare that other research communities can use." The article also gives air time to Science 2.0 skeptics. "It's so antithetical to the way scientists are trained," one Duke University geneticist said, though he eventually became a convert.
