
Comment Re:Utah: More of the same (Score 1) 161

Correction: Approximately 2% of the population of Utah practices polygamy or currently lives in a polygamous family. That's around 40,000 people. So around 98% of the state isn't polygamist.

Source: James Brooke, "Utah Struggles With a Revival of Polygamy," New York Times [New York, N.Y.], 23 August 1998, Late Edition (East Coast): 12. ProQuest Newsstand, ProQuest, Brigham Young University, Provo, Utah. Accessed 11 Dec. 2007.

Comment Re:Powerbroker & logging (Score 1) 433

My suggestion would be to implement a "break glass" mechanism into it. If the root password is requested by the system, it automatically schedules a password change for that system one day later.

I really like that suggestion. I'll ask the Service Design team that owns our root password implementation whether they've considered that approach. It makes sense, and it should be a pretty low-cost change.

My only real complaint about the system is that, on occasion (for instance, when someone changes the root password manually), it breaks. That doesn't happen very often, but with tens of thousands of machines, about once a week I have to dig through old root passwords to find the right one.
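For what it's worth, the scheduling piece is simple enough to sketch. Here's a toy Python version of that break-glass hook -- VAULT, ROTATION_QUEUE, and the function names are my own placeholders, not the real PowerBroker or password-vault API:

    import datetime
    import logging
    import secrets
    import string

    log = logging.getLogger("breakglass")

    # Stand-ins for the real vault and job scheduler (hypothetical names).
    VAULT = {}            # host -> current root password
    ROTATION_QUEUE = []   # (host, rotate_at) pairs a scheduler would consume

    def random_password(length=24):
        """One long random string per host, as described above."""
        alphabet = string.ascii_letters + string.digits
        return "".join(secrets.choice(alphabet) for _ in range(length))

    def checkout_root_password(host, requester, service_request):
        """Release the password, log the request, and queue a forced
        rotation one day out so a manually-known password can't linger."""
        log.info("root checkout host=%s by=%s sr=%s",
                 host, requester, service_request)
        rotate_at = datetime.datetime.now() + datetime.timedelta(days=1)
        ROTATION_QUEUE.append((host, rotate_at))
        return VAULT.setdefault(host, random_password())

A real scheduler would consume ROTATION_QUEUE and push the fresh password out to the host and its remote console, which is exactly what makes the stale-manual-change problem go away.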

Comment Re:Rotate Admins and Audit (Score 1) 433

Rotate admins between systems/responsibilities and have a third party do random audits.

I've been through a few detailed audits, and I actually enjoyed having an outside source confirm I'm doing things right (or wrong). They can actually be empowering, as they can justify an increase in IT budgets that otherwise would be a hard sell. Again, an admin who goes apesh!t at the idea of having somebody else check their work is a bad sign.

If I had mod points in this discussion, I'd give you +1. As it is, I simply must contribute my point of view.

There's a break-point somewhere between "small shop with many hats" and "big company with people in IT roles who can keep an eye on one another". But even in a small shop, you can divvy up the duties. You can hire an auditing company and, as mentioned above, give your admins a break for a week while the auditors do their thing. The auditors will always find something wrong. That's their job. Don't make your admin fear for his or her job due to the mistakes. Work to fix them.

We go through this every year with our payment processing. An external auditing firm hired by the credit card companies (though we pay for them) does a rectum-to-retinal examination of everything that touches the payment processing stuff at the data center. They make recommendations, and we follow them as closely as we can. The next year, they show up again around the same time and we do it all over, but with slightly different recommendations in response to new vulnerabilities someone has thought of or a crook has implemented.

I used to think that a company would want a single uber-admin up until about 80 total employees. These days, I think that divvying up the IT duties right from the start is a good idea. Have a change-management procedure in place so that the sysadmin needs to get work approved by a higher-up before proceeding, and have this written down or tracked in a change-management database somehow. Plan the work so that, unless a system has crashed, NOTHING is an emergency. Require a minimum lead time of a day or two before starting work so that all the approvals are done; that way your work is always scheduled two days in advance.
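To make the rule concrete, here's a toy Python check for that two-day lead time. The field names are my own invention, not any particular change-management product:

    import datetime

    MIN_LEAD = datetime.timedelta(days=2)

    def may_proceed(change):
        """Approved, non-emergency changes must be filed at least two
        days before their scheduled start."""
        if change.get("emergency"):  # a crashed system is the only exception
            return True
        approved = change.get("approved_by") is not None
        enough_notice = (change["scheduled_for"] - change["filed_at"]) >= MIN_LEAD
        return approved and enough_notice

    cr = {"filed_at": datetime.datetime(2010, 11, 1),
          "scheduled_for": datetime.datetime(2010, 11, 4),
          "approved_by": "change-board",
          "emergency": False}
    assert may_proceed(cr)

The point isn't the code; it's that the rule is mechanical, so nobody gets to argue about it at 2 AM.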

Of course many companies will not want to shoulder the up-front cost of a decent change-management strategy. But at some point they're going to struggle to manage a large volume of changes because they never worked out how to do it earlier... and the job will be much harder then.

If you hate oversight and review of your work, find a different career or start your own company.

Comment Re:Depends on what kind of people you want to attr (Score 2) 433

A very talented, and very honest person will not put up with layers of approvals and constant monitoring.

Have you actually worked at a company composed of very talented, very honest people who put up with this very thing every day? Setting up an ITIL-compliant change-management system -- and getting everybody on board with using it -- is a daunting process. Speaking as someone who has been on both ends of this, I can say that in the end it's worth it. My day as a sysadmin is no longer all about putting out fire after fire, dashing around and pulling crazy hours at the whims of vice-presidents who think the latest thing they heard about is the "highest priority".

Work is scheduled, executed, and followed up on. It's not perfect, but requiring approval from the stakeholders prior to making changes has been a HUGE improvement in my quality-of-life and that of my fellow admins. The principal cost is manpower: I spend far more time managing changes than I used to.

It bugs me when people equate "managing change" with "being a bureaucrat". The process of getting to the point that we could manage and track all our changes was a royal pain in the butt. Five years later, even if you factor in the time-cost of documenting all changes to all systems, we're running more efficiently than before.

I've played the game of being the chief sysadmin in a startup before. Heroic effort, hectic schedules, and obscene hours at low pay aren't what I'm interested in anymore. And truthfully, the average quality of sysadmin I work with in a Fortune 500 company is head-and-shoulders above the admins I worked with before. There's a minimum standard we expect, and if you can't hack it, you're out.

That minimum standard includes knowing that any change you make as root on a system will be monitored, catalogued, and may be subject to a later Root Cause Analysis (RCA). Learn to behave ethically, carefully, and competently at all times when you're working as root, and it's no big deal. In truth, I'm GLAD for the monitoring. During any audit -- and I've been through many! -- I can point to the service request number, change request number, and approvals for the work I performed. It's good pay, my ass is always covered when I make important changes, and I get to work on machines in a data center that makes the warehouse at the end of Raiders of the Lost Ark look small. What's not to love?

Comment Re:Powerbroker & logging (Score 1) 433

So, EVERY KEYSTROKE is logged.

You've obviously never used PowerBroker before. Every keystroke is logged AFTER you've elevated your privileges. I've never once had to type a password after the PowerBroker authentication. If I need a different role, I have to drop out of the PowerBroker session and authenticate into the new role.

Security Monoculture - Single 'root' password, no matter how complex.

Nope, you've missed how it's managed. Every system gets its own unique random root password, which is changed every 30-90 days. Guess I wasn't clear enough in my original posting; my bad. I have to log into our root-password-sharing system and request the password for that specific system, and that request is logged. If audited, I need to provide the service request # I was working on.

No monoculture. Every system has its own unique root password.
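If it helps to see the shape of it, here's a toy Python sketch of that checkout flow -- unique per-host passwords plus a logged, SR-gated lookup. The names are mine, not the real system's:

    import secrets
    import string

    AUDIT_LOG = []   # in real life this lands on write-once storage
    VAULT = {}       # host -> its own unique random root password

    def set_unique_password(host, length=24):
        """Every system gets its own random root password: no monoculture."""
        alphabet = string.ascii_letters + string.digits
        VAULT[host] = "".join(secrets.choice(alphabet) for _ in range(length))

    def request_root_password(host, admin, service_request):
        """Hand out the password only with an SR number, and log the lookup."""
        if not service_request:
            raise PermissionError("a service request number is required")
        AUDIT_LOG.append({"host": host, "admin": admin, "sr": service_request})
        return VAULT[host]

    set_unique_password("db01.example.com")
    pw = request_root_password("db01.example.com", "some_admin", "SR-123456")

Compromise one password and you've compromised one box for, at most, 90 days.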

I wish you the best of luck with your solution, and applaud you for actually WORKING at it - that's rare, from what I've seen.

I'm part of a company of over 90,000 people. My little piece of the IT pie is quite tiny. I use the tools; I don't orchestrate them :)

Comment Re:Powerbroker & logging (Score 4, Informative) 433

You've tossed out a few red herrings and a couple of valid points. I'll try to address them in order.

this tells me that there is somebody that holds access above the other users, basically missing the point here.

No, I haven't missed the point at all. The point is to distribute the responsibility with sufficient checks in order to ensure that misbehavior will be caught and dealt with in a timely fashion. Is it possible someone could scheme up a way to slide abuses past the admins? Of course it is. But between good backups, fascistic logging, role-based access control, and routine audits by the change control committee, the risk is minimized.

There's no one person who holds the "keys to the kingdom". No critical data is stored on the machines themselves; it's all stored on centralized storage. The folks who admin the automated root password changes don't have any access to storage; the storage folks typically don't have any access to the systems.

Again, that means that there's somebody administering the logging system, and I almost assure you that even if their logins are listed somewhere, they have full access to remove those entries and make it look like it never happened.

Incorrect. I didn't cover this in my original post, but logs are (and should be) stored on write-once media. You can designate volumes on modern storage systems so that, once written, they can never be altered without destroying the entire volume. We use this extensively.
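The write-once guarantee itself comes from the storage hardware, but you can get similar tamper-evidence in plain software with a hash-chained log, where each record carries the hash of its predecessor. A toy Python illustration (my own sketch, not how any particular product stores its logs):

    import hashlib
    import json

    GENESIS = "0" * 64

    def append_record(chain, entry):
        """Each record hashes its predecessor, so editing any earlier
        record breaks every hash after it."""
        prev = chain[-1]["hash"] if chain else GENESIS
        payload = json.dumps(entry, sort_keys=True)
        digest = hashlib.sha256((prev + payload).encode()).hexdigest()
        chain.append({"entry": entry, "prev": prev, "hash": digest})

    def chain_is_intact(chain):
        prev = GENESIS
        for rec in chain:
            payload = json.dumps(rec["entry"], sort_keys=True)
            digest = hashlib.sha256((prev + payload).encode()).hexdigest()
            if rec["prev"] != prev or rec["hash"] != digest:
                return False
            prev = rec["hash"]
        return True

Your hypothetical corrupt log admin can delete entries, but with a chained log he can't do it without the deletion being detectable.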

say I have a machine that stores credit card numbers on a DSS approved network that's locked down in the ways you describe above. at the admin level, it would take me minutes to provision a machine to replicate the target. I don't mean replicate as in contents, I mean replicate to the network view.

Once again, distributed access can prevent this. The network team and the sysadm team aren't the same teams. Every port on your switch is disabled until it's enabled by the network team. Even once enabled, that port must be on the same VLAN as the hypothetical credit-card storage system.

That's once again where fascistic logging and automated reporting come into play. If a port is disconnected, unless a host has been blacked out with an appropriate change control ticket filed, the port disconnection generates an immediate Priority 1 service request to investigate.

If a drive is removed from centralized storage, that also generates an immediate P1 ticket. The sysadm's access would have been logged the moment he swiped his badge, and cameras throughout the data center capture the switch-over.
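The port-disconnect rule is mechanical enough to sketch, too. Here's a toy Python handler -- the event and ticket shapes are my own invention, and open_p1 stands in for the real ticketing call:

    def handle_link_down(event, blackout_tickets, open_p1):
        """Raise a P1 unless an approved change ticket blacks out this
        host for the window covering the event."""
        covered = any(
            t["host"] == event["host"] and t["approved"]
            and t["start"] <= event["time"] <= t["end"]
            for t in blackout_tickets
        )
        if not covered:
            open_p1("unexpected link-down on %(host)s port %(port)s" % event)

    # No blackout ticket on file, so this raises a (printed) P1:
    handle_link_down(
        {"host": "db01", "port": 7, "time": 1234567890},
        blackout_tickets=[],
        open_p1=print,  # stand-in for the real ticketing system
    )

The same pattern covers the drive-pull case: physical event, check for an approved ticket, escalate if there isn't one.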

A corrupt admin can do a lot of damage, I admit. There's no getting around it. But with sufficient logging -- and yes, I include physical surveillance as "logging" too -- they're not going to get away with it.

the replicated machine can be tunneled into place and act as if it was the machine in question.

Now this is the red herring. If you've ever done ANYTHING major with credit cards in a data center, you are aware that you're subject to yearly audits of your infrastructure by Payment Services. They do a deep-dive of your systems to enforce a huge number of requirements. I can't go into it here. It literally fills a large book, and they go over it line-by-line with all the admins involved, every single year. I've been through several of these, and each year it gets broadened to cover more potential issues.

Chief among these requirements? A separate admin/management network from the front-end/back-end network. You can't "tunnel in" to that network and make it "act like" another system. The network is an unroutable private VLAN or fibre-channel connection.

at this point, I can reverse firewall the unit, preventing it from calling for help or reporting the changes I make. I can snapshot the drive and move it offsite

Yet another red herring. Databases must be encrypted in order to pass payment services audits, as must the backups. Maybe you could grab the encryption key, but then you'd have to pull a whole shelf off of an EMC or NetApp storage unit, with intimate knowledge of the storage layout. Once again, separation of responsibilities protects you here.

And if you tried to walk out of our Tier 4 data center with a spare hard drive or three in your bag, you'd get locked up. I can't go into all of the specific precautions, but some are common knowledge: metal detectors, mass-change mantraps, retinal scanners, and non-random (read: everybody gets one) bag searches.

Until CPU's are made to understand the "two key" approach to authentication, any machine will be susceptible to weak physical security.

Agreed. That's why any Payment Services machines have strict physical security requirements, and your hypothetical credit-card-data situation is a red herring. A system that would allow an admin to VPN in and "tunnel" a virtual machine as part of the payment services cluster would never pass an audit.

On the other hand, every year they revise the handbook for the audits because of the new exploits performed in the past year. So you're always playing catch-up with the crooks. I don't know a way to reverse that role. If you figure out how to always stay one step ahead of the bad guys, would you please tell the TSA so they don't touch my junk next time I fly?

Comment Powerbroker & logging (Score 4, Informative) 433

We have several solutions which work together to minimize the risk of root at my company:

1. PowerBroker. It's in use on every single UNIX system administered by our Global IT teams. Every user has a role (or several roles), and that allows them to execute a variety of commands with elevated privileges. Once PowerBroker is invoked, however, every single keystroke is logged and can be played back. These logs are stored indefinitely; access is very restricted.

2. Automated, centralized root password management. One of the steps in setting up a UNIX machine here is ensuring the root password and remote console admin passwords match those dictated by our automated provisioning system. Then every 30-90 days (depending on policy for that type of system) the root password is changed to a very long, apparently very random string. I can look this password up if my role allows it, but the lookup is also logged.

3. A good Change Request (CR) process. Every system that exists in a data center should have a record in our systems database. Once a system has passed through the phases of deployment (Warehouse -> Data Center Install -> Sysadm Configure -> Deployed) any change made to the system must be requested and approved by the owners of the system. This approval is logged, and the date/time of the work is also logged. Sysadms must close service requests within the time window specified by the CR, or apply for an extension or reschedule if they're unable to complete it within the allotted time.
    The downside to this is that you lose a fair number of sysadmin work-hours to filing and managing change requests. However, this loss of efficiency -- IMHO -- is better than the mayhem that ensues without an organized change process.

4. Automated forensic tools to monitor changes. Information overload is a real risk with any Tripwire-style system, though. We're still working out some of the kinks in this part of the system. Once we ensure that all the normal changes from routine operation and scheduled maintenance are excluded, this will be the fourth leg in reducing the risk of super-user privileges. (A toy sketch of the baseline-comparison idea follows this list.)
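Here's that sketch of the baseline-comparison idea from item 4 -- hash everything outside an exclusion list, then diff two snapshots. Illustrative Python only, not the actual tool we run, and the exclusion patterns are just examples:

    import fnmatch
    import hashlib
    import os

    # Paths expected to change in normal operation get excluded up front,
    # which is exactly the "kinks" part mentioned above.
    EXCLUDE = ["/var/log/*", "/var/run/*", "/tmp/*"]

    def snapshot(root):
        """Hash every file under root that isn't on the exclusion list."""
        baseline = {}
        for dirpath, _, files in os.walk(root):
            for name in files:
                path = os.path.join(dirpath, name)
                if any(fnmatch.fnmatch(path, pat) for pat in EXCLUDE):
                    continue
                with open(path, "rb") as f:
                    baseline[path] = hashlib.sha256(f.read()).hexdigest()
        return baseline

    def unauthorized_changes(old, new):
        """Paths added, removed, or modified between two snapshots."""
        return sorted(p for p in set(old) | set(new)
                      if old.get(p) != new.get(p))

Anything left in unauthorized_changes() that doesn't match an approved change request is what generates a ticket.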

At any company, IT must find a balance between controlling user actions and monitoring those actions. In most cases, the easiest approach is to prohibit by policy only those things that might typically result in lawsuits, but monitor everything else to the best of your ability. Combining a Powerbroker-like product with automated root password management -- both with fascistic logging -- is a reasonable approach that works well for many large companies. Combine this with a change management system, and a forensic tool to automatically monitor and notify of unauthorized changes, and super-user isn't really all that big of a concern.

Comment Re:Of course (Score 2) 945

Before the their/there/they're your/you're cops arrive, allow me to state that I intended to state "who gets there first?" as my final rhetorical question, not "who gets their first?". I've no excuse save that I'm distracted by my full-time job :)

Comment Re:Of course (Score 3, Interesting) 945

I still own and drive my 2001 Honda Insight daily. Its top speed is 113MPH -- that's when the governor kicks in -- and it is truly scary, particularly when the governor kicks in: suddenly the IMA whines as it goes into recharge mode at a speed way past what it expects; then, since your foot is still to the floor, it accelerates the car as hard as it can (which isn't much) until the governor cuts off gas to the motor again.

In truth, about 90MPH is when the Insight gets pretty scary to drive. Ultimately, it's an all-aluminum, incredibly-efficient econo-box that can get out-accelerated by my wife's minivan.

I have never had a problem with freeway speeds (75MPH) in the Insight. The only time it gets scary at that kind of speed is when the road is grooved, like for construction, or uneven. Then the offset between the front and rear tires comes into play, and the car will kind of shimmy around a little in the lane.

So to sum up:
* Max speed of 55MPH? B.S. I've owned my 2001 Insight for nine years now, and drive faster than this all the time.
* Max speed of 120MPH? B.S., by about 7MPH. My sole experiment showed the governor kicking in reliably at 113MPH (which, by the way, is the max rated speed of the tires).
* Anemic performance? Damn right! My automatic-transmission hybrid still averages better than 50MPG... and that's running larger, grippier tires than stock. I don't mind getting out-accelerated by trucks at stoplights; I'm playing the high-mileage game, not "who gets their first?".

Comment Re:engineering != rhetorical bile (Score 2, Interesting) 177

Now, one can say that their customers are stupid, and Oracle is milking them by offering a product of little or no additional value. Or one can say that Oracle is trying to milk the Linux cash cow by attaching their name to what's effectively a rebranded existing Linux distro. One can also say that their execution is incomplete or poor. But by no means would such a product be useless.

Or one could say that Oracle Enterprise Linux fulfills its role: an Oracle-controlled software platform that lets the Oracle kernel folks have their say about how a stock configuration should look to better run Oracle databases and middleware. Which, to me, is entirely the point of the distribution. Oracle undercuts the Redhat price for support, keeps more of the profit, and guarantees the OS will do what it needs to do.

I support some 2,000 physical Linux machines, and of those, the vast majority are running OEL or Oracle Virtual Server (a Xen-based product). By and large, the stock configurations work perfectly for us -- with some tweaking for RAC, and tuning for the memory/CPU configuration of the box, of course -- while I cannot say the same for our Redhat instances.

Comment New Sun Hardware Requires New Kernel Version (Score 1) 177

The article misses the point. The key pitch here is that you need to be running Oracle Enterprise Linux 5 Update 6 or newer to run on the latest generation of Sun x86 hardware. It's a big deal inside Oracle because Oracle wants to be running on Oracle hardware, but the x86 side is currently about 80% Dell in the Oracle data centers that weren't Sun acquisitions. There's a substantial hardware refresh effort inside the company right now, temporarily making Oracle one of Oracle's biggest hardware customers.

But this is part of a pitch to existing customers: run our OS and you have full hardware support TODAY. Run Redhat and you'll have it when they release Redhat 6, or if they decide to backport new hardware support into their kernel in a few months or years. The announcement is a statement that Oracle -- for the first time -- is taking the lead in releasing a newer kernel ahead of Redhat, rather than waiting for the Redhat release and then shipping the slightly-tweaked-for-Oracle version in Oracle Enterprise Linux. It's driven by hardware needs, and for at least several months it will be a selling point for customers wanting the latest and greatest: use OEL5u6 or Solaris, not Redhat, or else you won't leverage the new hardware features effectively (if at all).

I actually think the compatibility issue may just boil down to the SAS driver update to work with Sun's latest chipset. But it's a bit of a show-stopper if you're not running OEL5u6, since you can't even install the operating system without the SAS driver update.

Comment Re:Sad (Score 1) 234

I truly hope your experience [with the 7410] will be better.

As do I. We've experienced reasonable success in certification for our needs -- principally NFS, with a sprinkling of CIFS -- and are in a fairly unique position to leverage the platform and influence the direction of development. We're not just buying eight of these units. We're buying a few hundred of them...

Comment Re:Why the silence? (Score 1) 234

And this whole process only authorizes you to use the download for 90 days on an evaluation basis. Beyond that, you have to cut a cheque. And, as another poster mentioned, you don't get any patches either.

I've already debunked these assertions elsewhere in this thread.

1. Hobbyists and personal use were unaffected by the license changes; what is affected is COMMERCIAL production or development use beyond 90 days: http://developers.slashdot.org/comments.pl?sid=1719254&cid=32904884 . Summary: Oracle's never going to come bash in your front door to extort license charges for that virtual Solaris you're running on your laptop or on that Sparc III you fished out of the dumpster.

2. Critical security patches are available generally and immediately; usability and functionality upgrades are on a six-month release basis for non-paying customers: http://developers.slashdot.org/comments.pl?sid=1719254&cid=32904728

Just 'cuz a load of Anonymous Cowards on Slashdot keep saying you can't get patches for unpaid Solaris installs doesn't make it so!

Comment Re:Sad (Score 2, Interesting) 234

It's not that it wasn't up to the job, it's that the features weren't/aren't backported to Solaris (10) yet.

Right you are. I stand corrected. My main experience with the 7000-series storage devices comes from some training classes, followed by hands-on recently as we've received a few of the devices with many, many more back-ordered due to the global solid-state disk shortage.

Thanks for the clarification.

Root file systems on ZFS were originally OpenSolaris-only, but are now possible in recent updates of Solaris 10.

Yep. Happy day! I'm running Solaris that way right now (though typing this from my Ubuntu Linux box).

Comment Re:Why the silence? (Score 1) 234

That's FUD. You can still download and run Solaris for free. You only need to pay for it for commercial use.

If I didn't have several posts in this thread, I'd mod this "informative". The anti-Solaris FUD spread far and wide since the acquisition has been ridiculous.

Most of the hysteria, I think, springs from the transition adjustments. Oracle suddenly went from several years of an unwritten "we do not buy any Sun hardware" policy to being the #1 customer for Sun hardware. Every trade-in, every re-purpose, every non-end-of-lifed system that Sun/Oracle gets its hands on is being recertified and used internally. Oracle had a HUGE pent-up demand for hardware to get a large number of projects off the ground, and Sun is scrambling to provide it. There's a lot of uncertainty right now just because of the growing pains, but with a new $300M data center going up in Utah due to open in 2011 (with three more expansions already planned), and maxed-out data centers all over the world having their hardware refreshed, there's enough demand in the channel to backlog suppliers of everything from solid-state disks to capacitors.

Sun's scrambling to meet this huge demand -- plus orders from the many other companies that held off buying during the acquisition uncertainty -- with a staff reduced by those who left voluntarily or were let go during the acquisition. Oracle's spending out of the war chest, finally able to appease the hardware-hungry project base that had been stymied for years while the company hunted for a hardware vendor to buy.

There are, of course, growing pains. But the two companies, IMHO, are going to come out of this much stronger than before, and able to compete head-to-head in the IBM/ACN/HPQ/MSFT space.
