
Comment Re:Backing up user data on Linux (Score 1) 517

For a server it's different because each service has its own location for config and data, but if your job is to setup and manage the server then you should know what its running and where those services keep their data.

That's a great theory, but in the real world numerous people rely on servers that don't have a dedicated admin, so these things do matter, and "you should know everything about everything" isn't a terribly useful philosophy (leaving aside the often incomplete nature of documentation in the FOSS world, which can make it hard for even a competent and generally knowledgeable admin to actually know everything they need to here).

In this context, I'd take backing up user data and reinstalling Windows and its applications over backing up user data and reinstalling Linux and its applications any day of the week.

Comment Re:Security team (Score 1) 517

So they should never run scans because every time your computer is on you are using it?

The kind of entire system scan that slows everything down for an extended period? No, probably not. Those scans are mostly worthless from a security point of view, and have a high impact on the overall efficiency of the system.

They should never patch and just let well known vulnerabilities run amok because you don't want to be inconvenienced, either by having to leave your machine on or wait while patching happens?

Of course not. But we aren't talking about rolling out the approved updates across the organisation after Patch Tuesday or whatever we're calling it this month. We're talking about regular scanning that routinely interferes with normal use of the system.

You left them no choice by giving them no time that wasn't work time.

There are plenty of other choices, starting with having sensible security practices that don't routinely undermine systems at all. Close behind that is having a standard procedure for applying security updates in a timely fashion: one that allows for people being out of contact for extended periods, provides for notifying them of any urgent threats while they are away, and then gets them fully caught up when they return.

If the process of installing updates and perhaps rebooting a Windows box is itself taking so long that it can't be done in the background while someone is making a coffee, again you probably have bigger problems to deal with and need to consider whether the spec of your systems is good enough for what you need to do with them. But in the real world, this is almost never a problem in practice if you have a remotely sensible set-up.

Comment Re:Security team (Score 1) 517

That's how you see it, not how IT, nor Management, nor lots of other orgs see it.

Frankly, I think it's how responsible IT and smart Management see it as well, and I don't know what "other orgs" you mean so there's little to say there.

IT is a support function. The purpose of support functions is to support the primary functions of your business. Any time your support functions start undermining the primary functions, that should be robustly justified, or the people who want to do it should be told "no". It's really as simple as that.

As for your example scenario, that's the kind of foolishness that costs real businesses money all over the place. I bought some quite expensive household goods a little while ago, and as it happened we were just finishing up the paperwork at 8pm as the showroom "closed". The sales guy was incredibly apologetic about how he couldn't print the last form we had to sign -- the important one that guaranteed us the goods and them the sale -- because their central management system had gone off-line for something-or-other, and despite it being 8:01pm and him having a high-value customer waiting to complete a sale, there was nothing he could do.

As a direct result of the poor policy imposed on the local store by some genius in central IT, they were at risk of losing one of only a handful of sales they would have closed that entire day. In fact, if it had been one day later, they would have lost it, because we would have been on holiday and so unable to return the following day to finish everything off as we actually did. That is what management technically refers to as a "total screw up".

Actually, their IT systems generally were a disaster. On our first visit, there were several customers looking around at one point. However, it took so long to put a provisional order into their prehistoric computer system to get a proper quote (seriously, about an hour to do what should have taken maybe 5 minutes) that people were literally walking out after waiting half an hour to see the sales guy, who was tied up with another customer.

I can easily imagine, based on just those experiences, that dumping seven figures into building a modern IT system that could handle customer orders properly would increase their revenues by 25-50% indefinitely. It obviously wasn't a new or unique problem either: the sales guys on both occasions were genuinely apologetic but also had a well-rehearsed patter about how it happens sometimes and no-one ever fixes it.

Comment Re: Security team (Score 1) 517

To be fair, if you're dealing with the level of malware that can cover its tracks against that kind of investigation, and if that malware is already on your system but wasn't picked up on a previous scan, the game is already over anyway and you're well into complete reinstall and restore from back-ups territory. These days, with threats that can hide in other areas of the hardware/firmware to survive the wipe and reinstall process, I'd be wary of trusting even that in any highly security-sensitive environment.

Comment Re:Security team (Score 1) 517

I'm freelance these days, so I'm afraid I can't help. Sorry. :-)

One of my regular clients operates in this field, and seeing things done in a reasonable way reminds me of why I used to get so irritated when I did work as part of a large, bureaucratic institution. It's not magic. It's just being aware of modern tools and practices, and being willing to make the effort (and yes, sometimes, being willing to spend the money) to set up something that provides a useful degree of security but without making things so secure that you forget why you're there in the first place.

Given the potential costs of getting security wrong, I don't really understand why any organisation large enough to be facing these issues regularly wouldn't hire people who know what they're doing and provide a reasonable budget for them to deploy proper tools. I can only assume it's the usual suspects, probably some combination of ignorance and corporate politics.

Full disclosure: Obviously I make money from working for that client, and they make money in part from selling some of those tools, so I'm kinda sorta shilling here. But not really, because the cost of hiring smart people and giving them proper equipment vs. the cost of, say, a major regulatory investigation or having your whole sales team at the pub all day because they can't work... not exactly close.

Comment Re:Security team (Score 3) 517

They shouldn't be doing their work at home - which is what the GP said.

Oh, OK then. It's not like full- or even part-time telecommuting is one of the most advantageous perks offered by many modern workplaces in terms of productivity or staff morale, so I don't suppose the business will suffer too much. Should I also recall our entire sales force and tell them they can't work on customer sites any more?

In other news, please be aware that due to a change in company IT policy, next time you get paged at 4am because of a network alert, remote access will not be permitted for security reasons. Instead, you will be required to get up, spend 20 minutes driving to the office, log in from a properly authorised and physically connected terminal, type the same one CLI command you do every time that alert goes off to confirm that it's still just the sensor that is on the blink, type the same second CLI command you do every time to shut off the alarm, spend 20 minutes driving home again, and then go back to bed. Sleep tight.

Comment Re:Backing up user data on Linux (Score 1) 517

The only part I've found complex is finding out where and how various apps actually store their data, particularly when I don't really have much interest in the app.

In a sense, yes, the most important problem is that simple, but as you then demonstrated with things like the database example, "simple" and simple aren't always the same thing.

The other point I wanted to make is that your example presupposes that all of the packages you need are installed using your distro's package manager. In my experience that is rarely the case, and while there are tools like checkinstall that can help, the lack of any enforced installation conventions or protections against unexpected interactions in mainstream Linux distros means you are always vulnerable to certain nasty problems. Anyone's make install can nuke the output of anyone else's, and someone running a make uninstall that removes something another project assumed would be present can break that project. Even if you stick to distro-only packages, there is not always a guarantee of backward compatibility when moving to a new version of the distro.
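For what it's worth, the usual defence against that kind of clobbering is to give every source build its own prefix and only symlink it into the shared tree, which is roughly what GNU Stow does. Here's a minimal toy sketch of the idea (the stow directory layout and package name are illustrative assumptions on my part; real Stow handles many more edge cases):

```python
#!/usr/bin/env python3
"""Toy stow-style isolation for source builds: install each package into
its own prefix under /usr/local/stow (e.g. built with
./configure --prefix=/usr/local/stow/myapp-1.2), then symlink its contents
into /usr/local. A sketch only; GNU Stow does this properly."""
import os
from pathlib import Path

STOW_DIR = Path("/usr/local/stow")  # each package gets its own subtree here
TARGET = Path("/usr/local")         # the shared tree everyone fights over

def stow(package: str) -> None:
    pkg_root = STOW_DIR / package
    for subdir in pkg_root.iterdir():       # bin/, lib/, share/, ...
        if not subdir.is_dir():
            continue
        for item in subdir.iterdir():
            link = TARGET / subdir.name / item.name
            link.parent.mkdir(parents=True, exist_ok=True)
            if link.exists() or link.is_symlink():
                # Another package already owns this name: fail loudly
                # instead of silently nuking its files.
                raise FileExistsError(f"conflict: {link} already exists")
            link.symlink_to(item)

def unstow(package: str) -> None:
    pkg_root = STOW_DIR / package
    for subdir in pkg_root.iterdir():
        if not subdir.is_dir():
            continue
        for item in subdir.iterdir():
            link = TARGET / subdir.name / item.name
            # Only remove links that point back into this package's tree,
            # so nothing another package installed can be broken.
            if link.is_symlink() and os.readlink(link) == str(item):
                link.unlink()

if __name__ == "__main__":
    stow("myapp-1.2")  # hypothetical package directory
```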

To me, the fundamental problem here is that for the most part I want an OS foundation that is stable and robust, and other than security fixes I probably never want it to change for the lifetime of the system. On the other hand, I want to be able to install drivers for new hardware or protocols and of course new application software on top of that OS, and I want them to have a stable platform to run against and to be as independent as possible so swapping out one part of the system doesn't undermine any other parts. The current Linux ecosystem with its distro model does not promote that kind of separation and safety, unfortunately.

Comment Re:Security team (Score 2) 517

Until some drone with mapped server drives gets cryptolocker and gets everyone's files encrypted

If you have a network that is wide open to "drones with mapped server drives getting cryptolocker" and causing the entire organisation to lose a day of work, the kind of scheduled scans mentioned above probably aren't going to protect you anyway.

To defeat a threat like cryptolocker you need real-time measures to prevent it operating in the first place: proper scans on incoming mail and web downloads, internal firewalls, and so on. To limit the scope of the damage if cryptolocker manages to get in somehow anyway you need least privilege access controls on your internal systems. And to restore anything it does manage to get hold of, the most important thing is to have frequent back-ups with fast recovery procedures. Scheduling a system-wide full scan so your staff can't use their laptops for 15 minutes at 10am every day is not going to give you any of those protections.
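To make the back-up point concrete: even something as crude as timestamped snapshots with retention, written to storage that the client machines can't touch, does more against cryptolocker than any scheduled scan. A minimal sketch, with the share path, destination and retention count all hypothetical:

```python
#!/usr/bin/env python3
"""Timestamped snapshots of a shared area with simple retention. A sketch
only: SHARE, DEST and KEEP are hypothetical, and in real life DEST should
live on storage that ordinary (infectable) clients cannot write to."""
import tarfile
import time
from pathlib import Path

SHARE = Path("/srv/shared")       # hypothetical: the share to protect
DEST = Path("/backup/snapshots")  # must not be writable by client machines
KEEP = 14                         # number of snapshots to retain

def snapshot() -> Path:
    DEST.mkdir(parents=True, exist_ok=True)
    target = DEST / time.strftime("share-%Y%m%d-%H%M%S.tar.gz")
    with tarfile.open(target, "w:gz") as tar:
        tar.add(SHARE, arcname=SHARE.name)
    return target

def rotate() -> None:
    # Lexicographic sort works because the names are timestamped.
    snaps = sorted(DEST.glob("share-*.tar.gz"))
    for old in snaps[:-KEEP]:
        old.unlink()

if __name__ == "__main__":
    print(f"wrote {snapshot()}")
    rotate()
```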

Obviously there is always a risk of some disruption if IT are responding to an ongoing incident or recovering afterwards, but if you're routinely causing significant disruption to your entire staff then there are probably better ways to achieve the results you want.

Comment Re: Backing up user data on Linux (Score 1) 517

Your anecdotal experience does not make the problem any less real for others.

For a concrete example, I run a reasonably well-known web app on a few servers. It's written in Ruby and runs via Passenger. Between the official documentation and generally sensible tutorial/reference material on-line, I have literally seen four completely different recommendations about just where to install the related scripts, from directly under /var/www to places under /opt. As with many web applications, it also wants configuration files in certain places, often related to where those scripts are. Those configuration files should be properly backed up, and just like that, hosting a single web app without even installing any OS-level packages, you've got a real question about which areas of your filesystem contain data that needs to be backed up.
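What I've ended up doing is keeping an explicit audit of the candidate locations just to keep the back-up list honest. Something like this sketch, where every path is a hypothetical stand-in for one of those conflicting recommendations:

```python
#!/usr/bin/env python3
"""Report which candidate install/config locations for a web app actually
exist on this box, to keep the back-up list honest. Every path here is a
hypothetical example of the kind of location different tutorials suggest."""
from pathlib import Path

CANDIDATES = [
    "/var/www/myapp",                # one tutorial's recommendation
    "/opt/myapp",                    # another's
    "/etc/nginx/sites-available",    # Passenger config often lives here...
    "/etc/apache2/sites-available",  # ...or here, depending on web server
    "/var/lib/myapp",                # runtime data, if the app keeps any
]

if __name__ == "__main__":
    for cand in CANDIDATES:
        status = "BACK THIS UP" if Path(cand).exists() else "absent"
        print(f"{cand:32} {status}")
```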

Now scale that up to the number of applications and packages you might have installed on a traditional Linux server used for multiple purposes or multiple teams, and it's not hard to see why configuration management tools and running separate servers (or virtual servers) for each application have become the standard practice in corporate sysadmin and devops world.

Comment Time for some regulation? (Shock! Horror!) (Score 2) 517

It's hard to do actual research as an end user when you're talking about devices costing hundreds of bucks and a software environment that won't let you roll back if an "upgrade" renders your device effectively unusable. This is a very convenient situation for the device manufacturers and for the people who don't want to bother with things like backward compatibility and long-term support of their software, of course.

But count me in for at least half a dozen similar anecdotes among friends and family with various mobile devices, particularly the expensive ones like Apple/iOS and Samsung/Android phones and tablets.

I am increasingly of the view that there should be a certain degree of mandatory regulation in these industries: the commitment (or lack of it) to future-proofing such devices against software-related breakage must be clearly stated before purchase, and failure to do so should be automatic grounds for a refund if the device does then get bricked or otherwise rendered effectively useless. I am generally very wary of regulating software and liability issues, because of the difficulty in establishing objective standards for what is reasonable, but there is so much abuse in our industry now because of continual updates and built-in obsolescence that I'm starting to think consumer protection authorities should actively intervene.

Comment Re:Security team (Score 4, Insightful) 517

The security team runs the scans during the daytime because that's when everybody's laptop is powered on and connected to the network.

Coincidentally, the staff also do most of their work during the daytime.

Too many people shut off their machines at night, or carry their laptops home, so the scans won't reliably run if they do them then.

Yes, damn those idiots who take their laptop out of the office so they can actually do their jobs. Those crazy kids are messing everything up.

Seriously, if you have security policies that are interfering unreasonably with your staff's ability to do their jobs -- and dramatically slowing down their systems or causing disruptive behaviour like reboots during the working day does exactly that -- then you're doing it wrong. IT is there to help people do whatever it is you do, not the other way around.

Comment Backing up user data on Linux (Score 1) 517

With Linux and pretty much every other os, you back up the home directory and install over the top of the other partitions.

You and I have very different experiences of Linux-based systems, though admittedly I am mostly using Linux on servers rather than workstations, and really the problems are more about the distro/software running on top of Linux than Linux itself.

My experience of trying to back up a real world Linux system is that you start with backing up /home. Then you also figure out what you need to back up from other places, like /root, /etc, /opt and /var. Some of the configuration files in there will be automatically generated from others, but if you overlook any of the underlying ones, you'll be running at 640x480 forever or your RAID won't be as redundant as you thought. Some of the configuration data will be specific to the particular version of something you currently have installed, and the new version will fail to initialise properly after you've upgraded because it doesn't update the previous configuration completely and correctly without user intervention. Some of the executable code you run will be under those directories too, because web apps and scripting and interpreters.
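Concretely, you end up backing up an explicit manifest of paths rather than just /home. A minimal sketch of the idea; the manifest below is illustrative, not exhaustive, and what actually belongs in it depends on what the particular box is running:

```python
#!/usr/bin/env python3
"""Back up an explicit manifest of paths rather than just /home. The list
below is an illustrative assumption, not a recommendation; which paths
matter depends entirely on the services this box runs."""
import tarfile
import time

MANIFEST = ["/home", "/root", "/etc", "/opt", "/var", "/usr/local"]

def backup(paths, dest_dir="/backup"):
    name = time.strftime(f"{dest_dir}/system-%Y%m%d.tar.gz")
    with tarfile.open(name, "w:gz") as tar:
        for path in paths:
            try:
                tar.add(path)
            except FileNotFoundError:
                # Not every box has every path; note it and move on.
                print(f"skipping missing path: {path}")
    return name

if __name__ == "__main__":
    print(f"wrote {backup(MANIFEST)}")
```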

And that's just with standard applications that are provided with your distro. $DEITY help you if you want to install anything else or need to build anything from source, because no-one else is going to. Try not to allow too many breaking conflicts under /etc or /usr/local, where there are essentially no naming conventions and everything just gets a short/abbreviated name and goes into the global namespace. Oh, never mind, we forgot to add the important things under /usr/local/somedirectorymylastdistrodidntevenhave to the back-up scripts anyway.

And then you upgrade your distro to the next major revision, because the price of OS stability in the Linux ecosystem is falling behind with all your applications as well, and... Well, in my entire career, across different organisations and with different teams of sysadmins, I can probably count the number of completely smooth major distro upgrades I've seen on no hands.

On the server side, I now see a lot of "one install only" policies: the expectation of success with any in-place update process is so low that the standard MO is to set up a new clean machine with the new software required, figure out how to migrate specific configuration and data for the essential applications from the old system to the new one, and then retire/reformat the old machine. Even then, the actual applications and packages installed are tightly controlled; there is an entire industry these days making tools like Puppet or Chef or Ansible because trying to manage these things manually on modern Linux systems is crazy, and making any local changes to standard configurations is frowned upon.

Personally I prefer to run Windows for my main workstations for various reasons, but I work with several colleagues who prefer to run Linux workstations, and they seem to run into analogous problems with end user/client applications too.
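One small piece of that clean-machine migration workflow, at least on Debian-family systems (an assumption on my part), is capturing the package selections from the old box so they can be replayed onto the new one:

```python
#!/usr/bin/env python3
"""Capture package selections from the old box so a clean replacement can
be rebuilt rather than upgraded in place. Debian-family assumption; restore
on the new machine with:
    dpkg --set-selections < selections.txt && apt-get dselect-upgrade"""
import subprocess
from pathlib import Path

def save_selections(dest: str = "selections.txt") -> None:
    out = subprocess.run(
        ["dpkg", "--get-selections"],
        check=True, capture_output=True, text=True,
    ).stdout
    Path(dest).write_text(out)
    print(f"saved {len(out.splitlines())} package selections to {dest}")

if __name__ == "__main__":
    save_selections()
```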

Linux is great in many respects, but with most popular Linux distros, having a clean filesystem structure and code/config/data set-up are not among them. Maintaining most real world Linux-based systems is absurdly complicated as a direct result.

Comment The recent UK general election polling (Score 2) 292

Just about everyone in the polling industry was significantly off-base in the recent UK general election. Literally no-one in the mainstream was calling the actual result in the run-up to election day, as far as I know. The debate was all about who would be leading a coalition and how the electoral math would stack up to determine which parties would be likely to join. Even the party leaders changed their tune in the last days of the campaign to reflect an assumption that they wouldn't be governing alone and that who they governed with would be a significant question.

Ironically, winning an unexpected absolute majority may have left David Cameron and the Conservative Party leadership in a bit of a bind. I suspect some of the policies they were promoting before the election were things they didn't really want to do but advocated for popularity reasons, hoping that after the election they would be able to "reluctantly" negotiate away some of those commitments as part of a coalition agreement. Similarly, some of their more unpopular policies now won't have a partner party or two to act as scapegoats next time if those policies don't work out well. Given that their working majority is also very small, which leaves the leadership very vulnerable to disruption by rebel MPs on controversial issues such as Europe, they might actually have been better off leading a strong coalition than winning outright. The pollsters and commentators and political journalists didn't consider any of these issues in much detail in their pre-election coverage, if they even acknowledged the possibilities at all.
