Ah, I see. I think I have erased from my memory the time when my daily commute was measured in hours -- I could only stand it for a few years early in my career and then gave up and moved to a job closer to home. I do sympathise, though.
Only the first few times. Then it's just annoying.
Where I grew up, it was the same but the dick contest was with off road vehicles. The biggest, shiniest off road vehicle they could afford with the largest tires. They never went off road, but that wasn't the point.
SOME guys went off road - but their trucks looked like they had rolled over (and they had).
SuperDracos are for escape in flight too, including at and past max Q. But they are on Crew Dragon, not Cargo Dragon. Cargo Dragon did not carry a crew and wasn't programmed to save itself.
There's good reason to be skeptical of rules. Too often, rules are not honest. The usual tactic is to not give any explanation. When that won't fly, safety is the #1 excuse for a rule. But so often, it turns out that someone profits from a rule, and that is the real reason for it. Even when there are genuine safety concerns, there is often also a profit motive. That seems highly likely with this particular Disney rule. Why couldn't people use electronic devices or carry nail clippers on planes? Why did so many cities try red light cameras? Why can't people bring their own food and drink to the movie theaters? Why can't we play movies on our computers' DVD drives?
Yeah. Don't blindly trust The Rules.
For a server it's different because each service has its own location for config and data, but if your job is to set up and manage the server then you should know what it's running and where those services keep their data.
That's a great theory, but in the real world numerous people rely on servers that don't have a dedicated admin, so these things do matter and "You should know everything about everything" isn't a terribly useful philosophy (leaving aside the often incomplete nature of documentation in FOSS world, which can make it hard for even a competent and generally knowledgeable admin to actually know everything they need to here).
In this context, I'd take backing up user data and reinstalling Windows and its applications over backing up user data and reinstalling Linux and its applications any day of the week.
So they should never run scans because every time your computer is on you are using it?
The kind of entire system scan that slows everything down for an extended period? No, probably not. Those scans are mostly worthless from a security point of view, and have a high impact on the overall efficiency of the system.
They should never patch and just let well known vulnerabilities run amok because you don't want to be inconvenienced, either by having to leave your machine on or wait while patching happens?
Of course not. But we aren't talking about rolling out the approved updates across the organisation after Patch Tuesday or whatever we're calling it this month. We're talking about regular scanning that routinely interferes with normal use of the system.
You left them no choice by giving them no time that wasn't work time.
There are plenty of other choices, starting with having sensible security practices that don't routinely undermine systems at all, and closely followed by having a standard procedure for applying security updates in a timely fashion that allows for things like people being out of contact for extended periods and provides for notifying them of any urgent threats while they are away and then getting them fully caught up when they return.
If the process of installing updates and perhaps a reboot on a Windows box is itself taking so long that it can't be done in the background while someone is making a coffee, again you probably have bigger problems to deal with and need to consider whether the spec of your systems is good enough for what you need to do with them. But in the real world, this is almost never a problem in practice if you have a remotely sensible set-up.
That's how you see it, not how IT, nor Management, nor lots of other orgs see it.
Frankly, I think it's how responsible IT and smart Management see it as well, and I don't know what "other orgs" you mean so there's little to say there.
IT is a support function. The purpose of support functions is to support the primary functions of your business. Any time your support functions start undermining the primary functions, that should be robustly justified, or the people who want to do it should be told "no". It's really as simple as that.
As for your example scenario, that's the kind of foolishness that costs real businesses money all over the place. I bought some quite expensive household goods a little while ago, and as it happened we were just finishing up the paperwork at 8pm as the showroom "closed". The sales guy was incredibly apologetic about how he couldn't print the last form we had to sign -- which was the important one that guaranteed us the goods and them the sale -- because their central management system went off-line for something-or-other and despite it being 8:01pm and him having a high value customer waiting to complete a sale, he couldn't.
As a direct result of the poor policy imposed on the local store by some genius in central IT, they were at risk of losing one of only a few final sales they would have made that entire day; in fact, if it had been one day later, they would have done, because we would have been on holiday and so not able to return the following day to finish everything off as we actually did. That is what management technically refers to as a "total screw up".
Actually, their IT systems generally were a disaster. On our first visit, they had multiple people looking around at one point. However, it took so long to put a provisional order into their prehistoric computer system to get a proper quote (seriously, like an hour to do what should have been maybe 5 minutes) that people were literally walking out after waiting half an hour to see the sales guy who was tied up with the other customer.
I can easily imagine based on just those experiences that dumping seven figures into building a modern IT system that could handle customer orders properly would increase their revenues by 25-50% indefinitely. It obviously wasn't a new or unique problem, as the sales guys on both occasions seemed both genuinely apologetic but also had a well-rehearsed patter for how it happens sometimes but no-one ever fixes it.
Sure. Perhaps you've heard of bigamy? Alice can't marry Carol because Bob already has a vested marital interest with Alice. For example, if Alice marries Carol and dies, Carol is entitled to 100% of her assets as spouse. But so is Bob.
That's not the policy rationale for the prohibition on bigamy, and while it is perhaps a slightly better reason than administrative convenience, it boils down to the same thing, since the question of marital property is one of the issues that legislatures will have to address when the ban is overturned, as it inevitably will be.
On the contrary, tradition is absolutely relevant as to whether something is a fundamental right. Marriage is a fundamental right because it's enshrined in our traditions and collective conscience.
Polygamy does not have such a place in our traditions or collective conscience, and therefore is not a fundamental right.
Yep, that's the bullshit argument that people were rolling out against same sex marriage all right. That because it wasn't traditional, it wasn't fundamental.
The core mistake with that argument, whether in the context of same sex marriage or marriage among persons already married, or in larger numbers than two, is that what's fundamental is not opposite sex marriage, or same sex marriage, or polygamous marriage, but simply marriage, without qualification of any kind.
Issues like gender, race, consanguinity, marital status, and number of spouses are all restrictions on that singular fundamental right. Whether they stand hinges on whether they can be justified. Two of them, it transpires, cannot be. Ultimately I think the only restriction that will hold up will be consent, and perhaps consanguinity will have to be reframed in terms of consent if it's to be salvaged.
To be fair, if you're dealing with the level of malware that can cover its tracks against that kind of investigation, and if that malware is already on your system but wasn't picked up on a previous scan, the game is already over anyway and you're well into complete reinstall and restore from back-ups territory. These days, with threats that can hide in other areas of the hardware/firmware to survive the wipe and reinstall process, I'd be wary of trusting even that in any highly security-sensitive environment.
I'm freelance these days, so I'm afraid I can't help. Sorry.
One of my regular clients operates in this field, and seeing things done in a reasonable way reminds me of why I used to get so irritated when I did work as part of a large, bureaucratic institution. It's not magic. It's just being aware of modern tools and practices, and being willing to make the effort (and yes, sometimes, being willing to spend the money) to set up something that provides a useful degree of security but without making things so secure that you forget why you're there in the first place.
Given the potential costs of getting security wrong, I don't really understand why any organisation large enough to be facing these issues regularly wouldn't hire people who know what they're doing and provide a reasonable budget for them to deploy proper tools. I can only assume it's the usual suspects, probably some combination of ignorance and corporate politics.
Full disclosure: Obviously I make money from working for that client and they make money in part from selling some of those tools, so I'm kinda sorta shilling here. But not really, because really, the cost of hiring smart people and giving them proper equipment vs. the cost of say a major regulatory investigation or having your whole sales team at the pub all day because they can't work... not exactly close.
They shouldn't be doing their work at home - which is what the GP said.
Oh, OK then. It's not like full- or even part-time telecommuting is one of the most advantageous perks offered by many modern workplaces in terms of productivity or staff morale, so I don't suppose the business will suffer too much. Should I also recall our entire sales force and tell them they can't work on customer sites any more?
In other news, please be aware that due to a change in company IT policy, next time you get paged at 4am because of a network alert, remote access will not be permitted for security reasons. Instead, you will be required to get up, spend 20 minutes driving to the office, log in from a properly authorised and physically connected terminal, type the same one CLI command you do every time that alert goes off to confirm that it's still just the sensor that is on the blink, type the same second CLI command you do every time to shut off the alarm, spend 20 minutes driving home again, and then go back to bed. Sleep tight.
The only part I've found complex is finding out where and how various apps actually store their data, particularly when I don't really have much interest in the app.
In a sense, yes, the most important problem is that simple, but as you then demonstrated with things like the database example, "simple" and simple aren't always the same thing.
The other point I wanted to make is that your example presupposes that all of the packages you need are installed using your distro's package manager. In my experience that is rarely the case, and while there are tools like checkinstall that can help, the lack of any enforced installation conventions or protections against unexpected interactions in mainstream Linux distros means you are always vulnerable to certain nasty problems. Anyone's make install can probably nuke the output from anyone else's. Someone running a make uninstall that removes something that some other project assumed would be present can break the other project. Even if you stick to distro-only packages, there is not always a guarantee of backward compatibility when moving to a new version of the distro.
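The clobbering problem above can be made concrete with a small simulation. This is a sketch, not a real build system: the two "projects" and their file names are hypothetical, but it shows why two `make install` runs sharing one prefix can silently overwrite each other, and why per-package prefixes avoid it.

```python
import os
import tempfile

def fake_install(prefix: str, files: dict[str, str]) -> None:
    # Stand-in for "make install": writes each file under the prefix,
    # overwriting anything already there, just as a real install would.
    os.makedirs(prefix, exist_ok=True)
    for name, content in files.items():
        with open(os.path.join(prefix, name), "w") as f:
            f.write(content)

root = tempfile.mkdtemp()

# Shared prefix (like /usr/local): the second install clobbers the first.
shared = os.path.join(root, "usr-local")
fake_install(shared, {"libfoo.so": "project A's build"})
fake_install(shared, {"libfoo.so": "project B's build"})  # silent overwrite

# Per-package prefixes (the stow/checkinstall-style mitigation):
# each project keeps its own tree, so nothing is lost.
a_prefix = os.path.join(root, "opt", "project-a")
b_prefix = os.path.join(root, "opt", "project-b")
fake_install(a_prefix, {"libfoo.so": "project A's build"})
fake_install(b_prefix, {"libfoo.so": "project B's build"})

clobbered = open(os.path.join(shared, "libfoo.so")).read()
print(clobbered)  # prints: project B's build -- project A's copy is gone
```

Tools like GNU Stow formalise exactly this: install each package under its own directory, then symlink into the shared tree so conflicts become visible errors instead of silent overwrites.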
To me, the fundamental problem here is that for the most part I want an OS foundation that is stable and robust, and other than security fixes I probably never want it to change for the lifetime of the system. On the other hand, I want to be able to install drivers for new hardware or protocols and of course new application software on top of that OS, and I want them to have a stable platform to run against and to be as independent as possible so swapping out one part of the system doesn't undermine any other parts. The current Linux ecosystem with its distro model does not promote that kind of separation and safety, unfortunately.
Until some drone with mapped server drives gets cryptolocker and gets everyone's files encrypted.
If you have a network that is wide open to "drones with mapped server drives getting cryptolocker" and causing the entire organisation to lose a day of work, the kind of scheduled scans mentioned above probably aren't going to protect you anyway.
To defeat a threat like cryptolocker you need real-time measures to prevent it operating in the first place: proper scans on incoming mail and web downloads, internal firewalls, and so on. To limit the scope of the damage if cryptolocker manages to get in somehow anyway you need least privilege access controls on your internal systems. And to restore anything it does manage to get hold of, the most important thing is to have frequent back-ups with fast recovery procedures. Scheduling a system-wide full scan so your staff can't use their laptops for 15 minutes at 10am every day is not going to give you any of those protections.
Obviously there is always a risk of some disruption if IT are responding to an ongoing incident or recovering afterwards, but if you're routinely causing significant disruption to your entire staff then there are probably better ways to achieve the results you want.
Most of us do have a need to transmit messages privately. Do you not make any online purchases?
Yes, but those have to use public-key encryption. I am sure of my one-time-pad encryption because it's just exclusive-OR with the data, and I am sure that my diode noise is really random and there is no way for anyone else to predict or duplicate it. I cannot extend the same degree of surety to public-key encryption. The software is complex, the math is hard to understand, and it all depends on the assumption that some algorithms are difficult to reverse, which might not be true.
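For what it's worth, the whole scheme described above fits in a few lines. This is a minimal sketch: `os.urandom` stands in for the hardware diode-noise source (an assumption; a real one-time pad needs true randomness, a pad at least as long as the message, and strict single use).

```python
import os

def xor_bytes(data: bytes, pad: bytes) -> bytes:
    # One-time pad: ciphertext is simply data XOR pad, byte by byte.
    if len(pad) < len(data):
        raise ValueError("pad must be at least as long as the data")
    return bytes(d ^ p for d, p in zip(data, pad))

message = b"attack at dawn"
pad = os.urandom(len(message))  # stand-in for true diode noise
ciphertext = xor_bytes(message, pad)

# XOR is its own inverse, so applying the same pad again decrypts.
recovered = xor_bytes(ciphertext, pad)
assert recovered == message
```

The security argument is exactly the simplicity: with a truly random, never-reused pad, every plaintext of the same length is equally consistent with the ciphertext, so there is nothing to "reverse" at all.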