Whenever I find myself needing to manage a group of younger dudes, I look around for some big problem they've been stuck on. And then I solve it, while lecturing on the context of how software like that has been built in various decades. Once someone has watched you quietly take out software enemy #1 on a project, they stop trying to mess with you on their reports.
40 is old for a software developer. Someone who is 40 today entered college just as web browsers were being invented. You could not just connect the dots on library calls to put together an application then. Now you can.
I have a strong sense of wanting to know how things work that comes from having built a lot of software in the 80's and 90's, when you had to know the internals to make progress. That is downright counterproductive in web development now. By the time you learn enough to understand how a library works, the developer who just learned enough to use it already shipped their code. That's the sort of disconnect between age ranges at work now.
Anyone who reads to the end should realize this is a joke even without noting the date: "We can add a kernel later on, following the GNU/Hurd’s successful
You're right, these kids need more paranoia.
I'm a polite Canadian
There's another kind?
The main benefit of using a browser-hosted editor is that you don't have to install (and maintain, and update, etc.) a dedicated editor/word processor. You just go to the (possibly local) web page where the editor is hosted.
When you store your document in the cloud, the main benefits are automatic off-site backups, documents you can reach from anywhere, and collaborative editing (again, without installing any additional software for it). More fundamentally, you don't have to figure out how to convey the document to the other person. No more e-mailing documents around and then having to e-mail again after each update. Just share a link to it instead, and people will always come to the latest version.
Using the same editor as the other person doesn't always help. As you pointed out, just using a different printer will give you a file that renders differently on two systems.
The whole layout model used by Word and OpenOffice is fundamentally broken. You can either allow people to place text and graphics at fixed locations on the page, or you can be compatible with multiple printers. It's impossible to do both at once. Printers don't even share a model of what counts as the printable part of the page, and that's just the most obvious layer of issues here.
The only way to have a document that can be edited on multiple machines and then print well everywhere is to use a markup language instead of a fixed-position word processor. I use ReST, Markdown, and Asciidoc for most of the documentation I write nowadays. I can then export into one of these brain-dead formats when needed. ODF just standardizes on the fundamentally broken model. The standard itself is so epically sized and full of ambiguous language that the odds are low any two programs will render ODF into the same page layout.
I know exactly when it started. I lost my first set of computer equipment at work due to electrical issues in 1990. The capacitor plague era was not a disruptive event. All of those issues were already around--as long as capacitors have existed, they have been failing like that--they just became a lot more likely during that period.
I assume everyone's data is important to them. Apparently you do not. You can't expect to be taken seriously on this topic with that attitude.
Yes, in most shells, kill is a built-in function that doesn't actually run the kill binary. It's not required by the UNIX specification though, so only having the binary is just fine; it's certainly not crazy pants for a UNIX system to run without a built-in shell kill.
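You can see which form of kill your shell will actually run. This is a minimal sketch; the exact wording of the output varies by shell, and the binary's path varies by system:

```shell
# Ask the shell how it resolves `kill`; most shells report a built-in.
type kill

# The name the shell will resolve when you type `kill`.
command -v kill

# The standalone binary, if one is installed at a conventional path.
ls /bin/kill /usr/bin/kill 2>/dev/null || true
```

Going through `env kill` or an explicit path like `/bin/kill` bypasses the built-in and forces the external binary, which is one way to test behavior on a system without a shell built-in.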
Regardless, the ps you may need to run to find the process usually is not a built-in; it will spawn a new binary instead. So the problem of
None of the incidents I alluded to were caused by bad capacitors; most of them happened before the capacitor plague really got started. It's very dangerous to assume that because a source of a problem has been identified, that class of problem will never happen again.
"Good enough" is a fuzzy term that doesn't mean anything. Not plugged in is statistically safer than plugged in. You can care about your data and try to maximize its survival, or you can be overconfident that you know how things are going to die and ignore some good practices. Confidence won't save your data though. Paranoia can.
The best part is that since the deletion normally runs alphabetically, one of the first files taken out is
If it's plugged in, there's a significant class of failures where the computer dies and it takes out everything attached to it. Electrical surges can do that from both the power supply and the network side. And the (presumably) USB port the drive is attached to might fail in a spectacular way, one that damages the connected drive.
This is not simply paranoia--I have seen all three of these things happen. (I was running two southern NYC data centers in 2001, so I've seen more than a few really unusual hardware explosions) Any backup that's not electrically isolated is at a higher risk than it should be.
Checking if STEAMROOT is an empty string is a good start, but it's still not enough. Anything that's unleashing something as dangerous as "rm -rf" should do a serious sanity check first. Looking at the text name of the directory, seeing if it's really a directory, or seeing if you can cd into it (and the output from pwd still matches) are all useful checks. But you will still find edge cases where they do terrible things in the real world.
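A sketch of what those layered sanity checks can look like in practice. This is a hypothetical helper, not Steam's actual script; the checks and messages are illustrative:

```shell
# Hypothetical guard around "rm -rf": refuse anything that fails a
# series of sanity checks before deleting a directory tree.
safe_rm_dir() {
    target=$1

    # 1. Refuse empty or obviously catastrophic values outright.
    case "$target" in
        ""|"/"|"$HOME") echo "refusing to remove '$target'" >&2; return 1 ;;
    esac

    # 2. It must be a real directory, not a symlink to one.
    if [ ! -d "$target" ] || [ -L "$target" ]; then
        echo "'$target' is not a plain directory" >&2; return 1
    fi

    # 3. cd into it and confirm pwd still agrees before deleting.
    ( cd "$target" && [ "$(pwd -P)" = "$target" ] ) || {
        echo "pwd mismatch for '$target'" >&2; return 1
    }

    rm -rf -- "$target"
}
```

Even all of this only rejects inputs that look wrong; as the next example shows, the edge cases push you toward positively confirming the contents are what you expect.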
As an example of something more robust, PostgreSQL does what it can to deal with this problem by having a file named PG_VERSION in every installed database directory tree. All utilities that do something scary take the directory provided and check to see if there's a PG_VERSION file in there. If not, they abort, saying that the expected structure isn't there. Everything less complicated than that occasionally ate people's files. A common source of trouble for database servers was a race condition against an NFS mount, where the mount showed up in the middle of the script running.
When you stare at that sort of problem long enough, you realize that no check for whether your incoming data is sensible is good enough. Before wiping out files in particular, you must look for a positive match instead: an "I see exactly the data I expect" test of the directory tree. Even the level of paranoia in Postgres is still not good enough in one case: it can wipe things if you run the new database initialization step and hit one of those mount race conditions. For that reason, the database initialization setup is never run in the init scripts anymore, no matter how many complaints we get that it should be automatic.
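The sentinel-file pattern described above can be sketched in a few lines. MYAPP_VERSION here is a made-up sentinel name standing in for PostgreSQL's PG_VERSION, and the helper is hypothetical:

```shell
# Sentinel-file pattern: only delete a data directory after a positive
# "this is exactly the structure I expect" match, never on absence of
# obvious problems. MYAPP_VERSION is an illustrative sentinel name.
wipe_data_dir() {
    dir=$1
    if [ ! -f "$dir/MYAPP_VERSION" ]; then
        echo "no MYAPP_VERSION in '$dir'; refusing to touch it" >&2
        return 1
    fi
    rm -rf -- "$dir"
}
```

The point is the inversion: an unmounted NFS directory, an empty variable, or a half-created tree all fail the positive match and survive, instead of slipping past a list of known-bad patterns.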
I first saw this class of bug in IBM's Directory software, in its RPM uninstaller. It asked RPM what directory the software was installed in, then ran "rm -rf $INSTALLDIR/data". Problem: RedHat 8.0 had a bug where that RPM query returned nothing. Guess what was in
Well, how can it be malware in a DOSbox?
With network sims. Err, shims.
You're only pointing out the downside. What about when the next version of systemd already includes his database application?