Comment Re:Crazy (Score 1) 70

The editors are barely online anymore. Most of the work happens in your local browser, driven by JavaScript code. Some projects have even broken those layers completely apart, so that you don't need the remote component at all; the Atom editor is one example.

The main benefit of a browser-hosted editor is that you don't have to install (and maintain, and update, etc.) a dedicated editor/word processor. You just go to the web page where the editor is hosted, which may even be served locally.

When you store your document in the cloud, the main benefits are automatic off-site backups, documents you can reach from anywhere, and collaborative editing (again, without installing any additional software). More fundamentally, you don't have to figure out how to convey the document to the other person. No more e-mailing documents around, then e-mailing again after every update; just share a link instead, and people will always see the latest version.

Comment Re:Handle ODT files reasonably well (Score 1) 70

Using the same editor as the other person doesn't always help. As you pointed out, just using a different printer will give you a file that renders differently on two systems.

The whole layout model used by Word and OpenOffice is fundamentally broken. You can either let people place text and graphics at fixed locations on the page, or you can be compatible with multiple printers; it's impossible to do both at once. As just the most obvious layer of the problem, printers don't even share an identical model of what counts as the printable part of the page.

The only way to have a document that can be edited on multiple machines and then print well everywhere is to use a markup language instead of a fixed-position word processor. I use ReST, Markdown, and Asciidoc for most of the documentation I write nowadays, then export into one of these brain-dead formats when needed. ODF just standardizes the fundamentally broken model. The standard itself is so epically sized and full of ambiguous language that the odds are low any two programs will render the same ODF document into the same page layout.
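As a rough sketch of that workflow (assuming pandoc is installed; the file names here are made up):

    # Write in Markdown; export to the broken formats only when someone demands it.
    pandoc report.md -o report.odt     # OpenDocument for the OpenOffice crowd
    pandoc report.md -o report.docx    # Word

The markup source stays diff-able and prints consistently everywhere; the exported file is a throwaway artifact.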

Comment Re:And that people... (Score 1) 329

I know exactly when it started: I lost my first set of computer equipment at work to electrical issues in 1990. The capacitor plague era was not a disruptive event. All of those issues were already around--as long as capacitors have existed, they have been failing like that--they just became a lot more likely during that period.

I assume everyone's data is important to them. Apparently you do not. You can't expect to be taken seriously on this topic with that attitude.

Comment Re:First day of *nix training... (Score 1) 329

Yes, in most shells kill is a built-in that doesn't actually run the kill binary. It's not required by the UNIX specification though, so shipping only the binary is just fine; it's certainly not crazy pants for a UNIX system to run without a built-in shell kill.

Regardless, the ps you may need to find the process usually is not a built-in; it will spawn a new binary instead. So the problem of /bin being wiped out first and taking away the tools you need for a fix is still there.
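A quick way to see the difference, plus a rough sketch of limping along without /bin (bash syntax; the /proc scan is Linux-specific and the PID at the end is made up):

    type kill    # in bash: "kill is a shell builtin"
    type ps      # an external binary, gone if /bin is wiped

    # Crude ps substitute using only shell built-ins and /proc:
    for dir in /proc/[0-9]*; do
        read -r name 2>/dev/null < "$dir/comm" || continue    # process name
        echo "${dir#/proc/} $name"                            # print "PID name"
    done

    kill -TERM 1234    # the built-in still works with /bin/kill gone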

Comment Re:And that people... (Score 1) 329

None of the incidents I alluded to were caused by bad capacitors; most of them happened before the capacitor plague really got started. It's very dangerous to assume that because a source of a problem has been identified, that class of problem will never happen again.

"Good enough" is a fuzzy term that doesn't mean anything. Not plugged in is statistically safer than plugged in. You can care about your data and try to maximize its survival, or you can be overconfident that you know how things are going to die and ignore some good practices. Confidence won't save your data though. Paranoia can.

Comment Re:And that people... (Score 1) 329

If it's plugged in, there's a significant class of failures where the computer dies and it takes out everything attached to it. Electrical surges can do that from both the power supply and the network side. And the (presumably) USB port the drive is attached to might fail in a spectacular way, one that damages the connected drive.

This is not simply paranoia--I have seen all three of these things happen. (I was running two southern NYC data centers in 2001, so I've seen more than a few really unusual hardware explosions.) Any backup that's not electrically isolated is at a higher risk than it should be.

Comment Re:When I see that [literaly] textbook mistake.... (Score 3, Insightful) 329

Checking if STEAMROOT is an empty string is a good start, but it's still not enough. Anything that's unleashing something as dangerous as "rm -rf" should do a serious sanity check first. Looking at the text of the directory name, checking that it really is a directory, or checking that you can cd into it (and that the output of pwd still matches) are all useful checks. But you will still find edge cases in the real world where they do terrible things.
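A minimal shell sketch of those checks (the variable name echoes the Steam bug; the path is hypothetical, and as noted, this still isn't bulletproof):

    STEAMROOT="$HOME/.steam"    # hypothetical value

    # Refuse to run rm -rf unless the target passes basic sanity checks.
    [ -n "$STEAMROOT" ]        || { echo "empty path" >&2; exit 1; }
    [ "$STEAMROOT" != "/" ]    || { echo "refusing to touch /" >&2; exit 1; }
    [ -d "$STEAMROOT" ]        || { echo "not a directory" >&2; exit 1; }
    cd "$STEAMROOT"            || { echo "cannot cd there" >&2; exit 1; }
    [ "$(pwd)" = "$STEAMROOT" ] || { echo "pwd mismatch" >&2; exit 1; }

    cd / && rm -rf "$STEAMROOT"    # only after every check has passed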

As an example of something more robust, PostgreSQL does what it can to deal with this problem by putting a file named PG_VERSION in every installed database directory tree. Every utility that does something scary takes the directory provided and checks whether there's a PG_VERSION file in it. If not, it aborts, saying the expected structure isn't there. Everything less careful than that occasionally ate people's files. A common source of trouble for database servers here was a race condition against an NFS mount, where the mount showed up in the middle of the script running.

When you stare at that sort of problem long enough, you realize no check for whether your incoming data is sensible is good enough. Before wiping out files in particular, you must look for a positive match on an "I see exactly the data I expect" test of the directory tree instead. Even the level of paranoia in Postgres is still not good enough in one case: it can wipe things if you run the new-database initialization step and hit one of those mount race conditions. For that reason, the initialize-database step is never run from the init scripts anymore, no matter how many complaints we get that it should be automatic.
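A shell sketch of that positive-match pattern (PG_VERSION is the real sentinel file; the data directory path is a placeholder):

    PGDATA="/var/lib/pgsql/data"    # hypothetical location

    # Don't trust that the path is sane; demand proof it's the expected tree.
    if [ ! -f "$PGDATA/PG_VERSION" ]; then
        echo "expected structure not found in $PGDATA, aborting" >&2
        exit 1
    fi
    # Only now is it reasonable to do something destructive under $PGDATA.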

I first saw this class of bug in IBM's Directory software, in its RPM uninstaller. It asked RPM what directory the software was installed in, then ran "rm -rf $INSTALLDIR/data". Problem: Red Hat 8.0 had a bug where that RPM query returned nothing. Guess what was in /data on the server? That's right: the 1TB of image data that server ran against. (And to put the scale of that into perspective...this was 2003, when 1TB was not a trivial amount.)
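The failure mode is trivial to demonstrate (variable name from the story; echo stands in for the real command so nothing gets deleted):

    INSTALLDIR=""                    # what the buggy RPM query returned
    echo rm -rf $INSTALLDIR/data     # unquoted expansion prints: rm -rf /data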

Comment Re:PGAdminIII (Score 1) 264

pgAdmin III is a client app for PostgreSQL. But what the poster wants here is information on how to build their own specialized client app, not on how to use someone else's.

The only way pgAdmin III is a relevant example here is that the existing code is a hairy pile of C++...and the developers have given up on maintaining it. Instead they're turning it into a web app, because there are better, and still improving, libraries to leverage there. That's a lesson everyone thinking of writing a C++, Java, or .NET app should think about.

Comment Re:I'd consider Go and PostgreSQL (Score 4, Informative) 264

That's a nice outline of Postgres features; one small and very pedantic correction for you. CREATE INDEX CONCURRENTLY in PostgreSQL isn't really asynchronous, since the client running it is stuck waiting for it. And it does still need a full table lock to complete; in most cases it only needs that for a brief moment, to install the index once it's built. But that's not guaranteed. I added a caveat to the docs a version or two ago that warns about the bad case; see building indexes concurrently, in particular the part starting with "Any transaction active when the second table scan starts..." It's really rare that this happens, but if you have long-running transactions you'll eventually run into it painfully.

I would restate the situation as "you can even add an index to a large table with only minimal locking". The code can't quite avoid locks altogether and remain transaction safe.
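For reference, the command under discussion, run through psql (the database, table, and index names are made up):

    # Builds the index without blocking writes for most of its runtime; the
    # session issuing it still blocks until the build finishes, and a brief
    # lock is normally needed at the end to install the index.
    psql -d mydb -c "CREATE INDEX CONCURRENTLY idx_orders_created ON orders (created_at);"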
