
Submission + - The sorry state of copying in OS design 2

Mattcelt writes: Mac OS X just turned 10. Windows 3.0 is old enough to buy its own drinks. MS-DOS is a venerable 30 years old. And UNIX, that lovely old bag, is of an age equivalent to the Answer to the Question of Life, the Universe, and Everything. These are well-established, mature operating systems, full of amazing advanced features and wonderful embellishments.

So can someone please tell me why the ‘copy’ function in ALL of these OSs hasn’t made any significant improvements in, well, EVER?? Why does the same horribly inefficient, serial, unresponsive, fault-intolerant file copying mechanism that was present in UNIX in the early 1970s persist today? (And why can’t I find a single article on the Internet that asks this question? I can’t believe I’m the only one who has noticed that we’re (metaphorically) racing our flying cars through the sky with hamsters under the hood. . .)

I am not, and never will be, a programmer. So I implore anyone who reads this to think about it, and if you have any programming skills, make a NEW copy function that actually incorporates some of the revolutionary ideas we’ve come to expect even from lowly FTP applications in the past 40 years! If you need ideas about what features to include, here are a few:

-queuing: Why does the OS try to start each copying operation on top of the ones already present, even when it cuts performance for both by 60% or more? If I want to ask the OS to copy a set of files when another copy is already in progress to/from the same source and/or destination, it should at the very least ask me if I’d like the copy to start after the current one has finished.
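As a rough illustration of what copy queuing could look like, here is a minimal sketch in Python: copy jobs go into a queue and a single worker runs them one at a time instead of all at once. The names (CopyQueue, enqueue) are illustrative, not any existing OS API, and a real implementation would group queues per source/destination device.

```python
import queue
import shutil
import threading

class CopyQueue:
    """Runs copy jobs serially on a background worker thread."""
    def __init__(self):
        self._jobs = queue.Queue()
        self._worker = threading.Thread(target=self._run, daemon=True)
        self._worker.start()

    def enqueue(self, src, dst):
        # Jobs queue up instead of contending for the same disk at once.
        self._jobs.put((src, dst))

    def _run(self):
        while True:
            src, dst = self._jobs.get()
            try:
                shutil.copy2(src, dst)   # one copy at a time
            finally:
                self._jobs.task_done()

    def wait(self):
        self._jobs.join()               # block until the queue drains
```

Usage would be q = CopyQueue(); q.enqueue(a, b); q.enqueue(c, d); q.wait() — the second job starts only after the first finishes.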

-report on errors: I just love it when I have a batch of hundreds of files which need to be copied or moved to a new location, and it fails about 80% of the way through. Instead of a “failed while copying file x” or, worse, just a cryptic error code, why can’t the OS keep track of which files WERE successfully copied, and tell me in a detailed report WHY it failed on that file?

-fault-tolerance: Let’s take the last point a step further. If I have a batch of files to copy and it fails on ONE, is it too much to ask to have the copy operation complete ALL of the rest, instead of allowing the error to interrupt the whole process? Inform me of the error, but don’t just stop — the OS should finish as much of what I’ve asked it to do as it can, without any further input from me.
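These last two points can be sketched together: copy the whole batch, never abort on a single failure, and report exactly which files failed and why. The function name copy_batch is hypothetical, not an existing OS call.

```python
import shutil

def copy_batch(pairs):
    """pairs: iterable of (src, dst). Returns (copied, failed), where
    failed maps each source path to the reason it could not be copied."""
    copied, failed = [], {}
    for src, dst in pairs:
        try:
            shutil.copy2(src, dst)
            copied.append(src)
        except OSError as err:        # missing file, permission denied, disk full...
            failed[src] = str(err)    # record the reason and keep going
    return copied, failed
```

The key design choice is that an error becomes data in the report rather than an exception that kills the whole operation.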

-resume: If I need to interrupt a large copy operation to take care of something else, why can’t I resume it later? A batch copy is nothing more than a series of tasks, yet ALL modern OSs treat it as if it were a single task with a binary outcome. Give the user some control and knowledge about the parts of the process instead of just a window into an opaque box, the way it is now.
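One way to treat a batch copy as a series of resumable tasks is a simple journal: every completed source path is appended to a log file, and a restarted run skips anything already listed. This is only a sketch — the journal format and the file-granularity checkpoint (rather than byte-granularity, as FTP resume does) are simplifying assumptions.

```python
import os
import shutil

def copy_resumable(pairs, journal="copy.journal"):
    """Copy (src, dst) pairs, checkpointing progress so an interrupted
    run can be restarted and pick up where it left off."""
    done = set()
    if os.path.exists(journal):
        with open(journal) as f:
            done = set(line.rstrip("\n") for line in f)
    with open(journal, "a") as log:
        for src, dst in pairs:
            if src in done:
                continue                  # already copied in an earlier run
            shutil.copy2(src, dst)
            log.write(src + "\n")         # checkpoint after each file
            log.flush()
```

If the process dies mid-batch, running the same call again copies only the remaining files.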

-regexp: Since we’re already dreaming, why not add some advanced functionality that would be REALLY useful? How many of you UNIX admins have written a regular expression utility to handle file copies already? (I bet it’s most of you.) And how did you do it? By getting a file list from some other output, parsing the filenames you want from it, and passing those file names to the copy operation. What a waste!! Why isn’t there a filter built into the copy function itself? And wouldn’t it be nice to have a quick-and-dirty GUI interface for it when you don’t want to interrupt what you’re doing in the command shell?
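A copy with the filter built in might look something like this sketch: walk a tree and copy only the files whose names match the pattern, in one call, with no intermediate file list to parse. For simplicity it flattens matches into a single destination directory — a real tool would preserve the hierarchy.

```python
import os
import re
import shutil

def copy_matching(src_dir, dst_dir, pattern):
    """Copy every file under src_dir whose name matches the regex."""
    rx = re.compile(pattern)
    os.makedirs(dst_dir, exist_ok=True)
    copied = []
    for root, _dirs, files in os.walk(src_dir):
        for name in files:
            if rx.search(name):           # the built-in filter
                shutil.copy2(os.path.join(root, name),
                             os.path.join(dst_dir, name))
                copied.append(name)
    return copied
```

For example, copy_matching("/var/log", "/backup", r"\.log$") would grab only the .log files, with no find-parse-pipe dance.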

-direct-remote copying: Imagine you’re using a remote terminal to access server A. And for some reason, you can’t xterm or remote control servers B and C, but you need to copy a 500GB virtual disk from B to C. (I’m using a real-world example here, drawn from experience.) So you map their drives onto server A, and begin the copy operation. What happens? In Windows, the files are copied from server B to server A, then from server A to server C. Instead of 500 gigabytes being passed over the network, it’s a TERABYTE. Taking protocol and routing overhead into account, it ends up being more than twice the necessary bandwidth. So why not just have the ability to effect the copy directly from B to C instead of incurring all the extra traffic? I know this is the most complex of these features to implement, but it would be awfully useful. . .
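On UNIX-like systems you can already approximate this by asking server B, over ssh, to push the file straight to C, so the bytes flow B to C once and only the control channel touches A. The sketch below just builds and runs that command; hostnames and paths are placeholders, and it assumes ssh access to B plus working key-based authentication from B to C.

```python
import subprocess

def direct_copy_cmd(src_host, src_path, dst_host, dst_path):
    """Build the argv that makes src_host push the file directly."""
    remote = "scp %s %s:%s" % (src_path, dst_host, dst_path)
    return ["ssh", src_host, remote]

def direct_copy(src_host, src_path, dst_host, dst_path):
    # Data flows src_host -> dst_host once, never through this machine.
    subprocess.run(direct_copy_cmd(src_host, src_path, dst_host, dst_path),
                   check=True)
```

So direct_copy("serverB", "/vm/disk.img", "serverC", "/vm/disk.img") would move the 500GB once instead of twice — the wish here is for the OS's native copy to do this transparently when both endpoints are remote.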

So what do you say? In these days of 3D home televisions, floor-sweeping robots, and electric cars, can we please put the 1970s mimeographs away and get ourselves a nice new, 2011-ready ‘copy’ program?

Comment What's next in your career? (Score 2) 244

Hi, I think it depends on what you want to do next. If you’re headed for academia (a master’s or PhD), then having an article published helps a lot. If you’re thinking of going into industry, though, I think most people there don’t care much about publications. It also depends on the quality of your work: if it’s something really good, it might be worth going. Have you asked the university or your advisor for funds to go? If your work is really good, I think the university would have no problem at all paying for your expenses.
