
Comment Book conversion is labor-intensive compared to audio (Score 2) 77

While book scanning can be done by machine, the machinery is going to be expensive and complicated. Your typical bibliophile can't afford it. Scanning a book by hand can take hours, even with a V-shaped book-scanning fixture and two cameras.

The technology for digitizing audio is much easier to acquire and use. Any audiophile can afford the hardware and software to do a tolerable audio rip. Anyone can set up a rip, or several rips, and do real work while the rip takes place in the background. The quality might not be to audiophile standards, but will satisfy most casual users.

Even after you've created an "ebook" of page images, it isn't really suited for use in modern ereaders. For that you need an ePub format, or something similar. The text has to actually be stored as text to allow reformatting: a decent modern ebook can adapt its text to different display sizes and different type sizes. Getting real text out of page images is the hard part.
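To make the "text has to be text" point concrete: an ePub is essentially a ZIP archive of XHTML and CSS files, which is exactly why a reader can reflow it. A minimal sketch in Python (the file names are illustrative; a real ePub also needs a package document and container.xml):

```python
import io
import zipfile

# Build a toy ePub-like archive in memory. An ePub is a ZIP whose
# contents are marked-up text, not page images.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as epub:
    epub.writestr("mimetype", "application/epub+zip")
    epub.writestr("chapter1.xhtml",
                  "<html><body><p>Call me Ishmael.</p></body></html>")

# Any tool can pull the text straight back out -- no OCR involved.
with zipfile.ZipFile(buf) as epub:
    text = epub.read("chapter1.xhtml").decode()
print(text)
```

A scanned-image "ebook" has no equivalent of that last step, which is why typo-ridden OCR is the only route from scans to a true ePub.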

Compare a typical book produced by Project Gutenberg and a typical book scanned into the Internet Archive. Gutenberg produces true ePubs consisting of text possibly sprinkled with digitized illustrations. Gutenberg might start with automatic text recognition, but its books go through a distributed proofreading process before they're released.

While I value what the Archive does (any digitization is better than none at all) I've discarded most ePubs I've downloaded from them. There are simply too many typos in the text recognition. Their scanned raw images and PDFs are usable, though they lack the flexibility of true ebooks.

Comment Re:Online or Secure (Score 1) 292

Actually the choice is between "secure" and "managed by a third party."

The threat isn't being on-line, the threat is when you put unprotected (plaintext) data on a device managed by a third party that can succumb to secret leverage. This isn't just a question of secret FISA demands. The same problem would arise if Apple were so foolish as to store sensitive plaintext emails on a third-party email service that could get bought out by a competitor.

There is no obvious problem with storing properly encrypted data on cloud storage. The problem arises when you decrypt the data to process it further. There is only a tiny number of applications (homomorphic encryption schemes, for instance) in which you can do further processing of encrypted data without decrypting it first.
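The safe pattern is to encrypt locally and let the provider hold only ciphertext. A sketch of that idea, using a toy SHA-256 counter-mode keystream purely for illustration (this is NOT a vetted cipher; a real system would use something like AES-GCM):

```python
import hashlib

def keystream(key: bytes, nonce: bytes, n: int) -> bytes:
    """Toy keystream from hashing key+nonce+counter -- illustration only."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def encrypt(key: bytes, nonce: bytes, data: bytes) -> bytes:
    # XOR with the keystream; the same call decrypts.
    return bytes(a ^ b for a, b in zip(data, keystream(key, nonce, len(data))))

# Encrypt on your own device; only ciphertext ever reaches the cloud.
key, nonce = b"local-secret-key", b"unique-nonce"
ct = encrypt(key, nonce, b"sensitive email body")
# The provider stores ct. Decryption happens back on your device:
assert encrypt(key, nonce, ct) == b"sensitive email body"
```

The third party can be subpoenaed or bought out and still hand over nothing readable, as long as the key never leaves your machine.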

Comment Primary care underpaid/specialists overpaid (Score 1) 336

My wife is a board-certified family physician, and she does a lot of things an obstetrician does - delivering babies "normally" or through surgery (how does one spell "caesarian" anyway?) and performing a lot of in-office procedures. Since she's not board-certified in obstetrics, she is paid less for these procedures than someone who is. The work is identical and meets the same practice and safety standards - otherwise she wouldn't have hospital privileges to perform it.

So why does identical work cost more depending on which professional organization certifies you?

If you can do the work, the hospital grants you privileges to do it, and the malpractice insurer is satisfied with the hospital's oversight process, then why shouldn't everyone be paid the same for the same work?

Comment Re: The problem was well known when the story was (Score 1) 96

At the time it seemed virtuous to implement state machines. One guy did his PhD by building a mechanism that did coroutining - the programmer could write out the whole procedure and stick in the strip breaks after the fact. I suppose someone did something like that for the Mac, though I stopped writing Mac code before seeing such a thing.

Comment The problem was well known when the story was new (Score 5, Interesting) 96

This is a rambling bit of history. Move on if that's not your thing. I love reading about problems like the Pathfinder problems. Trust me - such things often happen on Earth-bound systems, too.

Back in '79, I was working on a multiprocessing router for the ancient ARPANET. At the time the net had over sixty routers distributed across the continent. Actually we called them "IMPs" (Interface Message Processors), but I'll use the modern term "router." We had a lot of the same problems as Pathfinder without ever leaving the atmosphere.

By then all ARPANET routers were remotely maintained. They all ran continuously and we did all software maintenance in Cambridge, MA. By then the basic software was really reliable. They rarely crashed on their own, and we mostly sent updates to tweak performance or to add new protocol features. Once in a while we'd have to use a "magic modem" message to restart a dead machine and to reload things. The software rarely broke so badly that we'd have to have someone on-site load up a paper tape. So remote maintenance was well established by then.

The multiprocessor didn't run "threads"; it ran "strips." Each strip was a non-preemptive task designed to execute quickly enough not to monopolize the processor. If you wrote software for a Mac before OS X, you know how this works. A multi-step process might involve a sequence of strips executed one after the other.
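The strip discipline maps neatly onto generators, so here's a sketch of the idea in Python (purely an illustration - the IMP code was hand-written assembly): each task runs until it hits a strip break, then hands the processor back to a round-robin scheduler.

```python
def copy_task(src, dst):
    """One logical procedure, broken into strips at each yield."""
    for item in src:
        dst.append(item)
        yield              # strip break: give the processor back

def round_robin(tasks):
    """Non-preemptive scheduler: resume each task in turn until all finish."""
    while tasks:
        task = tasks.pop(0)
        try:
            next(task)           # run one strip
            tasks.append(task)   # not done yet; requeue it
        except StopIteration:
            pass                 # task finished

a_out, b_out = [], []
round_robin([copy_task([1, 2, 3], a_out), copy_task("xy", b_out)])
print(a_out, b_out)   # [1, 2, 3] ['x', 'y']
```

A strip that loops without yielding monopolizes the processor - which is exactly the class of lockout bug described below.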

Debugging the multiprocessor code was a bit of a challenge because we could lock out multi-step processes in several different ways. While we could put our test router on the network for live testing, this didn't guarantee that we'd get the same traffic the software would get at other sites. For example, we had software to connect computer terminals directly to hosts through the router (the original "terminal access controllers"). This software ran at a lower priority than router-to-router packet handling. It was possible for a busy router to give all the bandwidth to the packets and essentially lock out the host traffic. Such problems might not show up until updated software was loaded into a busy site.

Uploading a patch involved assembly language. We'd generally add new code virus-style: first load the new code into some spare RAM; once it's loaded, patch the working program so that it jumps to the new code the next time it executes, and have the patch jump back to an appropriate spot in the program when the new code finishes. We sent the patches in a series of data packets with special addressing to talk to a "packet core" program that loaded them.
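In Python terms, the scheme looks roughly like this (names are illustrative; the real thing was hand-assembled machine code and jump instructions, not function pointers):

```python
def step_original(packet):
    return packet + 1              # the routine we're about to hook

def step_patched(packet):
    packet = packet * 2            # new code loaded into "spare RAM"
    return step_original(packet)   # "jump back" into the original routine

# The live dispatch point in the running router.
program = {"step": step_original}

def run(packet):
    return program["step"](packet)

print(run(3))                      # 4 -- before the patch
program["step"] = step_patched     # overwrite the jump in one atomic store
print(run(3))                      # 7 -- after the patch: (3*2)+1
```

The single-store redirection matters: the machine keeps forwarding packets the whole time, so the patch has to become visible all at once, never half-installed.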

The bottom line: it's the sort of challenge that kept a lot of us working as programmers for a long time. And they pop up again every time someone starts another system from scratch.

Comment IBM and DEC computers are first cousins (Score 1) 336

Warning: I'm doing a history geek thing here.

1940s: MIT builds Whirlwind - a beautiful little thing (many KB of RAM) out of thousands of vacuum tubes. They convince the Air Force to use it as the basis for nationwide air defense.

1950s: IBM builds the SAGE air defense system in conjunction with MIT and Lincoln Lab, using people like Ken Olsen. IBM uses Whirlwind and SAGE lessons to build its "scientific" computer line - parallel arithmetic, control store ("microcode"), grand bus architecture, etc.

1960s: Olsen and other SAGE refugees start DEC - PDP-1, PDP-8, PDP-6, PDP-9, etc. No surprise there are clear architectural antecedents going back to SAGE and Whirlwind.

Comment Re:Compare the 360 to PDP-6, -10 (Score 1) 336

I wrote a lot of asm code for the 360/370, PDP 11, and PDP 9, but my undergrad mentor (Uncle Willy Henneman, late of MIT AI and BU, rest his soul) waxed poetic on the PDP 10 instruction set, using the word 'orthogonality.' The -6 was introduced in about 1963 (before the 360) and the -10 was the successful, workhorse version of the architecture till the -20 came out.

Comment This will take a generation to solve (Score 3, Interesting) 228

My wife is an MD and (relatively speaking) is computer literate. She can touch type and navigate typical desktop machines.

Her clinic converted to EHRs several years ago and she still hasn't reached the level of efficiency she had with paper charts. At this point she's gone back to dictating parts of her chart (via speech recognition) to try to regain some of her lost productivity.

A lot of the problem is that the data is VERY free form. The mundane measurements (height, weight, temp, BP, etc.) are easy to insert and digitize, and you can pass them off to another health worker to enter. The really important information, however, doesn't fit into an established structure.

MDs learn how to collect and document patient status during med school and residency. The details vary from one program to the next. The efficiency of an office visit and its subsequent documentation all depend on how well the EMR flow (and even the number of clicks) fits how the MD does an office visit and/or documents a medical procedure.

The disconnect between habits and automation will continue to affect MDs until we have a generation of experience.

Comment There is no such thing as an error. (Score 2) 536

This is the real third option.

The so-called "error returns" from things like file opening tell the program something very important about what's going on. The program's flow must be designed from the beginning to interpret and handle errors. This is in fact much of what a good program does.

It doesn't matter whether we use exceptions or error codes to signal the errors as long as the program is designed to accurately interpret the errors that do occur. In some sense, exceptions may be easier to implement in today's event-driven interactive interfaces. Regardless, though, the design must not allow errors to be lost.
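For instance, a "file not found" on opening a settings file isn't a failure at all if the design anticipates it - it just means this is a first run. A minimal sketch (file name and defaults are made up for the example):

```python
import json

def load_settings(path):
    """Treat a missing file as an expected, meaningful outcome: first run."""
    try:
        with open(path) as f:
            return json.load(f), False
    except FileNotFoundError:
        # Designed-in interpretation of the "error," not an alert box.
        return {"theme": "default"}, True

settings, first_run = load_settings("no-such-settings.json")
print(settings, first_run)   # {'theme': 'default'} True
```

Written as an error-code check (`if not os.path.exists(path): ...`) the logic is the same; what matters is that the "error" was interpreted, not lost or dumped on the user.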

Was it Cooper in "About Face" who said that an error alert pop-up was essentially an admission of failure on the part of the programmer?


Comment No magic answer - treat it like a design problem (Score 1) 242

Cable management is like any engineering problem - you solve it by organizing things into a coherent design. Yes, it will take time, but it's worth it.

Personally, I have one tower and two monitors on a free standing desk. There's a large box/trunk near the desk with basket-weave top and sides to allow air flow, and I put the UPS and port connectors all in there. Cables flow in separately wrapped bundles to the box. There are a couple of walk-over cable carriers from the trunk that snake under rugs to reach power/phone/net plugins.

I don't tend to rewire very often (once or twice a year) so I make things neat and leave them that way. The few portable plug-in devices I use tend to be USB, and I have nearby connectors on the desktop and on the front of the tower. I can easily hook up a USB to SATA adapter when I need to play with a hard drive, for example.

Comment Mislabeled Image #14 (Score 1) 88

Image #14 is labeled as a "console teleprinter." It is really a storage drum, which was a geometric alternative to the disk drive. Drums had a row of fixed-location heads for recording and playback. There was one head per track, which eliminated the moving arm. I once had a similar drum from an IBM 610 "calculator." It weighed about 30 pounds and stored 1,200 BITS. Perfect doorstop.

Comment Re: Why so much power? - RELIABILITY (Score 1) 88

Reliability was the highest priority among first-generation computer designers when choosing radio tubes (also called valves). J. P. Eckert was co-designer of the ENIAC, which was the first thing that might be called a working computer. Eckert spent a lot of engineering time on tube reliability. He selected tubes that seemed especially long-lasting and likely to work correctly out of the box. He drove them with circuits that treated them as gently as possible: lower voltages to preserve filaments, for example. While I have no knowledge of the relative reliability of compact tubes compared to full-sized ones, I'd have to guess that the smaller tubes were significantly less reliable. If the average tube life is 2,000 hours, then a 2,000-tube machine averages only about an hour between failures. ENIAC had over 17,000 tubes.
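The arithmetic is worth making explicit. Assuming independent tube failures with the usual exponential model, the machine's mean time between failures is just the tube life divided by the tube count:

```python
tube_life_hours = 2000.0   # assumed average life per tube

# A hypothetical 2,000-tube machine, and ENIAC's roughly 17,468 tubes
# (the count commonly cited; the comment above says "over 17,000").
for n_tubes in (2000, 17468):
    mtbf_hours = tube_life_hours / n_tubes
    print(f"{n_tubes} tubes -> machine MTBF ~ {mtbf_hours * 60:.0f} minutes")
```

At ENIAC's scale that's a failure every few minutes on paper, which is why Eckert's gentle-drive tricks (and leaving filaments warm instead of power-cycling) mattered so much in practice.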

Comment Humanoid robots are bogus (Score 1) 1457

The important part of this pronouncement is the phrase "humanoid robots." The explosion of economic productivity over the past decades clearly shows that automation (and good old-fashioned mechanization) has replaced lots of workers with machines. I don't know that anyone has done a study about this, but I wouldn't be surprised if half the U.S. labor tasks of 50 years ago have been eliminated by automation and/or mechanization - probably a lot more than half.

I did my thesis work in industrial robotics, and the big problem I see with humanoid robots is that there are always better mechanical solutions to specific problems than a human-shaped device.

I think Pamela McCorduck and, later, Camille Paglia were right in saying that there are a lot of guys out there who secretly wish they could become mothers. That's why people talk so much about humanoid robots.

The first practical uses of humanoid robots will be for entertainment - props in movies, conversation pieces at parties, or for less savory purposes.

