DaChesserCat's Journal: User Interfaces - Part III

I just want to add a couple more items on the subject of input.

How does your web browser know what URL you just typed in? You're manipulating hardware, but other software takes those raw manipulations and converts them into something your web browser understands. Modern handheld systems do the same thing, but they usually read your marks on a section of the screen and convert them to ASCII codes, which are fed to whatever application you're using. Consequently, if your application can take triggers on various objects (in addition to text), and can take raw text input, you can drive the text input with a keyboard, or with a handwriting recognition program that outputs ASCII codes. Most of the problems with handwriting recognition come when we have to enter odd, non-text inputs (control keys, arrows, extensive amounts of punctuation or ALT-something). If those become triggerable events, and we can trigger them through other means, we eliminate most of the need for the odd text inputs. We can keep them for people who do have a keyboard, but the odd inputs become just another way of triggering the same events.
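To make this concrete, here's a minimal sketch of the idea in Python. Everything here (the Event type, the driver functions, the Editor stand-in) is invented for illustration, not any real handheld API; the point is just that the application consumes abstract events and doesn't care which driver produced them.

```python
from dataclasses import dataclass

@dataclass
class Event:
    kind: str     # "text" for ordinary characters, "trigger" for everything else
    payload: str  # the character(s) to insert, or the name of the action to run

# The "odd" keys stop being special text and become named, triggerable events.
KEY_TRIGGERS = {"UP": "scroll-up", "DOWN": "scroll-down", "CTRL-S": "save"}

def keyboard_input(keycode: str) -> Event:
    if keycode in KEY_TRIGGERS:
        return Event("trigger", KEY_TRIGGERS[keycode])
    return Event("text", keycode)

def handwriting_input(recognized_char: str) -> Event:
    # A recognizer only ever produces text; the odd inputs arrive as triggers
    # from on-screen controls instead, so it never has to fake a CTRL key.
    return Event("text", recognized_char)

class Editor:
    """Stand-in application: it neither knows nor cares where events come from."""
    def __init__(self):
        self.buffer = ""

    def handle(self, event: Event) -> None:
        if event.kind == "text":
            self.buffer += event.payload
        else:
            print("running action:", event.payload)

app = Editor()
app.handle(keyboard_input("h"))       # text from a keyboard
app.handle(handwriting_input("i"))    # the same kind of event, from a recognizer
app.handle(keyboard_input("CTRL-S"))  # an odd input, now just a trigger
print(app.buffer)                     # -> "hi"
```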

Additionally, if we have a handwriting recognition application which outputs raw text, and a text editor application which takes raw text as input, we could put some kind of middleware between them: something which watches what's being entered, attempts to guess what we're writing, and provides additional triggerable events that send multiple ASCII codes at once. I have such an application on my Palm; it's called TextPlus. I enter a letter or two, and it gives me a menu of word and phrase "guesses." If one of them is what I'm writing, I tap the word or phrase, and it is entered as if I had quickly written the whole thing. Since it adapts to what I write, it quickly becomes useful, and it lets me enter text very rapidly, even with an otherwise inefficient handwriting recognition system. It takes any kind of text input (so I can use Graffiti OR the onscreen keyboard to drive it) and provides text output, which can be used by any application that takes text input. (A rough sketch of this kind of middleware appears below.)

If we can do this with text, what can we do with other forms of input? How hard would it be to implement a voice-based system which, while not quite up to taking dictation, could trigger many of the various events from voice commands? Such systems, with their limited vocabularies, are much easier to implement and could be handled by most handheld machines. Some handheld games have been set up with tilt sensors, so that physically tilting the unit causes things to happen. How hard would it be to have something like that scroll through menus, or work the scrollbars in a frame? Some handhelds have a "jog wheel" on the side; what if you could set that to trigger up/down/select events in a menu? And most handheld machines have additional buttons, which can be "mapped" to various purposes.

In every case, the basic physical input (microphone, tilt sensor, jog wheel, button) would drive a program which generates known events, so an application wouldn't have to be specially written for these input types. As long as the triggers are taken in through a standard means, and the input programs can trigger events through those means, any input program can map any kind of physical input to any kind of triggered event.
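Here's a rough sketch of how TextPlus-style middleware might work, in Python. To be clear, this is my guess at the mechanism, not the actual Palm code; WordGuesser and its methods are made-up names.

```python
from collections import Counter

class WordGuesser:
    """Sits between any text input method and the application, offering guesses."""

    def __init__(self):
        self.seen = Counter()  # adapts to what the user actually writes

    def learn(self, text):
        self.seen.update(text.lower().split())

    def guesses(self, prefix, limit=5):
        prefix = prefix.lower()
        matches = [w for w in self.seen if w.startswith(prefix)]
        # Most frequently written words first, so the menu adapts to the user.
        matches.sort(key=lambda w: self.seen[w], reverse=True)
        return matches[:limit]

g = WordGuesser()
g.learn("the quick brown fox jumps over the lazy dog")
g.learn("the quiet quarter ended quickly")
print(g.guesses("qu"))  # e.g. ['quick', 'quiet', 'quarter', 'quickly']
```

Tapping a guess would then emit the rest of the word as ordinary text, which is why the editor underneath never has to know the middleware exists; it just sees characters arriving quickly.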
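Mapping a jog wheel (or a tilt sensor, or a spare button) onto menu events would be equally small. In the same made-up event scheme, the whole driver is little more than a lookup table:

```python
# Hypothetical jog-wheel driver: raw hardware notifications in, standard
# menu events out. The menu code never needs to know a wheel exists.
JOG_EVENTS = {
    "wheel-up": "menu-up",
    "wheel-down": "menu-down",
    "wheel-press": "menu-select",
}

def jog_wheel_input(raw):
    # Returns (kind, payload), matching the trigger events used above.
    return ("trigger", JOG_EVENTS[raw])

print(jog_wheel_input("wheel-press"))  # -> ('trigger', 'menu-select')
```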

With these basic types of inputs, and the ability to convert input of one type into output of another, what kinds of input systems could we easily adapt to in the future? I mean, the concept of pipes, with programs taking input streams, producing output streams, and feeding that output to the input of another program, is nothing new; Unix systems have been leveraging that paradigm, to good effect, for decades. Consequently, we wouldn't need to rewrite applications unless someone came up with a fundamentally different type of input. Considering that touchscreens (originally driven by light pens), mice and keyboards have all been around since the 1960s (or earlier), and that they are still the main ways of interacting with applications today, it's safe to say that doesn't happen often. This type of system, if properly implemented, could still be useful (and even powerful) four decades from now. Using intermediate object code would let it migrate and scale to new CPU architectures, so it wouldn't be outdated by the next CPU family to hit the market, as happens all too often with existing systems. And since the stylesheets can be updated, the physical appearance could be kept fresh without replacing the underlying application.
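To spell out the pipe analogy, here's a toy version using Python generators: each stage reads one stream and yields another, so a new input method is just a new stage at the front of the chain. The stage names (graffiti, word_guesser, editor) are invented stand-ins, not real programs.

```python
def graffiti(strokes):
    # Stand-in recognizer: pretend each "stroke" decodes to one character.
    for stroke in strokes:
        yield stroke.lower()

def word_guesser(chars):
    # Stand-in middleware: here it just passes characters through; a real
    # stage would also inject whole words when the user taps a guess.
    yield from chars

def editor(chars):
    # Stand-in application: consumes whatever text reaches the end of the pipe.
    return "".join(chars)

# recognizer | middleware | application, exactly as with Unix pipes:
print(editor(word_guesser(graffiti("HELLO"))))  # -> "hello"
```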