RSI, WIMPs and Pipes; What Next? 368
Tetard asks: "Long live the pipe! Since the '|' was invented by Doug McIlroy in 1973, has there ever been a more effective way of reusing tools and connecting data? The mouse is a device of the Beatles era; rather than try and provoke nostalgia in the older ones among us, I'm asking myself, as are others: when we don't try to reinvent the wheel, or at least improve it, why must we try and copy it every time? Xerox PARC exposed us to WIMPs and we haven't done better: some innovation, some plastic surgery -- but no 'paradigm shift' -- where's the creative destruction that will take us further? Graphical component programming is turning us into click-happy bonobos^H^H^Hchimpanzees, as we fail to find new ways to manage and connect richer data streams. My web designer friends are damaged for life because of mice, and yet we persist... Where do we go from here? If we ever invent the graphical pipe, let it have keyboard shortcuts." Yes, you've probably seen a similar question to this run by Ask Slashdot before, but this time I'm wondering if maybe we need new input devices before the WIMP paradigm is replaced with something better. Might any of you have ideas on what form these input devices might take?
For those interested, here are the previous stories that have handled this type of question:
So what will it take to break us out of the WIMP box (or prison, depending on your bias)? Maybe new input devices would do it, but quite frankly, I wouldn't be surprised if a 3D interface were another route (it might spark interest in designing a new input device that works better with 3D interfaces -- or maybe data-gloves could serve this purpose?). Going out on a limb, maybe this guy might just be the ticket.
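For readers who haven't lived in a shell: the pipe the question celebrates composes small single-purpose tools into a larger one. A rough sketch of the same idea in Python, using generators as stages (the stage names here are just illustrative stand-ins for grep and `sort | uniq -c`):

```python
# McIlroy-style pipes sketched as Python generators: each stage
# consumes a stream of lines and yields a transformed stream, so
# stages compose like `cat log | grep GET | sort | uniq -c`.

def grep(pattern, lines):
    """Pass through only lines containing `pattern` (like grep)."""
    return (line for line in lines if pattern in line)

def count_unique(lines):
    """Count occurrences of each distinct line (like sort | uniq -c)."""
    counts = {}
    for line in lines:
        counts[line] = counts.get(line, 0) + 1
    return sorted(counts.items())

# Compose the stages exactly as a shell pipeline would:
log = ["GET /index", "GET /about", "POST /login", "GET /index"]
result = count_unique(grep("GET", log))
```

The point of the metaphor is that neither stage knows about the other; the stream of lines is the only contract between them.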
Face Recognition (Score:4, Interesting)
With a sub-$100 webcam watching you, look at the point of the screen where you would click, and blink.
Are there lots of problems to doing this? Yes. Should that stop me from throwing out the idea? No.
Re:Face Recognition (Score:2, Interesting)
Suppose all the software had to do was find your eyes in relation to your nose or mouth and ears. Then moving your head would cause those parameters to change, and the cursor would move.
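The mapping the parent describes is simple once some tracker reports the landmark positions. A minimal sketch, assuming a hypothetical face tracker already supplies nose coordinates in camera pixels (the gain value and landmark source are assumptions, not any real API):

```python
# Hypothetical sketch: given facial landmarks from some tracker
# (nose pixel coordinates), map the head's offset from a calibrated
# rest pose to a clamped cursor position on screen.

def cursor_from_landmarks(nose, rest_nose, screen, gain=8.0):
    """Translate head movement (nose offset from its rest position)
    into a screen coordinate, clamped to the screen bounds."""
    dx = (nose[0] - rest_nose[0]) * gain
    dy = (nose[1] - rest_nose[1]) * gain
    x = min(max(screen[0] / 2 + dx, 0), screen[0] - 1)
    y = min(max(screen[1] / 2 + dy, 0), screen[1] - 1)
    return (x, y)

# Head at rest -> cursor centered; head moved right -> cursor right.
center = cursor_from_landmarks((320, 240), (320, 240), (1920, 1080))
right = cursor_from_landmarks((330, 240), (320, 240), (1920, 1080))
```

The hard part, of course, is the tracker itself, not this arithmetic; the gain constant is exactly the kind of thing that would need per-user calibration.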
Re:Face Recognition (Score:4, Interesting)
You can probably do a decent interface using that kind of accuracy, but you won't be doing any kind of precision work.
/Janne
Re:Face Recognition (Score:5, Funny)
Re:Face Recognition (Score:3, Interesting)
The main problem with the systems right now is that they cannot track only eye movement. You need to use your whole head for large-distance traversals. You think RSI sucks in your wrist, wait until you start getting neck cramps from your favorite RTS game.
Re:Face Recognition (Score:2)
The main problem with the systems right now is that they cannot track only eye movement.
Maybe not all systems, but I'm almost sure there are systems that can track eye movement. I've seen a short documentary about people with ALS (the same disease that Stephen Hawking has, if I'm not mistaken), and it showed Jason Becker [jasonbecker.com], a former guitar player who has this disease and can only move some muscles in his face.
His site does not say much about the equipment he has, but he uses many gadgets that track his eye movements (one of the few parts of his body that he can still move) and translate them into commands to his computer. He has actually produced a lot of music in recent years this way (and you guys should check out his material, it *is* awesome).
It's a shame I can't give any more specific links, but maybe by searching through his pages you can find something.
Re:Face Recognition (Score:3, Insightful)
Great idea, but it doesn't get us out of the WIMP paradigm. You're just replacing your mouse with a more efficient type of pointer.
Really, that's the problem in a nutshell. We are so used to the WIMP interface that the best we can visualize is an improvement of the WIMP system. Until we come up with a totally different metaphor for interacting with our computers, we won't see WIMP go away. Personally, I think it will take a "mad genius" type to break out of the WIMP paradigm and move us forward.
Re:Face Recognition (Score:3, Insightful)
There's a time and a place for diplomacy, and this isn't it.
<div diplomacy="off"> That's a dumb idea. </div>
My attention wanders and consequently my eyes do as well. I blink subconsciously. You can't change these things, and it's stupid to try. Don't make an input system that fails when they happen.
I think Douglas Adams does a good job of describing the failings of this type of input system:
That's not even taking into account the way the eye inherently jitters, according to the other replies. Even without that, this wouldn't work out well except for extremely disabled people who have little other choice and aren't likely to complain about something that makes it at all possible for them to use a computer.
New kind of RSI (Score:2)
WIMPs? Pipes? (Score:4, Funny)
NLP (Score:3, Informative)
Natural Language Processing has my vote. Some of these folks [mit.edu] are working on it already. Wouldn't it be nice to say "move this thing over here", or some other combination of speech and gesturing, rather than all these inane menus and clicks? Someone still needs to develop the pipe infrastructure, tho. Just *don't* make it so narrow as to become worthless.
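The "move this thing over here" idea combines two input channels: speech supplies the verb, and pointing gestures resolve the deictic words. A toy sketch of that fusion (everything here, including the file names, is invented for illustration):

```python
# Toy multimodal interpreter: deictic words in the spoken command
# ("this", "here", ...) are paired, in order, with the targets the
# user pointed at, yielding a concrete command the system can run.

def interpret(utterance, pointed_at):
    """Replace each deictic word with the next gesture target."""
    targets = iter(pointed_at)
    words = []
    for word in utterance.split():
        if word in ("this", "that", "here", "there"):
            words.append(next(targets))
        else:
            words.append(word)
    return " ".join(words)

# User says "move this over here" while pointing first at a file,
# then at the trash:
command = interpret("move this over here", ["report.txt", "trash"])
```

Real NLP systems do vastly more (parsing, timing alignment of gesture and speech), but the pairing of speech slots with gesture targets is the core trick.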
Re:NLP (Score:2)
Are you trying to say that speaking "move this thing over here" or waving your arms around is more efficient than a small twitch of my wrist on the mouse? Not to mention that speech input is annoying to everyone around you.
Re:NLP (Score:2)
Yes. That is what is happening in the brain, right? Read up on some Cog. Psych. & Human Factors lit and you will see documentation of both (1) increased reaction time and (2) increased probability of erroneous commands in translating a thought into a series of manual commands.
Many people (you & I included) are very adept with a mouse. A little NLP will go a long way. I anticipate a steep learning curve, though. Yea innovation!
Re:NLP (Score:5, Insightful)
Imagine a large room full of office workers. Now, imagine the same room with every worker talking to his or her word processor or spreadsheet, trying to be heard over all the others, getting irritated and fatigued because of the constant noise of everybody else talking to their computers.
Imagine trying to do some work in an airport or on an airplane. Now, imagine trying to do the work using your voice _without_ other people hearing the budget details for your company or hearing the steamy endearments you will be mailing to your spouse.
Imagine talking. Now, imagine constantly talking all day, every day. Some actors and singers get permanent damage to their vocal cords - and they've had professional coaching and access to medical services. It could become RSI for your throat.
/Janne
Re:NLP (Score:2)
I agree. But is our chosen modality of business--centralized--the reason why this technology can't work, or does this proposal point out another flaw in the centralized office scheme? It sounds like another tick in the "Pros" column for the Pros/Cons debate over telecommuting & home offices to me.
Just imagine - (Score:2)
No, I think we should do it the traditional way, by clicking the mouse.
Re:Just imagine - (Score:2)
Sure, you can order a drink after waiting five or ten minutes to get the bartender's attention and repeating yourself three times, but I would not want to go through that just to type an email. It would never work in a business setting.
Re:Just imagine - (Score:2)
Re:NLP (Score:2)
Yeah, but imagine a typical telemarketing firm. Dozens, even hundreds of cubicles. Everyone talking all at the same time. No problems communicating, because the microphone and speaker are mounted on the worker's head.
It's a solved problem.
Re:NLP (Score:2)
Really? You never have to shout? To repeat yourself? To explain something you said again because the person you are speaking to misheard or misunderstood?
Human speech is fraught with ambiguity and peril. Humans are very inefficient at communicating with each other. Why take that same route when speaking with a computer?
NLP != Voice Recognition (Score:4, Interesting)
That sort of thing will be the wave of the future, and it will mean that apps will have to be smarter and communicate a lot more than they do today. My personal agent should reside on my local machine, not the network, and should watch out for my personal privacy. It should divulge only what is necessary to others in order to perform the commands that I give it. It should be flexible and configurable, but I should never have to configure it; it should learn what I like by how I interact with it.
Several large companies have been working toward this holy grail for years, but thus far not even common voice recognition, much less NLP, has emerged from their research. Sure, there are some voice recognition packages out there, but there's very little integration, and AFAIK nothing at all in the NLP arena. We could start working toward the level of integration that would be a necessary foundation for a lot of this stuff, but I don't know that you could get the necessary level of cooperation in ANY software development community.
Re:NLP != Voice Recognition (Score:2)
I know you're going to say "but I only mean within the specific domain of the computer;" but what isn't within the domain of the computer these days? Finally, as I've mentioned earlier -- and others here -- speech doesn't handle many kinds of data, or a lot of a single kind, very well at all. The key to better interfaces is to make them more specific, not less -- ubiquity. If the whiteboard can duplicate itself to another whiteboard, and vice-versa, you hardly need to dick around with a window manager to do remote collaboration. If you've got smart paper, you don't need to worry about how to send an e-mail; and so on.
-_Quinn
Re:NLP != Voice Recognition (Score:2)
In that context, voice recognition is just one more way of getting input. And I think that what's needed is not just one more way of getting input. What we need is for computers to have an increased level of understanding of the Real World.
Re:NLP != Voice Recognition (Score:2)
So we finally can tell the office assistant (Score:2, Funny)
And I can tell my project - GO, FIX YOURSELF!!
Re:So we finally can tell the office assistant (Score:2)
I'm not sure I want to see what shape the paperclip bends itself into to animate that...
worked in Microsoft Research (Score:2)
Yeah, we can see that you have the typical Microsoft attitude. It will NEVER be anything, until someone implements it. Then, it's just a matter of throwing money around until the innovation becomes a Microsoft monopoly.
Comment removed (Score:5, Insightful)
Re:You know what else? (Score:3, Insightful)
If I want to make a trans-continental journey, I use wings. To ride across the harbor, a hull, and hydrodynamics. The wheels on your nearly brand new car are a far cry from the round stone of yester-century. They are even more advanced than the wheels you would find on a car just 20 years ago.
Change for the sake of change is one thing, but change for the sake of better usability, safety and economy is completely a different matter.
I for one appreciate the ride (and style) I get from my air-filled, steel-belted radials wrapping polished aluminum rims.
Re:You know what else? (Score:3, Interesting)
Can you name a single improvement to the concept of the wheel in those years? AFAIK, it is still a round thing that revolves around an axis. Sure, the machining precision of that roundness and that axis is many orders of magnitude better than 5000 years ago, but the concept is still the same.
Re:You know what else? (Score:2)
Talking about wheels... Has it ever occurred to you that someday, a few thousand years ago, somebody enhanced a square "wheel" predecessor into a triangular one? I am sure he must have considered it a huge improvement, because one bump was eliminated with every revolution...
Re:You know what else? (Score:2)
Of course, keyboard input, to a great extent, worked. But people switched to using the mouse, probably because it seemed to go well with graphics, and was the next new thing.
Re:You know what else? (Score:2)
You can -- if they're designed to interoperate in the first place. Pipes==kludge (a useful kludge, but still a kludge).
New discussion site for post-WIMP interfaces (Score:4, Informative)
Nooface [nooface.net]
In Search of the Post-PC Interface
Eye Tracking (Score:2, Informative)
IBM has been working on eye tracking. Supposedly it can tell what part of the screen your eyes are focused on. It would be cool for first-person shooters, but for an OS I think moving a mouse is just as simple.
I saw this on TechTV -- I think it was Fresh Gear, but I'm not sure. Anyone have a link?
So far... (Score:2, Interesting)
Speech recognition is only useful for very limited functionality, mainly because computers haven't been fast enough, or had large enough databases, to really make use of syntax and context. Continuous speech recognition today typically uses waveform profiles with no contextual or grammatical analysis.
But with faster processors and larger memories, I expect speech recognition to go to the next quantum level within 5-10 years. Once we add contextual and grammatical constructs to speech recognition, computers will start to be able to really understand what we're saying. To go from that to understanding what we *mean* is another step, but that's coming too.
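The "contextual and grammatical constructs" step can be illustrated with a toy language model: the acoustic stage proposes several similar-sounding transcriptions, and context picks the sensible one. The bigram probabilities below are invented for the example:

```python
# Toy context model for speech recognition: rescore acoustically
# similar candidate transcriptions with bigram probabilities, so
# the grammatically plausible one wins. Counts are made up.

BIGRAM = {
    ("recognize", "speech"): 0.9,
    ("wreck", "a"): 0.1,
    ("a", "nice"): 0.2,
    ("nice", "beach"): 0.2,
}

def score(sentence):
    """Product of bigram probabilities; unseen pairs get a floor."""
    words = sentence.split()
    p = 1.0
    for pair in zip(words, words[1:]):
        p *= BIGRAM.get(pair, 0.01)
    return p

# The classic near-homophone pair:
candidates = ["recognize speech", "wreck a nice beach"]
best = max(candidates, key=score)
```

Production recognizers use far richer models than bigrams, but this is the shape of the idea: acoustics proposes, context disposes.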
I also expect computers to have video cameras and to be responsive to our body language and facial expressions. They will be able to judge whether what they're doing is interesting or useful, and will ask for guidance or attempt to correct based on that feedback.
In other words, I expect interaction with computers to become more like interaction with people!
Re:So far... (Score:2)
However, I would definitely like to search the web in my kitchen when up to my elbows in bread dough or something... you know, shout out "search for thin crust bread recipes, knead time" and then have a voice read the recipe back to me, or something like that...
And it's really not all that Trekkian: it's all keywords and T2S. But the big hassle is connecting all the peripherals to the kitchen. Now, if I had FireWire connecting every room so that all I have to do is plug a speaker/mic combo into one outlet and start surfing, that would be cool.
Maybe the next big leap isn't interface, but infrastructure? Replace the '|' character with a USB/FireWire line throughout the house; replace the shell scripts with small devices.
But back to speech-to-text: I'd hate to be in an office with 300 people using voice recognition software. It's bad enough how much noise I have to block out in my cube already.
Good gravy, this guy is link happy (Score:2)
Furthermore, it wasn't PARC that introduced us to GUIs; it was Douglas C. Engelbart.
However, this IS a good question -- is the "Desktop" metaphor the height of the GUI? I've read about some folks playing around with REAL uses for 3-D on the desktop: modeling files as a sort of "billboard" shaped like a U, with your point of view at the bottom of the U. The part you are focusing on is at the bend of the U, closest to you and in highest res; the other parts are on the sides of the U and are shown receding into the distance in decreasing res. You scroll data along the U, bringing interesting bits close but still having some awareness of the other parts.
Now, why doesn't somebody make THAT into a UI?
Simple: the same reason KDE and (to a lesser extent) Gnome look like Windows - if you make a radically different desktop interface, Joe Bloggs and his family will have their heads explode when they see it. Their tiny little minds are burned like a PROM to only accept the Windows(tm) way, and anything else will cause catastrophic cranial overpressure failure.
Re:Good gravy, this guy is link happy (Score:2)
Emacs, naturally (Score:3, Insightful)
I'm serious.
No, really.
Nothing gives me hand pain as quickly as using mice, especially ones with that wheely thing. Keeping my hands in good form over the "home row" of my keyboard -- and away from the mouse -- has virtually eliminated pain from my computing life. I spend half of my waking hours in Emacs, and I have come to love (and depend on) its there's-a-key-for-everything nature.
Dump the mice. Keep Emacs.
Re:Emacs, naturally (Score:2)
But don't dare suggest taking away my mouse. I'll fight you.
This is very short sighted.
I like both CLI and GUI tools. I use both. I would not let anyone take away either one from me.
hrm, not quite Re:Emacs, naturally (Score:2)
IMHO it's not mice that cause hand funkiness, it's switching back and forth between keyboard and mouse. Mostly-mouse user interfaces are just as pleasant as mostly-keyboard, at least as long as you're not using those screwy Apple hockey puck mice.
I do have a serious ergonomic bone to pick wrt Emacs: anything that makes me use CTRL, ALT, and/or ESC frequently is going to give me hand cramps real quick because of the distance those keys are from the alphanumeric ones (I mean finger distance, not whole-arm-movement distance). Vim with appropriate settings and nedit get my vote for Things That Just Let You Type. But to each his own; the whole point of ergonomics, after all, is that "one size fits all" is a steaming load of livestock byproduct.
Raskin? (Score:2, Informative)
Why replace the current GUI paradigm? (Score:4, Interesting)
This seems a bit like asking what it would take to replace the current way of driving a car (steering wheel, gas and brake pedals, etc.) with something better. But the interface between humans and automobiles is pretty much a solved problem, and nobody seems to spend much time speculating on what a paradigm change in automobile control would be like.
There's a curious assumption which I've seen repeatedly-- namely, that a paradigm shift in human/computer interaction would be a good thing. Why, exactly? I see no reason to pursue a paradigm change for its own sake; I view it as a problem which has basically been solved for now, much as the problem of steering cars is a solved problem.
Because our wrists hurt. (Score:2)
As a carpal-tunnel sufferer, I would be ecstatic to see WIMP (or at least the P) go away. Face it, our current input devices are less than ergonomic by their very nature. A fundamental shift in computer interaction would probably be towards an interface more suited to the human than to the machine. Our current routine of sitting motionless, staring at a screen, twitching a mouse, and banging on a keyboard is as archaic (and potentially painful) as the lawn sickle.
I firmly believe that my grandkids won't be using a keyboard and mouse like I do. They also will probably never know the term "RSI", and they'll wonder why Grandpa's wrists make those funny noises.
Re:Why replace the current GUI paradigm? (Score:4, Informative)
That's an excellent question. By Kuhn's model of paradigm shifts, the shift must be preceded by a number of anomalies in the current paradigm. In command-line interfaces, the anomalies were numerous -- the need for constant relearning of old habits, the need for memorization, the ease of making errors, the computer being in control of the human rather than the other way around, etc. Eventually social factors caused the anomalies to be recognized as such, so that when a new paradigm was created, its values were widely recognized. Perfect recipe for a Kuhnian shift.
What are the anomalies today which would force a change in the paradigm? Serious question, not rhetorical. For starters, I'd say Gelernter's new project is an attempt to rectify some anomalies which have not yet attained social recognition as anomalies.
Tim
Re:Why replace the current GUI paradigm? (Score:2)
* constant relearning of old habits: Just as command-line languages can differ from one another, so can GUIs. Granted, the differences cannot be as dramatic as with languages (the reason probably being that languages are far more expressive), but they are there.
* the need for memorization: Again, this is not a question of quality, but of quantity. Granted, languages require much more learning effort than GUIs, but then again they are more expressive.
* the ease of making errors: This is not a shortcoming of command-lines per se. A command-line can easily be configured to alert the user of any unpleasant side-effects a command to be executed might have. The fact that it usually isn't is quite probably due to its user feeling comfortable and secure enough.
* the computer being in control of the human: What is that supposed to mean? I use several command-line languages every day and I do not feel myself being controlled by the computer. On the contrary: being fairly competent in using those languages I can command the computer to do things automatically which a GUI user would have to do by hand repeatedly.
All this "command-line is a thing of the past - the future belongs to GUIs" is nonsense. Command lines give you a language which is usually Turing-complete, meaning you can express the automation of arbitrary tasks. This is something a GUI just cannot do. GUIs provide ways for performing an array of functions, but only very limited means, if at all, of tying these functions together and doing something automatically. And the automation of tasks is what a computer is ultimately for, is it not?
bye
schani
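The parent's automation point fits in a few lines: a task a GUI user repeats by hand for every file becomes one composable rule. A sketch (the renaming scheme is just an example):

```python
# The command-line automation argument in miniature: one rule
# applied uniformly to every file, instead of N manual renames
# in a file manager.

def batch_rename(names, prefix):
    """Rename every file to prefix plus a zero-padded index,
    keeping each file's extension."""
    return [f"{prefix}{i:03d}{name[name.rfind('.'):]}"
            for i, name in enumerate(sorted(names), start=1)]

renamed = batch_rename(["b.jpg", "a.jpg", "c.jpg"], "vacation-")
```

The GUI equivalent is one drag-click-type cycle per file; the scripted version scales from three files to three thousand without extra effort, which is exactly the expressiveness gap being described.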
Because it can be better. Pie Menus rule! (Score:2, Insightful)
This seems a bit like asking what it would take to replace the current way of driving a car (steering wheel, gas and brake pedals, etc.) with something better. But the interface between humans and automobiles is pretty much a solved problem, and nobody seems to spend much time speculating on what a paradigm change in automobile control would be like.
Oh yeah? Two words: cruise control. It completely redefined the "car interface". How about two more: intermittent wipers. True, the inventor got shafted by Detroit [engineerguy.com] and had to fight tooth and nail for years to get his due, but he too changed the "car interface" dramatically.
There's a curious assumption which I've seen repeatedly-- namely, that a paradigm shift in human/computer interaction would be a good thing. Why, exactly?
Simple: because the quantum increase in computer access that was engendered by the WIMP interface isn't by any stretch of the imagination the endpoint of interface evolution. Want an example? Don Hopkins has been pushing his concept of Pie Menus [piemenu.com] for about 15 years now, and has implemented them everywhere he can find an amenable display system (starting with (*shudder*) X10 and including MS-Windows!). If you think you know how user interfaces should work and you haven't read any of Don's exhortations on the human-factors improvements inherent in non-linear menus, you need to get with the program.
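The human-factors win of a pie menu is that selection depends only on the direction of the pointer stroke, not its length, so items become muscle-memory flicks. The hit-test is a few lines of trigonometry (slice layout conventions here are one common choice, not Hopkins's exact design):

```python
import math

# Pie-menu slice selection: each of n slices subtends an equal
# angle around the press point. Slice 0 is centered on "up" and
# indices increase clockwise; distance from the center is ignored.

def pick_slice(dx, dy, n_slices):
    """Return the index of the slice the pointer moved into,
    given the stroke vector (dx, dy) in screen coordinates
    (y grows downward, as on most displays)."""
    angle = math.degrees(math.atan2(dx, -dy)) % 360  # 0 deg = up
    width = 360 / n_slices
    return int(((angle + width / 2) % 360) // width)

up = pick_slice(0, -10, 4)     # stroke straight up
right = pick_slice(10, 0, 4)   # stroke straight right
```

Because only the angle matters, an experienced user can select without even waiting for the menu to draw -- the "mouse-ahead" property pie menu advocates point to.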
Not just the Sims (Score:2, Insightful)
By customizing the Marking Menus a little bit, you can drive PowerAnimator with a 3-button mouse and the Control, Shift and Alt keys. Way cool.
Re:Why replace the current GUI paradigm? (Score:2)
[upenn.edu]
Visual Pipes
Gross motion v. fine motion (Score:4, Insightful)
Add a touch screen so that for window selection and other less fine manipulation you can use the large muscles of the arm and shoulder, then use the mouse for the finer control that you cannot get with a touch screen.
The more important aspect is the comfort and the breaks. No matter what mechanism is used, trauma can accumulate over time. You need time to allow the body to recover from that trauma.
You are missing the point (Score:3, Insightful)
With regards to the command line or WIMP interfaces being old, and not particularly forward-looking, you are also missing a fundamental point: a graphical "pipe" isn't innovative either. You're simply shoehorning two paradigms together -- and even worse, two totally incompatible paradigms at that. The pipe is a useful metaphor and operator for stream-oriented I/O. The WIMP is useful for (obviously) visually oriented information, and it's designed for a completely different purpose than the pipe. The WIMP is designed to allow humans to manipulate data and abstract objects in a visual manner. The pipe is designed to let the user have the computer do the same, without intervention.
If you want an innovative computing interface, worrying about streams, or visual representations of data is a waste of time. You're going to have to come up with something totally new. One good example is the use of sound to communicate the health and performance of networks or systems.
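The sound-for-system-health idea boils down to a continuous mapping from a metric onto an audible parameter, so an admin hears trouble before looking at a graph. A sketch of the mapping step (the frequency range is an arbitrary choice for the example; actually producing the tone would need an audio library):

```python
# Sonification sketch: map network load in [0, 1] onto a pitch
# between two octaves, so rising load is heard as a rising tone.

def load_to_frequency(load, low_hz=220.0, high_hz=880.0):
    """Linearly map a clamped load value to a tone frequency."""
    load = min(max(load, 0.0), 1.0)
    return low_hz + (high_hz - low_hz) * load

idle = load_to_frequency(0.0)      # quiet network: low tone
slammed = load_to_frequency(1.0)   # saturated: two octaves up
```

Real "peripheral audio" systems map many metrics at once (pitch, timbre, rhythm) so several systems can be monitored without looking at any of them.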
Re:You are missing the point (Score:2)
"it's just that there are more and more applications that REQUIRE a GUI, or that are badly designed into using one "
- Not really. More and more people are shoehorning applications into the WIMP model, whether they fit or not. That isn't the same as applications that "REQUIRE" a GUI. Badly designed applications are badly designed. If people choose to use them, what can you do? A better line of inquiry would be "why are most applications going to the WIMP model, appropriately or not?"... I'd hazard a guess that it's because, more efficient or not, it's easier for the average user to understand and operate.
"A good example is the browser syndrome, where keyboard shortcuts are a no-go in most cases -- GUI systems like Motif, GTK, or even Win32, are well thought out. But remember, the original goal of GUIs was to facilitate _data representation_, i.e.: make it more easily assimilable by the person, NOT easier input"
"In the case of applications like CAD, graphics design, etc... a pointer-style input device makes sense -- but we NEED to solve the Repetitive Strain problem "
- It's not the mouse or the GUI that is causing RSI. If it were, why are there so many ergonomic keyboards? Why aren't there any ergonomic guitars? RSI is caused by bad ergonomics, nothing more, nothing less. If an application were designed poorly enough to cause RSI, very few people would be able to use it for any length of time (which in fact makes me wonder about mouselook in FPS games, and what it is doing to a generation of computer users).
Unix is chock full of hard-to-reach keyboard symbols that have significant meanings (and that are needed to run certain commands). For all that geeks are supposed to cherish keystroke economy, there are a surprising number of conventions that require removing your hands from the home row, or using Alt/Ctrl/Escape, etc., to modify the default meaning of keystrokes.
3D Environments will lead to change (Score:5, Insightful)
The real advance that will open innovation (real innovation and not some corporation's twisted idea of it) is the beginning of a 3D workstation environment.
We already have the primitives for this kind of environment in games like Unreal, Q3A, and Black and White. Assuming that we implement a 'graphical pipe' that will work for a truly 3D application system, i.e., allow 3D applications to pass information back and forth between each other semi-effortlessly, this will ignite a new 'interface revolution' similar to what we experienced as a result of Xerox's early WIMP system and the first versions of Apple's MacOS.
Once programs and applications can truly be represented as 'objects' in a 3D environment, we'll end up with something like the 'God' interface in Black and White, where processes are represented by animated people and files are represented as other objects. Tasks best handled in 2-D such as composition, coding, or painting will still be easy to handle, but tasks best performed in 3-D such as file management, database management, and even some advanced programming tasks like linking and compiling files, will take place in a representational environment. Imagine opening up your HDD and pouring objects into it, then sorting them into containers based on type, as you would sort files into directories.
Eventually, I see us moving into something like Stephenson's 'Street' metaphor for shared environments.
Along with these advances will come new interfaces. I think that eye-tracking cameras have the biggest potential, but we keep coming back to the data-glove in one form or another. I know CADesigners who still have an old Nintendo Power Glove hacked for basic 3D manipulation tasks. We're also probably going to see a renaissance of 'body tracker' devices that track human motion via sonar or laser. Any one of these has the potential to vastly reduce RSI injuries.
The real trick in jumping from 2D to 3D is backward compatibility. All the shells I've seen that attempt 3D interaction do it badly. Even then, they fail completely when faced with most of the tasks we do on a daily basis, like writing or painting. I think we're going to have to use 'easels' or something similar. In Cowboy Bebop, Radical Edward's computing environment is shown as a multitude of 2D windows hovering around her in 3D space. This wouldn't be that hard to do, really.
Navigation will be the true challenge for any 3D application designer. It will be that itch to scratch that will spawn new, inventive input and coding ideas.
Re:3D Environments will lead to change (Score:3, Interesting)
A 3D desktop is a step in the opposite direction, placing more emphasis on the desktop itself than what people want to do.
Re:3D Environments will lead to change (Score:4, Interesting)
This would require a lot of holography and motion tracking, touch sensors, etc, but it would be the ultimate in 3d interfaces. It would avoid klunky HUDs and gloves, etc that just detract from the actual work. You could even bring up a keyboard on the desktop and use that instead of a virtual pen or pencil.
3d interfaces would be nice, but on a 2d display I think it's best to stick with a 2d interface.
Forget WIMPs (Score:3, Funny)
(WIMP = Weakly interactive massive particle; MACHO = Massive compact halo object)
Hey, how about a few more links?! (Score:2, Funny)
How about next time, you make individual letters & punctuation into hyperlinks?! THAT WOULD RULE!!11!!
Re:Hey, how about a few more links?! (Score:2)
[All links pulled from google's first page for each letter - sick, ain't it?]
Graphical Pipes (Score:4, Informative)
The nice thing about graphical pipes is the ability to easily and transparently connect several forms of data to one node. With command line pipes, you've effectively only got one input and output.
What would be needed would be graphical terminal programs (built into the OS, or at least the window manager) for connecting these things together. Oh, and a standardized way of defining input and output types -- I dunno -- would MIME work there?
I expect somebody can point us at a project that has already done this?
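The MIME suggestion above is enough to sketch the core of a graphical pipe system: nodes with multiple named ports, each tagged with a media type, where a connection is only legal when the types agree. Node names, port names, and types below are all invented for illustration:

```python
# Sketch of typed graphical pipes: unlike the shell's single
# stdin/stdout pair, each node has several named ports, and the
# connection editor can reject mismatched links up front by
# comparing MIME types.

class Node:
    def __init__(self, name, inputs, outputs):
        self.name = name
        self.inputs = inputs      # port name -> MIME type
        self.outputs = outputs    # port name -> MIME type

def connect(src, out_port, dst, in_port):
    """Create a link, but only if the port MIME types match."""
    if src.outputs[out_port] != dst.inputs[in_port]:
        raise TypeError(f"{src.name}:{out_port} produces "
                        f"{src.outputs[out_port]}, but "
                        f"{dst.name}:{in_port} expects "
                        f"{dst.inputs[in_port]}")
    return (src.name, out_port, dst.name, in_port)

camera = Node("camera", {}, {"frames": "video/raw"})
encoder = Node("encoder", {"frames": "video/raw"},
               {"stream": "video/mpeg"})
link = connect(camera, "frames", encoder, "frames")
```

This is exactly what a text pipe cannot express: the shell will happily pipe an MPEG stream into `sort`, whereas a typed port graph refuses the nonsensical connection before anything runs.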
First start with data representation (Score:2)
The point is, once you have a different representation of data a different GUI approach (using the data) has to follow. I see data representation of large streams paving the way for true "Cyberspace" GUIs, allowing the user to walk through the data, adding movement, position etc. to the user experience.
Just my 0000010 cents
A combo of input methods and predictive interfaces (Score:3, Insightful)
Gloves work well technically (at least from what I've seen of them), but are fairly inconvenient to use. I suspect that some basic speech recognition will be mainstream in the not-too-distant future, because processing power is cheap enough to handle simple speech without impacting performance. Maybe eye tracking will be used some as well, but eyes tend to wander.
So I think the biggest trend in interface design over the next few years is going to be a return to simplicity. Fewer clicks, fewer mouse movements, and greater use of predictive interfaces -- where the interface guesses what you'll do next based on experience and learning, and has it ready for you just in case. The Mac-style version of the UI (but not necessarily the Mac itself) will probably be the dominant strain overall -- Microsoft is converging in that direction now as well, as evidenced by the Luna interface in XP. I think mainstream mice will return to two buttons (from the currently popular multi-buttoned models), or maybe even one. And mouse gestures will be used more instead of clicking in some cases.
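A predictive interface in its most stripped-down form just counts which action tends to follow which, and pre-loads the most frequent successor. A minimal sketch (a real system would also weigh context, recency, and confidence):

```python
from collections import Counter, defaultdict

# Minimal "guess what you'll do next" predictor: a table of
# action -> Counter of actions that have followed it.

class Predictor:
    def __init__(self):
        self.follows = defaultdict(Counter)
        self.last = None

    def record(self, action):
        """Observe one user action, updating successor counts."""
        if self.last is not None:
            self.follows[self.last][action] += 1
        self.last = action

    def predict(self, action):
        """Most common action historically following `action`,
        or None if it has never been seen."""
        successors = self.follows.get(action)
        return successors.most_common(1)[0][0] if successors else None

p = Predictor()
for action in ["open", "edit", "save", "open", "edit", "save"]:
    p.record(action)
```

After watching a few open-edit-save cycles, the UI could pre-enable the save control whenever an edit finishes; the trick, as the comment notes, is doing this without guessing wrong so often that it annoys the user.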
Mice themselves I can see working by gyro rather than by ball or light sensor, which would allow a mouse to be held and moved within 3 dimensions (even if it only tracks in two), rather than skated over a desk in two dimensions. It's potentially a more natural hand motion. Keyboards probably won't change much due to inertia. They haven't really changed in basic layout for over a hundred years.
Of course, I may just be blowing smoke, since my intuition is no better than anyone else's. But the Slashdot crowd is very different from the "mainstream" - we are typically more forgiving of a complex interface and would trade off in favor of power over simplicity. So we may not be the best people to forecast.
pipes can be overrated (Score:2)
"What nobody noticed in all the excitement was that the computer reductionists were still busily trying to smush your minds flat, albeit on a slightly higher plane of existance. The decree, therefore, went out (I'm sure you've heard of it) that computer incantations were only allowed to perform one miracle apiece, "Do one thing and do it well" was the rallying cry. and with one stroke, shell programmers were condemned to a life of muttering and counting beads on strings, (which in these latter days have come to be known as pipelines)."
And while Perl has not made many contributions to user interface design -- it is half line noise, after all -- it does share an "everything but the kitchen sink" approach to product design.
From a programming point of view, the small program does have numerous advantages. The code base is small, it is easy to test and debug, and the "do one thing" edict tends to focus the design.
With large monolithic applications, the APIs and coding peculiarities differ from application to application -- so instead of writing one pipe-based spell checker that works with dozens of other apps, one has to write application-specific glue code for each monolithic application.
Or take multimedia frameworks. Xine, Alsaplayer, OMS, VideoLan, and Ogle each might have a different plugin architecture. The creator of an audio or video codec would have to write hundreds of lines of extra code to support each multimedia framework...
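The canonical illustration of the small-program advantage is Doug McIlroy's spell-check pipeline: every stage is a tiny single-purpose tool, and the "application" is just the pipe connecting them. A rough sketch (words.txt here is a hypothetical dictionary file, one lowercase word per line, sorted):

```shell
# Classic "do one thing well" composition: split, normalize, dedupe,
# then report words missing from the dictionary. No stage knows about
# the others; the shell pipeline is the only integration code.
spellcheck() {
  tr -cs '[:alpha:]' '\n' |      # one word per line
  tr '[:upper:]' '[:lower:]' |   # normalize case
  sort -u |                      # unique sorted words
  comm -23 - words.txt           # keep words absent from the dictionary
}
```

Compare that to writing a spell-checking plugin against each monolithic application's private API.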
Pipes can be over-rated, but they're hackable! (Score:2)
I think there are a lot of ways that today's GUI software could be made more hackable. For instance, every window and dialog box that an application ever pops up could be a fill-in-the-blanks XML file, sort of like the way CGI and JavaScript fill in the blanks in HTML. If the user wants to change the way a window looks, s/he should just be able to edit the XML file, and the software's behavior should change appropriately. A cool way to implement this in a GUI system would be to have an Edit Source command that you could do on any window, sort of like the way web browsers have View Source.
For instance, suppose you have a program that always pops up an annoying dialog box saying "Warning: By erasing this file, you will be eliminating all its data, and you won't be able to use it any more. Do you really want to do this? [No](default)[Yes]" It would be nice to be able to change the default to Yes, and maybe to go further and make it not pop up at all. And you should be able to do this with a simple edit, not a recompile.
Of course, if it's open source C code or something, you could recompile it with your changes. But there are a lot of practical problems with that: (1) it's too hard for non-programmers, (2) it's time-consuming, (3) you have to maintain your own fork, and reconcile it with new releases of the official fork.
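Concretely, the fill-in-the-blanks file for that dialog might look something like this. The element and attribute names here are invented for illustration, not any real toolkit's schema; the point is that flipping the default, or suppressing the dialog entirely, becomes an ordinary text edit:

```xml
<!-- Hypothetical dialog description exposed via "Edit Source".
     Change default to the other button, or set show="never",
     without touching the application's compiled code. -->
<dialog id="confirm-erase" show="always">
  <message>Warning: By erasing this file, you will be eliminating
           all its data. Do you really want to do this?</message>
  <button action="cancel" label="No" default="yes"/>
  <button action="erase"  label="Yes"/>
</dialog>
```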
Re:Pipes can be over-rated, but they're hackable! (Score:2)
Why do you think that pipes and scripts are obsolescing? And when you say scripts, do you mean all programs written in interpreted languages? It seems to me that scripting languages are gaining, not declining in importance. And I think there are more people using pipes and scripts than ever before. I use pipes and other unix features heavily at work. I'm able to solve arbitrary problems much faster than the Windows programmers I work with because I have a better toolbox. Their only tool for bashing data is writing a custom program in C++.
I agree with your idea about GUIs. I think that the GUI (X application) should be a generic program unto itself, like a web browser. The application would make a socket connection to the GUI and send it XML commands like "Pop a dialog box asking $question" or "Create a scrolling buffer called EVENTS". Then the GUI could be anything - running on the same host or a different one, on X11 or Windows or curses, written in any language, customized by the user or the system vendor in any way. And all the apps run through that GUI would look visually consistent.
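The wire protocol for such a generic GUI might be nothing more than declarative messages like these, sent over the socket. These tag names are invented for illustration; any schema would do, as long as every display program agreed on it:

```xml
<!-- Hypothetical commands an application sends to the generic GUI.
     The same stream could drive an X11, Windows, or curses renderer. -->
<create-buffer name="EVENTS" scrolling="true"/>
<dialog question="Really quit?">
  <choice default="true">No</choice>
  <choice>Yes</choice>
</dialog>
```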
Wake up, the answer is obvious: component models (Score:2)
Yes there has. Pipes only offer a very limited flow of information between applications and have a horrible tendency not to separate content from presentation, so you end up with nasty hacks like
Most Unix users think all graphical programs are limited in UI because all they've been exposed to are poor graphical apps (notice that the same people who complain about GUIs are fine using ncurses - perhaps it's not GUIs, but slow and unresponsive apps, that are the problem). So much of the Unix world remains stuck in the mistaken idea that the Unix philosophy (small apps that work together) is somehow magically limited to command-line apps. It isn't - the way ifconfig communicates with grep via pipes is much more limited and hackish than the way khtml communicates with konq via kparts.
Re:Wake up, the answer is obvious: component mode (Score:2)
I think the posted question more-or-less assumes the component model, because otherwise it doesn't make any sense; the question is, how do we make stitching components together as common as stitching command-line programs together?
-_Quinn
Re:Wake up, the answer is obvious: component mode (Score:2)
These connections aren't limited to scripts; they'll work in most interpreted or compiled languages with KDE bindings. But yes, one can embed, for example, a Netscape plugin viewer kpart inside khtml.
I disagree. (Score:2)
In general, they are.
If I take a windows PC, and a Linux PC, and I want to do something 'innovative'.. like have my IP address automatically posted to a web page, and have my incoming ICQ messages automatically logged to a file, as well as copied & zipped to another file, then ftp'd to a remote host...
These sorts of tasks are very difficult to automate in Windows, and very straightforward to automate in unix. That's why people think this way.
And on your point about the unix philosophy being 'mistaken'. You seem to think ifconfig should output exactly what you want... like a single ip address, for instance. The problem is.. that philosophy requires the designer of ifconfig to determine exactly what kinds of output every potential user might want, and forsake the rest.
That's not the unix way; the unix way is to make the program output as much information as you could reasonably want, and let OTHER tools sort it out, so the user can always get what they want.
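For instance, extracting just the IP address is a one-liner filter (sample ifconfig output is inlined here so the sketch stands alone; the awk program is what you'd actually point at `ifconfig eth0`):

```shell
# ifconfig prints everything it knows; a downstream tool picks out
# the one field this user happens to want.
sample='eth0  Link encap:Ethernet  HWaddr 00:11:22:33:44:55
        inet addr:192.168.1.42  Bcast:192.168.1.255  Mask:255.255.255.0'

ip=$(printf '%s\n' "$sample" |
     awk '/inet addr:/ { sub(/.*addr:/, ""); print $1 }')
echo "$ip"   # 192.168.1.42
```

A different user wanting the netmask or MAC address changes the filter, not ifconfig.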
Re:I disagree. (Score:2)
It's good to hear you have an open mind.
If I take a windows PC, and a Linux PC, and I want to do something 'innovative'.. like have my IP address automatically posted to a web page, and have my incoming ICQ messages automatically logged to a file, as well as copied & zipped to another file, then ftp'd to a remote host...
That's a matter of scripting, which can be achieved in a reasonable manner with scripting tools on either platform, though it seems you haven't kept current on the Windows side of things.
These sorts of tasks are very difficult to automate in Windows, and very straightforward to automate in unix
That isn't correct. You evidently have more Unix experience than Windows. With the exception of the ICQ parts (substitute MSN Messenger), it's entirely possible.
And on your point about the unix philosophy being 'mistaken'. You seem to think ifconfig should output exactly what you want
No I do not. You have fundamentally misunderstood the solution presented. Rather than filtering via arbitrary text strings, which may change and have no standard format, filter via fields. Ifconfig outputs structured data; I filter it by telling it the fields and records I'm looking for.
hackish? (Score:2)
Perhaps, but there's a reason why this "hackish" communication is popular and effective. Ifconfig's text output is its API (the output half, anyway). Since I already type ifconfig to learn about interfaces on the machine, I don't have to learn a new API to stick data into a shell pipeline. Khtml may have this wonderful relationship with konq, but I feel left out. I read and write ASCII, not binary. Suppose I want to try using khtml for something - can I invoke it with different arguments in a few seconds and see what it does? And if I invest enough time to understand and use the interface, how do I troubleshoot it when it breaks? If a complex shell pipeline starting with ifconfig is malfunctioning, I could start by trying a plain 'ifconfig'. How will I isolate some K-component from its friends and see what it's putting out?
I think a lot of the power of Unix is the overlap between the machine-readable and the human-readable. When you can read and write a language yourself, it's easier to write code that reads and writes that language. And it's easier to debug.
Tongue powered interface (Score:2, Funny)
Telephone conversations may be impeded, but think of all the geek-girls you could get when they find out how much exercise your tongue gets.
Of Mice And Music (Score:3, Insightful)
The mouse is a device of the Beatles era
And the steering wheel is a relic of the Jazz age. I don't plan on giving up either of them any time soon.
We need Star Trek interfaces (Score:4, Insightful)
Computers should have voice recognition for general tasks such as environment management and collaboration. The crew uses this to dim the lights, for example.
Computers should have keypad interfaces for general reading and writing. You'll notice the workstation on Picard's desk for exactly this purpose.
Computers should have remote/networked interfaces for passing sneaker-net info. This means datapads and remote voice uplinks. (Hold on captain, I'll email the info to your chair! Yeah, right.)
Computers should be able to present information in the most efficient way possible. This doesn't mean the prettiest; it means dynamic graphical/audio interfaces. These interfaces must be easy to create and use. The readouts might one moment be tracking the progress of warp plasma and the next moment display star charts of the ship's course.
All of this makes the Star Trek interface very useful and is quite possible with current technology. However, there is one thing that really makes it easy to use:
Computers must be able to automatically analyze, process, and display any information. Filters should be applied automatically. Artificial intelligence should be able to translate speech from any language to any language and add new patterns without reprogramming. Theoretical questions should be answerable given a current set of information on a subject. Information should be retrievable and storable no matter what the format, and new formats should be able to be analyzed and added to the database.
This is much more difficult due to the fact that computers today have a very monolithic nature. Pipes are a good example of dynamic linkage of programs, yet at the same time are very primitive. They require a great deal of operator understanding of the information in order to be used.
What if we could build layers of software on top of layers of software, which are then again used to build greater layers of software? This would require that we accept standard functionality at each level of the layering process and then allow people to write ever simpler code due to this great deal of layering. Why should anyone be required to rewrite a quicksort once it has been written?
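Unix already answers the quicksort question at one layer: sort(1) was written once, long ago, and reusing it is a matter of composition rather than code. A trivial illustration:

```shell
# Nobody rewrites quicksort to order four numbers; the existing layer
# is simply connected into the pipeline.
printf '%s\n' 42 7 19 3 | sort -n
# 3
# 7
# 19
# 42
```

The argument above is for extending that discipline upward, so higher-level functionality gets the same written-once treatment.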
The first and best key we have today is the Java class file format. Class files are allowed to layer on top of others by nature. The computer can pull apart the structure and investigate the usefulness of a class for a specific function. With enough layers, the computer can make extremely intelligent decisions automatically. Witness JINI [sun.com].
So here is my challenge to every programmer/newbie/manager/hardware designer in existence. Stop trying to focus on how we can optimize or tweak each individual computer function so that you gain temporary marketshare, outperform your competitor, or push your political agenda. Instead, begin pushing for hard standards in the industry. Don't worry if they have the "highest performance" or if they have the "coolest widgets". Instead worry if they have the best design, if they are technically "right" for the task at hand. Stop worrying about making languages that allow you to produce specific functionality in fewer lines of code, and worry about producing the highest level of quality.
Many people will find this a hard pill to swallow, but the end result will most certainly be worth it.
Re:We need Star Trek interfaces (Score:3, Funny)
And the intelligent chair can read my ass for authentication!
Should help with my diet/training program as well...
"OOF! Based on your cheek patterns, Captain, I am reprogramming your workout for an extra 15 minutes, and I have revoked your Krispy Kreme access on the replicator. Have a nice day."
Re:We need Star Trek interfaces (Score:2)
Very good. That is exactly what I am proposing. However, I wouldn't worry about your job. No matter how intelligent you make the tool, it is still a tool. It may cause people to lose more repetitive jobs (e.g. manufacturing), but overall I think this is good for the human race as a whole, as it allows us to spend our time exploring much more interesting venues. (Space exploration, anyone?)
Re:We need Star Trek interfaces (Score:2)
Think about it. Any alien can beam onto Voyager, and know how to use the computers and take over the ship. Among some groups of fans [nitcentral.com], this is called an "Invader Friendly OS".
Should I even bother commenting? (Score:2)
In TNG (which was why I mentioned that series in specific), the computer was locked out on several occasions. "Rascals" and "Brothers" come to mind. I'd say those are pretty secure instances. Oh, and there was an episode of TNG where they saw a tachyon anomaly and Picard remarked something to the effect of, "I thought tachyons couldn't exist in normal space time," denoting that they had no control of tachyons, much less pulses and beams. So please crawl back into your hole. Thank you.
Re:We need Star Trek interfaces (Score:5, Insightful)
Artificial intelligence should be able to translate speech from any language to any language and add new patterns without reprogramming.
Oh, okay. I'll have that for you tomorrow.
I'm really tired of people making these dramatic statements that are all but impossible to realize. There are people who have spent their careers trying to do what you mentioned in that paragraph, and they've gotten nowhere, because the task is so enormously complex. Translating from one language to another involves taking one set of nonsensical rules and ambiguities and replacing them with another. It cannot be done without a complete understanding of what is being translated. That's...hard, to make the understatement of the year.
What if we could build layers of software on top of layers of software, which are then again used to build greater layers of software? This would require that we accept standard functionality at each level of the layering process and then allow people to write ever simpler code due to this great deal of layering. Why should anyone be required to rewrite a quicksort once it has been written?
There are already so many levels of complexity in any piece of software that it's completely insane. Your idea is nothing new. Here's a partial list of the layers involved in a Java Virtual Machine (I'm sure I am leaving many things out, putting them in a bad order, etc.):
My point? Layering software is nothing new. If you want to add a layer, say so, but don't pretend it's a new concept.
So here is my challenge to every programmer/newbie/manager/hardware designer in existence. Stop trying to focus on how we can optimize or tweak each individual computer function so that you gain temporary marketshare, outperform your competitor, or push your political agenda. Instead, begin pushing for hard standards in the industry.
You know, I really hate it when people push for other people to make dramatic changes I suspect they don't understand (evidence: above) and use the word "we". It's entirely inappropriate.
Stop worrying about making languages that allow you to produce specific functionality in fewer lines of code, and worry about producing the highest level of quality
The language is one of those layers you advocated. The language itself is a piece of software to be reused. You suggested people add layers to make things possible with simpler code, and this is one way to do it. Don't knock it.
Re:We need Star Trek interfaces (Score:2)
I don't believe I stated that it would be easy, I stated it should be the end goal. Automatic computer analysis of data is both possible and desirable. However, as you state it will take a great deal of research and design to get to this point.
There are already so many levels of complexity in any piece of software that it's completely insane. Your idea is nothing new.
I don't believe that I stated that it was a new idea. In fact, it is a very old idea, one that pipes themselves were based on. The only difference is that right now most people are contemplating their navels instead of truly trying to build on top of the existing layers. I mean, how many iterations of the GUI do we have now? The GUI is just ONE tool!
My point? Layering software is nothing new. If you want to add a layer, say so, but don't pretend it's a new concept.
No, it is not. What I am encouraging is to standardize each layer. How many people do we hear from even here on
Please keep in mind that this is entirely a theoretical discussion and as such, cannot really be expected as the interface of the near future. It's a great goal, but I do realize that we have to get to point B before we get to point C.
Two kinds of layering (Score:2)
I'll try to explain. How do you find out the temperature in a city? To start with, we still don't have a standard way of coding cities. I encountered this when trying to (automatically) draw a world map showing the hosts in a certain network. The location information for each host was free form. It took a large amount of effort and special casing to get a program that could locate 90% of the hosts on the map.
On every project I've seen, we reinvent the wheel. Not the OS or GUI, a higher-level wheel. What kind of contact info do you ask from a user? How do you validate it? I think MS is aiming at that problem with Passport. And SOAP may be the first step towards getting computers to talk on a high level without the explicit intervention of programmers. In fact SOAP may be the real answer to the OP's request. The only place I've seen real reusability (in the high-level sense) is CPAN.
Re:like we need a hole in the head (Score:2, Insightful)
Many people have tried to build systems like Star Trek, and in fact you can get a lot closer than you may be aware. The problem is that in real life, that interface *sucks*!
In real life it is painfully annoying and insanely slow to tell a room/computer to do something.
We just don't realize how good a user interface a lightswitch is.
The Star Trek data pads are touch sensitive; we have had touch sensitive technology as long as the mouse; but the mouse is what we use because of its acceleration property.
I have built and analyzed [CMU,MIT, Microsoft et al] many systems designed to solve the exact problem you are talking about.
And although the technology isn't up to the task, you can evaluate it as if it were by using humans to "Wizard of Oz" the test. It turns out [in my and others' opinions] that the whole idea is dysfunctional and annoying.
Re:like we need a hole in the head (Score:2)
I won't refute that. It is merely a paradigm that people can identify with, that's why I used it.
Many people have tried to build systems like Star Trek, and in fact you can get a lot closer than you may be aware.
Lemme see, flip phones (communicators), horseshoe bridge design (used on some aircraft carriers), the biobed design was studied by the military, phasers (tazers and other non-lethal weapons), etc. Star Trek mythology has had a large impact on civilization and technology as a whole. Only a fool would deny that.
In real life it is painfully annoying and insanely slow to tell a room/computer to do something.
We just don't realize how good a user interface a lightswitch is.
The idea behind what I am proposing (and what you actually see on Trek) is that the interface is multifaceted, so that you can use whatever is convenient. Which is more convenient when you walk into a dark room carrying groceries: saying "computer: lights!" or fumbling for a light switch? That's not to say some of the current attempts aren't bad interfaces, but given enough technology and design it could probably work quite naturally.
The Star Trek data pads are touch sensitive; we have had touch sensitive technology as long as the mouse; but the mouse is what we use because of its acceleration property.
Most laptops use touchpads, I once had a remote control that had a touch LCD, most fast food restaurants use touch screen registers, etc. Having actually used touchscreen cash registers, I can personally say that not only are they a good interface, but they are FAST and generally have a lower rate of error than key or touch-key (these things are a joke) registers. Part of what makes them so nice is that the interface is specialized and reconfigures on the fly. Now that's not to say that touch screens are optimal for all situations, but they definitely are useful.
To be perfectly honest, you make some good arguments, but they are based on current emerging technologies. Natural voice recognition (much less natural speech) and touch screens are in use in many industrial areas, but as of yet are extremely specialized and need time to mature. One day the technology will be there, and then we will be able to have a much more realistic conversation about these pieces.
regarding RSI (Score:2)
I'll avoid the theoretical for a moment and just speak to this:
My web designer friends are damaged for life because of mice, and yet we persist... Where do we go from here ?
Just thought I'd mention that when I started showing symptoms of RSI I went out and bought a couple of trackballs and a couple of Wacom [wacom.com] Stylus tablets.
For design work, the Wacom [wacom.com] products spoil me rotten, and though it hurts me to say so I've had nothing but luck with the Microsoft thumb-controlled track pads [microsoft.com].
Though if you have political problems with them try the Kensington [kensington.com] (which are excellent) or Logitech [logitech.com] versions. I might try the new Logitech units myself actually.
It really changed the way I work; any desktop space I lose to the tablets is mitigated by not having to mouse around. So anyway, no more pain for me.
Re:regarding RSI (Score:2)
Like any job, you have to be aware of how it can harm you.
If you lift things all day, you know to lift with your legs, not your back, or else you end up with long-term back problems.
Same with a computer. If you rely on your wrists/hands for your income... please, learn to sit properly, type properly, use a mouse properly, and get some exercise for those wrists/hands (and I don't mean from frequent computer use).
That's all it takes to keep your hands healthy and strong.
You can do this now (Score:2)
At least in MS Office apps. Just record a macro that types the word "if", and assign a keyboard shortcut to it.
Possibly speech but needs more (Score:3, Interesting)
The reason they should also have mice and keyboards is security, so passwords etc. wouldn't have to be spoken (see the recent User Friendly strip series for a humorous take on that), and so things you're doing could be kept somewhat private. Imagine starting a long build on your machine, figuring you'd take a short break while everything compiles, and telling your computer 'open mozilla. go to hot asian chicks dot com. click hot and horny' - you might get more than a few head turns from the local cube dwellers. Unless you bookmarked it and renamed it to something like 'intranet', but the renaming process would also have to be vocalized.
It should also accept natural language commands for complicated to speak text. The main example for this is programming. If I wanted to do:
for (int i = 1; i <= 10; i++)
    cout << i << endl;
I would like to just say 'for loop. local integer i from one to ten step one begin. print i and end line. end loop', instead of having to articulate each punctuation symbol as 'for open parenthesis int i equals 1 semicolon i less than or equal ten semicolon i plus plus close parenthesis. enter. c out less than less than i less than less than end l', not to mention if I had to put spaces in there too.
The next thing we would need is an abstraction from the use of applications and the file system which would go in very well with a speech interface. The user would only be concerned with documents and data. The user would just ask the computer to start a new report on photosynthesis and the computer could ask the user what to call it and they could just respond with a natural name like 'biology 101 mid-term'. Later the user would just ask the computer to open the biology 101 mid-term without having to care if it was opened with word or starwriter or kword, etc, it would just be there and they could work on it.
The abstraction from the file system would be a natural extension of this because the user doesn't need to know where anything is because the computer takes care of it for them. The user just needs to remember documents/files as he would anything else 'I was writing that letter to Bob', 'I was working on the bio mid-term', etc. This also furthers the use of a computer as a tool, because it would actually help you get things done and be easy to use by anyone because speech is a natural interface for us, but keyboards and mice are not.
The best example I can think of having something like a touch screen is for web browsing or editing documents/preparing presentations, drawing (but maybe a graphic tablet would better for that), etc. so instead of telling the computer to open the 'Read more' link, I could point and it would open whatever I pointed to.
Microsoft is trying to do this with things like the My Documents folder and automatically naming documents with the first line of the document, but it's still somewhat kludgy because it relies on keyboard and mouse interaction. They are kind of on the right track in terms of abstraction from applications and the file system, but it still has a ways to go. This is why they have the Documents folder in the Start menu, and New Office Document and Open Office Document on the Start menu instead of the Programs menu. This is also why they have extension associations with applications, so the user can just click on a document and it will spawn the right application (or maybe they just stole it from Macs).
These ideas are nothing new; I've seen them all somewhere else before, but I just thought I'd post them here for discussion because I think they're good ideas. It should also be noted that this type of interface is for the 'average' user, not the average slashdot reader, since we all like our keyboards and CLIs.
The Single-Pointer Paradigm is what bugs me (Score:2)
Mouse? What mouse? (Score:2, Interesting)
But I think you're right; what I really want to see is a 3D device, not everybody trying to improve on the 2D paradigm. Of course, that means existing drivers and existing operating systems would need to be abandoned.
new paradigm for GUIs (Score:2)
I see where he's going with this...he wants an interface where stuff blows up! Oh, wait, it's already been done [unm.edu]
Here's where to look for cutting-edge UI research (Score:2)
http://www.acm.org/uist [acm.org]
I attended UIST 94 (in Marina del Rey), and a lot of the work was cutting edge (for that time, and some even for now!). Somebody did some presentation there that was similar to the visual pipe concept. I'll have to drag the proceedings out of storage, though...
Geek public service announcement (Score:2)
Like many of my friends, I started getting RSI a few years ago, and it got worse and worse. I found out what to do though. You can reduce the RSI (like carpal tunnel syndrome) you're getting by some very simple procedures. The R in RSI stands for repetitive and that's what you get it from - having your hand on the mouse for hours and clicking it. I have a ball mouse at home and a Microsoft one at work so I use different muscles at home and work. I also switch-hit, switching the mouse from left hand to right every half hour. This way, you can stay at the computer like normal, except you're not hurting yourself as much. Switch-hitting every half hour gives your hand half an hour of rest while you keep working. Of course, resting, hand exercises and other things are good too.
Programming guru Jamie Zawinski, the guy who wrote the original Netscape for UNIX, has a great page on RSI. Check it out, along with other pages on RSI. I really think there should be OSHA regulations at least *informing* young guys that prolonged use of mice and keyboards can damage their wrists and leave them so they can't type.
JWZ's RSI page is:
http://www.jwz.org/gruntle/wrists.html [jwz.org]
Way to go with the smart tags! (Score:2)
But what really sucks [hoover.com] is that Slashcode's [kuro5hin.org] inane . [goatse.cx] link exposer for people who are too stupid [aol.com] to look at the bottom of their browser [mozilla.org]'s window [windows.com] to see the URL that they're clicking [clearlandmines.com] on has basically ruined [cmdrtaco.net] this joke [slashdot.org].
Direct brain access (Score:2)
Others have mentioned eye motion tracking. A cute concept, but worthless for anyone who regularly moves his head while using a computer (As I do endlessly, moving back and forth between books, a phone, and several PCs at once, typing nonstop.).
Touchscreens have been around for decades, anyone familiar with one already knows why we don't use them.
To advance to a higher level, computers must become able to interpret thought. It sounds mad, but it is an imperative.
Since life, as we know, imitates Star Trek (Score:2)
Very good NLP, and buttons.
Physical buttons in TOS; touch panels subsequently. They have pointers but seem to only use them for signatures.
Hmmm... Of course, they don't seem to have taken account of the fact that you can't use an upright touch panel for more than a few minutes before your arms feel like they're going to drop off.
Re:footpedals and 3d pointers (Score:2)
They do. I have two; one at home, and one at the office. It's called a 'Trackpoint' keyboard; it's very expensive ($160, according to epinions, although someone's selling one for $70 on half.com). I got mine for $115 or so a long time ago.
I'm eagerly awaiting the gesture-sensitive keyboard from FingerWorks; of course, at that price I'll wait for a review, since I've used a membrane keyboard in the past. This isn't a membrane keyboard, but it does look faintly like one, and that makes me a little bit uncomfortable.
http://www.fingerworks.com/products_frame.html
-Billy
Re:Brainwave recognition! (Score:2)
Re:Graphical pipes (Score:3, Insightful)
It's not even so much the command line as it is the pervasiveness of ASCII text for information storage throughout the system. Almost every program currently available for MacOS or Windows would have to be changed to start storing its files as ASCII text rather than as custom binary formats.
As XML conversion continues, this may become more feasible. However, few programs will use XML as their native format for efficiency reasons. Programs will need to have options for XML input and output.
This leaves aside the fact that pipes are a programmer-only feature, which no one else wants or could possibly use.
Tim
Re:The wand (Score:2, Insightful)
Re:Telepathy (Score:2)