
The Future & History of the User Interface

An anonymous reader writes "The Mac Observer is taking a look at UI development with lots of video links to some of the latest developments in user interfaces. It also has links to some of the most interesting historical footage of UI development; here's one of the 1968 NLS demo. From the article: 'Sadly, a great many people in the computer field have a pathetic sense (or rather ignorance) of history. They are pompous and narcissistic enough to ignore the great contributions of past geniuses... It might be time to add a mandatory "History of Computers" class to the computer science curriculum so as to give new practitioners this much needed sense of history.'"
This discussion has been archived. No new comments can be posted.

  • by Cr0w T. Trollbot ( 848674 ) on Wednesday August 16, 2006 @05:54PM (#15922967)
    Where are the glorious UI innovations like Clippy and Microsoft Bob?

    Crow T. Trollbot

    • Devoured by the all-consuming CLI
    • Re:I'm outraged! (Score:5, Interesting)

      by Tackhead ( 54550 ) on Wednesday August 16, 2006 @06:26PM (#15923159)
      > Where are the glorious UI innovation like Clippy and Microsoft Bob?

      On the shitcan of history, like the unreadable [google.com] choice of default font on Slashdot, the Star Wars Galaxies NGE [joystiq.com], the changes to Yahoo's [thesanitycheck.com] stock message boards [gigaom.com], and two recent changes to Google Maps: one that has made printing [google.com] impossible (users are now reduced to taking goddamn screen captures and printing those!), and another that auto-zooms [google.com] and recenters on double-click, instead of merely re-centering the map, making navigation a time-consuming process of setting a desired zoom level, clicking to recenter, slowly loading a bunch of tiles you don't need, then zooming back out, and loading yet another set of tiles.

      In each of these cases, user feedback was nearly universally negative, and yet the "improvements" remain in place.

      If this is UI innovation for Web 2.0, give me Web 1.0 back.

    • Re:I'm outraged! (Score:3, Interesting)

      by pilgrim23 ( 716938 )
      Clippy came out of Bob, and Melinda saw that it was good. But seriously, that same time period was the dawn of the CD-ROM as a media type. Many magazines shipped with CDs, and each had a GUI. The gamer mags in particular had various custom GUIs for selecting their content. Some were based on shopping, some on an office or home (a la Bob), some on really weird stuff (anyone remember the elevator ride to hell on old PC Gamers?). It seems those were some real free-wheeling days of UI development, but n
  • Multi-touch (Score:5, Interesting)

    by identity0 ( 77976 ) on Wednesday August 16, 2006 @06:04PM (#15923032) Journal
    The multi-touch interface demo on Youtube was interesting; I saw it a while ago.

    The thing that makes it different is how casual the interaction is compared to file & image programs today. You see the guy just touch the screen and rotate, zoom, and move images around and organize them, instead of opening up dialog boxes, secondary windows, or menus to access the functionality. It's very basic stuff, but you see how powerful it is, kind of like how Google Maps is compared to the old static kind of online maps.

    It's like today's image programs are concerned with precisely doing something like zooming to exact levels (100%/50%/33%/etc.), but this program lets you do it at "whatever zoom feels right", without bothering you with the details.

    Hey, speaking of which, I wish cameraphones had a much more fluid interface for picture organization, so I could add keywords, associate pictures with people in my contacts, etc... but what do they care, as long as they make money off the ringtones :(
    • Re:Multi-touch (Score:2, Interesting)

      by srblackbird ( 569638 )
      http://tedblog.typepad.com/tedblog/2006/08/jeff_han_on_ted.html [typepad.com]

      There you go :)
      Don't forget to view the other TED talks!
    • Two mice. (Score:3, Insightful)

      by Poromenos1 ( 830658 )
      Seriously. Most of that stuff can be done with two mice. Why hasn't anyone implemented that yet? Just grab the image from the ends and drag to resize, or drag one end to rotate, or whatever. Two mice would be much more natural. Sure, you'd probably use the one in your good hand more, but for some stuff it would be great (perhaps handling 3D models?).
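For what it's worth, the geometry behind such a two-pointer resize/rotate gesture is simple: the change in the vector between the two pointers gives you the scale and rotation. A minimal sketch (the function and variable names are my own, not any toolkit's API):

```python
import math

def two_pointer_transform(p1, p2, q1, q2):
    """Given two pointer positions before (p1, p2) and after (q1, q2)
    a drag, return the implied uniform scale and rotation (radians).
    Assumes the two pointers start at distinct positions."""
    # Vector between the two pointers, before and after the drag
    vx, vy = p2[0] - p1[0], p2[1] - p1[1]
    wx, wy = q2[0] - q1[0], q2[1] - q1[1]
    scale = math.hypot(wx, wy) / math.hypot(vx, vy)
    rotation = math.atan2(wy, wx) - math.atan2(vy, vx)
    return scale, rotation

# Dragging the right-hand pointer from (2, 0) to (0, 2) around a fixed
# left-hand pointer at the origin: same distance, quarter-turn rotation.
s, r = two_pointer_transform((0, 0), (2, 0), (0, 0), (0, 2))
print(s, math.degrees(r))  # scale 1.0, rotation 90 degrees
```

The same math serves for two mice or two fingers; the hardware only has to report two independent positions.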
      • Re:Two mice. (Score:2, Interesting)

        by MadEE ( 784327 )

        Seriously. Most of that stuff can be done with two mice. Why hasn't anyone implemented that yet? Just grab the image from the ends and drag to resize, or drag one end to rotate, or whatever. Two mice would be much more natural. Sure, you'd probably use the one in your good hand more, but for some stuff it would be great (perhaps handling 3D models?).

        It probably hasn't been implemented yet because it would be quite confusing to keep up with which pointer does what. There isn't that problem with these displ

        • "It probably hasn't been implemented yet..."

          Au contraire [icculus.org], my friend.

          Works on:

          • Linux 2.4/2.6/etc through the "evdev" kernel interface.
          • MacOS X (and Darwin?) through IOKit's HID Manager API.
          • Windows XP and later through the WM_INPUT/RawInput API.
      • by Hell O'World ( 88678 ) on Thursday August 17, 2006 @12:36PM (#15927480)
        Most of that stuff can be done with two mice. Why hasn't anyone implemented that yet?
        Because all innovation in the computer industry comes from the field of pornography.
    • Re:Multi-touch (Score:3, Interesting)

      by bunions ( 970377 )
      100% agreement.

      A lot of the limitations on the UI stem from the hardware we use to talk to the computer. The multitouch stuff is awesome, and if/when we see some hardware support, you'll start to see some very, very interesting new stuff.

      As much as I hate 'media' keyboards, if they were just standardized I'd be very happy. I'd love to have several software-configurable scrollwheels and sliders. Universal out-of-the-box support for secondary/tertiary/n-ary small LCD displays would also be nice.
      • by bunions ( 970377 )
        'flamebait'? I don't get it.
      • Re:Multi-touch (Score:4, Insightful)

        by Eideewt ( 603267 ) on Thursday August 17, 2006 @04:20AM (#15925375)
        Agreement here too.

        My biggest gripe with today's computer interfaces is that attempting to funnel everything you might want to do through a mouse plus (if you're lucky) a keyboard forces you (as an interface designer) to make a difficult decision: either waste huge amounts of screen real-estate on functions you need to include, or hide them away.

        What we need are interface devices that aren't so bandwidth limited. When we want to make the computer perform an action, all we are generally able to do is locate it on the screen and say "Do." On systems with multi-button mice the situation is somewhat better. Most Firefox users are familiar with the "left click to follow a link, middle click to open it in another tab, and right click to get an [ick] context menu" idiom. Scroll wheels are another instance of a bandwidth-increasing addition to the system. Rather than clicking an arrow to scroll, we are now able to spin a wheel while pointing in the general area of the thing we want to scroll.

        Some systems put the physical controls available to even better use. The Sam text editor, its successor Acme, and basically all of the Plan 9 operating system's utilities use the mouse buttons to perform distinct and consistent actions. In Plan 9, button 1 selects, and the other two buttons, when used in conjunction with it, perform other useful actions. The exciting feature of this setup is that it moves the selection of possible actions out of the computer, where navigation is inefficient, and puts it literally under your fingertips. Rather than selecting an object on the screen and then selecting an action, or vice versa, one can simply point at it and say "do this." The ability to convey specific actions in one fell swoop is what makes command line junkies (myself included) swing the way they do. What could be more exciting than marrying that power with a GUI's flexible expression?

        An even more extreme example is the five-button keyset (for the left hand) + three-button mouse featured in the Doug Engelbart video linked from the summary. I'm not sure how his system used them, but this setup allows for eight functions using the most obvious mapping, many more than modern interfaces. Not only that, but with chording, it's possible to increase the number of possible actions to a dizzying 255, which is probably way too many to actually make use of*; Engelbart's system used typed commands as well as clicks rather than attempting to assign a meaning to each combination of button presses. One good way to cope with the number of possibilities is to assign a general function to each button, and to combine those functions to perform actions. For example, if the left mouse button selects text, the middle button pastes it, and the right mouse button cuts it, one can copy by selecting with LMB, then pressing RMB, MMB in succession. Other button actions might be "system" to trigger global system functions, "window" to do window management and "inspect" to look more closely at an item. What might happen when you press these together? Exposé, anyone? But without reaching for the keyboard.
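The chord count above is just the number of non-empty subsets of eight buttons, 2^8 - 1 = 255. A quick sketch (the button names are made up for illustration):

```python
from itertools import combinations

# Five keyset buttons for the left hand plus three mouse buttons.
buttons = ['k1', 'k2', 'k3', 'k4', 'k5', 'mb1', 'mb2', 'mb3']

# Every non-empty subset of the eight buttons is a distinct chord.
chords = [c for r in range(1, len(buttons) + 1)
          for c in combinations(buttons, r)]
print(len(chords))  # 255, i.e. 2**8 - 1
```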

        Anyone interested in user interfaces should take a look at the Sketchpad computer program for starters, which was simply amazing, and at "Alan Kay: Graphical User Interfaces" on Google Video. GUIs have a rich history that is not evident in modern interfaces.

        * This is a good thing! Even when the system allocates a set of global button presses and applications implement their set of commands, there will still be plenty left over for the rest of us to allocate as we see fit. This is the one point where I disagree with Bunions: I love that multimedia keyboards aren't standardized, because it means that programs don't depend on the buttons they provide, which in turn gives me 32 keys (on the keyboard I have) that I can bind to any action I want, without losing any functionality.
    • Re:Multi-touch (Score:2, Interesting)

      by Stalus ( 646102 )
      Honestly, I think the reason things like the multi-touch display haven't caught on is that we're too focused on having a single device for everything - or rather, the price point isn't low enough yet not to. A vast majority of what people do on their computers is generate text... word processing, e-mailing, IMing, etc. That multi-touch display is no replacement for a physical keyboard. Yeah, you can pop one up on screen, but how many people have you heard complain about a keyboard just not feeling right? While such a dis
      • While it isn't a replacement for a keyboard, the bulk of today's input is actually being done on touch-sensitive displays, just like Star Trek. Or do you not count the thousands of restaurants, movie theaters, and retail stores that have touch-sensitive displays as their primary input device, with a keyboard/number pad? Those are taking off precisely because of inherent limitations in keyboards and their limited input abilities. Keyboards won't go anywhere. People will still have to create reports, and
      • Re:Multi-touch (Score:4, Interesting)

        by Eideewt ( 603267 ) on Thursday August 17, 2006 @05:21AM (#15925482)
        I think you're selling multi-touch displays short. While I agree that a device doesn't have to do everything, it's clear from the number of people who are dissatisfied with human-computer interfaces that the things they do could be done better. You're also underestimating the amount of mousing that people do. Touchscreens are no replacement for the keyboard, but they are a good replacement for the mouse (except maybe in FPS games, a special case).

        Computers have a few things they do well: accept textual input, display data on big screens, and multi-task. From those it follows that anything graphical or textual is a good fit, and that while "one device to do everything" is a bad idea, one device that does many things that it happens to be good at is a great idea. For example, it's extremely common that a person wants to access the web while working on a project. It's better to have one device that can help you gracefully juggle everything you're trying to do than to have a typewriter, a web browser, a CD player, your clock, a "download machine" and a telegraph key (for IMing), each with its own chassis, competing for desk space. Up to a point, combining functions makes sense.

        Computers also do one thing very badly: they don't accept input from anything other than a keyboard very well. Specialized fields do have devices that work well: graphics tablets for graphic artists and MIDI keyboards for composers, for example. The driving force behind multi-touch displays is that the "interface for the rest of us", the mouse, is a difficult and inefficient thing to use. We all have ten built-in pointing devices which we can use with aplomb -- some people even manage to use their toes as a few more -- and multi-touch displays are a way to make use of those. Much as I dislike the desktop metaphor, I must invoke it here: using a mouse to interact with a computer is akin to using a single stick to push your papers around on your desk. It's just not the best way to go about it.

        I very much doubt that it would be stressful to use a multi touch display for a long time. In fact, I suspect that it would be much less stressful than making the constrained motions required by a mouse. Joints are *made* to move. It might still be a little exhausting at first.

        I agree that UIs are best when they are simple... but simple is in the eye of the beholder. To me, a UI that allows me to use my skills in a direct way is a simple one. Using my fingers to move on-screen objects = simple to me. A complex UI is one that requires me to perform actions in ways that take more effort than the direct way. The direct way is the way I would do it if I were manipulating physical objects. For example, menus (especially nested ones) and window managers that don't manage (i.e. I have to drag and position windows myself, when the window *manager* should do it) are complex to me. Above all, attempting to convey a huge variety of instructions by pushing a box around and clicking buttons on it is complex, because it adds another layer I have to work through. If I ever meet my GUI fairy godmother, I'm asking her for a laptop with a touch screen. Maybe a multi-touch screen if she looks generous.
        • For simple things, sure, a touchscreen works wonders. Kiosks and self-contained systems (such as medical equipment) would be complicated without them.

          But for any other general-purpose computer, the touchscreen lost out long ago. There were a number of touchscreen monitors for sale in the 90s, all the way to today, but they never made inroads over the mouse. The problem is two-fold:

          1. People don't like raising their arms to horizontal and manipulating a screen while seated. It is an unnatural position.
  • by posterlogo ( 943853 ) on Wednesday August 16, 2006 @06:06PM (#15923040)

    FTA: The current state-of-the-art User Interface (UI) we've been enjoying has remained largely stagnant since the 1980s. The greatest innovation that has been recently released is based on video card layering/buffering techniques like Apple's Expose. But, there is a large change coming. Rev 2 of the UI will be based on multiple gestures and more directly involve human interaction. Apple is clearly working in the area as some of the company's patent filings demonstrate. Nevertheless, these videos might make Mac (and Windows) users experience a huge case of UI envy, as a lot of UI development (in XGL in particular) makes the current Mac UI seem creaky and old fashioned.

    The guy seems to think that the stagnation of the UI is an entirely bad thing. It seems to me that when something works well, people like to stick with it. I really don't think the majority of people need multiple desktops floating around, let alone a brain interface. The only widely practical new UI technology I saw was multi-touch interactive displays (or touch screens in general, though they have been around for a long time and are still not very popular). As far as his comment that the new-fangled UIs make the Mac seem creaky and old, well, that's his opinion I guess. Some would just say the Mac UI is useful as it is. Even some of the new features in Leopard seem unnecessary to me. It's never bad to innovate; just don't automatically assume every new cool thing is practical or useful for most people.

  • by MarcoAtWork ( 28889 ) on Wednesday August 16, 2006 @06:07PM (#15923046)
    ... and I could stop working and go back to university to get another degree full time and end up in research, where is the state of the art of the UI/human-computer-interaction field? Which degree would one want to pursue? Where?

    I've always been fascinated by HCI but have yet to be able to pursue this in a work-related setting (where I tend to write backend code, basically as far away from users as you could possibly get).
    • Dude, if you win the lottery, have some fun. Buy a Ferrari, hang out on a tropical island for a while, do whatever you want. You don't have to go back into full-time education if you're rich.
    • We have some human-machine interaction specialists where I work. I know their educational backgrounds are varied, but I'm not sure what the basic requirements are.

      We make military aircraft, so they are concerned not only with the computer interaction in the cockpit, but also with the positions, labels, and feel of switches, knobs, controls, instruments, ejection buttons, etc. For some reason quick and reliable person-machine interaction is considered important when people are shooting at you. (Haven't we

      • I definitely am not. Often, when I use various cell phones, remotes, appliances, etc., I think of ways things could be more user friendly. It would definitely be a lot of fun to come up with ways to make things work better in all sorts of human-machine interaction, whether it's a GUI or a switch.

        Still, it seems unlikely to find a job in the field without some sort of accreditation from somewhere - that is, unless you are in the right place at the right time.
    • Here's a listing of Human Factors (and associated) graduate programs, which is published by the HFES. http://www.hfes.org/web/Students/grad_programs.html [hfes.org]

    • Personally, I'd be opening a design atelier looking at all aspects of design.

      If you haven't already read it, look up a copy of Donald Norman's [wikipedia.org] "The Design of Everyday Things" (formerly known as "The Psychology of Everyday Things"). It's a real eye opener about the visual and contextual clues we use to determine how an object 'should' work, and how often these clues lead us astray.

  • by Peaker ( 72084 ) <gnupeakerNO@SPAMyahoo.com> on Wednesday August 16, 2006 @06:09PM (#15923057) Homepage
    Heh, the issue of User Interfaces always makes me laugh at the incompetence of seemingly the entire world when it comes to User Interfaces (or the whole computing world in general).

    Some obvious trivial faults:
    1. The whole overlapping window model is bankrupt. You want to minimize the amount of information, especially redundant information, that the user has to input, and output as much information in an accessible way. The overlapping window model does the opposite: it requires that you tile your windows manually (or through tedious, inaccessible menus) rather than specifying which windows you want to see. If you don't do that (and due to the required effort, most don't) then you don't see all of the information you want even though most of the screen is wasted space!
      For reference, just look at your screen now, and watch how much of it is covered by empty "gray areas". When you open a new window, does it hide gray areas, or real information?
      This is even more absurd when there are just a couple of windows hiding each other while the entire screen is free space! The computer expects YOU to work for IT and move these windows out of each other's way.
      This phenomenon is also felt in list boxes, where you are expected to adjust the column widths manually so they're not too short or too long, even when there is an optimal adjustment readily available. You again have to work for the computer, pressing Ctrl+Plus to set it up. Most people don't even know about Ctrl+Plus in column list boxes.
      Some programs make it even worse, and don't let you resize their windows when the entire screen is free, and you have to scroll through their data in a little window.
    2. Internationalization and shortcut keys.
      What's so fascinating about this example - is how common it is across platforms, programs, operating systems.
      The feature is called "shortcut keys" and yet everyone is implementing it as "shortcut symbols".
      This is terrible - when you switch between languages, all shortcut keys break!
    3. Multitude of widgets, with overlapping functionalities. This is just silly and confusing to beginners. We need fewer widgets, not more.
    4. "Jumpyness". Today's GUIs all "jump". What I mean by that is that they don't smoothly switch from one state to the next, but rather do so in a single screen refresh. The human mind doesn't read that very well. For example, scrolling down "jumps" down a pageful instead of scrolling down a pageful in a smooth motion.
      The fact that fixing this would require modifications to all existing GUI programs is evidence of the poor architecture of GUI software.
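The smooth alternative asked for in point 4 amounts to tweening between scroll states instead of jumping. A minimal sketch, with an arbitrary smoothstep easing and frame count of my own choosing:

```python
def smooth_scroll_offsets(start, end, frames=12):
    """Offsets for an eased scroll from `start` to `end` over `frames`
    screen refreshes, using a smoothstep curve (slow-fast-slow)
    instead of a single jump straight to `end`."""
    offsets = []
    for i in range(1, frames + 1):
        t = i / frames
        eased = t * t * (3 - 2 * t)  # smoothstep easing
        offsets.append(start + (end - start) * eased)
    return offsets

# Scrolling down one 600-pixel pageful still lands exactly on the
# target, but passes through intermediate positions the eye can track.
steps = smooth_scroll_offsets(0, 600)
print(steps[0], steps[-1])
```

A toolkit that owned the animation layer could apply this to every application's scrolling at once, which is the architectural point the parent is making.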

    There are many more trivial issues to fix. Until they fix these, I find it very funny to talk about future directions for the User Interface. We haven't even gotten the basics right yet!
    • Smooth scrolling is one of many fixes needed for 4. In general I agree with your notions although I don't think that the overlapping window model is bankrupt.
    • by mcrbids ( 148650 )
      Your points are interesting. But they have already been largely mitigated!

      1) Your point about overlapping windows is interesting. But KDE already addresses that. When I open a new window in KDE, it opens the new window over the area of greatest unused space. Overlapping continues, but as unobtrusively as possible. Contrast that with Windows' means of opening windows about 1/4" below and to the right of the previously opened window, which almost assuredly wastes as much screen real estate as is possible.

      2) C
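The placement behavior described in 1) -- opening new windows over the area of greatest unused space -- can be sketched as a least-overlap search. This is an illustration of the idea, not KDE's actual algorithm; the function names, screen size, and grid step are all my own choices:

```python
def overlap_area(a, b):
    """Intersection area of two rectangles given as (x, y, w, h)."""
    dx = min(a[0] + a[2], b[0] + b[2]) - max(a[0], b[0])
    dy = min(a[1] + a[3], b[1] + b[3]) - max(a[1], b[1])
    return max(dx, 0) * max(dy, 0)

def place_window(size, existing, screen=(1280, 1024), step=32):
    """Pick the candidate position whose total overlap with existing
    windows is smallest -- roughly the "greatest unused space" idea."""
    w, h = size
    best, best_cost = (0, 0), float('inf')
    for x in range(0, screen[0] - w + 1, step):
        for y in range(0, screen[1] - h + 1, step):
            cost = sum(overlap_area((x, y, w, h), r) for r in existing)
            if cost < best_cost:
                best, best_cost = (x, y), cost
    return best

# With one window in the top-left corner, a new 400x300 window should
# land somewhere that doesn't overlap it at all.
pos = place_window((400, 300), [(0, 0, 500, 400)])
print(pos)
```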
      • "When you flip pages in a book, you go from one page straight to the next. It doesn't "slide", you flip it and that's it."

        And that flip takes zero time? Last time I read a book, I could actually see the page being turned.

        The problem with this sort of animation is that they're generally not fast enough to be unobtrusive. I don't need a 2-second animation of a page turn, I need a half-second one.
        • by TheLink ( 130905 )
          Uh just curious, why do you need a 0.5 second animation for page turning?

          If I'm reading a book, I'm more interested in the content I'm reading rather than the animations of page turning.

          It's like those TV/interstitial ads. The good ones might be cool the first time you see them, but after a few times, don't you want to be able to skip them and get on with whatever you were doing?

          Well, I guess I must be the strange one around.
          • by bunions ( 970377 )
            > Uh just curious, why do you need a 0.5 second animation for page turning?

            as visual feedback to confirm you actually pressed the 'next page' button. The physical world almost always gives us feedback when we perform an action - in the case of a page turning, a visual, audible and tactile one. Also an olfactory one if the book is musty. Your brain is used to that, so computers should try to reinforce that.

            >If I'm reading a book, I'm more interested in the content
            > I'm reading rather than the ani
        • The problem with this sort of animation is that they're generally not fast enough to be unobtrusive. I don't need a 2-second animation of a page turn, I need a half-second one.

          i don't think it necessarily has to do with the speed or existence of a "page turn" animation. the only efforts i've seen to simulate a page turn in digital form seem like useless eye candy. the content changing should be enough of a visual clue. the main benefit of physical pages is that you have an immediate reference point for whe

      • 4) "Jumpyness" is more natural!

        I agree 100%. Jumpiness makes sense. Computers, and the programs and data that inhabit them, are not physical objects and do not behave like physical objects. UI gurus have been declaring for years that normal human beings cannot accept this and will never understand anything that was not familiar to humans on the savannahs of Africa 100,000 years ago.

        Pointing out that a UI behavior has no counterpart in the "real" world says nothing about whether it will be difficult fo

    • If you don't do that (and due to the required effort, most don't) then you don't see all of the information you want even though most of the screen is wasted space!.

      Why make the assumption that you always want to see all of the information in all of the windows you have open? Just because a window is visible doesn't mean it is relevant to the current (and ever-changing) task.

      Right now I'm typing a comment on Slashdot, but my mail client is open behind this window because I was reading email just a few minu
    • by CaptainCarrot ( 84625 ) on Wednesday August 16, 2006 @06:38PM (#15923208)

      I disagree.

      1. Overlapping windows are used to make more information available to the user than can be displayed on the available screen real estate. The RL metaphor is a collection of papers on a desk. You can't see every paper all at once, but you bring to the top of the pile those which you need. You do this for your own benefit, based on the needs of the moment, not for that of the desk -- or the computer. The whole point is that the space isn't tiled. I don't like working that way personally, and I suspect the reason we've moved away from that model is because most people don't. Remember the early Windows versions?

        You asked how much of the screen was empty space and therefore wasted? Very little of it, most likely. Very little of mine is as I type. Space with no content in it is not necessarily wasted. In fact, it most likely isn't. Space is crucial to how our brains organize what we see. If every square inch of space on the screen were being used, we'd see it as a jumbled mess. The best and most eye-pleasing presentation designs very carefully balance empty space against that occupied by content. Take, for example, your original post against my reply. See how I create spaces between my paragraphs with properly structured P tags? See how much more readable that is?

        I agree that some programs are badly designed and make poor use of the model. That doesn't mean the model itself is broken.

        Yes, it would be nice for those very particular about their screen arrangements if they could save state between sessions and recover it immediately when they start back up again. This is an implementation issue -- remembering, of course, that most people prefer not to tile.

      2. Yes they're shortcut symbols really, but people have a hard time remembering arbitrary symbols. That's why we employ mnemonics, which naturally relate to the language of the interface. For example, it's easy to remember the shortcut to open a file in most word processors (ctrl-O) because "O" is the first letter in the word "open". It's not reasonable to expect such mnemonics, input through an alphanumeric keyboard, to work any other way -- unless you can think of a better one where alphanumeric input is both easy to remember and language-independent. Good luck.
      3. This is not an inherent fault in the model, but is a failure across an industry to standardize. In my own GUI design work in Motif, this is why I use the default behavior of the default widget set as much as possible. The users most often know exactly what to expect then.
      4. I remember when some word processors and the like included a "smooth scrolling" option. No one used it. It turned out that most people wanted the screen to scroll quickly instead.

      • Ironic that I killed one of my own points in part through clicking on the wrong button. As a point of design, the "Preview" and "Submit" buttons on /. are rather too close together.
      • by grumbel ( 592662 )

        I don't like working that way personally, and I suspect the reason we've moved away from that model is because most people don't. Remember the early Windows versions?

        I think the trouble with tiling is that it simply doesn't work that well as a generic concept; there are simply too many applications around that are just too small to make sense in a tiled workspace, i.e. a small calculator [toastytech.com] should overlap, not tile, since otherwise it can't be seen in full and wastes a lot of screen space. However in Blender or Emac

        • I think the best control would be just a nice analog joystick. The further you tilt it, the faster it scrolls. Let it snap to the default position and it instantly stops.

          Thought about this with fast forwarding on a DVR (which always seems to be the wrong speed), but would work just as well on a keyboard or mouse.

          As for smooth scrolling in current tech, the best I've found is just middle-clicking in a browser/some documents and pulling the cursor away from the icon it creates. The further away, the faster it scrolls. It'
    • Heh, the issue of User Interfaces always makes me laugh at the incompetence of seemingly the entire world when it comes to User Interfaces (or the whole computing world in general).

      So it's you against The World on the subject of how a UI ought to work. Hmm, I wonder who is more likely to be right.

      1. The computer has no way of knowing, other than via user input, which information is important, and needs to be made viewable. I actually prefer to manually set the size and placement of my windows exactly the
    • I keep all my programs maximized and switch between them sometimes from the taskbar and sometimes with a good old-fashioned Alt-Tab.

      My keyboard layout includes a Compose key for typing weird characters, and in my main program it's able to remote-control my mp3 player app: play, back, next, pause, stop, in that order.

      And my taskbar features a single-click run application button, current outdoor temperature, date and time.

      btw, I don't find overlapping windows intuitive at all except for file manag
    • I have to agree with the window model sucking. I tend to maximize all my windows anyway and alt-tab between them. I only window them when I need to copy and paste something where the copy and paste feature doesn't work for some reason.

      Especially annoying are those apps that spew multiple toolbars and palettes in several child windows which constantly get in the way (as in most paint programs).
    • 1-3 yeah. I usually have everything maximized - but some people seem to like to have stuff paned. I'd rather have shortcut keys that allow me to rapidly switch amongst the past N "windows"[1]

      But 4? I prefer jumpiness. When I want something to happen, I want the computer to do it NOW, not do some silly animation before it does it.

      If I want a smooth scroll, I'd be holding down the scrollbar (great invention that one) and dragging it. But if I want a "page down", I want it to go a page down NOW! If I want to s
  • by Anonymous Coward on Wednesday August 16, 2006 @06:10PM (#15923064)
    In the long term, we'll be communicating with computers the same way we communicate with our pets, kids, and coworkers - with a combination of body language, voice, gestures, etc.

    In the short term, we'll see Longhorn slowly and sloppily copy whatever Apple's doing; we'll see KDE and Gnome each copying the bad parts of what the other is doing; and we'll see all real computer users using emacs/vi/pine/xterm/screen like they always did.
    • Speak for yourself - I speak to my coworkers and kids with beatings and profanity.

      Come to think of it, I can do that today! [slashdot.org]

      The Future Is Now!
    • In the long term, we'll be communicating with computers the same way we communicate with our pets, kids, and coworkers - with a combination of body language, voice, gestures, etc.

      I am not so sure about that. For some things, of course, voice and gestures are great, but the computer isn't just a dog or a coworker, it's also a tool, and I neither talk nor gesticulate to my screwdriver; instead I pick it up and get the job done with it myself, since that's simply a lot faster than trying to explain what and how so

      • by geekoid ( 135745 )
        if with the wave of your hand the screwdriver would leap up and remove that pesky screw on its own, wouldn't you want to do that?
        • if with the wave of your hand the screwdriver would leap up and remove that pesky screw on its own, wouldn't you want to do that?

          How do I tell if the screw needs tightening at all? How does the computer figure out if I want it to go in or out? How does the computer tell me how tight it is? How do I pick a screw? If I just pick up that screwdriver and start working, all that information is easily available. Of course an automatic screwdriver might work better than a manual one and even with mouse and keyboard

      • >Since zooming requires yet another axis, mouse rotation might be used for that
        I'd be perfectly happy to replace my current functionality on the mousewheel for zoom.
        Virtual screens are available through other motions, anyway.

        Zoom the desktop when not pointing at a program, and a key to hold down to make it all zoom
        while in a program.
  • by scorp1us ( 235526 ) on Wednesday August 16, 2006 @06:13PM (#15923089) Journal
    Was a memory storage system that consisted of liquid mercury. A speaker at one end would cause waves to travel the length of the vat of mercury. At the other end, it was measured by an inducer (microphone), re-amplified, then sent back to the speaker. If you wanted to change a bit, you had to wait for it to come around and short it to ground, or introduce a tone. Your amount of memory was limited by the length of your tube and the viscosity of the mercury.
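As a rough illustration of how such a delay line behaves (a toy model of the recirculation, not a simulation of the acoustics): bits circulate in order, and writing one means waiting for it to come around to the pickup end.

```python
from collections import deque

class DelayLine:
    """Toy model of mercury delay-line memory.

    Bits circulate as acoustic pulses: each tick, the pulse arriving
    at the pickup end is re-amplified and fed back into the speaker
    end. Only the bit currently at the pickup can be changed, so a
    write must wait for its target to come around.
    """

    def __init__(self, bits):
        self.line = deque(bits)

    def tick(self):
        # One pulse reaches the pickup and is recirculated.
        bit = self.line.popleft()
        self.line.append(bit)
        return bit

    def write(self, position, value):
        # Wait `position` ticks for the target bit to arrive, then
        # replace it (short it to ground / introduce a tone).
        for _ in range(position):
            self.tick()
        self.line.popleft()
        self.line.appendleft(value)
        return position  # ticks of latency spent waiting
```

The returned latency is the point of the comment: access time depends on where the bit happens to be in the tube.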
  • by quokkapox ( 847798 ) <quokkapox@gmail.com> on Wednesday August 16, 2006 @06:14PM (#15923093)

    At least not to common consumer devices. I cannot even count the number of remote controls, microwaves, cellphones, dishwashers, ATMs, and other devices which seem to be designed completely without thought for the human who will need to use them.

    Remote controls - ever heard of making the buttons distinguishable by FEEL, so I don't have to look down to tell whether I'm going to change the volume or accidentally change the channel or stop recording?

    Microwaves - make the buttons we use all the time bigger and obvious. I can't use my microwave oven in near dark because the stupid thing's start button is indistinguishable from the power level button. That's just dumb. I don't need two different buttons that say "Fresh vegetable" and "Frozen vegetable" which I never use; and I have to babysit the popcorn anyway, so I don't need a "popcorn" button hardcoded for some random time limit. A microwave should have a keypad for entering time and bigger buttons labeled +1minute, +10seconds, ON, and OFF. That's all 99% of people use anyway.

    The people who design interfaces should be made to use them for long enough so that they work out at least the most obvious design flaws.

    I keep putting off buying a new cellphone because I know I will have to learn a new interface even to set the freaking alarm clock and it will probably take six menu choices to do it.

    • I have two more:

      1. The gas pump where, once you pick up the nozzle, the prices disappear and it asks you to "Select Product."

      2. The ATM where the button you just used to press "Withdrawal" will, on the next screen, withdraw $200. Shouldn't that go to the smallest amount, or a "Go Back" button?
    • "Remote controls - ever heard of making the buttons distinguishable by FEEL, s"
      I haven't seen a remote control where the buttons weren't easily navigable by feel in years.

      On my microwave the 1-minute and start buttons are distinguishable from each other. Not in the dark, but who microwaves in the dark?

      "The people who design interfaces should be made to use them for long enough so that they work out at least the most obvious design flaws."
      The more you use it, the more intuitive it starts to seem.
      They could use
    • At least not to common consumer devices. I cannot even count the number of remote controls, microwaves, cellphones, dishwashers, ATMs, and other devices which are seem to be designed completely without thought for the human who will need to use them.

      One doesn't even need to look at all those high-tech products to find bad user interface design; even something as simple as a door can be done extremely badly. Ever tried to push one that you needed to pull, thanks to the fact that both sides actually look the sam

    • Analog knobs rock. Heavily computerised interfaces outside actual
      computers can be very annoying. Get a nice, cheap Korean microwave :)
    • Sanyo makes a consumer level microwave/grill that is currently available at Target for about $100. The interesting UI feature is the stripped down user interface that has an analog knob to set the (digital) time.
    • Actually, microwaves can be way better than that. My microwave has a really cool interface. Rather than a keypad it has a wheel that you rotate to indicate the time. Time increases in 10-second increments. The wheel is speed- and acceleration-sensitive. When the microwave is running, simply spin the wheel to add or subtract time. Popcorn not quite done? Spin the wheel and add 30 seconds. Want to use it in the dark? It's a giant wheel with two buttons next to it, "start" and "stop".

      It's so simple that a
      • by TheLink ( 130905 )
        "The wheel is speed and acceleration sensitive."

        How'd a completely blind person cope with that one? Say they want 2 minutes, if they spin it faster than normal they might get 3 minutes instead.

        I think same angle = same time would be better for blind people.
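The speed-sensitive wheel described above, and the linear same-angle-same-time alternative the reply suggests for blind users, might be sketched like this; the detent step, rate threshold, and boost factor are made-up values, not from any real microwave:

```python
def wheel_to_seconds(clicks, dt, step=10, fast_rate=20.0, boost=4.0):
    """Convert wheel detents to cook-time seconds.

    clicks: signed detent count since the last poll
    dt: seconds elapsed since the last poll
    Slow spins add `step` seconds per detent; at `fast_rate`
    detents/sec or more, each detent counts `boost` times as much.
    """
    rate = abs(clicks) / dt if dt > 0 else 0.0
    scale = boost if rate >= fast_rate else 1.0
    return clicks * step * scale

# The linear alternative (same angle = same time, predictable by
# feel alone) simply drops the speed term:
def wheel_to_seconds_linear(clicks, step=10):
    return clicks * step
```

The trade-off is exactly the one in the thread: the boosted version is faster for sighted users dialing in long times, the linear one is the only version a user can operate without feedback.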
    • by evilviper ( 135110 ) on Wednesday August 16, 2006 @08:43PM (#15923800) Journal
      I can't use my microwave oven in near dark because the stupid thing's start button is indistinguishable from the power level button.

      Better question: WHY THE HELL ARE MICROWAVES DIGITAL? What part of "close the door and turn the dial" was so hard for people to understand, and how did typing in digits help? Microwaves aren't phones.

      Was it the extra precision? People need to be sure they are microwaving their sandwich for exactly 2 minutes and 45 seconds, and ABSOLUTELY NOT 2 minutes and 46 seconds?

      Are there a lot of people out there with only one finger, who find it faster and easier to type in 1-0-0-0-Start rather than turning the dial a quarter turn to "10m"?

      What in the world makes people believe replacing analog with digital is the answer to absolutely everything?
  • Intuitiveness (Score:4, Interesting)

    by FlyByPC ( 841016 ) on Wednesday August 16, 2006 @06:19PM (#15923120) Homepage
    Amazing how naturally he uses the mouse -- back in 1968!
  • by cadience ( 770683 ) on Wednesday August 16, 2006 @06:27PM (#15923162)
    I graduated in 2003 with a BS in Computer Engineering and a BS in Software Engineering.

    During my studies I proposed multiple times to do an independent study of the history of the computer field to count for 3 credits of my general electives. I was denied every time, even with support from the head of the Engineering department. The liberal arts department continually stated that the purpose of the electives is to gain breadth of knowledge. I finally took a (very interesting) class on Greek mythology.

    I agree with the premise of increasing knowledge, but not the implementation. The college should encourage independent research when a student can blend his primary interests to meet a "credit based requirement".

    What are your thoughts?

    Understanding the history of your profession should be as important as understanding your culture and your history. Your profession will become a part of who you are as well! Without context, you're clueless.
    • but it shouldn't be a general elective.

      They were right.

      But there should be a history of technical advances in the computer curriculum.
      Not a study of dates but a study of what was done and why, as well as a chance for students to use the older UIs.
  • by cpu_fusion ( 705735 ) on Wednesday August 16, 2006 @06:35PM (#15923198)
    That the biggest UI change yet to come has to do with moving from a single-user desktop metaphor to a collaborative virtual space that leverages a lifetime of perception of the real world. When computers evolve into a more transparent role in our lives, layering this digital world on our physical world will be next. It's coming sooner than we think; will we survive that long, though?
  • I don't see how the UI issues matter. I have work to do. If the UI does XYZ and I'm doing ABC, the UI is of no consequence. We regularly have idiotic flamewars here between people glued to CLI, and the zealots of GUI. Here it is, 2006, and I'm kicking builds in CLI using UNIX commands. I remember when the Mac came out all these CLI shitheads were barking "the Mac is a toy! REAL MEN use CLI and DOS!" Whatever. DOS bit the dust, and CLI is marginalised, but it hasn't disappeared because in specific ways, it's
  • by poot_rootbeer ( 188613 ) on Wednesday August 16, 2006 @06:42PM (#15923235)
    "It might be time to add a mandatory "History of Computers" class to the computer science curriculum so as to give new practitioners this much needed sense of history.'"

    Oh please no.

    I had a mandatory Computers class in 6th grade (and again in 7th and 8th grade, with the exact same lesson plan). Half of this class was rudimentary BASIC programming on a room full of TRS-80s, the ones with the integrated green monochrome displays--and this was circa 1990.

    The other half of the class was a purported history of computing, the key facts of which I can still recite today (learning the same thing thrice causes it to stick). These facts are:

    - Charles Babbage made a mechanical computer.
    - Then there were the UNIVAC and the ENIAC.
    - The term "bug" is due to an actual bug Ada Lovelace found inside a computer.
    - There are four kinds of computer: supercomputer, mainframe, minicomputer, and microcomputer.
    - RAM stands for "random access memory"; ROM stands for "read only memory".
    - Cray supercomputers are cool-looking.
    - 10 PRINT "FART!!! "
    - 20 GOTO 10
    - RUN

  • Several years ago I had the delightful privilege of talking about interface design with Jef Raskin (who designed many aspects of the Macintosh UI).

    He pointed out that "the only intuitive user interface is a nipple."

    Several days ago my wife and I had a new son, so of course I watched them learn (together) how to breastfeed. It was not obvious to either one of them how to make it work -- they had to explore and figure it out together.

    It appears that Jef was wrong: even nipples are not an intuitive user interface.
  • God. That video isn't just humbling, it's damn near humiliating. Compare it to the nextstep 3 [google.de] demo someone else posted I think today. It isn't that nextstep isn't better - in many places it is, by far. But only in detail, and only some ways. There's stuff in NLS I still want. Anybody else seen folding that good? Where? I want it. What the hell have we been doing for forty years?
  • I'm tired how all GUI development is now centered around the GPU, and more eye-candy.

    The features from OS X that people find useful, like a visual cue as to where a window is being iconified to, can be and have been done in much faster/simpler ways. For as long as I can remember, Afterstep has drawn an outline of windows being iconified, quickly showing the outline spiraling down to, and shrinking into, the icon.

    Why is the rest of the GUI stagnating? Keyboard shortcuts are extremely primitive at best
  • by 3seas ( 184403 ) on Wednesday August 16, 2006 @08:05PM (#15923615) Homepage Journal
    Take these fancy UIs and use them to control a calculator, and then decide if they're right for the job.

    "Right for the Job" is the key phrase.

    There are three primary UIs:

    the command line (CLI)

    the Graphical User Interface (GUI)

    and the side door port used to tie functionality together, known by many different names, but in essence an Inter-Process Communication port (IPC)

    Together they are like the primary colors of light or paint; take away one and you greatly limit what the user can do for themselves.

    But if they are standardized with the recognition of abstraction physics (in essence what a computer implements), then the user would be able to create specifically what they need for the job they do by understanding and applying abstraction physics. The analogy would be mathematics and the Hindu-Arabic decimal system in comparison to the more limited Roman numeral system.

    There are all sorts of user interfaces that can be created, but they are all made up of some combination of the primary three, perhaps lower down on the abstraction ladder but nonetheless there.

    The reason why this is unavoidable is simply due to the nature of programming.

    Programming is the act of automating some complexity, typically made up of earlier-created automations (machine language - 0's and 1's - is the first level of abstraction; everything above it is an automation). The purpose of automating some complexity is to create an easier-to-use and reusable interface for that complexity. And we all build upon what those before us have created. It's a uniquely human characteristic, which makes it our natural right and duty to apply.

    What so-called computer science is guilty of is distraction by the money carrot, starting with IBM and wartime code cracking paid for by the government/taxpayers.

    This distraction has sidetracked genuine computer science, or abstraction physics as it would far more accurately be described.

    Abstraction physics is the creation and manipulation of abstractions, as mathematics is the creation and manipulation of numbers, and as physics and chemistry are the creation and manipulation of elements existing in physical reality.

    With the three primary colors of paint you can paint anything you want, but you cannot call a painting "the painting" any more than you can call a mathematical result mathematics. Nor can you call some interface built upon the primary UIs the silver bullet of UIs.

    All this will become much more clear, common, and even second nature once we all get past the foolish, fraudulent idea that software is patentable.

    A Roman-numeral accountant, defending his vested interest in math with Roman numerals, argued that only a fool would think nothing could have value (re: the zero placeholder in the Hindu-Arabic decimal system).
  • by Animats ( 122034 ) on Wednesday August 16, 2006 @10:16PM (#15924223) Homepage

    Apple, in its early days, had a good sense of what was important in a user interface, and that was expressed in the "Apple Human Interface Guidelines". Much of that knowledge has been lost.

    One of the original Apple rules was "You should never have to tell the computer something it already knows". Consistently applying this rule requires a clear separation between information about the host environment and individual user preferences, something most programs don't do well. Apple was reasonably faithful to that rule in their early days, but over time, got sloppy. Microsoft never did as well, and it was an alien concept in the UNIX world.

    It's common, but wrong, to bind environment decisions at program install time, which means that a change in the environment breaks applications in mysterious ways. The whole concept of "installers" is flawed, when you think about it. You should just put the application somewhere, and when launched, it adapts to the environment, hopefully not taking very long if nothing has changed. That was the original MacOS concept.

    Much of the trouble comes from failing to distinguish between primary and derived sources for information. "This program understands .odf format" is primary information, and should be permanently associated with the program itself and readable by other programs. ".odf documents can be opened with any of the following programs" is derived information, and should be cached and invalidated based on the primary information. "I would prefer to open .odf documents with OpenOffice" is a user preference. None of the mainstream operating systems quite get this right. That kind of thing is the frontier in user interfaces, not eye candy.
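A minimal sketch of the primary/derived/preference split this comment proposes (class and method names are hypothetical, not any real OS API): the extension-to-programs map is derived, so it is invalidated whenever a program's primary declaration changes, and the user preference only overrides when it is still valid.

```python
class FileAssociations:
    """Primary info: each program declares which extensions it
    understands. Derived info: the extension -> programs map, a
    cache rebuilt lazily from the primary info. User preference:
    an override consulted only if it still names a valid handler."""

    def __init__(self):
        self.programs = {}      # primary: program name -> set of extensions
        self.preferences = {}   # user: extension -> preferred program
        self._by_ext = None     # derived cache (None = stale)

    def register(self, name, extensions):
        self.programs[name] = set(extensions)
        self._by_ext = None     # primary info changed: invalidate cache

    def handlers(self, ext):
        if self._by_ext is None:  # lazily rebuild the derived map
            self._by_ext = {}
            for name, exts in self.programs.items():
                for e in exts:
                    self._by_ext.setdefault(e, set()).add(name)
        return self._by_ext.get(ext, set())

    def open_with(self, ext):
        pref = self.preferences.get(ext)
        if pref in self.handlers(ext):
            return pref
        # Arbitrary deterministic default when no valid preference.
        return min(self.handlers(ext), default=None)
```

Because the cache is rebuilt from primary information, moving or removing a program never leaves a stale association behind, which is exactly the failure mode the comment attributes to install-time binding.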

  • I've looked at a few of these gui's already, and have a friend with a really good XGL setup. Thus far I haven't bothered because most of my computer interaction/programming takes place in a bash console. Yes, I am that dull, *and* I like Vim, oh dear.

    Will there be anything that can do better than bash by adding extra graphical whizziness? Thus far all I've seen is that bash can be wobbled, which isn't an improvement. GUI improvements are nice to see, mind. When they're aimed at aiding physically disabled pe
