DaChesserCat's Journal: User Interfaces - Part I

Pop quiz: what do Mac OS X, Windows XP, Linux with X-Win and Palm OS all have in common? They are all graphical user interfaces. They use the familiar system of windows, icons, text input and graphical pointers to navigate and interact with programs. Beyond that, they are very different beasts.

Yeah, I know that Mac OS X has BSD running on the back-end, which is quite a bit like Linux. But the graphical user interface software running on top of those systems tends to be very different. And, while libraries have been developed which allow someone to write apps which port easily between Mac, Windows and Linux, they still run into difficulties getting the Palm to run the same software. Quite simply, the Palm has a much smaller, lower-resolution display, and usually doesn't have direct text input; it typically requires tapping an on-screen keyboard or using some kind of handwriting recognition. Also, considering the much slower CPU and much smaller RAM typically found on handheld machines, getting large, bloated apps to fit into them is a challenge in and of itself. Fortunately, no one seems to have much interest in writing an application and having it run on all four of the above-mentioned platforms.

Why not?

Well, most developers would tell you that handhelds are intended to serve different functions from desktop machines. And yet, they expect laptop systems to function as reduced-size desktop machines. They aren't usually sure what to make of tablet systems; those have much the same resources as laptops (in terms of CPU speed, RAM and storage), but they use a pen-oriented interface like the handhelds. Maybe that's one of the reasons full-size tablet systems haven't taken off: they're too big and bulky to be used like handhelds, but too different from desktop machines to be used like those.

What I propose is a system which would allow you to design the user interface once and let it run on ANY of the above platforms. I'm envisioning some kind of XML description language; it would be a bit like HTML, which seems to scale reasonably well to all of the above platforms (you can get web browsers for any of them), but it would put a layer of abstraction between the physical input devices and the UI input types.

If you look at the constructions available in HTML, you have the following (there's a short example fragment after the list):
  • frames/tables (useful for laying out and grouping other elements)
  • text input boxes of various sizes
  • one-of-many selectors (<select> and radio button tools)
  • zero-or-more-of-many selectors (<select> with multiple or checkbox groups)
  • boolean selectors (checkboxes)
  • buttons of various types and functions
  • images/icons
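
To make those concrete, here's a plain HTML fragment exercising most of them; the field names and layout are just an illustration, not anything from a real application:

    <form>
      <table>                                               <!-- frame/table: groups and lays out the controls -->
        <tr>
          <td>Name:</td>
          <td><input type="text" name="name" size="30"></td>      <!-- text input box -->
        </tr>
        <tr>
          <td>Platform:</td>
          <td><select name="platform">                             <!-- one-of-many selector -->
                <option>Mac OS X</option>
                <option>Windows XP</option>
                <option>Linux</option>
                <option>Palm OS</option>
              </select></td>
        </tr>
        <tr>
          <td>Features:</td>
          <td><input type="checkbox" name="feat" value="menu"> Menu bar         <!-- zero-or-more-of-many -->
              <input type="checkbox" name="feat" value="toolbar"> Toolbar</td>
        </tr>
        <tr>
          <td><input type="checkbox" name="save" value="yes"> Save settings</td>   <!-- boolean selector -->
          <td><input type="submit" value="Apply"></td>                              <!-- button -->
        </tr>
      </table>
    </form>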

You also have the ability for various activities to be triggered if someone enters, exits, selects, modifies or clicks various other items. Additionally, we have stylesheets: a web page designer can provide a stylesheet which indicates more precisely how a page should look, but a user has the option of providing another stylesheet which overrides it. Most users don't realize this; I'm writing this on a page which uses small Chancery typefaces for most of the text, because I told my browser to always use my font type/size specifications, while most other users see Times Roman or Helvetica fonts at 10-16 point sizes, because that's what the web page designer specified. Anyway, the fact remains that HTML provides many of the types of UI items we need, so we'll start from there.
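
As a small illustration of that override mechanism, a user stylesheet can force its own typeface over whatever the page asks for; the selectors and font below are just an example, echoing the Chancery setup mentioned above:

    /* user stylesheet: !important declarations here win over the page's own styles */
    body, p, td {
      font-family: "Zapf Chancery", cursive !important;
      font-size: 14pt !important;
    }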

First off, we need a container which can hold other objects. Let's call that a frame (similar to an HTML frame or table). We need some kind of two-dimensional grid structure, in which selecting two different items can indicate another item; that would be a table. We need text input areas, boolean selectors, and one-of-many and many-of-many selectors (text inputs, checkboxes, radio buttons/select items and checkbox groups/select-multiple items, respectively). Each of these controls needs some kind of label. And we need events, which can be triggered by various things and can trigger other things. From here, we have the basics in place for a complete User Interface.
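
To sketch what such a description might look like, here's a small "find" dialog. Keep in mind the language doesn't exist yet, so every element and attribute name below is purely hypothetical:

    <frame id="find-dialog">
      <text-input id="pattern" label="Find what"/>             <!-- text input area with a label -->
      <boolean id="case" label="Match case"/>                  <!-- boolean selector -->
      <one-of-many id="direction" label="Direction">           <!-- one-of-many selector -->
        <choice value="up" label="Up"/>
        <choice value="down" label="Down"/>
      </one-of-many>
      <button id="go" label="Find next"/>
      <event source="go" trigger="activate" action="search"/>  <!-- events wire controls to the application logic -->
    </frame>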

Some of you may have noticed that I haven't included images or multimedia in the list. Those fall, for the most part, under the category of stylesheet objects; I'll get to that in a moment.

Now, let's say you put together a text editor application. Since we haven't built graphics into the description language, you would specify some kind of label. Then, with your stylesheet, you could say something to the effect of "if we have a bitmapped graphic display, put this image up in lieu of the label." If the person isn't using a bitmapped graphic display (say they're working from a text screen), you could provide a text-based label instead. In this fashion, we've already covered graphical and text-based displays. This also means that, if the user doesn't particularly like the image provided, they can supply their own stylesheet for the application, with their own text or graphical labels.

Also, while the specification does provide a way to group elements together (within a frame), it doesn't specifically provide a way to say "put this item above this one, and this other one to the right." Again, that would be a stylesheet function. However, if the application is running on something with a smaller screen, the user's stylesheet could provide alternative layout methods.

Finally, with traditional windowing applications, the designer typically puts a menu bar at the top, with nested selections which can trigger various activities. This would, again, be a stylesheet item; if someone has a lower-resolution screen, they could have the menu disappear, or they could design their own toolbar, with icons or text labels which could trigger various activities. Alternatively, they could provide keyboard-based triggers for the various events they would normally access through the menu. Naturally, an application designer would want to provide a default stylesheet to be used with their application, but there would be NOTHING stopping someone from developing their own stylesheet for that application, then passing it around to others.
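
A default stylesheet for that text editor might read something like the sketch below; again, the syntax and every property name are invented purely to illustrate the idea, not part of any existing standard:

    <stylesheet for="text-editor">
      <when display="bitmap">
        <style target="open-label" render="image" src="icons/open.png"/>   <!-- graphical display: show an icon -->
        <style target="menu"       placement="top"/>
      </when>
      <when display="text">
        <style target="open-label" render="text" value="Open"/>            <!-- text screen: fall back to a text label -->
        <style target="menu"       placement="hidden" hotkeys="yes"/>      <!-- small screens: drop the menu, keep keyboard triggers -->
      </when>
    </stylesheet>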

Basically, we're separating the logic from the presentation. The application designer deals primarily with the logic, while the presentation stays extremely customizable. We're talking about far more than a themeable desktop; we're talking about the user being able to completely redo the layout of an application's elements to better suit them. The good news in all this is that the result is considerably more flexible than the fixed layouts traditional toolkits give you.

Next entry, I'll address how we'll get the compiled code to run on multiple platforms.
