In my last entry, I laid out how I envision a User Interface being described by XML tags, much like the way we use HTML today. I also dealt, to some degree, with stylesheets and how they would provide a way to use the same application on different systems, even if they have very different types of displays. This time, I'd like to touch a bit more on how the code related to the applications would actually execute.
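To recap that idea with a concrete sketch: such a UI description might look something like the fragment below. Every tag and attribute name here is invented for illustration; nothing in this format actually exists.

```xml
<!-- Hypothetical UI description; all tag and attribute names are invented. -->
<application name="editor">
  <window title="Great American Novel">
    <textarea id="body" source="novel.txt"/>
    <toolbar>
      <button id="save" label="Save" event="save-document"/>
      <button id="spell" label="Spell Check" event="run-spellcheck"/>
    </toolbar>
  </window>
</application>
```

The stylesheet, not this description, would decide how those elements are laid out on a 21-inch monitor versus a Palm screen.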
So we've got a GUI which runs cross-platform, on Mac, Windows, Linux and Palm. These systems can have very different CPUs. So, how do you "write once, run everywhere"? Normally, that's a catchphrase used by the Java programming language. Additionally, one of the basic tenets of Microsoft's .NET platform is that you write the software, then (maybe) compile it to some platform-independent format for redistribution. In both cases, your platform has to have software which "understands" the Common Language Runtime or Java Virtual Machine object code and executes it. This tends to be rather slow. A better example would be Slim Binaries, so named because they tend to be very compact; they are typically compiled to native object code at load time. They are typically very modular, as well. That way, if your handheld is limited in RAM, it can compile the parts it really needs at the moment (say, the display routines, the input routines, and find-and-replace) and leave out the parts it doesn't (the spell checker, the rendering engine for multiple typefaces in multiple sizes). In the cases of .NET and Java, advanced systems are coming into play which will actually convert some of this intermediate-level object code to your system's native object code, improving performance. I see such systems being the wave of the future. Java's goal of "write once, run anywhere" hasn't truly been realized; usually, Java applications written for a Unix environment don't work so well in a Windows environment, and vice versa. Also, while you CAN run Java applications on a Palm (or other handheld platforms), the presentation is still sufficiently meshed with the logic that it doesn't usually work very well. Combining such a system with this universal UI would solve that problem.
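To make the "compile only what you need at load time" idea concrete, here is a toy sketch in Python. This is not the Slim Binaries format itself, just an illustration of the principle: each module's portable form is kept as compact source, and a module is translated to executable form (here, Python bytecode standing in for native code) only the first time the application actually calls for it. The `LazyModuleLoader` class and module names are invented for this example.

```python
# Toy illustration of load-time, on-demand compilation. A RAM-limited
# handheld never pays the compilation cost for modules it doesn't use.

class LazyModuleLoader:
    def __init__(self):
        self._source = {}      # module name -> portable source text
        self._compiled = {}    # module name -> compiled code, filled on demand

    def register(self, name, source):
        # Ship the compact, portable form of every module.
        self._source[name] = source

    def run(self, name, env=None):
        # Compile on first use only, then cache the result.
        if name not in self._compiled:
            self._compiled[name] = compile(self._source[name], name, "exec")
        namespace = {} if env is None else env
        exec(self._compiled[name], namespace)
        return namespace

loader = LazyModuleLoader()
loader.register("editor", "result = 'editor loaded'")
loader.register("spellcheck", "result = 'spell checker loaded'")

# Only "editor" gets compiled; "spellcheck" stays as compact source.
env = loader.run("editor")
print(env["result"])                      # editor loaded
print("spellcheck" in loader._compiled)   # False
```

A real system would cache native object code per CPU rather than bytecode, but the economics are the same: you trade a small load-time compilation hit for native execution speed and a much smaller memory footprint.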
This is all fine and good, but most people still don't see any reason to want to run the same apps on their desktop and handheld machines. Allow me to point out a couple more features of this system. First off, there is no rule which says that the UI elements and all of the events have to be running on the same machine; they would use a message-passing paradigm, which works rather nicely over networked connections. And, if you have events triggering other events, you have the potential to distribute the work among multiple machines.

Let's say you are, in your spare time, trying to write the Great American Novel. You've got your preferred editor, and the data, loaded on your handheld. You are sitting at work, on your lunch break, and inspiration hits. You've got a full-blown desktop machine with a high-resolution screen, keyboard and mouse sitting in front of you, but your application and your data are sitting on a machine which fits in your pocket. Instead of pulling out the handheld machine and ignoring the big system, you connect the two together. Then, you pull up the GUI part of the application, along with some of the events which directly drive it, on your big machine. The data, and the events which actually manipulate the data, stay on the handheld. With a different stylesheet on the big machine, you can use the same application to write or edit part of your novel, without the constraints of the small screen and more limited data input methodology.
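The message-passing split described above can be sketched in a few lines. In this toy Python version the "UI side" and the "data side" run as two threads exchanging messages through queues; since neither side touches the other's state directly, it makes no difference in principle whether the queue is in-process (as here) or a network socket between a desktop and a handheld. All message names are invented for illustration.

```python
# Minimal sketch: UI events flow one way, display updates flow back.
import queue
import threading

ui_to_data = queue.Queue()   # events from UI widgets
data_to_ui = queue.Queue()   # updates back to the display

def data_side(document):
    # Runs where the data lives (the handheld, in the novel-writing story).
    while True:
        msg = ui_to_data.get()
        if msg["event"] == "insert-text":
            document.append(msg["text"])
            data_to_ui.put({"event": "redraw", "lines": len(document)})
        elif msg["event"] == "quit":
            data_to_ui.put({"event": "closed"})
            return

document = []
worker = threading.Thread(target=data_side, args=(document,), daemon=True)
worker.start()

# The UI side (the desktop) fires events and consumes updates.
ui_to_data.put({"event": "insert-text", "text": "Call me Ishmael."})
print(data_to_ui.get())   # {'event': 'redraw', 'lines': 1}
ui_to_data.put({"event": "quit"})
print(data_to_ui.get())   # {'event': 'closed'}
```

Because the contract between the two sides is just the message vocabulary, the UI half could be restyled, moved to another machine, or replaced entirely without the data half noticing.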
Alternately, let's say you've got a complex presentation you're putting together, with something akin to Microsoft PowerPoint. You have two displays on your machine (thanks, boss). PowerPoint generally requires you to edit in one mode, then switch to another to see how it would look. If you can modify the stylesheet so that the edit window is visible on one screen while the preview is visible on the other, you can better leverage the dual-headed machine, without needing to buy a new application.
Last, but not least, let's say you've got a department full of people who moderately load their machines most of the time (checking e-mail, word processing, etc.), but occasionally need some real computing horsepower (juggling a large spreadsheet once or twice a month). With traditional applications, you can either give them moderately powerful machines and let them sit there and wait once in a while (when the spreadsheet maxes out their CPU for a couple minutes at a time), or you can spend more money and buy them all more powerful machines (the machines will be lightly loaded most of the time, but making a significant contribution to productivity when it comes time to mess with the spreadsheet). With this type of distributed application, you could give the department moderately powerful machines, but install one or two powerhouses on the network, then set the spreadsheet's stylesheet to offload some of the heavier work to those networked powerhouses. Alternately, people in the department could make heavier use of smaller, tablet-type systems (with appropriate stylesheets) and move to other machines with full keyboards, mice and high-res screens when they really need the extra capability.
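The offload decision itself could be as simple as a policy entry in the stylesheet. Here is a toy Python sketch of that idea; the policy key, the threshold, and the "powerhouse" function (a stand-in for a remote call to a beefier machine) are all invented for illustration.

```python
# Toy sketch: a stylesheet-like policy routes a recalculation either to
# the local machine or to a networked "powerhouse", based on job size.

POLICY = {"offload_threshold": 10_000}   # cells; hypothetical stylesheet entry

def recalc_locally(cells):
    return sum(cells)

def recalc_on_powerhouse(cells):
    # Stand-in for an RPC to a more powerful machine on the network.
    return sum(cells)

def recalculate(cells):
    # The application doesn't change; only the routing policy does.
    if len(cells) >= POLICY["offload_threshold"]:
        return "powerhouse", recalc_on_powerhouse(cells)
    return "local", recalc_locally(cells)

where, total = recalculate(list(range(100)))
print(where, total)          # local 4950
where, total = recalculate(list(range(20_000)))
print(where)                 # powerhouse
```

The point is that nothing in the spreadsheet's logic knows or cares where the work runs; an administrator tuning one policy value changes the hardware economics for the whole department.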
I'm rather fond of that last scenario. It means that you can take what you're doing and go collaborate with someone else, without them having to come to your desk to see what you're talking about. To me, this is the way things work on Star Trek: everyone runs around with one or more of the thin, hand-held PADD devices, but they all network together to share central computing resources. The specialized displays on the bridge and in main engineering provide more screen real estate and faster connectivity, but many of the tasks could, if necessary, be performed from a humble PADD. Consequently, the ability to change the layout to handle different-sized displays and different types of input systems is a necessity.