I ROFL'd when watching the video. Totally reminds me of playing Outrun on the classic Amiga. Thumbs up!!
And now the menu has a built-in Google.
Good user name!
OS X Lion has a similar feature: you can search the menus of any application by typing the command into a menu search box. The menu stays on screen though. It is actually quite useful, because if a menu item is in an obnoxious place, it becomes easier to find.
Good point about Office
I know books existed about how to build GUIs correctly, which doesn't mean all applications, or even a large share of them, followed them. And those guidelines hardly covered all the different tasks a GUI would be used for. Again, what was considered "standards compliant" back then would look like a heap of trash today. Really, start up a VM with Win95 and gaze at the weirdness. Then start up Win98 and be amazed by clouds and other kitsch taking up screen space on displays that were usually only 1024x768 pixels. If you are more into serious stuff, check out the IRIX desktop from Silicon Graphics from that era. It is full of useless kitsch, the most prominent example probably being the "zoom wheel" that could scale a single file icon until it occupied the entire screen without showing any more information (very visionary if you think about it).
And I think today is not much different.
The real problem though is that current interfaces are not Geocities enough. Geocities was the exact expression of what people wanted the Internet to be like. Everybody built their own navigation and copied ideas from others. HTML was simple enough for heaps of kids and grannies to learn and build their own world. But before the masses had a chance to develop their own language for expressing their demands on interfaces and reflecting about them, this culture was demeaned by professional designers and developers.
No time is wasted, because, no matter what you personally feel, in the 0.3 seconds the animation runs you are occupied anyway, mentally switching context to the next motion or thought you want to execute. The animation might only be perceived out of the corner of your eye, but it will help you later to locate, for example, minimized objects. Provided the animation isn't badly designed, of course.
The binary style of "window is there" / "window is not there", or "window is big" / "window is small", with no transitions in between, wastes a lot of perceptive potential. People learned as babies how objects behave; tapping that knowledge with UI animations is fine.
Also, speed is not the main thing to look for in efficient UIs. Most of the time is spent thinking, not typing. Except when you're taking dictation and just have to type it down. If you find yourself doing the same thing over and over again and want your software to be faster, without animation, your task should be replaced by a script. Or the GUI needs to be completely re-organized, for example by collecting all the choices you have to make, presenting them at one point in time, and then executing them without supervision.
The menus really weren't that consistent in the olden days. I challenge you to fire up a VM with, say, Windows 98 and Office 97. While it might look familiar to you, the menus of most applications were mostly a mess. Just a different mess from today's. Word, PowerPoint and Excel often had the same options in different places.
Even the much-praised and fondly remembered Mac environment once looked no less eclectic than it does today. In fact, apart from the "File", "View", "whatever" menus, each application was very different from any other. The graphical style varied from vendor to vendor, or even within a product line. On the Mac it just wasn't as apparent because there weren't that many applications to begin with.
At that point in history, people were happy to have standardized widgets like checkboxes and radio buttons, and that "OK" and "Cancel" would be in the same place most of the time. Compared to what came before, aka text mode, it looked like a dream.
When we look at the progress made in UIs today, the steps are of course much smaller. We don't remember a better quality when we think positively about the earlier UIs; we remember a bigger, more radical change in quality for the better.
I don't know exactly how much humor was in the hammer comparison.
But I think it is obvious that with a hammer I have quite different options than with a networked computer.
People hate IT and computers, yet at the same time they can hardly articulate what they want.
Ease of use always has to ask "what use" is actually easy. The world today, all its business, ideas and opinions, run on digital devices and the Internet. It is very harmful to give the guys who are in it for the money control over what happens there. Knowledgeable people's obligation should be to educate the noobz, not to make everything "idiot proof" for them.
Of course people do bad stuff with the power they have. For example, they didn't understand their responsibility not to install BonziBuddy and the like. But to give them a simulacrum of a perfect world with fake leather-bound calendars and cute bookshelves is like spreading malware for the mind. I'd rather have people de-infest their Windows on a monthly basis than have them think that a computer is a bookshelf, or that the Internet is accessed through apps that somebody in a castle on top of a mysterious mountain is making for them.
Exactly! "Ease of use"
The computer "as a tool" for clearly defined use cases
Nothing interesting is "satisfying". Figuring things out is hard, learning is hard, democracy is hard.
I am not saying that re-installing Windows is a good way to spend your time. But not being given the chance to fail completely really takes away an essential freedom. And I am not talking about free-software freedoms here, but about an essential point in Judeo-Christian culture: an expression of God's love is that he gives humans the freedom to choose the wrong path. I don't care about religion much, but I understand that everybody needs this freedom.
Computer users (as opposed to developers) are not idiots. Of course they demand better systems; they just cannot judge them correctly. How did millions of people, from grannies to kids, figure out HTML in the 1990s and build Geocities and the like? Because they were stupid? How did all the HyperCard stacks come into existence?
People doing their own thing is just very bad for making money. And if people on computers cannot do their own thing, the alphanerds' playgrounds will be destroyed too. Because who is going to stand up for a free flow of information on the Internet, the chosen few who can write C++?
And today people think that you're a hacker if you look at Google's second search result page.
This shouldn't have happened.
Syncing in real time is really something file systems don't seem to handle well. But that's more an issue with real-time collaboration, not with syncing some documents and settings across a few devices.
Thanks for pointing out this Freudian language glitch!
This actually sums up what I want to say. Most data could be stored and synced fine in files, even plain Unicode text files most of the time, nothing binary. I don't need my chat logs in a database, and I don't need my calendar or contacts in a database. Especially not one that runs outside my user space and is not affected by backing up my home directory.
I wonder what became so difficult about syncing data that it has to be re-invented all the time?
I was happy using tools like rsync, diff and unison for a long time, until even Linux desktop software became too posh to store its data in files.
Now every piece of software uses yet another database; at one point even Amarok used a MySQL backend. What is better about this than just putting the data in a file? Or at least making that file the single point of truth? If you need the database for speed, you can check whether the file changed since the last time and then update the database from the file's contents. Simple files have been syncing and merging perfectly for ages. Now it seems like every application needs its own syncing service.
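To make the "file as single point of truth" idea concrete, here is a minimal sketch of what I mean. All names are hypothetical (a plain-text calendar file, a SQLite index); the point is only that the database is a disposable cache rebuilt from the file whenever the file's modification time shows it changed, so the file remains the thing you sync and back up.

```python
import os
import sqlite3

SOURCE = "calendar.txt"  # hypothetical plain-text store: "date<TAB>title" per line
DB = "index.db"          # disposable speed cache, rebuilt from SOURCE on demand

def refresh_index(source=SOURCE, db=DB):
    """Rebuild the SQLite index only if the source file changed since last run."""
    con = sqlite3.connect(db)
    con.execute("CREATE TABLE IF NOT EXISTS meta (key TEXT PRIMARY KEY, value REAL)")
    con.execute("CREATE TABLE IF NOT EXISTS entries (date TEXT, title TEXT)")
    mtime = os.path.getmtime(source)
    row = con.execute("SELECT value FROM meta WHERE key = 'mtime'").fetchone()
    if row is None or row[0] < mtime:
        # File is newer than the index (or first run): re-import everything.
        con.execute("DELETE FROM entries")
        with open(source) as f:
            for line in f:
                date, _, title = line.rstrip("\n").partition("\t")
                con.execute("INSERT INTO entries VALUES (?, ?)", (date, title))
        con.execute("INSERT OR REPLACE INTO meta VALUES ('mtime', ?)", (mtime,))
        con.commit()
    con.close()
```

If the index gets lost or corrupted, nothing is lost: delete it and it is rebuilt from the file, which rsync or unison can keep merging as they always have.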
Is there any reason for this, except branding the simplest things (like copying a file), or making money with cloud storage?
Windows are not grouped willy-nilly on the taskbar. They are grouped by the order in which they were opened, so there is a temporal flow to them.
The order in which people opened their windows is very difficult to remember. And those who found out that they can move the taskbar buttons around are spending too much cognitive energy doing it.
Your bird's-eye view is useless when multiple similar-looking documents are open in a program, because you still have to mouse over each one and read its title.
True, unless the title is also shown in the bird's-eye view, which I believe Compiz does. If drag and drop between bird's-eye windows also worked (like it does on Mac OS), it would be tremendously useful.
I like your example about Vista, which looks as if consumers suddenly exercised power by "voting with their wallets". The other interpretation is that Windows 7 is also a piece of crap; it's just better than Vista, so in comparison one can more or less accept it. The same happened with Gnome 2. People hated its "simplicity", but compared to the fledgling KDE 4 it looked like a revelation. A lot of this is about what history has put in front of users and what we have learned to use.
The desktop metaphor itself has lots of problems. Not surprising: it has been around for more than 30 years and was developed by Xerox for a computer meant to create graphics for printing on a laser printer. At the moment, designers seem bold enough to try something new, even in FOSS. That is quite a new situation, and it seems the FOSS world is not prepared for it. Developers, users and designers need to work this out, or FOSS will become a "product" like commercial software.
Users can, in my opinion, make a useful contribution. One part is identifying problems, like you did: interfaces without text, based on gestures, have bad discoverability. Don Norman also points this out in this article: http://www.core77.com/blog/columns/gesture_wars_20272.asp Users have to acquire the language to describe their tasks and interfaces, deeper than "I don't like it" or "I cannot get my stuff done". Telling the developers and designers that they're drunk will not help, because they aren't crazy. Maybe Steve Jobs could pull that off, but the FOSS community should strive for a better working model.
So, while I agree that Unity is rough and Gnome3 is lacking stuff from Gnome2, hating is not the answer. Identifying the good ideas in them and analyzing what should be changed is better. IMHO.