What's preventing you from mousing over the ribbon to explore possible commands?
Microsoft may not care because, as you say, I am probably not their target market, but that has nothing to do with it.
Of course it does. Saying product X doesn't work properly because it doesn't handle use case Y, for which it was never intended, is fallacious. "This wine glass sucks because when I try to use it to hammer nails it shatters" is simply silly.
This is the sort of thing that Microsoft tends to say, and completely avoids a number of important points. What are you basing this determination on? I could believe the "uses more features" claim -- that can be measured -- but what about the "more effectively" claim? Whenever Microsoft says things like that, they're basing it on stuff like how many keystrokes/mouse clicks it takes to do something. That's a very poor measure of how effective users are, though.
The most common testing Microsoft does is giving experienced Office users a series of tasks, often using features of Office that they aren't familiar with or don't necessarily even know exist. For example, someone who frequently builds PowerPoint presentations may not know about transitions between slides; tell them to change the transition in a presentation. The level of success is then measured.
Prior to the ribbon, using menus, the typical Office user could complete 30% of those tasks successfully. With the first release of the ribbon that doubled, and in the latest beta we are at 80%. That's a huge change in effectiveness.
They can also measure from those tests how many of the tasks typical users were able to complete immediately, i.e., which ones they already knew how to do before taking the test. That number has gone up as well, though not as much. They also look at time to complete simple tasks, which is what you are talking about with the mouse clicks. That changes a bit with context sensitivity, but the huge drop in effectiveness by that measure came from moving away from keyboard shortcuts when people transitioned from WordPerfect.
You were using your personal experience: "I do X, I do Y." That's not valid because you aren't the target market.
The people who use Office constantly most likely are able to use more features more effectively more often as a result of the ribbon. If they were to look at their 2003 documents and compare them to their 2013 documents they would see a difference. I'm not sure if you are pulling a valid sample or not; your typical Office user doesn't have strong opinions on computer issues and is easily led toward an opinion depending on who they are speaking with.
People who need to stand and use an interface are a tiny minority? Google's estimate of the number of computer panels currently in all uses is 10 billion globally. Even a small fraction of that is an enormous number of devices.
As for artists, architects... they come in at around 2% of users. More than, say, developers.
If you don't need to use Office (or a competitor) much, you aren't the target market.
Assuming you aren't vision impaired
If you regularly use Office applications and have that problem, watch a training video.
Can you point me to a new input medium, aside from keyboard and mouse, that offers better control in a desktop environment?
Yes: the digitizing tablet. That's been used by artists for a long time. It is also particularly important for people who need to operate laptops one-handed, like workers who are standing.
I want a DESKTOP operating system. If something else works better on a tablet, do something else on a tablet. Simple as that. Even Apple was smart enough to know that one size fits all works in operating systems about as well as it does with underwear.
Microsoft has always believed in ubiquitous computing: that people want to run the same applications in different environments and not buy their applications over and over and over. It may be that Apple is right and people are willing to buy separate versions for each device, but I have trouble believing the same people who whine constantly about how much Windows upgrades cost really want to pay 4x over for the software.
What would make sense? You still open files. You still save them. And you still need to close them (or have some means of releasing locks on them so that they can be moved/copied/backed up/etc).
You don't open them anymore. Writes are destructive overwrites, not some sort of data append, so you don't need to close either. Now if you think about it, why do you save them at all? The system is already saving updates regularly anyway; saving is cheap. Why bother making the user save? Instead, maybe have something like marked versions.
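The "marked versions" idea could be sketched roughly like this. This is a toy illustration of the concept, not any real Office or Windows mechanism; every name in it is invented:

```python
import time

class AutoVersionedDocument:
    """Toy sketch: the system persists every change; the user only 'marks' versions."""

    def __init__(self):
        self._content = ""
        self._history = []   # every autosaved state (system-driven, cheap)
        self._marks = {}     # user-chosen labels -> index into history

    def edit(self, new_content):
        # Every edit is persisted immediately -- there is no explicit "save".
        self._content = new_content
        self._history.append((time.time(), new_content))

    def mark(self, label):
        # The user-facing replacement for "Save": name the current state.
        self._marks[label] = len(self._history) - 1

    def restore(self, label):
        # Roll back to a named state without ever having "closed" anything.
        _, content = self._history[self._marks[label]]
        self._content = content
        return content

doc = AutoVersionedDocument()
doc.edit("draft one")
doc.mark("first draft")
doc.edit("draft one, heavily rewritten")
print(doc.restore("first draft"))  # prints "draft one"
```

The point of the sketch is that open/save/close disappears as a user concern; the only user action left is naming a state worth returning to.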
This is, essentially, what an ideal "Event Viewer" should be doing.
Exactly, but it isn't quite that simple. You don't just want to view them; you need a queue that passes messages back and forth. The human may want to pick between dozens of events and understand which ones are easy, or important, or time critical, or...
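A notification queue along those lines might look roughly like this. It's a hypothetical sketch, not how Windows actually implements notifications; the priority scheme and all names are made up for illustration:

```python
import heapq

class EventQueue:
    """Toy notification queue: services push events; the user pulls by urgency."""

    def __init__(self):
        self._heap = []
        self._counter = 0  # tie-breaker so equal-priority events stay FIFO

    def post(self, priority, source, message):
        # Lower number = more urgent (e.g. 0 = time critical, 9 = informational).
        heapq.heappush(self._heap, (priority, self._counter, source, message))
        self._counter += 1

    def next_event(self):
        # Hand the human the most urgent pending event first.
        priority, _, source, message = heapq.heappop(self._heap)
        return source, message

q = EventQueue()
q.post(5, "updater", "Updates are available")
q.post(0, "battery", "Battery critically low")
q.post(5, "mail", "3 new messages")
print(q.next_event())  # prints ('battery', 'Battery critically low')
```

A real system would also need two-way traffic (dismiss, snooze, act-on), but even this minimal version shows why "just viewing" events isn't enough: ordering and triage are part of the job.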
When you're talking about a 55" TV, you're talking about what? SD Widescreen? HD? SHD? 4K? What? Resolution's the issue, not the device itself.
No... not at all. As pixels get physically bigger, ratios have to change. For example, the amount of white space between characters in a font increases much more slowly than the size of a character needs to increase. That is, a 5-point font magnified 200% is not the same as a 10-point font. Resolution is not the only issue; DPI matters a great deal.
More important than that, though, is that the size of the screen determines how long a person will want to use it. Slight increases in screen size induce drastic changes in willingness to engage for extended periods. For example, the average session on a phone (4" screen) is 30 seconds; on a watch, slightly more than a second; on a 15" screen, half an hour.
But forcing everyone (including enterprise partners, where retraining costs MONEY) over to a new UI paradigm when there was nothing intrinsically wrong with the old one, is Just Fucking Stupid.
The rest of the post was about what was intrinsically wrong with the old one.
Face it. Standard desktop is 1-3 monitors, a keyboard, directional controller (mouse or mouse simulant (rollerball, touchpad, or joystick)), speakers and a microphone.
I don't have to face it because it is not true. Besides quibbling over whether a microphone and speakers are really standard, the big point is that work has been migrating away from desktops and laptops for almost a decade now. The form factors on which people want to work are shifting, so that's not standard. The work moves.
And there was DEFINITELY no reason behind applying that crap to Server 2012!
I do see the reason for mixed form-factor devices like the Yoga or Surface. As for Server 2012: Microsoft traditionally wants the server GUI to be close to the desktop GUI to reduce training complexity. I don't think it goes beyond that.
What would be the upside of them letting you skip it? If they want to provide web services, they need an account. The same way you needed a mouse in older versions.
Nobody would mind a better OS, but when the GUI has reached the pinnacle of usefulness, why try to force a change?
Because your assumption is way off. The GUI wasn't at a pinnacle. A few examples:
1) The file model of open, save, close is really designed around a dual-floppy paradigm. It makes no sense at all with SSD hardware.
2) As the number of system services requiring notification increases, integrated notification handling becomes key.
3) As device types become much more variable (ranging from a watch to a 55+" TV), graphics need to adapt more readily.
4) As input devices become more variable, applications need to take better advantage of them.
Windows 7 was not a pinnacle. It did some things reasonably well on particular types of hardware that were rapidly becoming less important, hardware that is mainstream for an ever-shrinking percentage of the population.
That's what the Ribbon does: it hides based on context. Which means they can have more items, not fewer.
Continuum creates a continuum between the desktop (Windows 7 style) and Metro, integrating the two into a single GUI. That name makes sense.
The Charms bar is a play on Ctrl-C: Win-C brings up a bar to switch contexts (search the web or change settings). At least the C makes sense.
Well of course the right way to do that would be to make the new style of icons mandatory. But forcing change through was the Windows 8.0 approach. Microsoft chickened out with 8.1, and so now you get the half-assed, slow-movement-in-a-general-direction kind of change.
That's not what they said. What they said was that they considered modules to be data, not code, and thus not covered by the GPL, i.e., no linking occurred. An explicit statement from the copyright holder that action X is not a copyright violation is a very strong endorsement. Better yet, of course, would be an explicit written and signed license permitting it, but the statements could and would be considered by a court in a lawsuit.
I'm seeing the same 30 characters for Teradata and Sybase. When I look at the 2008 SQL standard (the last version I own) I get totally lost in the notation, and I'm just not that motivated, so I'm going to take their word for it. As for "everyone else that matters," I'd say those two matter.
As for it being big enough: table names can have synonyms and be accessed functionally via PL/SQL. Oracle itself tends to use table names like X12A, with a separate table holding the descriptors. If you want documentation, Oracle provides a means for documenting tables.
In any case, this issue certainly isn't a huge constraint with Oracle. My point is that they are tremendous innovators; whether one particular limitation annoys you doesn't change that.