The whole point of all of this, X/Wayland/MIR, is getting closer to the video card without having to yank one's hair out whilst doing it. Why would one need closer interaction with the bare metal? If you've ever used Linux and seen tearing while moving windows around, then you've hit on one of the reasons why closer to the metal is a bit more ideal.
With that said, let's not fool ourselves and think, "OMG, they just want access to the direct buffers!" That wouldn't be correct. However, developers want an ensured level of functionality in their applications' visual appearance. If the app shows whited-out menus for half a second, blink, and then there are your menu options, then something is very wrong.
It was pretty clear that with X, politically speaking, developers couldn't fix a lot of the problems, due to legacy and the foaming-at-the-mouth hordes that would call said developer out for ruining their precious X. You can already see those hordes in all the "take X and my network transparency from my cold dead hands" comments. It is, to a degree, those people, and a few other reasons, that provided the impetus for Wayland. You just cannot fix X the way it should be fixed.
Toolkit developers understand that display servers, and pretty much the whole display stack in general, suck. Granted, there are a few moments of awesome, but they are largely outweighed by the suck factor. Usually when you code an application, you'll note that you gravitate toward the "winning" parts of the toolkit being used versus the pure suck ones. Qt has a multitude of such workarounds for all the OSes/display servers it supports, be that Windows, Mac, X11, and so on. Likewise for GTK+, though to a lesser extent, and that is part of what makes GTK+ a pretty cool toolkit. Because let's face it, no display stack is perfect at delivering every single developer's wish to the monitor. Likewise, no toolkit is perfect either. The GNOME and KDE people know this; they write specific code to get around some of the "weirdness" that comes with GTK+ or Qt. Obviously, that task is made slightly easier with Wayland and the way it allows a developer to send specifics to the display stack or even to the metal itself.
Projects like KDE and GNOME have to write window managers, and a lot of the time those window managers have to get around some of the most sucktacular parts of the underlying display server. However, once those parts are isolated, the bulk of the remaining work is done in the toolkit. So display servers matter a bit to the desktop environments, because they need to find all of the pitfalls of using said display server and work around them. Sometimes it can be as simple as a patch to the toolkit or the display server upstream. Sometimes it can be as painful as a kludge that looks like the dream of a madman; it all depends on how far upstream a patch needs to land to be effective and how effective it would be for other projects all around.
That leads into the problem with MIR. MIR seems pretty gravitated toward its own ends. If KDE has a problem with MIR that could be cleanly fixed by a patch to MIR or horribly fixed by a kludge in KDE's code base, it currently seems that the MIR team wouldn't be as happy-go-lucky about accepting the patch if it meant potentially delaying Ubuntu or breaking some future feature unknown to anyone outside of MIR. Additionally, you have the duplicated-work argument as well, which I think honestly holds a bit of water. I fondly remember the debates over aRts and Tomboy. While I think it's awesome that Ubuntu is developing their own display server, I pepper that thought with, "don't be surprised if everyone finds this whole endeavor a fool's errand."
I think the NIH argument gets tossed around way too much, like it's FOSS McCarthyism. Every team has its own goals, and by that very nature, that would classify them as NIH heretics. Canonical's idea is this mobile/desktop nexus of funafication, and MIR helps them drive that in a way that is better suited to them. That being said, a few changes to their underpinning technology would let them do the exact same thing on Wayland. I'll add to the previous statement: while it is a few changes, those would be very large changes, changes that might not sit well in the stomach of Canonical. However, I'd say the idea of using MIR versus Wayland comes not from technical matters but from ripping a page out of the Google playbook on how to write a display server. Making the display server theirs, and not subject to the, as someone in one of the comments above said, "open-source management by committee model ensures they end up bloated mockeries" flux, helps them woo would-be vendors. Because let's face it, when subject to committee, don't expect anything crystal clear to emerge (ooo, burn on XML).
X11 is legacy. I know everyone's going to be a hater, but X11 is just so huge. There is just no turning this ship from the iceberg; thanks to its most feverish supporters, it has become unfixable. Wayland is the obvious choice, since it tries to apply a broad approach to the problems that exist in X11 and, at the same time, gives developers enough outs to undo some of the problems that Wayland has yet to invent for us, all the while giving developers the one thing they've honestly been asking for: a more consistent experience with applications. MIR serves that too, to an extent, but pretty much only for Canonical's goals. Qt and GTK+ developers, specifically KDE and the variety of GNOMEish DEs, like the appeal of Wayland because if there are parts they don't like, sending a patch upstream has thus far proved pretty painless; additionally, they have a couple of means to get around Wayland fairly easily. MIR hasn't really had such a test yet, at least not one to speak of (not to say nobody has tried), of DE developers asking for patches to be accepted upstream. However, some of those DE developers are basing their expectations on previous experience dealing with the Ubuntu developers, who haven't been the most friendly bunch. Granted, the Fedora and Red Hat people aren't the shit that smells like roses either.
So I know this has been pretty long-winded, but this whole debate is a pretty complicated one, because it has less to do with technical reasons and more to do with political ones. The toolkits are always working around the brain-dead assumptions that display servers make, and desktop developers are always working around the crazy assumptions that toolkits make. The ability to easily bypass all of that has been a pretty big goal for everyone, and Wayland/MIR stand to bang the drum on that pretty strongly. The main difference between Wayland and MIR is that they take different approaches to doing just that: code that works reasonably well on both would be a pain in the rear to support, and code that merely "just works" defeats the whole purpose of going to Wayland/MIR in the first place. That, in turn, is the reason for the big scream in this debate. Supporting both is either a no-go or defeats the whole point of leaving X.
Well, historically speaking, diamonds were indeed a rarity before the 19th century. During the late 19th and early 20th centuries, disposable income became the main barrier. However, we've since left those periods, and we are pretty filthy with diamonds and cash to toss at them. So I'm guessing that, lacking a third quality to really drive up prices, the industry would instead just rehash the first two: one, making diamonds seem rare through distribution control; two, making your cash have less purchasing power via inflated prices.
But in all honesty, a diamond is really only worth what you are willing to pay for it. That a rock of compressed carbon has any value at all is just made up in our heads. So maybe the issue isn't supply and demand and the artificial nature thereof, but more a factor of our own deranged minds' making. However, that may be more a matter of [glass half full/empty] [rose by any other name].
People who complain about having to mouse over to something lose all nerd cred with me. Shortcut keys were invented for a reason, and you just cannot call yourself a hard-core user if you keep touching your mouse.
Casual users can do "the mouse-over to the other side of the screen of shame" to pay for their inability to sit down and read a book on how to really use the tool given to them. I'm not saying I agree with how they have chosen to lay out the UI in the Calligra suite, but honestly, at least they haven't f'ed with the shortcut keys since the 2.x series began.
I will now accept all "get off my lawn" comments to follow.
Not just getting friendly with local government; I'm also pretty sure Google will take the always-wonderful stance of "secure forever." Time is always on the government's side, and given enough time, all static security is rendered useless.
Unless Google plans to review their "security" on a pretty regular basis, someone with enough money and enough time (pretty much any country's government, and a few private citizens too) will eventually break into what is pretty much the Fort Knox of people's information (in the sense of holding large amounts of it, not the security part).
I don't understand how it is "broken by design." 144 pages is fairly short and compact for a security tool. Think about how many endless pages are written about the security tools in Mac or Windows. sudo is a pretty broad tool, and it's used for a lot of enterprise control. When it comes to the casual user, usually the distro will abstract the tool down to a pretty common denominator, much like Mac and Windows abstract the complex security layers within each of their OSes for the home user.
I mean, seriously, I think someone is missing the point here. That 144-page manual is for someone sitting in an admin chair who will need to properly craft the file so that a central Unix/Linux box is properly maintained. I know MCSEs who have read tomes of information related to Windows security, and I'm pretty sure the same holds true for Mac people as well.
This is my take on your comment, and if I'm wrong on this, feel free to call me out on it. You are expecting home users, or maybe even small-time admins, to read something that is geared more toward full-time admins, when usually the default out-of-the-box configuration is good enough for a 10–50 employee setup. The question now being, "Would you expect a home user or a small-time admin to be at the same level, when it comes to security, as say an MCSE?" I don't think so, and likewise, the sudo man page (and also PAM and dbus) is mostly geared toward your high-level admins who are going to need to know that kind of stuff.
So I don't think it is broken by design, because it works as intended for the audience it was geared toward. For the home user and small business, if the default sudoers, pam.d/*, or dbus.conf isn't secure, that's mostly a problem with the distro makers and a poor choice for their target audience, not a sign that users need to read 600+ pages of manual. Overall, ask any admin of any OS how many pages it would take to completely describe a tool like sudo for their OS, and I'm pretty sure you'll find that 144 pages is on the low end of that scale.
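To put that "common denominator" in perspective: the sudoers file most distros actually ship is only a few lines long, and that's all a home user ever touches; the 144 pages cover everything beyond it. A minimal sketch (the group names follow the usual Debian/Fedora conventions; a real policy should come from your own distro's default):

```
# /etc/sudoers — always edit via visudo, never directly
Defaults    env_reset                 # scrub the environment before running commands
root        ALL=(ALL:ALL) ALL         # root may run anything as anyone
%sudo       ALL=(ALL:ALL) ALL         # Debian-style: members of group "sudo" get the same
# %wheel    ALL=(ALL) NOPASSWD: ALL   # Fedora-style wheel group, passwordless (use sparingly)
```

Everything past this, the aliases, host lists, command digests, and I/O logging, is what fills out the manual, and it's squarely admin territory.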
What you speak of is a direct consequence of Intel holding its monopoly over everything. So while there may be a slight technical challenge, there is a much larger political challenge.
Nope, wait, wait!! That last reply, the JSR thing, is not correct. I'll totally admit it when I'm wrong, and JSR 296 isn't Swing itself but the aborted effort to build the application framework a while back. My bad, I got that wrong. Any shit-tossing about that you want is totally justified. That is all.
Well, I work with Java all the time, and I find Swing great to use, but when I want to build my own user interface elements, then yes, I find it very painful indeed. I've done custom widgets in Cairo/GTK as well.
In that respect, GWT is worse. GWT makes a lot of assumptions about how you want to convert Java into JavaScript. When you want to create your own web elements, not only do you have to build them using some of the worst APIs I've seen, you actually have to fight the assumptions that GWT makes right out of the box. You are literally fighting on two fronts just to make a damn clock widget or rich-editor widget. It is ridiculous.
GWT as a whole is in a pretty sad state. It integrates poorly with JEE frameworks, and when it does integrate, it's literally shoehorned in. You'll find that something like JSR 356 is a waaaaaay better approach to bridging the gap between server-side code and client code, and I'd hope that the Java community can keep turning out excellent projects like it. Because Google isn't doing it with their idea of interoperability.
Also, since you want to mince words between library and framework: if you head over to JSR 296, you'll see that the JSR refers to it as "The Swing Application Framework." Maybe you should head over there and educate them that they should have called it "The Swing Application Library." That being said, I believe your retort has lost whatever credibility it had, but seriously, this is Slashdot; does anyone think anything on this site is credible? Hope you are still laughing, though.
What? Do you mean 2.6.0 RC1? 2.6 looks to be more of a cleanup of the 2.5.1 stuff than anything new. If anything, the main thing 2.6 brings is Java 7 support. I wouldn't say that Google *has* abandoned GWT, but they sure are making the common gestures of getting ready for a good old-fashioned keelhauling.
Now for just my opinion: GWT sucks. It's a messy-looking API and lacks a ton of flexibility. For example, trying to implement custom UI for your web page is painful and totally unpleasant, more so than making the same custom UI in Java Swing (which is pretty painful in and of itself). In my opinion, and you may mod me down for it, anything that is worse to do in (insert framework here) than it is in Java should not exist.
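For reference, here is roughly what "painful in and of itself" looks like on the Swing side: a custom widget means subclassing JComponent and hand-drawing every primitive in paintComponent. A minimal sketch (ClockFace is a made-up illustrative name, not any real API), and GWT manages to be worse than this:

```java
import java.awt.Color;
import java.awt.Graphics;
import java.awt.Graphics2D;
import javax.swing.JComponent;

// A minimal custom Swing widget: no markup, no styling layer —
// you override paintComponent and draw every shape yourself.
class ClockFace extends JComponent {
    @Override
    protected void paintComponent(Graphics g) {
        super.paintComponent(g);
        Graphics2D g2 = (Graphics2D) g;
        int size = Math.min(getWidth(), getHeight());
        g2.setColor(Color.WHITE);
        g2.fillOval(0, 0, size - 1, size - 1);               // dial
        g2.setColor(Color.BLACK);
        g2.drawOval(0, 0, size - 1, size - 1);               // rim
        g2.drawLine(size / 2, size / 2, size / 2, size / 8); // a single fixed "hand"
    }
}
```

And that's before hit-testing, resizing behavior, accessibility, or actually ticking; each of those is more hand-rolled code in the same vein.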
Life is a healthy respect for mother nature laced with greed.