Correction: I should say "similar incident", not "crash", sorry (although the XL888T flight was a crash).
There was a very similar crash involving frozen/jammed angle-of-attack (AOA) sensors with another A320 back in 2008, XL Airways flight 888T:
In that case, aircraft maintenance personnel failed to cover the sensors properly while repainting the livery, allowing paint or cleaning chemicals to seep into the gaps of the AOA sensor housings and later freeze in place once the aircraft was airborne. This confused the pilots when the aircraft's flight envelope protection didn't work as expected during some test manoeuvres, since the AOA sensors were feeding conflicting information to the ADIRUs.
Regarding AOA gauges on Airbus aircraft - I find it rather perplexing that the early A320s had an analogue AOA gauge (left of the primary flight display screen) - here's a demonstration video dating back to 1988 where the pilot clearly points to the AOA gauge while demonstrating the flight envelope protection:
If you look at photos of the cockpits of the A320-1xx series aircraft, most of them have this gauge installed. I can't figure out why Airbus would remove it rather than at least integrating it into the main display.
I'm not pathologically averse to the idea of replacing the classic SysV initscripts in Linux distributions (having used them for 15 years, I am well aware of their limitations), but I personally believe the overall migration should be executed with far more caution, preparation and restraint than has been demonstrated so far if I am to place more trust in systemd.
1. Given that the Linux kernel by design goes into a kernel panic if PID 1 crashes, I think it is CRITICAL that any systemd code running as PID 1 be formally verified, and kept to as minimal a size and scope as possible. The classic SysV init kept PID 1 small and simple for exactly this reason.
2. I think the salesmanship by systemd advocates is very poor - trying to sell the concept of systemd on "faster boot times" is largely irrelevant for many environments (servers, some workstations) where the systems spend most of their time powered on and rarely reboot (or already boot fast enough with SysV init that the time gains are negligible).
Hence some people are reluctant to take on the increased complexity of systemd for little gain, if any, and I believe that's a valid position to hold.
3. I would not feel comfortable using systemd unless the documentation was extensive and accurate enough to allow someone to carry out a manual migration from SysV initscripts to systemd.
Understanding which functionality in systemd replaces previously needed functionality in SysV initscripts will help greatly with trying to troubleshoot failed boot processes. This will also help users plan how to migrate back to SysV init (or some other startup process) if systemd doesn't meet their needs.
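To illustrate what such a manual migration looks like in practice, here is a rough sketch (the daemon name and paths are hypothetical, invented purely for illustration): the start/stop/restart logic of a typical /etc/init.d script collapses into a short declarative unit file.

```
# /etc/systemd/system/mydaemon.service
# Rough equivalent of a hypothetical /etc/init.d/mydaemon script
# that was enabled in runlevels 2-5.

[Unit]
Description=Example daemon (hypothetical)
After=network.target

[Service]
# SysV scripts usually daemonize; systemd prefers foreground processes.
ExecStart=/usr/sbin/mydaemon --foreground
Restart=on-failure

[Install]
# Roughly corresponds to the default multi-user runlevels.
WantedBy=multi-user.target
```

Knowing this correspondence (init.d script body ↔ [Service], runlevel links ↔ [Install]) is exactly what makes a later migration back to SysV init feasible if needed.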
I've spent the last 15 years learning the SysV-style initscripts, and I think the systemd developers need to show respect for the learning commitment that system administrators and UNIX/Linux users have invested in the SysV way.
4. I'm also very wary of userspace application software that forces adoption of systemd through dependencies (such as recent GNOME 3) - the use of systemd by user-space applications should be entirely optional (at the very least configured at compile-time if not run-time).
I think systemd should need to win its place based on its own merits, rather than being "snuck in through the back door". Distros should have an install-time option of either classic SysV or experimental systemd for startup.
The problem with relying on planting more trees to absorb excess CO2 is that in some parts of the world (e.g. southern Australia) the climate is dry yet warm enough to create conditions rife for bushfires to easily spread rapidly, undoing all the human effort spent on planting them in a matter of minutes.
Quite often these bushfires are started by lightning strikes, so it's very difficult to eliminate the prospect of them entirely. The only practical alternative is to do periodic controlled, prescribed fuel-reduction burn-offs, which again release CO2 and offset some of the overall CO2 reduction achieved by the forest itself.
Fantastic comment, PotatoHead, thanks for your input.
I used to work at a company that made air traffic control systems - we used X11 remoting in many ways: mainly in the development test systems, to save on the number of machines and displays needed for a large distributed cluster, but also as part of the fault tolerance of the finished product. If the machine for one ATC position failed, you could at short notice simply drop in a replacement thin X11 client (dedicated hardware or a bare-bones UNIX/Linux install) that connects to another machine, instead of having to completely re-image and re-configure another machine to replace it.
Those who do not understand true multi-user UNIX+X11 are doomed to reinvent it poorly. It is disappointing to see Wayland developers claim on one hand that their project will replace X11, yet on the other hand they treat remoting as a second-class citizen and push that responsibility out to the graphics toolkit developers and application programmers (who will have much more trouble coordinating their efforts in making a quality implementation).
Yes, I am aware that RDP can send primitives. According to http://blogs.msdn.com/b/rds/ar... RDP under MS Windows is more-or-less implemented as a special graphics driver that simply relays the drawing primitive commands from a Windows application over the network to the RDP client.
X11 when being used with drawing primitives works in a similar manner - only the primitive commands are being sent.
However, the key feature that Wayland tries to hype itself on is client-side rendering - Wayland clients draw into a memory buffer (array of pixels) and then tell Wayland what parts of the buffer have changed, in order to force an update.
The problem is that you have to expend more CPU time on the client to determine how to send those pixels to the remote machine in the most efficient manner. You have no insider knowledge of what sort of primitive was drawn (the app programmer will typically use a function call in the graphics toolkit to draw something - but Wayland can't tell whether the app just drew a 60-degree arc or plotted lots of little pixels all over the place).
This is clearly a scalability problem on application servers as you add more users. Wayland refuses to go anywhere near remoting, so you have no way of addressing this overhead at the protocol level.
This is a loss of progress - as I mentioned above, once app programmers have to go well out of their way to make remoting work, many of them won't bother coding for it, and you get stuck with useful applications (which don't justify needing low-latency graphics) that can't be remoted in a corporate networked environment, simply because the application programmer decided to use an amateur graphics toolkit that only targets Wayland.
Move on all you like, but one of two things will happen - either Wayland will be rejected by corporate environments, or it will eventually have to grow up and establish a decent common remoting protocol that takes no more CPU load than X11 - at which point you've essentially recreated X.
Lastly, relying on RDP is legally dangerous, as it's patented by Microsoft and we don't know if or when Microsoft will assert its patent rights.
The problem I have with the toolkits implementing network transparency is that:
a) there are many toolkits and they have to reach common ground for a protocol to be as universally usable as X11 is.
b) some toolkit programming teams probably don't have the resources or motivation to implement network transparency, meaning that the advantage of X11 (the application programmer doesn't have to plan for network transparency, it just happens) is lost.
If the application programmer chooses a toolkit that doesn't support network transparency (or one that requires special API calls the programmer never bothers to make), then the application end-user is SOL if they ever hit a situation - which can come up unpredictably - where they need remoting (especially for an app that doesn't need low-latency, high-bandwidth video access).
I explained that using a bitmap change polling system like RDP/VNC incurs CPU overhead that limits scalability. Sending core primitives over the network in a push-style mechanism is much better.
c) Just because Keith Packard and other former X.org developers are working on Wayland doesn't give it any further technical merit. It just means they have more experience programming graphics hardware and software stacks.
Wayland will be nothing more than a toy, like Windows 95. Several projects before it (like Berlin and GGI) have tried to displace X11 and failed.
If every toolkit finds that they still have to support X11 to be usable or popular, then it relegates Wayland to an optional side-extra.
It is a serious folly to think that Wayland will completely replace X11.
Who do you think you are, approaching the long-time Unix community and telling them they don't "need" a feature they use every day, just because you don't use it, or because you have a petulant desire for useless eye-candy animations that other users have done without for the last 20 years?
My point about producing something that will be accepted by the corporate world is that corporate environments are one big, important funding source for Linux/open source development - they will always buy support contracts from vendors such as Red Hat/Ubuntu etc., which allows these Linux companies to employ full-time developers on relevant open source projects.
As I said - Wayland may be acceptable for gaming, and probably would also be a reasonable solution for embedded devices (where network transparency overhead is not needed). But to propose it as the "next X11" is a SERIOUS mistake for the reasons I outlined above.
You're not making any sense.
I use simple, minimal themes with Gtk and Qt (e.g. flat-colour backgrounds, system fonts like Helvetica, etc.). AFAIK with GTK 2 at least, if not later versions, these are still drawn using X11 primitives.
This is not nostalgia. It is GETTING WORK DONE.
The way more recent toolkits do their rendering (composing client-side and sending bitmaps over) just shows that WE NEED NEW PRIMITIVES (like another AC commenter suggested - encoding Cairo calls over the network), and possibly other features to reduce client-server round-trip time, such as display lists (from memory, I believe NeXT's display system used them).
It is not a valid reason to throw the baby out with the bathwater and do away with network transparency altogether.
I've been using Linux/UNIXes for 15 years now. One of the beauties of X11 has been that the application programmer typically does not even have to think about network transparency - it just happens.
This means that whenever the users have a need for displaying X11 apps remotely (e.g. needing to deploy new thin clients at short notice to accommodate new staff in a corporate environment - very quick setup time), you just simply set $DISPLAY and away you go. I've long come to count on this feature and I value having that option kept open all the time.
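For example (the hostnames here are hypothetical), redirecting an X11 app to another display needs nothing more than an environment variable - or ssh's X11 forwarding, which sets it for you:

```shell
# Render on the X server of a (hypothetical) thin client "thinclient1":
export DISPLAY=thinclient1:0
echo "$DISPLAY"   # any X11 app started from this shell now opens there

# Or let ssh tunnel X11 and set DISPLAY automatically on the remote end:
# ssh -X user@appserver xclock
```

The application itself needs no code changes whatsoever - that's the property being argued for.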
I believe in the future fibre optic LAN equipment will come down in price and will offer much lower-latency and higher-throughput than today's copper-wired Ethernet. It may even get to the point where transmit times of sending bitmapped real-time graphics over fibre may be as fast as a CPU writing to a reasonably modest PCI/AGP graphics card.
I think the Wayland project is making a SERIOUS mistake in treating network transparency as a second-class citizen, and will likely see the project relegated to a toy-like status (useful only for gaming and entertainment, or apps that need extremely low video latency like video editing suites) and shunned by the corporate world.
If the current X11 protocol makes it hard to do anti-aliased text, glossy/brushed GUIs, zooming fading menus, wobbly exploding windows and the like, then what we need is a new set of core drawing primitives, much like Apple's Quartz display system (IIRC). Call it X12 if you will, but keep the network transparency in, and that decision will pay off many times over.
I personally have no need for such resource-hogging eye-candy - I turn all of that off and prefer a minimalistic, slick-but-functional, snappy interface. I am perfectly happy with X11, and all the current-version applications I use work well with it. It has its quirks and faults, but I believe they can be reasoned with, and there is certainly room for improvement: http://www.x.org/wiki/Developm...
I also think the Wayland proposal of polling (pixel-scraping) window buffers and sending them over RDP for remoting is only going to lead to massive CPU overhead on shared application servers, for one thing.
At the very least, I'd like to see the major graphics toolkit groups (Qt, GTK, wxWidgets et al.) collaborate on designing a standard remote drawing protocol with transparency similar to X11's - then I might have more respect for Wayland attempting to replace X11.
Currently I only have one active project on SourceForge - however it's a Java-based one (I distribute both precompiled JAR files and the source code).
I'll keep watch in the meantime in case SF attempts to insert adware, although I doubt they'll try. But any future project of mine, especially one that ships native MS Windows binaries, will be hosted elsewhere.
In hindsight, this is one of the things I wish Kernighan & Ritchie (the original authors of C) had considered.
Pascal and Ada both use ":=" for assignment and "=" for testing equality, so this type of error is a non-issue in those languages. Furthermore, Pascal (1970) actually predates C (1972) by two years, so it's worth asking why K&R overlooked this possibility.
That said, nearly all modern compilers (incl. GCC with -Wall) will print a warning if you use an "=" operation in an if() or while() condition without explicitly surrounding the expression in an extra set of parentheses - but then you have to be willing to examine the warning output (as you'll still get your executable in the end), unless you're disciplined enough to use compiler flags like -pedantic -Werror as a means of extra quality assurance.
I think the ISO C standards body should consider introducing ":=" as an alternate assignment operator in a future standard of C, and then all compilers could offer a switch that'd forbid the use of "=" for when you're writing new C code from scratch for new projects.
You'd then still have the problem of existing codebases under maintenance being at risk of misuse of "=", but eventually, if such a newer C standard enjoyed widespread support, people could do a search-and-replace of "=" with ":=" in existing code. (I say this with a bit of pessimism, since Microsoft's C/C++ compiler still doesn't fully support C99.)
How come everyone's going so slow if it's called rush hour?