Comment Re:NO! That's misleading (Score 1) 128

No, you are the one being misleading.

If you take a look at the Wayland source code, you'll see notices like "Copyright © 1988-2004 Keith Packard and Bart Massey" quite often.
https://gitorious.org/wayland/wayland/source/0b29a2fec7801d2530bd004ae68eb9242417bafd:wayland/wayland-hash.c#L2-3

As for pushing work back to the toolkit developers, the Qt developers made the software (client-side) backend the default back in Qt 4.4, because it was so much faster than the XRender-based one for local clients.
And for Qt 5, they simply didn't bother to implement an XRender-based one.

Comment Re: (Score 1) 611

Yes, I could. Or say that we're all just the dream of some creature.

But that doesn't mean science is pointless.
The point of science is to have repeatable experimental results and to produce reliable knowledge that advances our understanding and our lives.

If a Christian-like God exists, then he is apparently content to let us use the scientific process to find out how the Universe he built works.
He clearly doesn't screw around with the laws of physics on a daily basis. Scientists worth the title who also happen to believe in a God are as skeptical of physics-defying events as anyone.

But again, we have no scientific proof that he exists or that he doesn't. The Christian concept of God, a sentient all-powerful being, makes it impossible to have such proof.

Comment Re:Waste of Time (Score 1) 611

I'm an atheist.
However, your argument is wrong.

If there's a God, like the Christian one, he is conscious and all-powerful. He can do whatever he wants, including making the Universe work as if he did not exist. He can subvert any experiment you can come up with to test his existence.
The end conclusion is that it is scientifically impossible to prove that God exists or doesn't exist.
Stating that God probably does not exist is not science.

Don't take this the wrong way, but I think people who, like you, try to use this kind of pseudo-scientific argument to state that God does not exist are just giving science a bad name.

Comment Re:Daniel Stone core X.o dev on what's wrong with (Score 1) 340

It has been implemented for Windows server, AFAIK.
But saying rootless RDP is network transparent is a hell of an abuse of the term "network transparent".
Which of course is the root of all this drama. People confuse rootless remote applications with network transparency and they jump in anger when they hear Wayland isn't network transparent.

Comment Re:Can we have a discussion - not a slagging match (Score 4, Informative) 340

No, this is Slashdot, you can't have a serious discussion.
It's full of idiots who don't like shiny new things, idiots who adore shiny new things and both types of idiots love to shout at each other.

Ok. Seriously.
Wayland is a new architecture for the Linux graphics stack.
It merges the roles of the display server and the window manager/compositor into one piece, called the Wayland compositor.
It is envisioned that writing a Wayland compositor is no more complicated than writing an X window manager/compositor.
Bullet point: we will not have a single Wayland compositor, but several of them to choose from: Weston, Enlightenment, Mutter/GNOME Shell, KWin.
This is made possible because a) Linux now has a proper graphics driver stack and b) the Wayland protocol is much simpler.
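
To make the client/compositor split a bit more concrete, here's a minimal sketch of my own (not from any of those projects) using libwayland-client; it just connects to whichever compositor is running, and the same code works against Weston, Mutter or KWin:

    /* A minimal, hypothetical libwayland-client program: it only checks that
     * some Wayland compositor (Weston, Mutter, KWin, ...) is running.
     * Build: gcc check-wayland.c -o check-wayland $(pkg-config --cflags --libs wayland-client) */
    #include <stdio.h>
    #include <wayland-client.h>

    int main(void)
    {
        /* Connect to the compositor named by $WAYLAND_DISPLAY (or the default socket). */
        struct wl_display *display = wl_display_connect(NULL);
        if (!display) {
            fprintf(stderr, "No Wayland compositor found\n");
            return 1;
        }
        printf("Connected to a Wayland compositor\n");
        wl_display_disconnect(display);
        return 0;
    }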

The new model and the simplified protocol will allow:
A) better control of input (keyboard, mice). Currently, the X window manager/compositor does not have absolute control over input. Besides posing some security risks, this makes it hard to implement some behaviors sanely. Things as simple as muting the sound when a full-screen application is running are hard to do.
Wayland compositors, of course, get all the input and then forward it to applications as they see fit.

B) better performance (except for full-screen OpenGL applications, which already mostly bypass X). This will come from a number of places:
- Reduced number of round trips (Wayland app/Wayland compositor/kernel instead of X app/X server/X compositor/X server/kernel).
- Better implementation (the X.org server isn't the fastest cookie in the world, but the protocol is so complex it's hard to do better)
- On embedded platforms (phones, tablets, Raspberry Pi) the compositor can be written to exploit hardware compositing capabilities (there's no good way to expose them through the X server).

Additionally, the Wayland protocol fixes several issues, some of which could be fixed with more X extensions, while others would need breaking changes:
- Artifacts/tearing. X doesn't specify when the data sent by applications is drawn on the screen, so sometimes you get artifacts as the server or compositor draws the contents of a window in the middle of an application's drawing. Wayland fixes this by making every frame perfect.
- Saner input model. The currently used X input extensions are too complicated (by the authors' own admission), as they need to maintain backward compatibility with the X Core input model.
- Saner dynamic reconfiguration (resolution, orientation). Again, by the authors' own admission, XRandR is too complicated.
- Binding versioning. Currently, if you have an application built upon components that support different versions of an extension (e.g., input), it's Russian roulette how it will pan out.

Bullet point: despite all the drama going on on Slashdot and other sites, the simple truth is that the majority, if not all, of the developers who actually put in the time and effort to maintain and upgrade the X.org server, the X window managers we use, the application toolkits, etc., seem convinced Wayland is the way forward and are putting in the time and effort needed to make it happen.

Wayland is not network transparent. And despite the drama, that's OK. Nobody cares about network transparency.
People (including me) do care about having rootless remote applications. We care about having something that works at least as well as "ssh -X".
For the short/medium term, Wayland desktops will run an X compatibility server (XWayland) and most Wayland-capable applications will have an X fallback mode. So "ssh -X" will just keep working.
For a longer-term solution, when we get Wayland-only applications, we'll need to implement something like NX or Xpra for Wayland. Which is OK too, because for many of us that's better than running X over the network.
Despite the capabilities of the X protocol, most X applications are in fact too bandwidth-intensive and latency-sensitive to run remotely outside a LAN. And their developers can't be arsed to make them otherwise. That's why we use things like NX and Xpra in the first place.

Comment Re:How well does XWayland work? (Score 1) 340

I can't even remember what trying to use X under Windows was like. $DEITY bless memory loss.

For Mac OS X, you have XQuartz. It consists of a modified X.org server and a custom window manager, and from what my Mac OS X-wielding colleagues say, it works pretty well. I don't think it suffers from any of the issues you mention.

XWayland is expected to work seamlessly as well.

Comment Re:what's the point of this? (Score 1) 293

Correct: between two flushes, any of the pending writes may or may not have actually been executed.

Software deals with this by using proper data structures and write/flush sequences.
The most common is write-ahead logging, also known as journaling, which is used by pretty much every modern file system and database.

In this approach, any changes to the data structure are first written to the WAL. Only after they've been safely committed to the WAL does the software issue the writes to the main data structure.
Performance-wise there are slight variations, but the basic sequence is (sketched in code at the end of this comment):
1. Write the log segment describing changes, which may need any number of HDD blocks to be written.
2. Flush cache.
3. Write the commit block. This is a write to a single HDD block, and it usually contains a checksum of the rest of the log segment.
4. Flush the cache.
5. Write changes to the main data structure.

In case of power loss or another crash, upon restart, the software will re-apply the changes described in the WAL.
A log segment which has a missing or invalid commit block is ignored.

Writes to a single block are (expected to be) atomic. HDDs have enough momentum and stored energy to, in case of power loss, finish writing that last block (HDD blocks are 4 kB plus a bit of overhead) and park the heads (failure to park the heads would trash your HDD on power loss, by the way).
Additionally, blocks carry error detection/correction codes, so a partially written block will yield a read error.
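
To tie that together, here's a rough C sketch of the sequence using plain write()/fsync(); the record layout, magic value and toy checksum are made up for illustration (real file systems and databases have their own formats), and fsync() is what ends up triggering the drive cache flush, assuming the OS and drive honor it:

    /* A rough sketch of the WAL sequence above, using plain write()/fsync(). */
    #include <stdint.h>
    #include <sys/types.h>
    #include <unistd.h>

    /* toy checksum standing in for a real CRC */
    static uint32_t checksum(const void *buf, size_t len)
    {
        const uint8_t *p = buf;
        uint32_t sum = 0;
        while (len--)
            sum = sum * 31 + *p++;
        return sum;
    }

    int wal_append(int log_fd, int data_fd, const void *change, size_t len, off_t data_off)
    {
        /* 1. Write the log segment describing the change. */
        if (write(log_fd, change, len) != (ssize_t)len)
            return -1;
        /* 2. Flush: the segment must be durable before the commit record. */
        if (fsync(log_fd) != 0)
            return -1;
        /* 3. Write the commit record (a single block in a real system),
         *    carrying a checksum of the segment. */
        uint32_t commit[2] = { 0xC0FFEE42u, checksum(change, len) };
        if (write(log_fd, commit, sizeof(commit)) != (ssize_t)sizeof(commit))
            return -1;
        /* 4. Flush again so the commit record is durable. */
        if (fsync(log_fd) != 0)
            return -1;
        /* 5. Only now touch the main data structure; if we crash here,
         *    recovery re-applies the change from the log. */
        if (pwrite(data_fd, change, len, data_off) != (ssize_t)len)
            return -1;
        return 0;
    }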

Comment Re:what's the point of this? (Score 2) 293

Depends on the kind of documentation you're asking for.
The behavior is described in the HDD interface standards (ATA/SATA, SCSI/SAS).
The interesting bits are the descriptions of the desired behavior of:
- write-through caches
- write-back caches and FLUSH CACHE EXT or SYNCHRONIZE CACHE
- write-back caches and FUA or DPO

If you want documentation on how many drives support and honor this behavior, then I can't give you many pointers.

I don't think there's a SATA HDD on the market which doesn't support and try to honor FLUSH CACHE EXT. Many also support FUA/DPO.
Bugs are known, but seem rare: http://forums.seagate.com/t5/Desktop-HDD-Desktop-SSHD/ST3250823AS-7200-8-ignores-FLUSH-CACHE-in-AHCI-mode/td-p/82046.
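
For what it's worth, from userspace you normally don't issue FLUSH CACHE EXT yourself; you call fsync()/fdatasync() or open with O_DSYNC and let the kernel turn that into a cache flush or FUA write. A minimal sketch, assuming flushes/barriers aren't disabled and with a made-up path:

    /* Sketch of a durable write from userspace. With O_DSYNC, write() only
     * returns once the data is on stable storage; the kernel is expected to
     * issue FLUSH CACHE EXT (or a FUA write) to the drive, assuming flushes
     * aren't disabled. The path and payload are made up. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        const char msg[] = "committed\n";
        int fd = open("/tmp/durable-test", O_WRONLY | O_CREAT | O_DSYNC, 0644);
        if (fd < 0) {
            perror("open");
            return 1;
        }
        if (write(fd, msg, sizeof(msg) - 1) != (ssize_t)(sizeof(msg) - 1)) {
            perror("write");
            close(fd);
            return 1;
        }
        close(fd);
        return 0;
    }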

The PostgreSQL folks keep a page with some information about this issue.
http://wiki.postgresql.org/wiki/Reliable_Writes
They recommend a test for drives.

Comment Re:what's the point of this? (Score 3, Informative) 293

HDDs, even the cheapest ones nowadays, allow software to enforce the order in which pending data is written to permanent storage, and to know when pending data has indeed been safely committed to permanent storage.

Operating systems, file systems and applications build upon this to ensure that, in case of an unexpected crash, you don't end up with a corrupted file system or corrupted data. You may lose files created in the last 5 minutes, but you won't end up with a file system so corrupted that you need to reinstall your computer.
Databases use this to ensure that, once you've clicked "pay" on an e-commerce site, the transaction is either recorded properly or not at all, so you don't end up with halfway situations where you get charged but don't get the product you paid for, or vice versa.
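
As an everyday example of building on those guarantees (my own sketch, not from TFA): the classic write-to-a-temp-file-then-rename pattern, which leaves you with either the old file or the new one after a crash, never a half-written mix:

    /* Sketch: write the new contents to a temporary file, fsync it, then
     * rename() it over the old file. rename() is atomic on POSIX file systems,
     * so after a crash you see either the old or the new file, never a mix.
     * (A fully paranoid version would also fsync() the containing directory.)
     * File names are made up. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int save_atomically(const char *path, const char *tmp, const void *buf, size_t len)
    {
        int fd = open(tmp, O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0)
            return -1;
        if (write(fd, buf, len) != (ssize_t)len || fsync(fd) != 0) {
            close(fd);
            return -1;
        }
        close(fd);
        return rename(tmp, path) == 0 ? 0 : -1;
    }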

According to reports like TFA and the article TFA was attempting to reproduce, a lot of cheap SSDs break these guarantees.

Comment Re:It goes both ways (Score 1) 270

The cause of that accident was poor interaction between computer and human.

Short summary:
Due to a faulty sensor, the auto-throttle decided to throttle the plane down.
The crew recognized that and manually throttled up again.
But then they made two errors: taking their hands off the throttle, and not disconnecting the auto-throttle.
The auto-throttle then throttled down again, and it took the crew 100 seconds to realize their mistake, at which point it was too late.

Comment Re:I love the pro US swing (Score 1) 270

Chrisq's point is that the US Airways plane which landed in the Hudson was also an Airbus, with the exact same type of automation as the Air France plane which crashed into the Atlantic.

One could also point out that, during the process of landing it in the Hudson, the pilot flew the aircraft into the alpha-protection limit, which means the flight control system stopped the plane from stalling.
One could argue that the pilot knew what he was doing and simply chose to make the best use of the flight control system.
But the flight control system did play a role in landing that plane.
