Comment Re:GNOME Shell == Clusterfuck (Score 1) 419

You make a lot of good points. No clue what you're talking about regarding "two processes to manage one desktop",

I was referring to the combination of kdesktop and SuperKaramba to give you a somewhat-hackish widgets-plus-desktop setup.

but I will just make the observation that as a relative layperson, it's extremely difficult to understand why a desktop environment would concentrate on the inclusion of what were previously 3rd party apps before attaining adequate stability and basic feature-level.

Well 4.0 wasn't just about Plasma. Obviously Plasma was by far the most notable change at-a-glance, but there were other improvements too:

  • The unmaintained-since-KDE-3.2 aRts sound server was dropped, and a multimedia API layer (Phonon) was developed to wrap around whatever eventually won out. Unfortunately the Xine backend is paradoxically (IMO) better than the gstreamer backend but this was a good move in hindsight given the rise of yet-another-sound-server, PulseAudio.
  • The adoption of the Qt 4 toolkit which caused most of the pain in the first place brought with it many improvements as well, including much better threading support.
  • We have a hardware access library (Solid) that is used for e.g. the neat-o Removable Drives widget present by default in new installations of KDE 4.
  • KWin received support for Composite (I know it's eye candy and therefore you don't care but it does make the desktop actually more usable for me at least)
  • DBus replaced DCOP for inter-process communication, which was the first time that GNOME, KDE, and other desktop environments could all send messages over the same IPC system.

Of course not all of this required a major version bump to change and there are even today things that are harder or impossible to do compared to KDE 3.5, but that's been the case across every major desktop upgrade except from KDE 2 to 3. I remember when I first got into KDE development still hearing people complain about missing KDE 1 apps. :)

The reasons for not holding off 4.0 have been discussed ad nauseum because a few high profile holdouts from the KDE team won't admit that it was a complete disaster. Which it was.

Well, the expectation handling could certainly have been improved in retrospect, but even now I agree with doing the release. I just wish we had made it clearer, on feeds beyond Planet KDE and the mailing lists, what expectations people should have had for the desktop.

KDE4 is getting a lot better and has some pretty sweet eyecandy, but is still slow and buggy for me. I am on ubuntu, so YMMV.

I run Gentoo on a quadcore with ATI and Kubuntu on a laptop with Intel graphics, and the Kubuntu until very recently kicked my desktop's ass in terms of eyecandy support (until I started running git versions of Mesa, the kernel, and xf86-video-ati). It's all about the graphics drivers unfortunately.

XP has been and is the most popular OS environment partially because it is stable and fast, and provides a simple environment to launch applications from (while having all desktop options available, unlike wonderful WMs like openbox). Applications including 3rd party desktop widgets. I know it's difficult to control the relative popularity of different coding projects, but I would think keeping a sane priority for feature progression is part of the reason for having an all inclusive desktop environment.

Honestly when I used XP on the boat underway I would have to spend a week removing Alt-F2 from my muscle memory so I wouldn't even call XP an improvement in usability unless you were already used to it. It is probably faster and more stable though, I'll admit. I've often pondered if I would ever get time to start a real Quality Control subproject for KDE to aggressively focus on stability bugs. It's not looking like it though. :-/

Comment Re:GNOME Shell == Clusterfuck (Score 2, Insightful) 419

So true... both desktop environments are missing the point. You have misguided ego-hounds like Aaron Seigo chasing after some elusive new "desktop paradigm" which no one has asked for nor wants.

Except that people have asked for and do want it. Do you really think Plasma appeared out of thin air (or fully-formed from Aaron's over-active imagination)? The answer is no. When Aaron took over maintainership of KDE 3's kicker application one of the most popular third-party KDE programs was one called SuperKaramba, which added widgets to your desktop, similar to other third-party programs for Mac and Windows.

What Aaron "innovated" was the observation that there's no reason to have two processes managing one desktop (or three processes managing one workspace). Plasma was an attempt to codify existing practice with a saner underlying design. Of course the desktop replacement wasn't as fully featured in KDE 4.0 as kdesktop was in KDE 3.5, but the reasons for not holding off forever on 4.0 have been discussed ad nauseam.

The formula for a popular successful desktop is so simple: something fully integrated with all options available via menus (program launching, suspend/hibernate, screensaver, etc), and something fast and stable. Very few everyday users care about some translucent twitter widget on the desktop. They want a platform to launch applications from that is simple, fast and stable. That should be priority number one.

We have a fully integrated menu-enabled desktop, and KDE 4 is fast and stable for me (with the exception of a glibc 2.10.1 issue :( )

You conflate the issues of stability/speed with "translucent widgets". These issues are not mutually exclusive. kicker in KDE 3.5 was translucent (via evil hacks, but still). SuperKaramba widgets were translucent via the same hack. And yet whenever people talk about KDE 4 disparagingly they usually bring up 3.5 as some paragon of perfection. I mean, yeah 3.5 is better than Windows, but there was still plenty of room for improvement.

Comment Re:Nuclear power is green power (Score 1) 853

Although I like the gist of your comment I have a nitpick:

The risk of being injured by a nuclear meltdown today is on par with being injured by lightning.

Your risk of injury from a nuclear meltdown is orders of magnitude less than your risk of getting hit by lightning. Think about it: people get hit by lightning all the time in comparison to the number of nuclear meltdowns in this nation. And no one was "injured" at our last nuclear meltdown (Three Mile Island), so even given an incredibly rare meltdown you have an incredibly minute chance of injury unless you happened to be working in the containment building.

Now, Three Mile Island did release radioactive contamination to the atmosphere, but the effect on the surrounding population was slight. Of course, not everyone agrees. I can't speak to the findings of the various researchers, but I can say that several of Mr. Wasserman's claims are either misleading or flat-out wrong:

The public was told there was no danger of an explosion. But there was, as there had been at Michigan's Fermi reactor in 1966. In 1986, Chernobyl Unit Four did explode.

Even the Chernobyl explosion was non-nuclear, caused by the water in the coolant tubes being instantly converted to steam during the accident, literally blowing the lid off the reactor. At TMI, a hydrogen bubble was present in the reactor vessel after the meltdown, but it could not have exploded without oxygen to combust with, and oxygen is kept out of the coolant due to corrosion concerns.

there is no safe dose of radiation, and none will ever be found.

Well, there is no set level below which you can say a person will be just fine, but at the same time every single one of us lives in a field of ionizing radiation from natural and cosmic sources all the time. Life has adapted to low-level ionizing radiation because of this. If that were not the case, then diagnostic procedures such as medical radioimaging and ordinary X-rays would not be performed. For the same reason, people are allowed to fly on airplanes even though the radiation dose you receive in flight is much higher than on the ground, due to reduced atmospheric shielding at altitude. Mr. Wasserman is correct that radiation damage is more harmful to fetuses and children due to the reduced amount of time available to repair the damage before cell division. But we let pregnant women fly, so apparently there must be some level of ionizing radiation that we believe unborn children can withstand.

Much of the rest of his assertions are a they-said/he-said where he discounts studies and government reports that disprove his claims by invoking the ever-popular conspiracy theory, and then submits claims based on experts who agree with him. I can't say whether people in Harrisburg suffered symptoms, as I've certainly never walked door to door there. I can say that the trial court in Pennsylvania where the TMI cases were adjudicated ended up throwing out the lawsuits for lack of evidence.

In addition, Mr. Wasserman cites "anecdotal" evidence that "Many [central Pennsylvanians] quickly developed large, visible tumors, breathing problems, and a metallic taste in their mouths that matched that experienced by some of the men who dropped the bomb on Hiroshima." This is nice and all, except that the metallic taste is due to gamma radiation, which was produced in copious amounts during the Hiroshima bombing but not so much in the radioactive release from TMI (otherwise there would have been more than "anecdotal" evidence for its existence). I'm not sure if Mr. Wasserman was leading the questions or simply allowing people's fears to guide what they thought they were feeling, but this kind of effect is very far-fetched.

Speaking of fear, Mr. Wasserman seriously mentions "a harrowing broadcast from ... Walter Cronkite" as evidence that something bad happened. Well, something bad did happen, but that doesn't speak to whether people received serious injury or not. I could go on but this article has apparently been debunked better already.

Anyways, back to my main point. I'd be more worried about getting struck by lightning 3 or more times than by suffering injury due to nuclear meltdown here in the USA.

Comment Re:Grrr... (Score 1) 853

IFR-style (Integral Fast Reactor) was designed around a slightly different principle of nuclear physics, such that you aren't even trying to prevent a meltdown, because the very physics of the reaction is such that if it starts getting 'too hot', the nuclear reaction itself starts to shutdown

I thought that there were many designs that were in part based around this idea, not just IFRs. I've heard the nuclear physicist types call it "Negative Something" where "something" is the ratio between temperature and reaction rate.

Even bog-standard pressurized water reactors can be designed to have a negative correlation between temperature and reactor power. Decay heat is the major concern, even for plants that tend to shut down as temperature rises.
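The "Negative Something" is a negative temperature coefficient of reactivity. Here's a toy point-kinetics sketch of how that feedback self-limits a power excursion; this is my own illustration, with all numbers made up, and it doesn't model any real reactor:

```python
def simulate(alpha=-0.001, steps=200):
    """Toy model: reactor power history after a small step insertion of
    reactivity. alpha is the (made-up) temperature coefficient: negative
    means rising temperature subtracts reactivity."""
    power = 1.0          # relative power
    temp = 300.0         # coolant temperature, arbitrary units
    rho_inserted = 0.001 # externally inserted reactivity (e.g. a rod pull)
    history = []
    for _ in range(steps):
        # net reactivity = inserted reactivity + temperature feedback
        net_rho = rho_inserted + alpha * (temp - 300.0)
        power *= (1.0 + net_rho)       # crude one-step kinetics
        temp += 0.5 * (power - 1.0)    # excess power heats the coolant
        history.append(power)
    return history

history = simulate()
# Power rises at first, but the feedback term grows with temperature
# until it cancels the inserted reactivity and the excursion levels off.
# With alpha=0 (no feedback), power instead grows without bound.
```

The point of the toy model is just the shape of the curve: the excursion turns itself around without any control rod motion, which is the "intrinsic stability" being discussed.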

Comment Re:Grrr... (Score 1) 853

I really hate the comparisons of Three Mile Island to Chernobyl. Three Mile Island was an example of a failure at a nuclear facility that was solved correctly.

TMI wasn't even handled correctly at first; the operators bungled the initial response several times over. TMI had a meltdown and released radioactive contamination (a small amount, but still) to the atmosphere, which is about as bad as it gets with that reactor design. Even with all of those failures, however, the design of the plant precluded serious long-term effects. Modern plant designs are safer still than TMI was.

Comment Re:So has anyone asked the question... (Score 3, Interesting) 140

This is a quintessential military approach to a problem:

*snip*

Examples abound. A perfect one is the primary mode of communication on ships is radio, even though the networks (i.e. chat) are far faster and more reliable. We'll spend hours troubleshooting radios over chat in order to pass voice messages over radio. Then we'll chat again to confirm that the recipient actually received the radio message properly.

This would be funny if it weren't for the fact that it's true (and I've dealt with it as well :-/ )

Comment This is awesome news (Score 3, Informative) 52

I've been listening to it myself. I had to download the torrent from Legit Torrents as bt.ocremix.org was down when I tried to grab it.

Unfortunately bt.ocremix.org is also the tracker, so establishing the swarm was difficult, but it did happen and the download rate was good from there. I'd recommend using a client that lets you pick and choose what you want to download, as the vast majority of the download is lossless FLAC recordings. If you like FLAC you're not going to want 423 MB of MP3; if you aren't an audiophile you aren't going to want the 3 GB or so (IIRC) of FLAC files. The download is only 423 MB if you want everything but the FLACs.

I've only listened once so far so I have no definite impressions, but it's hard not to take note of Act 1-15 Fighting for Tomorrow which includes a choir, of all things...

Comment Re:I guess I should prepare for extinction then (Score 1) 422

I stood OOD on an SSBN (just recently rotated off, no less), and even the ship's non-portable military GPS was probably about the size you mentioned (sans batteries).

We had hand-held units on the bridge, however (which were merely bog-standard Garmin units). Of course, being a ship, the "real" GPS signal was simply received over one of our huge communications antennas. But then even our force protection radios, which had crypto, were half again the size of the GPS units you refer to, essentially just bulky walkie-talkies.

Comment Re:ext4 / KDE issues overblown (Score 1) 289

Why would you switch from one busted ass corruption prone file system to another busted ass corruption prone file system?

ext4 isn't busted by design. The major change (with regard to this bug) is that the default data mode changed from ordered to writeback. writeback is explicitly noted in the documentation as carrying an increased risk of corrupted files in a power-loss situation, so I wonder why they made the change, but you can switch it back and still retain the other upgrades ext4 provides over ext3 (which is what I've done).

I guess the default may change again in later kernel releases to be safer and then corporations with fancy datacenters and UPSes everywhere could still use data=writeback but I can't speak for the kernel devs.
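For what it's worth, switching back is a one-line change. A hypothetical /etc/fstab entry (the device and mount point here are examples, not anyone's actual config) might look like:

```
# /etc/fstab -- example entry; device and mount point are hypothetical.
# data=ordered forces file data to disk before the associated metadata
# (such as a rename) is committed, at some cost in throughput.
/dev/sda2   /home   ext4   defaults,data=ordered   0   2
```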

Comment ext4 / KDE issues overblown (Score 1) 289

The comments on this thread seem to be a bit mistaken on average on what the hubbub was regarding ext4 and KDE, so I'll try to clear it up a bit (I'm a KDE dev but I'm not speaking for KDE here of course).

The "issue" with ext4 was that its handling of the standard write(); close(); rename(); idiom for replacing existing files (writing out a new file and then renaming it in place over the old one) could leave zero-length files lying around if the system crashed.

ext4 never would spontaneously delete data merely because rename() was used; it was a side effect of its implementation that if the system crashed before the data had been written to disk but after the rename had taken effect, a zero-length file would be left in its place after restarting the system.

Where KDE comes into the picture is that KDE 4 writes its updated settings to disk too frequently (which is a known bug, now fixed, KDE bug 187172 pertains). So, if you were starting up or shutting down your KDE session when the system crashed you'd likely have had quite a few config files written out in the past 60 seconds or so. ext4 is very good about writing out metadata so the renames would have taken effect. But apparently ext4 didn't force the actual file data to disk until 60 seconds had gone by (unless asked to via fsync()). So after the reboot there was a great chance that the $KDEHOME/share/config/*rc files had been effectively truncated, thus causing loss of settings.

Many people have complained that KDE should do "the right thing" and use fsync() everywhere, but most people don't know that KDE had always done that... until ext3 became popular. ext3 suffers massive slowdown in the face of fsync() (although I guess some kernel hackers will have that mostly fixed in 2.6.30?), so KDE actually removed the fsync() calls in response to user demand. And there are no fewer than two fsync() calls that would be required: one to force the file data to disk, and a second to force the directory update after the rename().

People claim that KDE violates POSIX standards, but really the effect we get from using rename() without fsync() is exactly what we want: a kind of lazy "version A or version B, one or the other". At least for rc files it is not at all important that version B be visible system-wide the moment the write() happens, so fsync() is overkill. Of course ext4 isn't "violating" POSIX either, but most agree that its behavior was undesirable in this situation, so patches have been committed to ensure ext4 orders the metadata update after the data update in this case.
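The save-and-rename idiom, with the two fsync() calls mentioned above, can be sketched in a few lines of Python (this is my own sketch of the general technique, not KDE's actual code, which lives in KConfig's C++):

```python
import os

def save_config(path, data):
    """Atomically replace `path` with `data`: a reader ever sees only the
    complete old file or the complete new one, never a partial mix."""
    tmp = path + ".new"
    with open(tmp, "w") as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())   # fsync #1: force the file data to disk
    os.rename(tmp, path)       # atomic replacement per POSIX rename()
    # fsync #2: force the directory entry (the rename itself) to disk.
    dir_fd = os.open(os.path.dirname(path) or ".", os.O_RDONLY)
    try:
        os.fsync(dir_fd)
    finally:
        os.close(dir_fd)
```

Dropping the two fsync() calls gives the fast "version A or version B" behavior described above; keeping them trades speed for the guarantee that version B is on disk when the function returns, which is exactly the ext3-performance tradeoff KDE faced.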

I actually just converted all my filesystems to ext4 (from XFS) the other day, since it's at least possible with appropriate mount flags to get ext4 to act as a proper desktop FS. (I didn't know about XFS's similar issues with power loss until it was too late). So far everything is working nicely for me, although I haven't intentionally power cycled to test that case either.

Comment Re:And if they sold the heat as well as electricit (Score 1) 426

The placement of the nuclear reactor to the sea is a safety issue. You NEED guaranteed large cool water in the condenser stage or reactor goes boom.

Nuclear power plants don't go "boom" (at least not due strictly to loss of condenser cooling). You can most certainly melt down a nuclear core if you remove condenser cooling without alternative sources of cooling water, but Western-designed cores have multiple alternative sources of cooling available to at least keep the core cooled in this event. (Part of the failure at TMI was the operators not recognizing that the core was actually being drained of water; this fooled them into deliberately turning off essential safety systems during the casualty that would have prevented the meltdown.)

The problem here is related to safety. It is harder to produce intrinsic stability into non-water-based fission. Namely, in boiler-based reactors, when a greater ratio of steam is produced, the reaction naturally slows down, thus naturally regulating the system if electronic control mechanisms don't catch and compensate the control rods in time.

What you're thinking of is the coefficient that relates change in coolant temperature to change in reactor power. Although it is true that boiling water reactors probably power down as the temperature increases this feature is by no means specific to coolant type.

With non steam based systems, you use complex chemical fission-poisons (in high-pressure based reactors as found in subs) or are fully reliant on control-rod actuators. (possible single point of failure).

IAAS and your description of submarine reactor systems is inaccurate (although Naval Reactors probably won't let me describe it in any more detail).

The environmental DAMAGE [from Chernobyl], however was due exclusively to the fact that it was a warhead manufacturing site, and the construction apparatus is too large to enclose with a hardened concrete barrier.

The RBMK was designed to be useful for producing warheads, but even neglecting that, the more pressing issue preventing a proper containment is that the reactor was designed to be refueled at power. That necessitated a complex machinery arrangement to depressurize and open up a fuel channel, replace the spent fuel with new, and reseal the pressure barrier (while operating the whole time). This mechanism above the reactor made building a containment much more difficult (so they went without instead...)

Currently boiler and pressure based reactors are 'cheap' to build and are cheap to operate (so long as raw Uranium ore is cheap).

Well designs that require a single reactor vessel are actually fairly expensive. It takes quite advanced materials science and design to do (AFAIK only Japan has the means to do it for civilian-grade reactors at this point). The design of Canada's CANDU reactor was influenced by this fact.

Other than that (and the other major reply you got) I'm glad you're at least reading about it, which is way farther than most people who have points to make go. Personally I feel nuclear should be at least understood before removing it as an option. If solar, wind, etc. is more cost effective then by all means use those but something has to provide baseload energy and I see no reason it shouldn't be nuclear instead of coal or oil (and that's even accounting for the cradle to grave concerns IMO).

Comment Re:I always viewed both as procedural failures (Score 1) 309

Reports that I've read indicate that the scram rods were the most likely trigger of the accident. As you pointed out, the RBMK had a positive void coefficient: initial insertion of the scram rods caused power to increase, which caused more water to boil, which led to even more voids and further increases in power. Couple that with the event occurring at the end of core life, when the reactor was primarily burning 239Pu, which has a much lower delayed neutron fraction than 235U, and it didn't take much to exceed the prompt critical threshold.

Don't misunderstand me; the only part that is uncertain in my mind is whether the core was already prompt critical before they started the scram or not.

Honestly given the insanely short generation period for a prompt critical core I think the most plausible explanation is that the scram tipped it over, as SL-1 seems to me to indicate that not a lot of time at all is needed to blow up a prompt critical core.
