Comment This Is Lennart's Defense? (Score 4, Insightful) 774

Every time the systemd thing comes up, I want to hate it, but I don't truly know enough about it to actually hold a defensible opinion.

One of the defects constantly levelled against systemd is its propensity to corrupt its own system logs, and the fact that the official response to this defect is to ignore it. The uselessd page links to the bug report in question, which was filed in May 2013 and, over a year later, closed and marked NOTABUG. However, it seems Mr. Poettering is getting annoyed by people using his own bug reports against him, and today he added a comment to the bug report purporting to clarify his position.

Unfortunately, his "clarifications" serve only to reinforce my suspicion that systemd is a thing to be avoided. To wit:

Since this bugyilla [sic] report is apparently sometimes linked these days as an example how we wouldn't fix a major bug in systemd:

Well, yeah, corrupt logs would be regarded by many as a major bug...

...Now, our strategy to rotate-on-corruption is the safest thing we can do, as we make sure that the internal corruption is frozen in time, and not attempted to be "fixed" by a tool, that might end up making things worse. After all, in the case the often-run writing code really fucks something up, then it is not necessarily a good idea to try to make it better by running a tool on it that tries to fix it up again, a tool that is necessarily a lot more complex, and also less tested.

Okay, so freeze the corrupted data set so things don't get worse, and start a new data set. A reasonable defensive practice. You still haven't addressed how the corruption happened, or how to fix it.

Now, of course, having corrupted files isn't great, and we should make sure the files even when corrupted stay as accessible as possible. Hence: the code that reads the journal files is actually written in a way that tries to make the best of corrupted files, and tries to read of them as much as possible, with the the subset of the file that is still valid. We do this implicitly on every access.

Okay, so journalctl tries to be robust, assumes the journal data might be crap, and works around it. So we can assume journalctl is probably pretty solid and won't make things worse.

Hence: journalctl implicitly does on read what a theoretical journal file fsck tool would do, but without actually making this persistent. This logic also has a major benefit: as our reader gets better and learns to deal with more types of corruptions you immediately benefit of it, even for old files!

....Uhhhhh-huh. So, yeah, newer tools will do a better job of working around the corruption, and we'll be able to recover more data, assuming we kept known-corrupt logs around. But what I still don't understand is WHY THE LOGS ARE CORRUPT. And why aren't there log diagnostic and analysis tools? If you already know your logs can turn to crap, surely there are structure analysis tools around that let you pick through the debris and recover data that your automated heuristics can't.
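(In fairness, there is one built-in check I know of: journalctl's --verify switch, which tests journal files for internal consistency. A minimal sketch of scripting it, assuming journalctl is on the PATH; note that it reports corruption but does not repair anything:)

```python
import subprocess

# Run journalctl's built-in consistency check.  A non-zero exit status
# means at least one journal file failed verification.
result = subprocess.run(["journalctl", "--verify"],
                        capture_output=True, text=True)
if result.returncode != 0:
    print("At least one journal file failed verification:")
    print(result.stdout or result.stderr)
else:
    print("All journal files passed verification.")
```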

And why do I get the feeling that implied in the above is, "You don't need to know the log structure or how to repair it. We'll write the tools for that. We'll release better tools when we get around to it?"

File systems such as ext4 have an fsck tool since they don't have the luxury to just rotate the fs away and fix the structure on read: they have to use the same file system for all future writes, and they thus need to try hard to make the existing data workable again.

....AAAAnd you lost me. Seriously, this is your defense: "Filesystems are more important than system logs, so they have to try harder?" I find this insinuation... surprising. You do realize that btrfs didn't become worthy of general use overnight, right? (Some might argue it still hasn't.) It took years of development, and hundreds of people risking corrupt or destroyed filesystems before the kinks got worked out, and the risk of lost or corrupt files approached zero. More significantly, during this long development time, no one ever once suggested making btrfs the default filesystem for Linux. People knew btrfs could ruin their shit. No one ever suggested, "Oh, well, keep a copy of the corrupt block image and format a new one; we'll release better read tools Real Soon Now." No one suggested putting btrfs into everyday use until it proved its reliability.

Likewise, until systemd can demonstrate the same level of reliability as mature filesystems and show that it doesn't trash data, it is experimental -- an interesting experiment with interesting ideas and some promise, but still an experiment. I would appreciate it if you didn't experiment on my machines, thankyouverynice.

I hope this explains the rationale here a bit more.

No, sir. No it does not.

P.S.: Is there any evidence to suggest that systemd's log corruption issues have since been solved?

Comment Re:ARE YOU LIKE STUPID???? (Score 1) 577

1) Fix the PAGEFILE. Go into the settings and change it to fixed size -- 2x-3x the size of RAM -- for both minimum and maximum size. Do not let Windows manage it! [ ... ]

Better still, move PAGEFILE.SYS off of C: entirely, preferably onto its own spindle if you can. That way the swapper isn't fighting every other application in the system for access to the system drive, and PAGEFILE.SYS itself won't become fragmented.
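If you'd rather script the move than click through Control Panel, the setting lives in the registry. A minimal sketch, assuming a D: drive exists, a fixed 4GB pagefile, and an Administrator prompt; the drive letter and sizes are placeholders:

```python
import winreg

# "PagingFiles" is a REG_MULTI_SZ of "path initial-MB maximum-MB" entries.
KEY = r"SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management"
with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY, 0,
                    winreg.KEY_SET_VALUE) as k:
    winreg.SetValueEx(k, "PagingFiles", 0, winreg.REG_MULTI_SZ,
                      [r"d:\pagefile.sys 4096 4096"])
# Windows reads this value at boot, so a reboot is required.
```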

Consider moving %TEMP% and %TMP% off of C: as well.
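Those two live in the per-user Environment key. A minimal sketch of repointing them, assuming a D:\Temp directory already exists; the new values only apply to processes started after the change:

```python
import winreg

# Point the per-user %TEMP% and %TMP% at D:\Temp.
with winreg.OpenKey(winreg.HKEY_CURRENT_USER, "Environment", 0,
                    winreg.KEY_SET_VALUE) as k:
    for name in ("TEMP", "TMP"):
        winreg.SetValueEx(k, name, 0, winreg.REG_EXPAND_SZ, r"D:\Temp")
```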

4) Dump the System Restore from time to time. This is just junk removal. [ ... ]

Sadly, this appears to be an all-or-nothing affair -- on XP, you can either delete all restore points or none of them. It would be nice to delete those that are, say, more than a year old.

Comment Re:No defrag! (Score 1) 370

Yes. Alas, this is a consequence of ZFS's COW (copy on write) design.

In a filesystem like ext3, if you open a file, seek to some offset, and write new data, ext3 will overwrite the existing disk block in place. ZFS, however, will allocate a new block for that offset (copy on write), write the modified data there, and update the block pointers. The result is that it's apparently very easy to badly fragment a ZFS file (do a Google search for "ZFS fragmentation" to see various stories and tests people have written).
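A toy model (not real filesystem code) makes the difference visible: start with a contiguously allocated file, apply random overwrites under each policy, and count the "seams" where logically adjacent blocks are no longer physically adjacent:

```python
import random

def seams_after_overwrites(cow, blocks=100, writes=50):
    block_map = list(range(blocks))   # offset -> disk block, initially contiguous
    next_free = blocks
    for _ in range(writes):
        off = random.randrange(blocks)
        if cow:                       # ZFS-style: the overwrite lands on a fresh block
            block_map[off] = next_free
            next_free += 1
        # ext3-style: the existing block is rewritten in place; the map is unchanged
    # count adjacent offsets whose blocks are no longer adjacent on disk
    return sum(1 for a, b in zip(block_map, block_map[1:]) if b != a + 1)

print("in-place seams:", seams_after_overwrites(cow=False))  # always 0
print("COW seams:     ", seams_after_overwrites(cow=True))   # typically dozens
```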

You can apparently mitigate the problem by occasionally copying the entire affected file -- Oracle's own whitepaper on the subject apparently reads, "Periodically copying data files reorganizes the file location on disk and gives better full scan response time."
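That advice boils down to rewriting the file sequentially so the filesystem can hand it fresh, mostly contiguous blocks. A minimal sketch, with a placeholder path; don't do this while anything is actively writing to the file:

```python
import os
import shutil

src = "/tank/db/datafile.dbf"   # placeholder path
tmp = src + ".rewrite"
shutil.copyfile(src, tmp)       # sequential copy gets freshly allocated blocks
shutil.copystat(src, tmp)       # preserve permissions and timestamps
os.replace(tmp, src)            # atomically rename the copy over the original
```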

Bottom line: ZFS is not a panacea, nor is it simple. There are myriad options, and trade-offs to all of them.

Comment My Experiences (Score 4, Informative) 163

First, a gratuitous plug for my Let's Play/Drown Out video series, currently focusing on 3DO console titles: http://www.youtube.com/playlis...

Why is that link relevant? Because they were all made using Kdenlive.

When I first started mucking around with digital video, I tried a bunch of free/libre packages, and formed the following opinions of each:

Windows Movie Maker
Yes, $(GOD) help me, I gave it a serious try. To my utter surprise, it mostly worked and did what I wanted without crashing. However, the UI was rather inflexible, and I needed more than the handful of features it offered, so I kept looking.

Cinelerra
Every Google search for free video editing software always turns this up, so I tried it. Then, ten minutes later, I had to stop trying it because it kept crashing and/or hanging at the slightest provocation. It has an impressive-looking array of features, and the editing timeline looks quite powerful. Evidently, you can do some fairly impressive things with Cinelerra, provided you can identify and avoid all its weak spots.

Pitivi
The last time I tried this, it was unreliable, under-featured, and incredibly slow. Just loading a one-hour video clip into the timeline took several minutes while it generated thumbnails and an audio waveform for the clip.

OpenShot
Assuming I'm remembering this package correctly, all it does is assemble edits -- that is, you can tack together a bunch of clips one after the other to create a larger work. If you want to do any effects or titling, you're SOL. Perhaps the Kickstarter-funded upgrade will yield some improvements.

Lightworks
I had to learn something the hard way here: Lightworks is a professional package. By that, I don't mean it has a ton of features (although it certainly does). I mean it expects a certain calibre of media asset before it will operate on it in the manner you expect. Us mere proles are satisfied to use MP4 or MKV or ($(GOD) help us) AVI files. In the pro space, however, you have files that contain not just compressed audio and video, but also timecode. And not just timecode measured relative to when you last pressed the RECORD button, but a master timecode from an achingly accurate central generator fed to all your cameras and microphones. This not only keeps all your cameras and mics in precise sync ('cause otherwise their internal clocks will drift relative to each other), it lets you trivially sync all your master footage and intercut shots without even thinking about it. Also, as near as I can tell, there's no such thing as inter-frame compression in professional video. Each frame is atomic, which means you can cleanly cut anywhere, but the footage doesn't compress anywhere near as small as, say, H.264.

The upshot is that, if you don't have equipment that generates all this metadata for you, you need to convert your footage from the puny consumer format you're likely using. That means having truly monstrous amounts of disk available just to store the working set, and tons of RAM to make it all work. And hopefully your conversion script(s) didn't cough up bogus timecode.
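As a rough illustration of the kind of conversion involved (this is my guess at a sensible recipe, not Lightworks' documented ingest path): transcode the consumer file to an intra-frame codec such as ProRes with uncompressed PCM audio. A minimal sketch assuming ffmpeg is on the PATH, with placeholder filenames:

```python
import subprocess

# ProRes is intra-frame, so every frame is independently decodable --
# editor-friendly, but the file grows several-fold versus H.264.
subprocess.run([
    "ffmpeg", "-i", "consumer.mp4",
    "-c:v", "prores",       # intra-frame video codec
    "-c:a", "pcm_s16le",    # uncompressed 16-bit PCM audio
    "edit_ready.mov",
], check=True)
```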

So, yes, Lightworks is very very nice, if you have the proper resources to feed it. I don't, so I've set it aside for that glorious day when I get some proper equipment :-).

Kdenlive
Kdenlive is built on top of the MLT framework, and is about the best and most reliable thing I've found out there that doesn't cost actual money (either directly or indirectly). It has a non-linear timeline editor, it supports a wide variety of media formats, and it has a modest collection of audio and video effects (almost none of which you will use).

One of the more amazing things Kdenlive does is transparently convert sample and frame rates. Without my thinking about it, my first video mixed a 44.1KHz WAV file, a 48KHz WAV file, and a 44.1KHz MP3 file, with the output audio targeted at 48KHz AAC. I feared I was going to have to convert all the sources to a common format, but Kdenlive quietly resampled them all when compiling the output video file, and everything came out undistorted and in sync.
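If you'd rather normalize the audio up front, the equivalent standalone step is a simple resample. A minimal sketch using ffmpeg (roughly what Kdenlive's MLT backend does implicitly at render time), with placeholder filenames:

```python
import subprocess

# Resample a 44.1KHz WAV to 48KHz so it matches the project's audio rate.
subprocess.run(
    ["ffmpeg", "-i", "music_44k.wav", "-ar", "48000", "music_48k.wav"],
    check=True,
)
```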

Kdenlive does occasionally crash, which is annoying, but it has never destroyed my work. It has a fairly robust crash recovery mechanism, and you may lose your most recent one or two tweaks to the timelines, but you won't lose hours of work.

Kdenlive is not perfect, of course. It has limitations and annoyances that occasionally make me search for another video editor. But if, as I was, you're new to video editing, it will take you a while to find those limitations. Kdenlive has certainly served me very well in the meantime, and I think it's the most reliable, most capable, and most easily accessible Open Source video editor out there.

Comment Re: No silver bullet (Score 2) 116

That's a good point, and consistent with what I meant but didn't explain very well. Maybe "struggling" or some other word is better than "difficulty". The point is that the article talks about symptoms the researchers are trying to identify, but it never addresses that those symptoms can all occur under normal circumstances where nothing could or should be done: the difficulty may be good, encouraging focus because the developer is working on something intrinsically hard, or it may be bad, with the developer struggling on something easy because they're hungover or distracted after a terrible date last night.

Comment No silver bullet (Score 1) 116

For a given developer, even a very skilled developer, some tasks will be difficult even if the developer is working in an optimal state and there is no "intervention" that could change that. The discussion doesn't seem to acknowledge that point or discuss how they would distinguish between the events they probably care about and could do something about (developer is experiencing great difficulty because they are hungover or drowsy after lunch), and those they can't do anything about (developer is experiencing great difficulty because they are trying to debug a subtle concurrency bug that they're having trouble even reproducing).
Java

Oracle Hasn't Killed Java -- But There's Still Time 371

snydeq (1272828) writes: Java core has stagnated, Java EE is dead, and Spring is over, but the JVM marches on. "C'mon Oracle, where are the big ideas?" asks Andrew C. Oliver. 'I don't think Oracle knows how to create markets. It knows how to destroy them and create a product out of them, but it somehow failed to do that with Java. I think Java will have a long, long tail, but the days are numbered for it being anything more than a runtime and a language with a huge install base. I don't see Oracle stepping up to the plate to offer the kind of leadership that is needed. It just isn't who Oracle is. Instead, Oracle will sue some more people, do some more shortsighted and self-defeating things, then quietly fade into runtime maintainer before IBM, Red Hat, et al. pick up the slack independently. That's started to happen anyhow.'
