Comment: Re:Congratulations, FTDI, You Just Killed Yourselves (Score 1) 687

by ewhac (#48207355) Attached to: FTDI Reportedly Bricking Devices Using Competitors' Chips.

The chips are not destroyed.

Yes, the bricked chips can (allegedly) be restored to working order through the use of a utility. "Hang on. Would this utility be furnished by the very same company that wrecked my device in the first place?" Why yes; is that relevant? "Very fscking hilarious; I'll be looking elsewhere for my USB-serial adapter needs from now on..."

This is a distinction without a difference, as they say. You wouldn't cut any slack to a malware author who tried to claim, "Oh, the files aren't destroyed. They're merely encrypted, and can be restored to their previous condition through the use of this handy-dandy decryption key, available exclusively from me... for a modest fee..."

Comment: Congratulations, FTDI, You Just Killed Yourselves (Score 3, Insightful) 687

by ewhac (#48206865) Attached to: FTDI Reportedly Bricking Devices Using Competitors' Chips.
Assuming FTDI manages to weasel out of lawsuits for willful destruction of property (do NOT let them hide behind the so-called EULA), they have basically made themselves the vendor to avoid for either chips or drivers for said chips.

Can you tell, by merely looking at it, whether a given device is using GenuineFTDI(TM)(R)(C)(BFD) chips, or whether it's a counterfeit? Can you tell by using whatever the Windows equivalent of lsusb is? No? Then there is a random, non-trivial chance that plugging in your serial-ish device will either:

  • Work (old, non-destructive drivers),
  • Not work (new, non-destructive drivers), or
  • Ruin the device (new, destructive drivers), so that it not only Not Works, but also Stops Working on every other machine on which it previously worked.

Thus, in the mind of the user, FTDI == Flaky. And Flaky == Avoid.

Congratulations, FTDI. Ten points for avoiding your feet, but minus several million for shooting yourself straight in the head.
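For what it's worth, you can at least spot devices claiming FTDI's vendor ID (0403) in lsusb output -- though that tells you nothing about whether the silicon is genuine, which is exactly the problem. A quick illustrative Python sketch (the sample lines and the `ftdi_devices` helper are mine, not from any FTDI tool):

```python
# Sketch: flag devices claiming FTDI's USB vendor ID in lsusb-style
# output. FTDI's vendor ID is 0403; the driver in question reportedly
# rewrote counterfeit chips' product ID to 0000.
import re

FTDI_VENDOR = "0403"

def ftdi_devices(lsusb_output):
    """Return (vendor, product) ID pairs for devices claiming FTDI's vendor ID."""
    found = []
    for line in lsusb_output.splitlines():
        m = re.search(r"ID ([0-9a-f]{4}):([0-9a-f]{4})", line)
        if m and m.group(1) == FTDI_VENDOR:
            found.append((m.group(1), m.group(2)))
    return found

sample = """\
Bus 001 Device 004: ID 0403:6001 Future Technology Devices International, Ltd FT232 Serial (UART) IC
Bus 001 Device 005: ID 0403:0000 Future Technology Devices International, Ltd
Bus 001 Device 006: ID 1d6b:0002 Linux Foundation 2.0 root hub
"""

# Vendor ID alone can't distinguish a genuine chip from a clone --
# counterfeits claim 0403 too.
print(ftdi_devices(sample))
```

A product ID of 0000 (as in the second sample line) is the signature of a device the destructive driver has already zeroed out; it will no longer enumerate as a serial port on any machine.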

Comment: This Is Lennart's Defense? (Score 4, Insightful) 774

by ewhac (#48097261) Attached to: Systemd Adding Its Own Console To Linux Systems
Every time the systemd thing comes up, I want to hate it, but I don't truly know enough about it to actually hold a defensible opinion.

One of the defects constantly levelled against systemd is its propensity to corrupt its own system logs, and how the official response to this defect is to ignore it. The uselessd page has a link to the bug report in question, which was reported in May 2013 and, over a year later, closed and marked NOTABUG. However, it seems Mr. Poettering is getting annoyed by people using his own bug reports against him, and added a comment to the bug report today purporting to clarify his position.

Unfortunately, his "clarifications" serve only to reinforce my suspicion that systemd is a thing to be avoided. To wit:

Since this bugyilla [sic] report is apparently sometimes linked these days as an example how we wouldn't fix a major bug in systemd:

Well, yeah, corrupt logs would be regarded by many as a major bug...

...Now, our strategy to rotate-on-corruption is the safest thing we can do, as we make sure that the internal corruption is frozen in time, and not attempted to be "fixed" by a tool, that might end up making things worse. After all, in the case the often-run writing code really fucks something up, then it is not necessarily a good idea to try to make it better by running a tool on it that tries to fix it up again, a tool that is necessarily a lot more complex, and also less tested.

Okay, so freeze the corrupted data set so things don't get worse, and start a new data set. A reasonable defensive practice. You still haven't addressed how the corruption happened, or how to fix it.

Now, of course, having corrupted files isn't great, and we should make sure the files even when corrupted stay as accessible as possible. Hence: the code that reads the journal files is actually written in a way that tries to make the best of corrupted files, and tries to read of them as much as possible, with the the subset of the file that is still valid. We do this implicitly on every access.

Okay, so journalctl tries to be robust, assumes the journal data might be crap, and works around it. So we can assume journalctl is probably pretty solid and won't make things worse.

Hence: journalctl implicitly does on read what a theoretical journal file fsck tool would do, but without actually making this persistent. This logic also has a major benefit: as our reader gets better and learns to deal with more types of corruptions you immediately benefit of it, even for old files!

....Uhhhhh-huh. So, yeah, newer tools will do a better job of working around the corruption, and we'll be able to recover more data, assuming we kept known-corrupt logs around. But what I still don't understand is WHY THE LOGS ARE CORRUPT. And why aren't there log diagnostic and analysis tools? If you already know your logs can turn to crap, surely there are structure analysis tools around that let you pick through the debris and recover data that your automated heuristics can't.

And why do I get the feeling that implied in the above is, "You don't need to know the log structure or how to repair it. We'll write the tools for that. We'll release better tools when we get around to it?"

File systems such as ext4 have an fsck tool since they don't have the luxury to just rotate the fs away and fix the structure on read: they have to use the same file system for all future writes, and they thus need to try hard to make the existing data workable again.

....AAAAnd you lost me. Seriously, this is your defense: "Filesystems are more important than system logs, so they have to try harder?" I find this insinuation... surprising. You do realize that btrfs didn't become worthy of general use overnight, right? (Some might argue it still hasn't.) It took years of development, and hundreds of people risking corrupt or destroyed filesystems before the kinks got worked out, and the risk of lost or corrupt files approached zero. More significantly, during this long development time, no one ever once suggested making btrfs the default filesystem for Linux. People knew btrfs could ruin their shit. No one ever suggested, "Oh, well, keep a copy of the corrupt block image and format a new one; we'll release better read tools Real Soon Now." No one suggested putting btrfs into everyday use until it proved its reliability.

Likewise, until it can demonstrate, with the same level of reliability as common filesystems, that it doesn't trash data, systemd is experimental -- an interesting experiment with interesting ideas and some promise, but still an experiment. I would appreciate it if you didn't experiment on my machines, thankyouverynice.

I hope this explains the rationale here a bit more.

No, sir. No it does not.

P.S.: Is there any evidence to suggest that systemd's log corruption issues have since been solved?

+ - Details of iOS and Android Device Encryption

swillden (191260) writes "There's been a lot of discussion of what, exactly, is meant by the Apple announcement about iOS8 device encryption, and the subsequent announcement by Google that Android L will enable encryption by default. Two security researchers tackled these questions in blog posts:

Matthew Green tackled iOS encryption, concluding that at bottom the change really boils down to applying the existing iOS encryption methods to more data. He also reviews the iOS approach, which uses Apple's "Secure Enclave" chip as the basis for the encryption, and guesses at how Apple can claim to be unable to decrypt the devices. He concludes, with some clarification from a commenter, that Apple really can't (unless you use a weak password, which can be brute-forced, and even then it's hard).

Nikolay Elenkov looks into the preview release of Android "L". He finds that Google has not only turned encryption on by default, but also appears to have incorporated hardware-based security, making it impossible (or at least much more difficult) to perform brute-force password searches off-device."
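Green's write-up (and Apple's own iOS security guide) describe key derivation deliberately calibrated to take roughly 80 ms per passcode guess, entangled with a device-unique hardware key so the search can't be moved off the phone. A back-of-envelope sketch of what that buys (the ~80 ms figure is the published one; the helper is mine):

```python
# Back-of-envelope: worst-case time to exhaust a passcode space at
# ~80 ms per guess -- the figure Apple's iOS security guide cites for
# its hardware-entangled key derivation. Assumes guesses cannot be
# parallelized off-device, which is the whole point of the design.
GUESS_TIME_S = 0.08

def worst_case_days(keyspace):
    """Days to try every key in the space, serially, on-device."""
    return keyspace * GUESS_TIME_S / 86400.0

print(f"4-digit PIN : {worst_case_days(10**4):12.3f} days")   # trivially weak
print(f"6-digit PIN : {worst_case_days(10**6):12.3f} days")   # under a day
print(f"8-char alnum: {worst_case_days(62**8):12.1f} days")   # effectively forever
```

Which matches Green's caveat: the hardware throttling only helps if the passcode itself isn't trivially enumerable.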

Comment: Re:HP (Score 1) 118

by ewhac (#48075339) Attached to: HP Is Planning To Split Into Two Separate Businesses, Sources Say

- the Windows 8 era machines include Windows 7 AND 8 installation disks - choose whatever you like.

If you custom-build a machine from their ZBook "Mobile Workstation" line, you can even configure a machine to not have Windows installed at all. Saves you about $100.00. Still rather pricey, though...

Comment: Re:ARE YOU LIKE STUPID???? (Score 1) 577

by ewhac (#48042889) Attached to: Will Windows 10 Finally Address OS Decay?

1) fix the PAGEFILE. Go inot the settings and change ti to fixed size - 2x-3x size of ram - both of minimum and maximum size. Do not let WInodws manage it! [ ... ]

Better still, move PAGEFILE.SYS off of C: entirely, preferably onto its own spindle if you can. That way the swapper isn't fighting every other application in the system for access to the system disk, and PAGEFILE.SYS itself won't become fragmented.

Consider moving %TEMP% and %TMP% off of C: as well.

4) Dump the System Restore from time to time. This is just junk removal. [ ... ]

Sadly, this appears to be an all-or-nothing affair -- on XP, you can either delete all restore points or none of them. It would be nice to delete those that are, say, more than a year old.

Comment: Re:No defrag! (Score 1) 370

by ewhac (#47883827) Attached to: The State of ZFS On Linux
Yes. Alas, this is a consequence of ZFS's COW (copy on write) design.

In a filesystem like EXT3, if you open a file, seek to some offset, and write new data, EXT3 will write the new data to the existing disk block in place. ZFS, however, will allocate a new block for that offset (copy on write), write the modified data to it, and update the block pointers. The result is that it's apparently very easy to badly fragment a ZFS file (do a Google search for "ZFS fragmentation" to see various stories and tests people have written).
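A toy model makes the fragmentation mechanism obvious. This sketch (my own illustration, nothing to do with ZFS's actual allocator) tracks a file's logical-to-physical block map: an in-place filesystem leaves the map alone on overwrite, while COW remaps every overwritten block to a fresh block at the end of the pool:

```python
# Toy model of why copy-on-write fragments files. An in-place update
# (EXT3-style) would leave the logical->physical map untouched; COW
# remaps the overwritten logical block to a newly allocated physical
# block. Illustrative only -- real ZFS allocation is far smarter.

def overwrite_cow(layout, i, next_free):
    """Remap logical block i to a fresh physical block; return new frontier."""
    layout[i] = next_free
    return next_free + 1

# An 8-block file laid out contiguously at physical blocks 0..7:
layout = list(range(8))
next_free = 8

# Scattered partial rewrites, as a database-style workload might do:
for i in [3, 1, 6, 1, 4]:
    next_free = overwrite_cow(layout, i, next_free)

# The logical->physical map is now scattered; a sequential read of the
# file must now seek all over the pool.
print(layout)
```

Five small overwrites and the once-contiguous file already needs seeks in the middle of a "sequential" read; scale that up to a busy database file and you get the fragmentation stories Google turns up.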

You can apparently mitigate the problem by occasionally copying the entire affected file -- Oracle's own whitepaper on the subject apparently reads, "Periodically copying data files reorganizes the file location on disk and gives better full scan response time."

Bottom line: ZFS is not a panacea, nor is it simple. There are myriad options, and trade-offs to all of them.

Comment: My Experiences (Score 4, Informative) 163

by ewhac (#47812045) Attached to: Ask Slashdot: the State of Free Video Editing Tools?
First, a gratuitous plug for my Let's Play/Drown Out video series, currently focusing on 3DO console titles:

Why is that link relevant? Because they were all made using Kdenlive.

When I first started mucking around with digital video, I tried a bunch of free/libre packages, and formed the following opinions of each:

Windows Movie Maker
Yes, $(GOD) help me, I gave it a serious try. To my utter surprise, it mostly worked and did what I wanted without crashing. However, the UI was rather inflexible, and I needed more than the handful of features it offered, so I kept looking.

Cinelerra
Every Google search for free video editing software always turns this up, so I tried it. Then, ten minutes later, I had to stop trying it because it kept crashing and/or hanging at the slightest provocation. It has an impressive-looking array of features, and the editing timeline looks quite powerful. Evidently, you can do some fairly impressive things with Cinelerra, provided you can identify and avoid all its weak spots.

The last time I tried this, it was unreliable, under-featured, and incredibly slow. Just loading a one-hour video clip into the timeline took several minutes as it tried to generate thumbnails and an audio waveform for the clip.

Assuming I'm remembering this package correctly, all it does is assemble edits -- that is, you can tack together a bunch of clips one after the other to create a larger work. If you want to do any effects or titling, you're SOL. Perhaps the Kickstarter-funded upgrade will yield some improvements.

Lightworks
I had to learn something the hard way with this package: This is a professional package. By that, I don't mean it has a ton of features (although it certainly does). I mean it expects a certain level of media asset before it will operate on it in the manner you expect. Us mere proles are satisfied to use MP4 or MKV or ($(GOD) help us) AVI files. However, in the pro space, you have files that contain not just compressed audio and video, but also timecode. And not just timecode measured relative to when you last pressed the RECORD button, but also a master timecode from an achingly accurate central timecode generator fed to all your cameras and microphones. This not only means all your cameras and mics are in precise sync ('cause otherwise their internal clocks will drift relative to each other), but you can trivially sync all your master footage and then intercut shots without even thinking about it. Also, near as I can tell, there's no such thing as inter-frame compression in professional video. Each frame is atomic, which means you can cleanly cut anywhere, but it doesn't compress anywhere near as small as, say, H.264.

The result is that, if you don't have equipment that generates all this metadata for you, then you need to convert it from the puny consumer format you're likely using. This means having truly monstrous amounts of disk available just to store the working set, and tons of RAM to make it all work. And hopefully your conversion script(s) didn't cough up bogus timecode.

So, yes, Lightworks is very very nice, if you have the proper resources to feed it. I don't, so I've set it aside for that glorious day when I get some proper equipment :-).

Kdenlive
Kdenlive is built on top of the MLT framework, and is about the best and most reliable thing I've found out there that doesn't cost actual money (either directly or indirectly). It has a non-linear timeline editor, it supports a wide variety of media formats, and it has a modest collection of audio and video effects (almost none of which you will use).

One of the more amazing things Kdenlive does is transparently convert sample and frame rates. Without thinking about it, my first video involved using a 44.1 kHz WAV file, a 48 kHz WAV file, and a 44.1 kHz MP3 file, with the output audio to be 48 kHz AAC. I feared I was going to have to convert all the sources to the same format, but Kdenlive quietly resampled them all when compiling the output video file, and everything came out undistorted and in sync.
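For the curious, 44.1 kHz to 48 kHz is a rational-rate conversion: a resampler conceptually upsamples and decimates by the reduced integer ratio. A quick arithmetic sketch (this is just the math, not how Kdenlive/MLT actually implements it):

```python
# The 44.1 kHz -> 48 kHz conversion reduces to an exact integer ratio,
# which is why a good resampler can keep long clips in perfect sync.
from fractions import Fraction

src, dst = 44100, 48000
ratio = Fraction(dst, src)   # automatically reduced to lowest terms

print(ratio)                 # 160/147

# Conceptually: upsample by 160, then decimate by 147. One second of
# source audio (44100 samples) maps to exactly 48000 output samples:
n = 44100
print(n * ratio)             # 48000
```

Because the ratio is exact, there's no cumulative drift over an hour-long clip, which is why the audio stayed in sync.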

Kdenlive does occasionally crash, which is annoying, but it has never destroyed my work. It has a fairly robust crash recovery mechanism, and you may lose your most recent one or two tweaks to the timelines, but you won't lose hours of work.

Kdenlive is not perfect, of course. It has limitations and annoyances that occasionally make me search for another video editor. But if, as I was, you're new to video editing, it will take you a while to find those limitations. Kdenlive has certainly served me very well in the meantime, and I think it's the most reliable, most capable, and most easily accessible Open Source video editor out there.

+ - Some raindrops exceed their terminal velocity->

sciencehabit (1205606) writes "New research reveals that some raindrops are “super-terminal” (they travel more than 30% faster than their terminal velocity, at which air resistance prevents further acceleration due to gravity). The drops are the result of natural processes—and they make up a substantial fraction of rainfall. Whereas all drops the team studied that were 0.8 millimeters and larger fell at expected speeds, between 30% and 60% of those measuring 0.3 mm dropped at super-terminal speeds. It’s not yet clear why these drops are falling faster than expected, the researchers say. But according to one notion, the speedy drops are fragments of larger drops that have broken apart in midair but have yet to slow down. If that is indeed the case, the researchers note, then raindrop disintegration happens normally in the atmosphere and more often than previously presumed—possibly when drops collide midair or become unstable as they fall through the atmosphere. Further study could improve estimates of the total amount of rainfall a storm will produce or the amount of erosion that it can generate."
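For scale, terminal velocity is the speed at which air drag balances gravity; for a rigid sphere under quadratic drag, v = sqrt(2mg / (ρ_air · Cd · A)). A rough sketch (textbook sphere constants; real sub-millimeter drops sit between the Stokes and fully turbulent drag regimes, so treat these numbers as ballpark):

```python
# Ballpark terminal velocity of a raindrop modeled as a rigid sphere
# with quadratic drag: v = sqrt(2*m*g / (rho_air * Cd * A)).
# Constants are textbook values, not from the paper in question.
import math

def terminal_velocity(diameter_m, rho_water=1000.0, rho_air=1.225,
                      cd=0.47, g=9.81):
    r = diameter_m / 2
    m = rho_water * (4 / 3) * math.pi * r**3   # drop mass
    area = math.pi * r**2                      # cross-sectional area
    return math.sqrt(2 * m * g / (rho_air * cd * area))

for d_mm in (0.3, 0.8, 2.0):
    print(f"{d_mm} mm drop: ~{terminal_velocity(d_mm / 1e3):.1f} m/s")
```

The sphere model overestimates for the smallest drops, but it shows the key scaling: terminal speed grows with the square root of drop diameter, so a fragment of a large drop that hasn't slowed down yet is moving much faster than its own size would predict -- the "super-terminal" effect described above.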

+ - How Facebook Sold You Krill Oil

Submitted by Anonymous Coward
An anonymous reader writes "With its trove of knowledge about the likes, histories and social connections of its 1.3 billion users worldwide, Facebook executives argue, it can help advertisers reach exactly the right audience and measure the impact of their ads — while also, like TV, conveying a broad brand message. Facebook, which made $1.5 billion in profit on $7.9 billion in revenue last year, sees particular value in promoting its TV-like qualities, given that advertisers spend $200 billion a year on that medium. “We want to hold ourselves accountable for delivering results,” said Carolyn Everson, Facebook’s vice president for global marketing solutions, in a recent interview. “Not smoke and mirrors, maybe it works, maybe it doesn’t.”"

Comment: I Suppose Next We'll Be Seeing Benghazi Stories... (Score -1, Flamebait) 465

by ewhac (#47260871) Attached to: IRS Lost Emails of 6 More Employees Under Investigation
I don't know who the miserable asshat is who keeps front-paging this blithering right-wing horseshit, but they need to be fired yesterday.

This is a non-story. It has always been a non-story. It has already been investigated, and what turned up was a gigantic pile of nothing. But then, that's all Darrell Issa's "investigations" have ever turned up.

Yes, the IRS investigated a bunch of applications for tax-exempt status for a number of "Tea Party" groups. They also performed the same investigations on so-called liberal groups. They're supposed to do that; otherwise any moron could claim tax-exempt status. Were there problems with the investigations? Yes, because the tax law that requires them is so vague that it's basically left entirely to the discretion of the investigator.

Were any applications denied? No, not really. Did the IRS investigate more "Tea Party" groups than liberal groups? It would appear so. It would also appear that there were a hell of a lot more "Tea Party" applications flooding in during the timeframe in question (which makes sense, given that the "Tea Party" is not grassroots, but entirely the construction of FreedomWorks).

As for how "terribly convenient" it is for multiple IRS personnel under investigation to have lost the data in question, well... Considering that the IRS is underfunded (sounds weird, but it's true); and considering that they have tens of thousands of personal computers, none of them brand new, and all of them in various states of disrepair and subjected to various forms of abuse; and considering that every one of those tens of thousands of computers are running FUCKING WINDOWS , then you are provably a drooling idiot if you think the probability for unrecoverable data loss is anything less than 1.0.

The only story here is that IRS regs concerning tax-exempt political advocacy organizations are hopelessly vague. Moreover, it's not a story that belongs on a tech-oriented site. If I wanted to read about fabricated right-wing ghost stories, I'd visit RedState. Get this shit off Slashdot.
