Comment Re:Also why are they doing it? (Score 1) 520

It's likely not to work well for higher-quality stuff. It can handle DVD and OK-quality DivX/XviD standard-def encodes, but the CPU in the Wii is just too slow to play any hi-def content. A 700-ish MHz PPC CPU doesn't get you very far for media playback these days.

Not sure if it's possible to make use of any graphics acceleration hardware to do decoding, but this certainly isn't implemented yet and I don't think anyone's working on it.

Comment Re:BIOS (Score 1) 437

Well, you can't get to most bootloaders without initializing some hardware, like, say, the disk controller (which presumably requires at least a part of the PCI bus to be initialized). Of course, if you put the bootloader (or even the entire kernel) into the BIOS itself, then it doesn't matter.

Comment Re:Same as bugzilla? (Score 2) 283

I've finally come to the opinion that locking is unnecessarily expensive, and doesn't tend to enhance collision handling capabilities beyond a simple concurrency timestamp check.

I guess that's fine, depending on the users' needs. If "typical" edits require spending 20 minutes fiddling around with a web form before the user clicks the save button, I bet they're gonna be pretty pissed when they get a rejection message and their 20 minutes of work gets thrown away.

With this in mind, if you're going to use your optimistic locking approach, you'd need to add more code that, instead of rejecting conflicting changes outright, presents the user with options, possibly listing the conflicting changes.
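A minimal sketch of what that check-and-report flow might look like, assuming a hypothetical `records` table with a `version` column (none of these names come from the poster's actual app); on failure the app would re-query the row and show the user both versions instead of just throwing their edit away:

```python
import sqlite3

def save_record(conn: sqlite3.Connection, record_id: int,
                new_body: str, loaded_version: int) -> bool:
    """Try to save; return False if someone else saved first."""
    cur = conn.execute(
        "UPDATE records SET body = ?, version = version + 1 "
        "WHERE id = ? AND version = ?",
        (new_body, record_id, loaded_version),
    )
    conn.commit()
    # 0 rows updated means the version changed under us: a conflict the UI
    # should present to the user rather than silently discarding their work.
    return cur.rowcount == 1
```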

Personally I'd find the auto-expiring lock option to be the best (the user will know going into it that someone else is editing), though hitting the DB to update the timestamp every 10 seconds doesn't scale to a large number of users. But there are other options, like ones some wiki software uses, where you get a longer locking period (on the order of several minutes), and you have to manually refresh the lock before it expires. Or the page can refresh the lock automatically, as long as it can tell the user isn't idle.
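A rough sketch of that auto-expiring lock idea, again with an invented schema (a `locks` table with one row per record) just to illustrate the flow; the edit page would call `refresh_lock` periodically while the user is active, so an abandoned session simply lets the lock lapse:

```python
import sqlite3
import time

LOCK_TTL = 300  # seconds; the "several minutes" locking period

def acquire_lock(conn: sqlite3.Connection, record_id: int, user: str) -> bool:
    """Take the lock if nobody holds it or the previous holder's lock expired."""
    now = time.time()
    cur = conn.execute(
        "UPDATE locks SET owner = ?, expires_at = ? "
        "WHERE record_id = ? AND (owner IS NULL OR expires_at < ?)",
        (user, now + LOCK_TTL, record_id, now),
    )
    conn.commit()
    return cur.rowcount == 1

def refresh_lock(conn: sqlite3.Connection, record_id: int, user: str) -> bool:
    """Extend the lock's expiry; only succeeds if we still own it."""
    cur = conn.execute(
        "UPDATE locks SET expires_at = ? WHERE record_id = ? AND owner = ?",
        (time.time() + LOCK_TTL, record_id, user),
    )
    conn.commit()
    return cur.rowcount == 1
```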

The unspoken question is, why not design a user workflow that avoids synchronization issues...?

Yeah, that would be ideal, but may not be practical. The article poster says they're migrating a native app to a web app, so changing the users' workflow may not be an option.

Comment Re:case background (Score 2, Insightful) 152

Assuming this guy has been scamming the whole time and didn't just start in the past few years (or didn't stop in the 90s and just start up again), it's pretty sad that it's taken 11 years from the original complaint to get any meaningful action taken. Though you could argue that's 13 years, and not 11, since apparently he hasn't complied with the court order for 2 years, with no consequences until now.

Comment Re:Microkernel (Score 1) 639

Indeed, good for him. At least he can admit he was wrong (even if he was a dick about it) and adopt a new viewpoint. He may not be a "nice guy" in the traditional sense, but at least he doesn't try to hide his mistakes when he realizes he's made them.

Comment Re:Problem (Score 1) 639

He was referring to the original poster who said, "BSD however, really only has one user base - and they largely want the same thing. Stability, security, and performance. So all the cute little desktop friendly stuff that Linux keeps adding and all the server-specific stuff that Linux keeps adding aren't there." So he (semi-jokingly) asked, if BSD doesn't have "server" or "desktop" stuff, then what does it do?

Comment Re:The Rules are the Rules... (Score 1) 104

True, but given a set of judging criteria that includes the "value" or "complexity" of the submission, it'd be damn-near impossible to judge the final result. Your hypothetical example is easy: if the dirt-simple algo got a 10.2% improvement while algos of orders-of-magnitude greater complexity netted 10.4%, it'd be easy to say the simple algo is more "valuable." But what if the complexity differences are much smaller? How do you judge them when the improvement is also close? I think Netflix was smart to provide a numerical target and detail exactly how the numerical results are calculated. While there's always room for people to argue a result, it's harder when the criteria are simple.
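For reference, Netflix's numerical target was framed as a percentage reduction in prediction error (RMSE) relative to their existing system. A tiny sketch, using the made-up 10.2%/10.4% figures above rather than any real contest numbers, of why that sort of criterion leaves little room to argue:

```python
import math

def rmse(predicted, actual):
    """Root-mean-square error between two equal-length sequences of ratings."""
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(actual))

def improvement(baseline_rmse, new_rmse):
    """Percentage reduction in RMSE relative to the baseline system."""
    return 100.0 * (baseline_rmse - new_rmse) / baseline_rmse

# Using the hypothetical figures from the paragraph above:
print(improvement(1.000, 0.898))  # ~10.2
print(improvement(1.000, 0.896))  # ~10.4
```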

Comment Re:Anonymous Coward (Score 2, Interesting) 104

It is? I only see a bit in a question about licensing (somewhat tangential) that suggests that Netflix hopes that participants will be able to build a business out of the algorithm they design, but that sounds pretty weak, and doesn't have all that much to do with what the participants got, aside from the prize money.

The contest has been going on for three and a half years, and the winning team of seven will be splitting a cool million, which gives each person just under $143k, minus taxes. Now, I don't know how much time these guys spent on it, but even if they only worked a year's worth of regular work hours over the 3.5 years, roughly $143k per person-year for seven developers is a pretty damn good bargain from Netflix's perspective for what they got (not just the new algorithm, but a lot of good PR and buzz).

I'm not saying the BellKor guys got the shaft; they were certainly compensated (not just monetarily; I'm sure their employability went up as well), and I'm sure a big part of their desire to compete was the challenge itself. But I'd bet that Netflix would've had to pay quite a bit more to get the same result from an in-house team.

And it's not like the BellKor team did all the work; all the other teams did some of the same work independently. I imagine many (most?) of them didn't stand a chance, but let's just throw out a conservative number and say the top 5% of teams managed to improve on Netflix's existing algorithm (even if not by 10%). It's conceivable that an in-house team of paid developers/researchers would end up doing an analogous iterative process, achieving smaller gains and eventually reaching the 10% goal. Depending on Netflix's hiring skills, it's possible they wouldn't reach a 10% improvement without many more man-years of work.

This contest was a very smart move on Netflix's part: the only real downside is that the (self-imposed) contest terms allow the participants to license their implementations to other companies, including Netflix's competitors.

Comment Re:Linux audio (Score 1) 374

Other tasks, like live MIDI stuff, tend to require low latency or they will sound extremely off.

... which is why you should be using jack for these sorts of things.

Yes, they have been trying to work together, but, serious question: have you ever tried this setup? It's not pretty; the PulseAudio sink for JACK has a myriad of problems, and most of the time it just plain doesn't work.

This may improve in the future, but for now it is extremely hit or miss, with most of the time being miss.

I never said it worked perfectly, just that there's been a commitment made to support this setup. PA is a new technology, and the basic concept of sharing an ALSA device between two sound-server-like apps is completely new as well. These things just don't magically start working overnight.

Better yet, try implementing them; I have.

Have you filed bug reports and tried to help fix the issues, or are you just complaining uselessly?

Comment Re:Linux audio (Score 1) 374

I'm sorry you have shitty drivers for your audio hardware, but making PA the default in several distributions was a calculated move designed to quickly expose these sorts of problems so they could be fixed in the audio drivers themselves (PA exercises the ALSA API in many interesting ways that most apps don't).

If you object to the idea of your distro using you as a guinea pig, I can certainly understand that, but that's a separate issue.
