Shouldn't be allowing language in kernel that doesn't have a standard.
Even before Rust, the kernel wasn't written to ISO C, but to GNU C (there's plenty of quotes from Linus to that effect).
Rust does not make deadlocks impossible (no language can). Rather, it disallows some ways of structuring your code and will refuse to compile until you've written it in a way where the compiler can prove that a data race, memory error, or other such error cannot occur. That can sometimes make Rust code much harder to write, but the benefit is that you can be much more aggressive about using threads because you don't have to "play it safe because of the risk of errors".
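For illustration, here's a minimal sketch (my own, nothing to do with kernel code) of the kind of restructuring the compiler forces. Sharing a plain mutable counter by reference across threads gets rejected outright; wrapping it in Arc<Mutex<...>> is one restructuring the checker can verify:

    use std::sync::{Arc, Mutex};
    use std::thread;

    fn main() {
        // Sharing `&mut counter` across threads would not compile:
        // the borrow checker can't prove the accesses are race-free.
        // Arc<Mutex<...>> is a restructuring it can verify.
        let counter = Arc::new(Mutex::new(0u64));
        let mut handles = Vec::new();

        for _ in 0..4 {
            let counter = Arc::clone(&counter);
            handles.push(thread::spawn(move || {
                for _ in 0..1000 {
                    *counter.lock().unwrap() += 1;
                }
            }));
        }
        for handle in handles {
            handle.join().unwrap();
        }
        // Always prints 4000; no data race is possible here.
        println!("{}", *counter.lock().unwrap());
    }

(Note that nothing stops you from locking two such mutexes in opposite orders in two different threads, which is why deadlocks remain your problem.)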
From my (limited) understanding, I think dark matter would have to interact with *something* other than gravity, since otherwise nothing would give it mass: it would be massless and hence go flying at the speed of light, unable to clump together in galaxies. That something could (hopefully) be the Higgs, but I guess it could also be some kind of "fifth force" that doesn't interact with normal matter, in which case we're still going to have a hard time detecting it.
The more digits you have, the more bits you need to get sufficient accuracy in your FFT. So your FFT isn't *quite* O(N*log(N)) in that context. Once you consider that, I believe your FFT becomes slower than the Schönhage–Strassen algorithm. Still much faster than the naive O(N^2) algorithm, of course.
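Back-of-envelope version of that argument (my own sketch of the standard reasoning): split an N-bit number into K = N/b chunks of b bits each. A convolution coefficient is a sum of K products of b-bit chunks, so

    |c_i| <= K * (2^b - 1)^2 < K * 2^(2b)
    => precision needed ~ 2b + log2(K) bits

to recover the coefficients exactly by rounding. With fixed 53-bit doubles that bound eventually breaks down, and letting the precision grow like O(log N) instead puts the bit complexity at roughly O(N*log^2(N)) with plain word arithmetic, versus O(N*log(N)*log(log(N))) for Schönhage–Strassen.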
Patent trolls are always a problem for anything you do, but at least here there's a long list of companies providing royalty-free licensing of their video patents for AV1. It's no guarantee, but it sure beats any other free video codec effort.
It's not so much that the numbering changed; what really changed was the development methodology, and the numbering just reflects that. Going from 2.4 to 2.6 took forever because there were too many changes, some not so well tested; and because stabilization was taking forever, still more changes would come in, since otherwise it would be years before they could land at all. So progress was (relatively) slow and new features had to be shipped through custom patches rather than mainline.

There's been a general realization (not just for the kernel) that that kind of development cycle just couldn't work anymore, and that's why the kernel moved to a shorter one. There's less pressure to push new features in "as soon as possible because otherwise they'll be delayed by years", so there are far fewer things to debug in each release and, overall, everything's better. The only drawback is for people who don't want to upgrade as often, which is why there are a few special "stable" releases once in a while (and Firefox has ESRs). If Linus had kept the old development model, I suspect the current "stable" kernel would still be a 2.6.x and there would be a 2.7.392 development kernel that still wasn't ready to ship.
At a previous employer, we lost an entire row of servers in a DC after a water leak (somehow) triggered the suppression system. The 'explosion' was strong enough to knock the doors off cabinets, bend two cabinets, and kill a couple hundred drives. Thankfully our service was spread out far enough to survive the loss of a row for a few weeks while we waited for all the new disks to arrive from IBM.
The pictures were crazy; it looked like a bomb had gone off.
What you're saying has been historically true: jobs moved from farming to manufacturing, then from manufacturing to services. The problem is that, for now, we have nothing after services. Sure, there's R&D, but unless something radical happens, it's not a sector that can reasonably be expected to absorb a huge number of service workers. So unless we find a "new sector" real soon, we're heading into uncharted territory (not necessarily bad, but we don't know).
The problem is not so much any sort of addiction as it is a time sink. Sites like YouTube are designed so that you keep clicking on more videos until hours have passed and no homework has been done. If it weren't mobile devices, it'd be TV or something else, but mobile devices are what we have to worry about these days. When they're teens you can't just look over their shoulders, so the best I've come up with is DNS blocking on OpenWrt (LEDE).
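In case it's useful to anyone, the rough shape of that setup (illustrative only; the domain list is hypothetical and you'd adjust it to taste) is a couple of dnsmasq address overrides in OpenWrt's /etc/config/dhcp, which make the router's resolver answer 0.0.0.0 for those domains:

    config dnsmasq
        # Each 'address' entry maps a domain (and all its subdomains)
        # to 0.0.0.0, so devices using the router's DNS can't resolve it.
        list address '/youtube.com/0.0.0.0'
        list address '/googlevideo.com/0.0.0.0'

Of course, it only works until the kids figure out how to point their devices at a different DNS server.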
I suspect they were also trying to cover NTSC's 29.97 Hz and maybe even 44.1/48 kHz audio.
Now, training is a little trickier because I cannot share the data.
I cannot share the data I'm currently using because it's copyrighted. Hence asking people for help getting data that I can redistribute.
So we're supposed to just give jmv a bunch of data with no way to know how he is using it?
Yes, because I have such a track record for keeping things private.
This is the first time we're trying this. We'll look at the quality of the data we get (yes, noise quality!) and if it's sufficiently good/useful, then we'll also make it available. It might take some time to sort out the useful samples from the ones that aren't, since some will already have noise suppression applied by the OS or browser.
Some browsers/OSes already have some noise suppression running. That may be why you're not hearing anything on playback.
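If you want to rule out the browser side, a page can explicitly ask for an unprocessed track. A minimal sketch using the standard getUserMedia constraints (I'm not saying this is what any particular site does):

    // Ask for a raw microphone track with the browser's built-in
    // processing disabled. These are standard MediaTrackConstraints;
    // they're best-effort, so we log what was actually applied.
    async function recordRawMic(): Promise<MediaStream> {
        const stream = await navigator.mediaDevices.getUserMedia({
            audio: {
                noiseSuppression: false,
                echoCancellation: false,
                autoGainControl: false,
            },
        });
        console.log(stream.getAudioTracks()[0].getSettings());
        return stream;
    }

Even with all three off, the OS or the audio driver can still be doing its own processing, which a web page can't see or disable.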
(I'm the author of the article)
You may not be aware of it, but around 10 years ago browsers started including real-time audio support. That now includes WebRTC, which lets you do videoconferencing in the browser (without Flash). As surprising as it may sound, some people like doing VoIP/videoconferencing. And those who use WebRTC tend to prefer it when their audio doesn't have too much noise. That is why RNNoise is useful.
Parkinson's Law: work expands to fill the time allotted to it.