Comment Re:Backstory? (Score 0) 51
Hey buddy, you watch your mouth when you're talking about NYCL!
All riggghhhhtttt. Thanks Amicus
No... I think people want something in between 70 words and 56 pages.
Oh. OK. How many words do they want?
It does seem insane. I mean, how can the court not see that this case is clearly about killing Vimeo and, by extension, video sharing sites? How can they expect all employees to be 100% diligent? It's never going to happen. If the only way to qualify for the safe harbor is to have a Google-class content filter, YouTube is going to be the only game in town in the US.
The legal fees alone are the killer. Veoh won every round, but had to go out of business due to the legal fees.
Maybe it's not about killing Vimeo, but rather making it "play nice" the way YouTube has: Pay for sync licensing of the music and support the licensing costs with ads.
In my experience, their primary goal in every instance is to put people out of business, if at all possible. YouTube has been 'playing nice' with them for many years, but they haven't dropped the pending case.
The blog post linked from TFS is a brief (~70-word) summary of the recent development, with no links to other posts on your blog for the background on the story; the only link is to the big PDF of the decision.
The decision, IMHO, gives you what you need to know about the facts of the case in order to understand the significance of the ruling. Fifty-six pages is enough reading, in my view, for our purposes. If you want more, you can go on PACER and get hundreds of additional pages from the case file.
I clicked on this story because I was interested in the original topic, but this whiny, defensive stuff is way more interesting.
Yeah, definitely
Haha, way to drive people away
Well, he shouldn't call something "obscure" just because he's too lazy to read it and wants someone else to tell him what it said.
So what's the backstory behind this, for those of us who don't read obscure blogspot blogs?
Obscure? You calling my blog obscure?
There is no "backstory". Just read the front story.
I've also seen Skype work when it shouldn't - behind corporate firewalls that are supposed to be blocking traffic.
When parties on both sides of a firewall are cooperating in getting data through the firewall, there is little you can do to stop them. The solution is to limit what software gets to run on the trusted side of the firewall. If you don't want Skype on your network, then don't install it. Some corporations do use Skype as part of their work. Those corporations are happy that Skype is so easy to get working through their firewall.
The point where it gets difficult to get data through is when there are two firewalls in play and each of them blocks traffic in opposite directions. The only reason any communication is possible at all in such a scenario is that somebody between the two firewalls is cooperating with the parties that want to communicate.
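To make that "cooperating party in the middle" concrete: when both peers can only make outbound connections, a relay on the open internet can still bridge them, because each firewall sees only outgoing traffic. A minimal sketch in Python (the port and function name are my own inventions for illustration; Skype's actual supernode relaying is proprietary and works differently):

```python
import socket
import threading

def start_relay(port=7321):
    """Accept two outbound connections and pipe bytes between them.

    Both peers connect *out* to this relay, so neither firewall has to
    accept an inbound connection; the relay is the cooperating middleman.
    """
    listener = socket.create_server(("127.0.0.1", port))

    def serve():
        a, _ = listener.accept()  # first peer dials in
        b, _ = listener.accept()  # second peer dials in

        def pipe(src, dst):
            # Copy bytes one direction until the source closes.
            while True:
                data = src.recv(4096)
                if not data:
                    dst.close()
                    return
                dst.sendall(data)

        threading.Thread(target=pipe, args=(a, b), daemon=True).start()
        threading.Thread(target=pipe, args=(b, a), daemon=True).start()

    threading.Thread(target=serve, daemon=True).start()
    return listener
```

Each peer dials out to the relay, so no inbound connection ever hits either firewall; the relay just copies bytes in both directions. Blocking this requires controlling which software runs inside the firewall, exactly as the parent says.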
Up to a month ago such a comment would've been modded to -1 because, historically, the NSA had helped improve the security of encryption standards.
Schneier has been speculating about the possibility of an NSA-planted backdoor in Dual_EC_DRBG since 2007. (That, by the way, took me a few attempts to find again, since there are many hits if you search for "NSA backdoor" on his site.)
As Schneier has said, the revelations about recent NSA activity have completely evaporated the goodwill the NSA earned in the cryptographic community back then.
Goodwill might be an exaggeration. Learning that the NSA had improved the security of DES did reduce the distrust in the NSA, but it did not eliminate it. The first evidence of the Dual_EC_DRBG backdoor probably brought that distrust back to the previous level. By now I guess trust in the NSA is at an absolute low. (If it got any lower, you would start trusting anything from the NSA not to be trustworthy.)
The first thing Microsoft did was route all traffic through their servers. No more routing via anonymous "volunteers" or off-shore peer-to-peer technology.
That's not true. Earlier this month I saw my Skype calls get routed through peers that were not participating in the call. That, however, resulted in very unreliable calls, so I moved the machine running Skype onto a public IP address. With that in place I could see the traffic going directly between me and the IP addresses of the people I was communicating with. On one occasion I did, however, notice other people's calls getting routed through my computer, now that it had a public IP.
Anybody using Skype can look at their own network traffic to verify my observations.
Why Skype hasn't started supporting IPv6 is beyond me. It is abundantly clear how much the Skype user experience suffers from NAT. They could even build a Teredo client into the application as a fallback when all other methods fail. Teredo is the only standardized tunnel protocol I know of that can be implemented in user mode without administrator privileges.
That said, I've always wondered why FLAC does not have FEC capability built in. It makes sense for a lossless format to have some support for more reliable archiving.
Such features don't belong in the file format. They should be at the storage layer. Whether they should live in the file system or the block layer is not entirely settled yet, so you can find implementations with redundancy at either layer. Reliable storage should be the default behaviour provided by the file system regardless of what files you put there. There are people who store files whose contents have nothing to do with audio, and those files need just as much reliability.
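The point that redundancy works on any bytes, not just audio, is easy to demonstrate with the simplest storage-layer scheme: a single parity block XORed across equal-sized data blocks, as in RAID-4/5. A toy sketch (function names are my own):

```python
def xor_parity(blocks):
    """Compute a RAID-4/5-style parity block over equal-sized blocks."""
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            parity[i] ^= byte
    return bytes(parity)

def recover(surviving_blocks, parity):
    """Rebuild the single missing block: XOR of survivors plus parity."""
    return xor_parity(surviving_blocks + [parity])
```

Because XOR is its own inverse, XORing the surviving blocks with the parity yields exactly the lost block, and nothing in the scheme cares whether the blocks hold FLAC frames, text, or random binary, which is the parent's argument for keeping this out of the audio format.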
As in, if you have a sample rate of 48,000 Hz, you can play back a frequency of 24,000 Hz.
But can you do so reliably? If you sample twice per wave and the original was a sine wave, the values you sample depend on the phase of your sampling. If you are lucky, you'll sample the peaks and get alternating high and low values, which would allow you to reproduce the original frequency. If you are unlucky, you'll sample the zero crossings and get a zero value at every sample. In general, my calculations say you'll end up with the correct frequency but a random amplitude, which is not very useful.
With three or four samples per waveform you should be able to reproduce not only the frequency but also the amplitude. More samples would be needed to reproduce the actual shape of the waveform, though at frequencies of 20kHz and upwards I don't think you could tell the difference between different waveforms anyway.
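The phase dependence at exactly the Nyquist frequency is easy to see numerically. A quick sketch (the helper name is my own) sampling a 24 kHz sine at 48 kHz, i.e. exactly two samples per cycle:

```python
import math

def sample_sine(freq, sample_rate, phase, n):
    """Sample sin(2*pi*freq*t + phase) at the given sample rate."""
    return [math.sin(2 * math.pi * freq * k / sample_rate + phase)
            for k in range(n)]

# A 24 kHz tone sampled at 48 kHz: the k-th sample is sin(pi*k + phase).
lucky   = sample_sine(24_000, 48_000, math.pi / 2, 8)  # hits the peaks
unlucky = sample_sine(24_000, 48_000, 0.0, 8)          # hits zero crossings
```

The in-phase case alternates between +1 and -1 and recovers the full amplitude; the quarter-cycle-shifted case lands on the zero crossings and the tone vanishes entirely. This is why the sampling theorem requires sampling strictly above twice the highest frequency you want to capture.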
The brain is a wonderful organ; it starts working the moment you get up in the morning, and does not stop until you get to work.