I was actually more fascinated that the once-pioneer and market leader in mobile phones (outside the US) was being sold off for more than $1Bn less than the sloppy-thirds of Skype, which is widely duplicated by free services.
Winston Churchill, meet Clement Attlee.
Except I think that involved winning a war, not just surviving in a currently tenuous second position...
Me too. Apparently we're in quite the minority.
Well I guess that answers the question about what *didn't* go in that big new data center.
In a previous life several years ago we looked at buying 300 of them to run Yellow Dog (yes, several years). They were nicely engineered units, but Apple clearly wasn't serious about enterprise sales. They offered a kit of spare parts for field replacements, but not much beyond that.
If you stoop to RTFA you'll see there's a lot of sensible stuff in it, with the two main points being: flight data recorders record a limited (25h) sliding window of data, and you have to go and find them, and sometimes this isn't possible. Both of those make crash investigations harder than they might be, and delay the results. If you could get results more quickly and reliably, that'd obviously be a good thing.
The author doesn't suggest a sudden wholesale replacement of black boxes, but a supplementary mechanism for also transmitting data in real time. That data could be aggregated and mined without waiting for a crash to occur, potentially providing a much richer source of information about aircraft behaviors in both normal and abnormal operation.
He confuses the point a bit in some of his summary sentences by implying that he *is* talking about a prompt wholesale replacement of black boxes.
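The "limited sliding window" is essentially a ring buffer: once capacity is reached, each new sample evicts the oldest. A minimal sketch of the idea (toy sample counts, not real FDR parameters):

```python
from collections import deque

# Toy model of a recorder that keeps only the most recent N samples,
# like the 25-hour sliding window described above. The capacity here
# is made up for illustration.
class SlidingRecorder:
    def __init__(self, max_samples):
        # deque with maxlen silently drops the oldest entry when full
        self.buf = deque(maxlen=max_samples)

    def record(self, sample):
        self.buf.append(sample)

    def dump(self):
        return list(self.buf)

rec = SlidingRecorder(max_samples=5)
for t in range(8):
    rec.record(t)
print(rec.dump())  # [3, 4, 5, 6, 7] -- only the last 5 samples survive
```

The supplementary real-time feed the author proposes would sit alongside this, streaming the same samples off the aircraft rather than replacing the buffer.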
You know, you may well have a point in there. But thinking that opt-in bicycle sharing schemes are a great example of the thin end of that wedge is just, you know... fucking bonkers.
That's what I was getting at -- it's not as if it's a simple case of blowing on the end to clear out some fluff. There are detailed procedures, including not least unplugging the other end of said cable to make sure it's unlit, which would mean finding said other end. And likely going to get the various items required for the cleaning procedure. Which would add up to at least a conversation or two, and perhaps one with us, the customer, discussing the topic. I'm not disagreeing that cleaning of fiber cables is sometimes necessary, but I didn't for a moment believe all that had actually gone on.
I had one a few years back which highlighted issues with both our attention to the network behavior and the ISP's procedures. One day the network engineer came over and asked if I knew why all the traffic on our upstream seemed to be going over the 'B' link, where it would typically head over the 'A' link to the same provider. The equipment was symmetrical and there was no performance impact; it was just odd, because A was the preferred link. We looked back over the throughput graphs and saw that the change had occurred abruptly several days earlier. We then inspected the A link and found it down. Our equipment seemed fine, though, so we got in touch with the outfit that was both colo provider and ISP.
After the usual confusion it was finally determined that one of the ISP's staff had "noticed a cable not quite seated" while working on the data center floor. He had apparently followed a "standard procedure" to remove and clean the cable before plugging it back in. It was a fiber cable and he managed to plug it back in wrong (transposed connectors on a fiber cable). Not only was the notion of cleaning the cable end bizarre -- what, wipe it on his t-shirt? -- and never fully explained, but there was no followup check to find out what that cable was for and whether it still worked. It didn't, for nearly a week. That highlighted that we were missing checks on the individual links to the ISP and needed those in addition to checks for upstream connectivity. We fixed those promptly.
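The gap here was monitoring each physical uplink, not just reachability beyond them. A minimal sketch of a per-link check, assuming hypothetical peer addresses and a TCP-reachable port on each link's far end (the addresses, port, and names below are all made up for illustration):

```python
import socket

# Hypothetical per-link peer addresses -- one probe target on the far
# side of each individual uplink, so a dead 'A' link gets noticed even
# while traffic quietly fails over to 'B'.
LINK_ENDPOINTS = {
    "A": "192.0.2.1",
    "B": "192.0.2.5",
}

def link_up(addr, port=179, timeout=2.0):
    """True if the far end accepts a TCP connection on the given port."""
    try:
        with socket.create_connection((addr, port), timeout=timeout):
            return True
    except OSError:
        return False

def check_links():
    """Probe every individual link, not just overall upstream reachability."""
    return {name: link_up(addr) for name, addr in LINK_ENDPOINTS.items()}
```

The point is the shape of the check, not the probe mechanism -- ping, BGP session state, or interface counters would serve equally well, as long as each link is tested individually.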
Best part was that our CTO had, in a former misguided life, been a lawyer and had been largely responsible for drafting the hosting contract. As such, the sliding scale of penalties for outages went up to one month free for multi-day incidents. The special kicker was that the credit applied to "the facility in which the outage occurred", rather than just to the directly affected items. Power aside (it wasn't included in the penalty), the ISP ended up crediting us over $70K for that mistake. I have no idea if they train their DC staff better these days about well-meaning interference with random bits of equipment.
I don't agree with software patents. I think it's a silly idea. Also, there are Shazam alternatives already available, ostensibly without infringing on the US patents now owned by Landmark LLC.
All that aside, though, this letter shouldn't come as a surprise. This guy didn't discuss alternatives, shortcomings, possibilities or even come up with something equivalent but independent. He called his post "Creating Shazam in Java", referenced someone else's detailed posting about how Shazam works, then went on to suggest sample code. Having some familiarity with the Shazam algorithm, and having read the patents around it and the original white paper by Avery Wang, I'd say the linked article by Bryan Jacobs is very much a lightened-up translation of the gory details of Shazam. The code is, as stated, a rough guide to doing that in Java (note that he glosses over the FFT).
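For readers unfamiliar with the algorithm family: the core spectral-peak idea (the step those posts gloss over) is to window the signal, take each window's spectrum, and keep the dominant frequency bins as a compact fingerprint. A toy sketch of that idea -- deliberately simplified, with a naive DFT instead of an FFT and a single peak per window, nothing like Shazam's actual constellation hashing:

```python
import cmath
import math

def dft_magnitudes(window):
    """Naive O(n^2) DFT magnitudes; fine for a sketch, far too slow for real audio."""
    n = len(window)
    return [
        abs(sum(x * cmath.exp(-2j * math.pi * k * i / n)
                for i, x in enumerate(window)))
        for k in range(n // 2)  # only the non-redundant half of the spectrum
    ]

def fingerprint(samples, window_size=64):
    """One dominant frequency bin per window -- a toy hash sequence."""
    prints = []
    for start in range(0, len(samples) - window_size + 1, window_size):
        mags = dft_magnitudes(samples[start:start + window_size])
        peak = max(mags[1:])          # skip the DC bin
        prints.append(mags.index(peak))
    return prints

# A pure tone at 5 cycles per window fingerprints as bin 5
tone = [math.sin(2 * math.pi * 5 * i / 64) for i in range(64)]
print(fingerprint(tone))  # [5]
```

The patented method layers a lot on top of this (multiple peaks per window, pairing peaks into time-offset hashes, database lookup), which is exactly why a faithful walkthrough of it lands squarely in the patent's claims.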
So it's not a surprise that Landmark is after him on patent grounds. The fact that such patents are allowed to exist is -- to me -- a problem. But the shock and whining about this particular case is naive. It's really, really obvious that this is something that would fall foul of that legal mechanism.
This isn't Shazam. Odd sounding, but Shazam doesn't actually own the Shazam algorithm anymore, although it does retain the right to use it. Landmark LLC is a separate entity.
I'm a hobbyist photographer and videographer, and I've been hassled for ID before when shooting in a public place. I've read plenty of stories about photographers being harassed improperly, and reading the article I don't think this is one of them. They started at 300ft, which was silly, and scaled it back to 65ft when called on it. Leaving aside the who and why, 65 feet doesn't make this stuff hard to photograph. Even with a 200mm lens on a digital SLR (especially crop sensor) you can get very serviceable shots of "what's going on" at 65ft. Professional press photographers on assignment usually have a healthier complement of lenses than that, before considering teleconverters, cropping in on the subject and so on.
If the story is something highly specific to do with equipment and handling of it, then perhaps you need an even bigger lens or to be closer to the subject. But if you're taking shots of how they're laying out booms, who's involved and so on, 65ft isn't a big deal at all. Seems like a not unreasonable tradeoff to keep people from getting under the workers' feet. The subjective standard I'm applying here is: does the restriction make it likely we'll not find out something that the public interest demands should be disclosed? No, it really doesn't.
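The 200mm-at-65ft claim is easy to sanity-check with similar triangles: scene width covered equals distance times sensor width divided by focal length. A back-of-the-envelope sketch, assuming a 23.6mm-wide APS-C sensor (my numbers, not the commenter's):

```python
# Similar-triangles field-of-view estimate: how much scene does a
# 200mm lens on an APS-C body cover from 65 feet away?
FEET_TO_M = 0.3048

def horizontal_coverage_m(distance_m, sensor_width_mm=23.6, focal_mm=200):
    """Scene width captured across the full frame width, in meters."""
    return distance_m * sensor_width_mm / focal_mm

d = 65 * FEET_TO_M                          # ~19.8 m
print(round(horizontal_coverage_m(d), 2))   # 2.34 -- about 2.3 m across the frame
```

A frame spanning roughly 2.3 meters comfortably fills with a couple of workers and the equipment they're handling, which is the point: 65ft isn't much of a restriction for that kind of shot.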
... before it even happened. A few years back Lexus introduced the automatic parallel parking feature, and Audi responded with this:
Amusing retort. Irrelevant for 99.9%+ of people, but sold right into the person you'd love to be.
Ever hear of Ice Nine? This sounds like much more fun.
What exactly did everyone think "Don't be Evil" would mean once the company went public, grew up and grew larger?
Not that this is necessarily anything premeditated and sinister, but notice how thinking through whether something might seem weird or discomfiting isn't at the top of the list?
And here comes a chopper to chop off your head.
Sorry. Bad habit.