It says in the article (iirc, read it a few days ago) that this was a problem a few years ago but now is mostly licked. So I'm not sure why it's coming up today.
One of the cool things about the Beaglebone Black and the Raspberry Pi is that they've got GPUs powerful enough to drive an HDMI display, and give you 1080p graphics if you make sure there's enough electric power and not too much interference (my RPi was a bit wonky on the last display I tried), so you can drive a decent monitor for programming or use it as a TV video player.
But if you don't need that, because you're doing X windows or just doing a bunch of ssh terminal sessions, you've got more potential choices, possibly lower power, possibly more memory. It depends a lot on what the target platform for your development is going to be, and on how much effort you plan to spend getting things set up, compared to just taking the BBB or RPi and calling it a day.
You may or may not have noticed, but the US press hasn't mentioned the name of the departing CIA Station Chief. Why not? Because it's A Secret! The Germans know who they're kicking out, but the US press goes along with the pretense that it's secret, so that other people he might spy on in the future won't know he's a spy, and people he's hung out with in the past won't be exposed as having been spies too. In some cases it's illegal for US government officials to reveal the names of spies, but if they leak them for administration political purposes, like Scooter Libby outing Valerie Plame, they get pardoned, and if the names get leaked by accident, like the White House press release "notice what name is missing" oops a few months back, the press politely pretends it didn't see anything.
If the Germans are really mad? Merkel can tell the German press the guy's name, and ask them to print it and put it online.
How do you get to the World Cup? Same way you get to Carnegie Hall - Practice!
They have shown that they cannot be trusted. They must lose the power to do this.
Pull someone's certificates or kill some CA. Someone needs to suffer because of this.
What happens now is that there's an investigation. Depending on the outcome the CA may be revoked for good, or merely forced to reissue lots of certificates. The deciding factor is the reason for the screwup - for instance they may have got hacked, rather than been actively corrupt. In that case Microsoft will have to decide if they have patched things up enough to continue as part of their root store program or whether to pull the plug. I doubt many people have certs issued by this CA so the damage would be relatively minimal.
Unfortunately you can't just kill any CA that screws up. For one, if the CA was widely used, killing it would be disruptive. For another, nothing is unhackable, especially when you get the NSA involved. Expecting CAs to reliably fight off professional hackers from dozens of governments and never, ever fail is probably an impossible standard to meet.
Hard decisions ahead for browser and OS makers, for sure.
This seems to be quite typical for government consultations. There's very little in the way of rigorous process. I remember years ago in the UK there was some poll that showed people were worried about anti-money laundering laws and their effect on freedom and civil liberties (it was a poll about risks to civil liberties, I think). So the British government said they'd respond to this by ordering a consultation on how best to improve Britain's AML laws. They invited public comments, etc. Six months later the consultation was published, and it recommended making the laws even stricter. There was absolutely no evidence-based approach used at all.
Probably the future of wearables is the personal hub.
The problem with wearables is that a radio capable of sustaining a connection to the outside world - be it 4G or WiFi - needs a fair bit of power and consequently quite a lot of battery. So devices have to be fairly chunky, or else have to be recharged more often than you'd like. But your Bluetooth mouse probably goes months on one charge - mine certainly does. So the solution is to have a device mounted discreetly on your belt or in your handbag, or carried in a pocket, which just acts as a personal hub/firewall, doing backhaul for your wearables. It doesn't need a screen. It doesn't need apps. But once it's paired with your wearables, you can use a device which has no backhaul capability to make phone calls or to access any service on the Internet.
This is an extension of how Google Glass or your Pebble watch already uses your smartphone. The smartphone acts as a personal hub. But if the display you actually use is the one on your Glass or the one on your Pebble, you don't need the big, fragile, power-hungry screen on your smartphone any more; so the personal hub can be cheaper and much more durable than any smartphone.
Once you've got that concept, there are other services that a personal hub can supply to your wearables, for example storage.
Everyone knows that the year of wearable computing is the year after the year of Linux on the desktop.....
The only problem with that is I have now had Linux on my desktop for TWENTY-ONE YEARS.
Duh, that should be obvious. The only reason they would have failed is if they were DOA or smoked when I plugged them in or something else was defective or the lamp fell over; bulbs that are supposed to last tens or hundreds of thousands of hours that I put in this year haven't had time to fail.
CFLs are different - they've been out a few years now, and I've had plenty of them fail, and worried about whether dead ones break before I get them out of the house and over to the recyclers.
My most recent not-really-energy-saving bulbs failed in 2-3 months. They were little red night-light bulbs from the dollar store post-Christmas discount, and one can argue that they're "energy-saving" because they're only a few watts (3 or 10 or something), but they were incandescents, not LEDs, so they're really not. I've replaced a couple of them with LEDs that haven't failed yet.
Oh, by the way, one of my pet peeves is seeing vector animations from Homestar Runner, AtomFilms, etc uploaded to raster streaming video sites. The original vector animations had bitrates low enough for dial-up, ran smoothly on a Pentium III, and scaled flawlessly to any resolution. The raster (usu. H264) versions frequently look much much worse despite 20x the bitrate and dedicated processing hardware.
Vector animations like Homestar Runner are the original purpose of Flash- the one thing it is actually quite good at, and has been quite good at since Macromedia released Flash 3 in 1998. That's part of how it became ubiquitous- it did one thing and did it well. Even now there isn't really a better alternative- there's nothing that has the capabilities, the cross-environment rendering consistency, the install base, and the tool support Flash vector animations have.
It's just really unfortunate that after the Adobe acquisition Flash became a way of shoehorning a subpar and insecure "rich content platform" into that ubiquitous install base. For quite a while now streaming raster video has been a dominant use of flash, where it's been inferior to other solutions and only used because of its large install base and its support for DRM.
You might want to look in the mirror.
Scripting languages usually feature dynamic, strong typing. (The runtime always knows exactly what type it's dealing with.)
Most compiled languages have static, strong typing. C is somewhat of an exception, being relatively weakly typed. (It's easy to make all sorts of bizarre type casts, sometimes implicitly.)
A few languages are very weakly typed, such as Forth.
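To make the distinction concrete, here's a quick sketch in Python (a typical dynamically, strongly typed scripting language). A name can be rebound to a value of any type (dynamic), but the runtime still tracks every value's type and refuses to silently mix them (strong) - whereas in C, `'1' + 1` would quietly do character-code arithmetic.

```python
x = 42          # x is bound to an int
x = "hello"     # rebinding to a str is fine: typing is dynamic

# Strong typing: no silent coercion between str and int.
try:
    result = "1" + 1
except TypeError:
    result = "strong typing caught it"

print(result)   # prints "strong typing caught it"
```

By contrast, a statically typed language would have rejected the rebinding of `x` at compile time, and C would have happily evaluated the mixed-type addition.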
Ain't going to happen, sadly. As the temperate zone moves closer to the world's poles, and the regions we're currently growing cereal crops on become progressively more arid, there is simply less area of land (square miles or kilometres or however you want to measure it) on which crops can be grown - and that's ignoring the costs of clearing and draining that land, and all the effects of ecocide.
At the same time as this is happening, of course, all our critical infrastructure will become unusable unless we make huge new investments in flood walls. For example, I work for a major international bank, which, obviously, has its critical data infrastructure replicated in seven cities across the globe. Only one problem: in six of those seven cities, our data centres are within ten metres of current sea level. Most major financial centres are old port cities, and all old port cities are on the coast. So over the next fifty years we have to either all relocate our trading infrastructure, or else abandon it. What I expect will happen is that we'll delay and dawdle until it's too late, and then our whole civilisation will collapse under the combined pressures of hunger, refugees, and rising water levels.
We're already past the point where there's any hope of the planet being able to support even half its current population in 100 years time. The real policy question is how we now radically reduce the population without war, pestilence, famine and death.