After reading the article from dmbrun's post, it seems what they're doing is a single code base, more shared APIs across Windows variants and a single store interface. So it's mostly focused on making it easy for developers to support multiple Windows variants. A smart move, but nothing revolutionary.
The third link is not actually a link, since the <a> tag is missing the href attribute. I wanted to check what the CEO actually said, since "unify" could mean a lot of things.
Are they going for x86-64 only, killing the ARM-based Windows RT, as Hot Hardware is reporting? They'd still have to keep ARM support for Windows Mobile. Perhaps they should have put Windows Mobile plus some tablet extensions on the low-budget tablets; that would have fit people's expectations a lot better.
Are they going for a single code base? In that case there would be multiple products created from that code base, so that doesn't tell us anything about the fate of Windows RT or any other specific products.
Are they going for a single product named Windows? While I think it would be good to drop the artificial home/pro/ultimate differentiation, having a different Windows for client and server use is still useful. Although that could be handled by having a different default configuration rather than an entirely different product.
Yes, it's a reference board. What's new about it is that it contains a 64-bit ARM processor.
For what it's worth, I thought the summary was very informative.
Another reason cheat codes existed is that without them, a lot of players couldn't finish the game. I think there are several reasons for this: the arcade roots, a larger percentage of hardcore gamers, the need to prevent the player from finishing an expensive game quickly after buying or renting it, and game design being a much younger discipline.
Don't get me wrong, I actually prefer today's easier games, but it does mean that you don't really need a cheat code anymore to finish most games. Instead of having the difficulty increase a lot as the levels progress, games now have selectable difficulty from the start and achievements to add challenge for more talented and/or experienced players.
Chrome extensions are tied to your Google account, and Google has pretty much complete control over them. Chrome, as a browser, does not need to be tied to a Google account (although it will suggest signing in) and its automatic updating can be disabled.
Not updating your browser will also leave you vulnerable. Instead of accepting (possibly customized) automatic updates, you could download updated Chrome installers from the generic download page, using a different browser and an IP address that is not associated with you. That would be safe under the assumption that the generic Chrome build is not trojaned.
More to the point, though, I can securely send messages even through a compromised browser, if I encrypt the messages externally.
True, but then it would be more convenient to send messages from an external mail application and not use web mail at all.
That and the lobbyists: if there were fewer of these agreements in negotiation, there would be less work for them. Not all GDP increases are actually useful. In the Netherlands we had an exceptionally mild winter; GDP decreased because less natural gas was sold.
If you're worried about Google itself being forced to compromise this extension, you shouldn't be using Chrome at all.
In any case, the current state of webmail is typically messages stored as plain text, transmitted over secure sockets. Encrypting the message itself is a big step forward.
This is Google's hedge against increasingly higher costs for peering and neutrality-breaking ISPs, so why would they then turn around and be hypocrites by ruining the very reason they're moving into infrastructure to begin with?
Android started in much the same way, to avoid telcos getting control over the content people access on their phones. While the base OS of Android is still free, a lot of the standard applications are now licensed from Google and the terms for licensing them are becoming more strict. Google Fiber is neutral today, but that doesn't mean it will stay neutral forever.
They tried to go for the infotainment market with the ARM-based Windows RT, but it found very few customers, mainly because there are not many apps for it. A "Surface Mini" would only have a chance if it runs on x86, and I don't know how feasible it is to produce a small, light x86 tablet that gets decent battery life while also being affordable and powerful enough to run Windows 8.
So I don't know if I would call this a long-term strategy or just facing the realities of today.
I would like to spend more of my time creating new things rather than fighting to make existing things work together.
Yesterday there was a headline saying 300,000 servers remain vulnerable to Heartbleed. So the bug is still (ab)usable even after it has been published.
We could try to raise funds to pay for reverse engineering of the VPU in the Novena laptop -- if we could find skilled reverse engineers ready to take the job. Can you introduce me to any?
Does anyone know what he means by "VPU"?
The GPU is a Vivante GC2000, which has been partially reverse engineered already; support is being added to etnaviv, which is a user-space driver -- the part connecting Mesa + Gallium to the kernel driver -- for the Vivante graphics cores (support for older cores like the GC860 is good enough for everyday use). The kernel driver itself (galcore) is available under the GPL, although it could use a cleanup. So there is no need to reverse engineer everything from scratch, but the etnaviv project could certainly use more contributors.
There is also a video decoding acceleration block in the i.MX6, but like all things H.264 that is likely a patent minefield, so I'm not sure it would be worth spending a lot of resources on reverse engineering that.
Also, drives aren't a proper backup unless they're offsite, and these discs pack 50GB each, more than enough for most discrete items on your 3TB drive (what do you need that for anyway, HD porn?)
Optical discs aren't a proper backup either unless you store them offsite: they are easily destroyed in a fire or taken by a burglar.
I think encrypted online backup is a far more convenient solution than optical discs: it can run as a background process instead of requiring the user to insert a blank disc regularly.
Well, I'd argue that a library that needs a single global init call is itself a poorly implemented singleton, with all the associated problems. It is unfortunately a common occurrence, and wrapping it in a singleton class is a way to deal with it. But in my opinion that is making the best of a bad situation, rather than a pattern I'd recommend if you have any say over the library interface.
I have seen a lot of singleton use in C++ unrelated to libraries, and most of those uses became problematic at some point. In C++ in particular, the fact that you can't control the moment a singleton is destroyed can be a problem if the destructor needs to do more than free memory.
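To make the destruction issue concrete, here is a minimal sketch of a Meyers singleton (the `Logger` class and its members are hypothetical, purely for illustration, not from any real library):

```cpp
#include <cassert>

// Hypothetical example: a class wrapping a library that demands one
// global init call.
class Logger {
public:
    static Logger& instance() {
        // Constructed on first use; destroyed during static teardown at
        // program exit, at a moment the application cannot choose.
        static Logger inst;
        return inst;
    }
    void log(const char* /*msg*/) { ++count_; }
    int count() const { return count_; }
private:
    Logger() = default;
    // If this destructor had to flush to a file, or touch another
    // static object, the unpredictable teardown order could bite.
    ~Logger() = default;
    Logger(const Logger&) = delete;
    Logger& operator=(const Logger&) = delete;
    int count_ = 0;
};
```

The construction side is safe (since C++11, initialization of the local static is thread-safe), but the destructor runs in an order relative to other statics that the caller cannot control, which is exactly the problem when it has to do more than free memory.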
If you like Dijkstra's style, I can recommend Programming: The Derivation of Algorithms by Kaldewaij. For details, see my favorites list in a later post.