No, if you read the article they clearly state the keys were fraudulently obtained. If you obtain keys via fraud, they are almost by definition not "good keys".
RedPhone is free and open source end to end encrypted telephony that works OK (not amazingly, but as well as a typical commercial VoIP app does). People authenticate each other using their voices.
This is very true. However, WhatsApp appears to be a counter-example. They are deploying full end to end encryption and, instead of ads, they just charge users a small subscription fee.
The big problem is not people sharing with Facebook or Google or whoever (as you note: who cares?) but rather the last part - sharing with a foreign corporation is currently equivalent to sharing with its government, and people tend to care about the latter much more than the former. But that's a political problem. It's very hard to solve with cryptography. All the fancy science in the world won't stop a local government just passing a law that makes it illegal to use, and they all will because they all crave the power that comes with total knowledge of what citizens are doing and thinking.
Ultimately the solution must be two-pronged: political effort to make it socially unacceptable for politicians to try to ban strong crypto, and deployment of that crypto to create technical resistance against bending or breaking those rules.
That's not what the second link says is happening though.
My reading of the second article is that there is the following problem. The website G2A.com allows private resale of game keys; whether that's to undercut retail prices or avoid region locking is irrelevant. Carders are constantly on the lookout for ways to cash out stolen credit card numbers. Because fraudulent card purchases can be rolled back, and because merchants have to go through ID verification to accept cards, simply spending the stolen numbers at shops they control doesn't work - craftier schemes are needed.
So what they do is go online and buy game activation keys in bulk with stolen cards, knowing it will take time for the legitimate cardholders to notice and charge back the purchases. Then they go to G2A.com and sell the keys at cut-down prices to people who know they are buying from a dodgy backstreet source. Either they sell via hard-to-reverse payment methods like Western Union, or they just bet that nobody wants to file a complaint with PayPal saying they got ripped off trying to buy a $60 game for $5 on a forum known for piracy and unauthorised distribution.
Then what happens? Well, the game seller is handed a list of card chargebacks by its banks and told it has a limited amount of time to get the chargeback problem under control. Otherwise it will be cut off and unable to accept credit card payments any more. The only route available to Ubisoft or whoever at this point is to revoke the stolen keys, to try to kill the demand for the carded keys.
If that reading is correct then Ubisoft aren't to blame here. They can't just let this trade continue or it threatens their ability to accept legitimate card payments.
In this case Ubisoft has a dispute with grey marketeers and decides to take it out on the customers instead of taking it to the courts.
Ubisoft might not be able to take them to court - for example, if these resellers are in China or in developing countries where the local authorities don't care about foreign IP cases. Technically speaking, it's actually the customers who have a dispute with the resellers, because those resellers knowingly sold them dud keys. It's not much different than if you buy a fake branded Mac, take it to an Apple repair centre, and they tell you to go away. Your dispute is not with Apple. Your dispute is with the entity that sold you the fake goods.
Look at it another way: what if these "resellers" were actually selling you random numbers instead of game activation keys, and when you tried them out you discovered they didn't work?
Legally speaking that would be a dispute between you and the bogus reseller. They sold you something that was effectively counterfeit. There's lots of well established law in this area.
How many unauthenticated remote exploits in a HTTP stack does it take to lose a customer?
Not many, I should imagine, but your comment is irrelevant because no such bugs were fixed in this Java update. The way Oracle describes these bugs is horribly confusing. Normally we expect "remotely exploitable without authentication" to mean you can send a packet across the network and pwn the box. If you actually check the CVEs you will see that there's only one bug like that, and it's an SSL downgrade attack - it doesn't give you access to the box. All the others are sandbox escapes. If you aren't trying to sandbox malicious code, they don't affect you.
Java doesn't have security holes like C or C++
.... or so I was told.
Then again, I haven't seen too many security patches for gcc or libstdc++ or glibc
You're comparing apples and oranges. The "remotely exploitable" bugs in this Java update, like all the others, assume you have downloaded and run malicious code in the sandbox. GCC and glibc don't have protecting you from malicious code as a goal; in fact, Linux typically requires all software to be installed as root no matter what. Obviously, if you never even try, you cannot fail.
The interesting story here is not so much that sandboxes have holes (look at the Chrome release notes to see how many security holes are fixed in every update), but rather that the sandbox makers currently seem to be outrunning the sandbox breakers. In 2014 Java had security holes but no zero days at all: every exploit was found by whitehat auditors. The same was true of Chrome.
I'm not sure if this means the industry is finally turning a corner on sandboxing of mobile code or not, but it's an interesting trend.
GC tuning can do a lot, but yes, a huge heap where the GC cannot keep up with the rate of garbage requires a full stop-the-world collection. However, if your application is really keeping a 15 gigabyte working set, I suspect you'd hit problems with fragmentation and memory leaks using something like Rust long before scaling to such sizes.
Why don't you watch the talk and find out?
Actually I'll just summarise it for you. If you run a lot of Tor nodes you will eventually get picked to host a hidden service directory. Then you can measure lookups for the entries of hidden services to measure their popularity, and crawl them to find out what's on them.
[Java took a very different approach to the problem of "how do we get rid of segfaults and memory corruption". Java basically banned all interesting use of the stack, forcing everything onto the heap, and barred developers from using RAII. Nowadays, with more advanced compilers able to do advanced lifetime analysis, we can reconsider languages - such as Rust - that take a less draconian approach.]
I think it's rather misleading to state that more advanced compilers have obviated the need for Java's approach.
Firstly, Rust doesn't solve automatic memory management the way garbage collection does. Its solution appears to be basically smart pointers with move semantics, plus reference counting for the cases where data doesn't have a lifetime cleanly tied to scope. Well, great: it's back to the 1990s and COM. Reference counting notoriously cannot handle cycles, which are very common in real programs - any tree structure where you want to be able to navigate both up and down, for example.
In addition to the difficulty of breaking reference cycles and preventing memory leaks in complex programs, refcounting also has poor performance, especially once threads are involved. Garbage collection has now been optimised (in good implementations like HotSpot) to the point where it's faster than refcounting.
If we start seeing teams of non-expert programmers writing large programs in Rust, you will see programs with memory leaks all over the place.
Additionally, you do realise that Java compilers have got smarter over the years too, right? HotSpot can stack-allocate objects in a number of circumstances, when escape analysis reveals that it'd be safe.
Basic income is welfare, not just something that sounds like it. The difference between it and normal welfare is that everyone gets a basic income whether they want it or not, and it's meant to be enough to live off.
The idea of a BI is a very old one. It has nothing to do with cryptocurrency, and I'm not sure what relevance cryptocurrency has here (and I say that as a Bitcoin developer, so I'm a fan of it in general). In theory, a society rich enough to afford it would have moved to the oft-fictionalised post-work utopia you sometimes see in things like Star Trek. Because everyone gets it unconditionally, whether they want it or not, a basic income would supposedly be stigma free. Thus if you want to pursue things that are not very profitable but are beneficial to society nonetheless (production of art, charity, etc.), you could do that without having to worry about being seen as a welfare sponger.
I love the concept in theory, but a society rich enough to afford one is pretty unimaginable in today's world. Western societies are clearly incapable of even sustaining current levels of welfare, let alone a vastly larger level. I see a BI as a useful goal to inspire people about the future rather than something practical for today.
That was true 10 years ago. These days browsers make them un-ignorable and, in some cases like HSTS, unbypassable.
They aren't allowed to impersonate another company; I suspect that's rather the point. Look at the screenshot: the HTTPS indicator was crossed out. I guess you have to click through a big fat warning to get there.
In all of my years of being a network engineer, I've never heard of managing bandwidth that way and can't think of why someone would manage bandwidth that way.
Me neither, but we have no idea what kind of filtering system you can install on a plane.
My guess is that they can't filter by DNS lookup for some reason (people's devices have cached answers?) but they can do SSL rewriting, and for big sites like anything Google runs, IP address blocking isn't useful because all their sites share IPs. They know browsers and apps won't accept their fake certs; it's just a way to create an unbypassable error.