I'm not sure Apple Pay will "win", but I'm absolutely certain CurrentC will "lose". It's a great change for the merchants, and horrible for the consumers (in contrast to Apple Pay, which is neutral for merchants and positive for consumers). Unless the merchants stop taking credit cards (and I think that's unlikely), CurrentC is already dead.
Sure, everyone running Tor on their gateway for all internet traffic would be horribly inefficient. Sure, it would preclude some things, like IP multicasting and content geo-caching.
But you know what? It would pretty much make net neutrality a de facto standard, irrespective of what the horribly corrupt FCC decides. And you know what else? It would effectively end the NSA's collection of everyone's online activity. Oh, and you would get all the privacy benefits for free, forever.
On balance, given the openly hostile actors in the government, I think it would be worth it.
For those who don't see why this is bad, consider this:
In order to route/cache by data, the data must be visible to the routing nodes; in essence, you would no longer be able to use end-to-end encryption. You could still have point-to-point encryption (e.g., for wireless links), but everything would be visible to the routing nodes, by necessity. This means no more hiding communications from the government (which taps all the backbone routers), no Tor routing, and no protection from man-in-the-middle attacks, by design. You get the promise of more efficiency at the cost of your privacy/freedom... and guess what, you'll get neither in this case, too.
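The caching point can be sketched concretely: a routing node that caches by payload only gets cache hits when identical bytes reappear on the wire, but proper end-to-end encryption randomizes every transmission (fresh nonces), so the cache never hits. A minimal toy illustration follows; the "cipher" here is a hash-derived stand-in for real end-to-end encryption, invented purely for the demo, not something to use for actual security:

```python
import hashlib
import secrets

def encrypt(plaintext: bytes, key: bytes) -> bytes:
    """Toy stream cipher: XOR against a hash-derived keystream with a random
    nonce. A stand-in for real end-to-end encryption; plaintext is limited
    to 32 bytes here, and this is NOT secure -- illustration only."""
    nonce = secrets.token_bytes(16)
    keystream = hashlib.sha256(key + nonce).digest()
    body = bytes(p ^ k for p, k in zip(plaintext, keystream))
    return nonce + body

cache: dict[bytes, bytes] = {}

def route(payload: bytes) -> bool:
    """A content-caching router: returns True on a cache hit, i.e. only
    when it has seen these exact bytes before."""
    digest = hashlib.sha256(payload).digest()
    hit = digest in cache
    cache[digest] = payload
    return hit

key = secrets.token_bytes(32)
page = b"the same popular page"

# Visible plaintext: the second request for the same page is a cache hit.
route(page)
assert route(page) is True

# End-to-end encrypted: each transmission has a fresh nonce, so the bytes
# differ every time and the router's cache is useless.
route(encrypt(page, key))
assert route(encrypt(page, key)) is False
```

The same opacity that defeats the cache is exactly what protects the traffic from the routing nodes; you can have one or the other, not both.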
I don't run a local firewall on my work system, for reference. As a developer, it's common to need to have "random" ports open for various things for testing, and having to deal with a firewall is one more nuisance I don't want to account for. A local (on system) firewall won't prevent most attacks anyway, so I don't feel I'm giving up much real security.
I do run a local firewall at home, but only because it has not annoyed me enough to be disabled yet.
I don't know how useful that information is; consider it a data point.
It's a good PR attempt, to address what they must perceive as a significant problem, but...
Good luck convincing companies to trust your cloud infrastructure with their data, when they know for a fact that the US government (and probably other governments) could compel you to grant them secret access at any time, regardless of whatever client-access protections are in place. If MS could solve that massive security flaw, I'd be impressed; anything less is just polishing the proverbial turd.
Google's only really viable option, as far as I can tell, is to create a tailored censored portal for each country (really, legal jurisdiction, but basically the same thing), and allow anyone in that jurisdiction to request that anything be censored in an automated manner. Then they can create an "uncensored" jurisdiction, which you would need to opt into, with a disclaimer and such.
Once you have that, you can much more effectively fight these sorts of "censor for the entire world" orders, by asserting that you already support per-jurisdiction "removal", and that removing globally would violate the rights of other jurisdictions to self-censor as appropriate. It's not perfect (nothing in international law is), but at least it would give Google a way to somewhat comply with the flood of censorship demands which are coming, without trying to fight each new demand independently.
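Mechanically, the per-jurisdiction scheme amounts to a removal list keyed by jurisdiction, consulted at query time. A minimal sketch, with all names and document IDs hypothetical; the opt-in "uncensored" jurisdiction is just one with an empty removal set:

```python
# Hypothetical per-jurisdiction removal lists (jurisdiction -> removed IDs).
removals: dict[str, set[str]] = {
    "EU": {"doc-123"},          # removed via an automated request in the EU
    "uncensored": set(),        # the opt-in, disclaimer-gated jurisdiction
}

def visible_results(results: list[str], jurisdiction: str) -> list[str]:
    """Filter search results against the caller's jurisdiction.

    Unknown jurisdictions default to no removals; a real system would
    instead map every caller to a known legal jurisdiction.
    """
    blocked = removals.get(jurisdiction, set())
    return [r for r in results if r not in blocked]

# The same query yields different results per jurisdiction.
assert visible_results(["doc-123", "doc-456"], "EU") == ["doc-456"]
assert visible_results(["doc-123", "doc-456"], "uncensored") == ["doc-123", "doc-456"]
```

The point of the structure is the argument it enables: each removal is scoped to the jurisdiction that demanded it, so a global takedown order asks one jurisdiction to override all the others.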
I will not willingly buy another Android device. Google has proven to me that their goals, with respect to my privacy and control of my data, are completely at odds with mine. Their philosophy is just not for me.
This could turn out to be a good thing, imho.
Consider that there are basically two types of users, where privacy is concerned: people who are oblivious and/or don't care about their privacy, and people who try to preserve some of their privacy. For the former group, this change will not affect their app usage, and will make it easier for them to get app updates automatically, which will make their experience better. For the latter group, the Android developers are actively hostile toward their privacy desires, have no desire to help them, and in fact probably _want_ to drive them away from the platform. In both cases, it's a win for Android, the "all your data belongs to us and everyone else, and there isn't anything you can do about it" platform.
I personally think there's a market for platforms which allow some privacy (Apple does a much better, but still imperfect, job of this), but I acknowledge that there's also a market (and probably a larger one) for platforms which cater to people who share all their personal data with everyone, and are totally oblivious to what any/all of their apps are doing behind their backs. Google is making it crystal clear which type of platform Android, and their other services (see also: Nearby), will be.
Questionable, in the sense that the theory is a very speculative extrapolation of the data we have been able to observe, about the origins of the universe before the "time" we can actually observe. Just because something fits a mathematical model doesn't mean we have solid evidence for it; it simply means it's a model which matches what we've been able to [indirectly] observe. You could say the same thing about n-dimensional string theory as a unified model, for example.
It's gratifying to see that the public's general acceptance of scientific theories is roughly proportional to the actual evidence to support the theories themselves. For things which there is good evidence, there is broad understanding; for things which are highly questionable and politicized, there is much skepticism.
Good for the US population.
It is an interesting conceptual argument, although it ignores a couple of real-world points.
First, not all bugs are equal in terms of exploitation opportunity, which he glosses over; a vulnerability is only as valuable as the access it can be exploited to gain, in monetary terms. A bug which cannot be exploited for any particular gain is next to worthless, in market terms.
Second, not all companies will pay for vulnerability information, because it's not just a value proposition, but also a risk and resources assessment. If nobody expects your software to be "secure", there's no point in spending much money on software security; for example, nobody pays much attention to the software in cars (yet), so manufacturers have little financial incentive to make it secure. Moreover, if you don't have deep pockets, you're not going to pay for exploits, especially if you're struggling to simply produce features that potential customers want. In either of those scenarios, the value proposition for paying for exploits is inconsequential.
Most (by volume) software has an effectively unlimited number of bugs, which nobody will pay for. That's the real world of software.
Well, speaking as a [software] engineer...
In my profession, there are certainly certifications one can get, and ethical considerations (as a general statement), although there is no particular licensing. Regardless of these, though, I am employed to write software, but I would not certify that the software I write is flaw-free (nor would anyone else that I know). It's entirely possible that, due to flaws in my work product, someone will lose money, or have other negative outcomes befall them.
If that happened, and my employer blamed me publicly (explicitly or implicitly), I would be seeking large monetary damages, even if the flaw was my fault. My argument would be that I'm employed to write software, not write flaw-free software, and if the company causes me damages (in current or future income) by stating or implying that I did not perform my work duties appropriately, then that is slander, and they are liable. In this case, the "lie" would be to imply that my work product was supposed to be flaw-free, which I never asserted or consented to, regardless of what they desired. Implying that someone is unable to perform their occupation is textbook slander, and the company would find themselves writing a large check. And yes, even naming the engineer in this context, without strong evidence of gross or malicious negligence, would be cause for civil penalty (imho).
I guess it just comes down to this: there are laws which protect people from having their lives and/or livelihood ruined by false accusation (direct or implied), and implying that an engineer must create a flaw-free work product to be proficient is a false accusation (unless there's a specific contractual obligation to do so, and that would seem suspicious). If I were a company considering this, I'd think twice, and then not expose myself to the obvious liability.
I could see two potential outcomes, if blaming engineers for product flaws becomes commonplace...
First, engineers will (or should) demand an indemnity clause as part of their employment contract, where the company agrees not to blame them publicly for any product flaws, and/or take any action which would identify them. Depending on the repercussions for the test cases, this might become a necessity for employees.
Second, I could see some significant lawsuits for slander, since the company is causing real (and substantial, and more importantly provable) financial loss for the engineers they blame for product deficiencies. Unless they have a pretty solid intentional negligence defense, they could (and absolutely should) find themselves paying out a few million more to each engineer they throw under the metaphorical bus.
Companies are responsible for their products, not the people they employ to make/provide them. Companies reap the rewards when they work, and bear the responsibility when they don't. Absent malicious negligence, naming/blaming individual employees is irresponsible at best, and should absolutely expose the company to civil liability.
"Been through Hell? Whaddya bring back for me?" -- A. Brilliant