It's not "gun controllers bringing it up", it's manufacturers working on them. What do you have against manufacturers developing new products?
I have absolutely nothing against manufacturers developing new gun safety products and offering them on the market. The concern with these "smart" guns is that they'll be mandated by law. This has already happened in New Jersey: the 2002 Childproof Handgun Law says that three years after "smart" guns become available for sale anywhere in the US, all guns sold in New Jersey must be "smart". The law doesn't require that the guns be in any way reliable or have gained any significant market share, just that they've been offered for sale. So if these actually make it to market, people in NJ who want reliable guns are screwed. And if any other state, or Congress, passes a similar law, then all of us are screwed.
Actually, I'd have no problem with smart guns if they were really reliable. And there's a really simple reliability screening test we can use: offer them to military and law enforcement personnel. Cops in particular should see a lot of value in smart guns, because cops occasionally get shot with their own guns. However, they also need their guns to be extremely reliable, and big departments and the FBI have the institutional resources and motivation to seriously test them. So, once the technology reaches a level where police are not only willing to use smart guns but actively want them, then it's fine to mandate them for civilians.
Of course, thanks to the NJ law, civilians are going to fight like hell to keep these things off the shelves, which means that the years of refinement needed to make them reliable are never going to happen. Not in the US, anyway.
Google no longer supports non-security questions for account recovery.
FTFY. Security questions are a joke. The answers are almost always easy for an attacker with a little bit of information about you to find, and a lot of the time the legitimate user can't remember them. Moreover, those two traits are strongly correlated: the harder it is for an attacker to find the answers, the more likely it is that the legitimate user won't be able to remember them either.
Everyone should stop using them.
Google doesn't actually want your phone number for security. Google wants your phone number so that they can link the account in their database to other information that contains your phone number.
The number is to make account recovery possible in the event you've forgotten your password. The assumption is that attackers won't have access to your phone. That assumption is violated if your telco will transfer your number to the attacker's phone, of course.
If you prefer not to give your phone number to Google, don't. Just turn on two-factor auth using a non phone number-based auth method, either the Authenticator app or (better yet) a security key, or both. Then download and print out some backup 2FA codes and keep them somewhere safe. Google won't have your phone number and you won't be vulnerable to mistakes by dumb telco customer service reps.
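For the Authenticator option: the app implements standard TOTP (RFC 6238), so the six-digit codes are just a function of a shared secret and the clock, with no phone number or network connection involved. A minimal sketch (function and parameter names are mine, but the construction is straight from the RFC):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, t: float = None, digits: int = 6, period: int = 30) -> str:
    """Derive a TOTP code (RFC 6238) from a base32-encoded shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    # The moving factor is just the number of 30-second periods since the epoch.
    counter = int((time.time() if t is None else t) // period)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation per RFC 4226: low nibble of the last byte picks an offset.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)
```

Because both sides compute the same function of secret and time, Google's server can verify your code without ever contacting your phone; that's why the app keeps working with no signal.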
They obviously know, but are legally forbidden from commenting.
I think people often forget that corporations are about the furthest thing possible from monolithic. It's entirely possible for one organization within a corporation to receive a request that is within its own ability and authority and to handle it without bothering to tell anyone else, or with only brief consultations with legal, who may not have kept any records. Given government secrecy requests/demands, that possibility grows even more likely. Further, corporations aren't static. They're constantly reorganized and even without reorgs people move around a lot, and even leave the company. There are some records of what people and organizations do, but they're usually scattered and almost never comprehensive.
It's entirely possible that they did something like this, that the system was installed and later removed, and that the only people who know about it have left the company or aren't speaking up because they were told at the time that they could never speak about it, and that the organization that was responsible for doing it and/or undoing it no longer even exists. It's possible that Yahoo's leadership's only option for finding out whether it happened is to scan old email to see if anyone discussed it via email (which may not have happened; see "government secrecy requests/demands") or to look in system configuration changelogs to find out if the system was ever deployed (and it may have been hidden under an innocuous-sounding name)... or to ask the government if the request was ever made.
Of course, my supposition here depends on a culture of cooperation with the government. I don't know if that existed at Yahoo. I think most of the major tech corporations at this point have a strong bias towards NON-cooperation, which would cause any request like this to go immediately to legal who would immediately notify the relevant C-level execs. But I have worked for corporations where the scenario I describe is totally plausible.
I was expecting a warrant canary, e.g. something to say they have not yet been given secret orders by the NSA/CIA to install a backdoor for spying on users.
Like Apple used to have. Is there some reason Google cannot do that?
I think the absence of a warrant canary speaks volumes. (That is, they've already been issued such an order or warrant.)
Google's head lawyer, David Drummond, has explicitly said that Google has done no such thing. Of course, if the government could order him to lie, then that doesn't mean anything. But if the government could order corporations to lie, then it could order them to publish a false warrant canary statement.
This is not an Apple problem, it's an industry and maybe even a societal problem. I don't even think it's possible to get a good job, get an A+ rating for every performance review ever, and expect to stay at that job for 5+ years. After 10 years, you are too expensive to keep around.
Lol, I just left one job after 10 years, not because I was too expensive but because the new company had more resources to spend and could offer me significantly more. The average seniority for IT workers at the new company is 17 years, and not a month goes by that our office of ~700 people doesn't have an announcement for someone celebrating their 25- or 30-year anniversary. You just need to develop valuable skills, expertise, and a proven track record, and there WILL be someone willing to hire you. Any time I've gone looking for top-tier talent in a specific area of expertise, the number of qualified respondents has been very low, because the majority of people with the applicable skills are already gainfully employed; the unemployment rate in the last few IT-focused surveys I've seen was under 3%, which is an incredibly tight market. If you're in IT, not entry level, and having trouble finding employment, it's either something with your local market (and you're not willing to relocate) or you've done something very wrong with your career.
I'm sorry, but that's just not true. The two systems are vastly different in implementation. Google acts as a financial intermediary for every transaction through the use of a "virtual credit card", which is what is on your phone and what the vendors see (they never see your actual cards, which live only on Google's servers). As a result, Google has access to every detail of every transaction you make using their system. This aligns with their panopticon business model. By effectively acting as a middleman financial institution they don't need any agreement with banks etc. Every transaction you make actually becomes two: (1) Google pays the vendor, (2) Google charges your bank.
Your information is out of date.
What you say was the mechanism that Google Wallet used, in its second version. The evolution of Google's NFC payment system went as follows:
1. The initial release used a secure element (essentially a smart card chip) and installed your actual credit card information in the SE, using the standardized EMV solution straight up. (EMV is EuroPay/Mastercard/Visa, a consortium that creates payment standards). Initially only Chase cards were supported because this approach requires support from the issuer.
In this version Google was not a middleman.
2. Due to banks being very slow to get on board with SE-based NFC payments, and due to lots of opposition from carriers (who wanted to become the new payments infrastructure; see ISIS/Softcard), Google abandoned the SE-based solution and invented something called Host Card Emulation (HCE). In this model, your actual credit card information was kept off the phone entirely, stored only on Google's servers. A proxy card was used to make payments at the point of sale, using pre-computed single-use cryptographic tokens generated on the server and stored on the phone. The proxy card allowed Google Wallet to support any and all credit and debit cards -- in theory, any payment mechanism that Google's back-end payment infrastructure could support.
In this version Google acted as a middleman, as you say.
3. AndroidPay deployed after ApplePay and uses a payment architecture very similar to ApplePay's, called "network tokenization". The idea is that the interchange networks can produce cryptographic credentials which can be validated by the network, which then passes the validated transaction back to the card issuer. This means that the issuing banks have dramatically less work to do to support NFC payments than in the original EMV-specified model (the one used by Google Wallet). Network tokenization was under development when Google Wallet first deployed, but far from ready to go. Apple waited until it was ready before launching, and as soon as it was available Google shifted to it as well. The two still work somewhat differently: Apple uses long-lived, multi-use tokens stored in the secure enclave, while Google uses short-lived, single-use tokens stored in Android, encrypted with a key kept only in RAM and re-downloaded after each reboot.
In this version Google is no longer a middleman.
I expect that a future iteration of AndroidPay will shift to using tokens stored in the Trusted Execution Environment (TEE), discarding the RAM-only key, but that will have to wait until all of the devices using AndroidPay have the TEE with the necessary software.
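None of these token formats are public, so here's a purely illustrative sketch of the single-use-token idea from the HCE model above: the server derives a batch of one-shot tokens from a key that never leaves it, the phone presents them one at a time, and the validator rejects any replay. All names and the MAC construction are invented for illustration; this is not Google's (or EMVCo's) actual scheme.

```python
import hashlib
import hmac

# Hypothetical sketch only. 'issuer_key' never leaves the server side;
# the phone receives only the finished tokens, so compromising the phone
# yields a handful of one-shot credentials, not the card itself.

def precompute_tokens(issuer_key: bytes, card_ref: str, start: int, count: int) -> list:
    """Server: derive a batch of single-use tokens bound to a per-card counter."""
    return [
        hmac.new(issuer_key, f"{card_ref}:{n}".encode(), hashlib.sha256).hexdigest()
        for n in range(start, start + count)
    ]

class IssuerValidator:
    """Network/issuer side: accept each (card_ref, counter) token exactly once."""

    def __init__(self, issuer_key: bytes):
        self.key = issuer_key
        self.used = set()  # (card_ref, counter) pairs already spent

    def validate(self, card_ref: str, counter: int, token: str) -> bool:
        if (card_ref, counter) in self.used:
            return False  # replay: tokens are single-use
        expected = hmac.new(
            self.key, f"{card_ref}:{counter}".encode(), hashlib.sha256
        ).hexdigest()
        if not hmac.compare_digest(expected, token):
            return False
        self.used.add((card_ref, counter))
        return True
```

The single-use property is what makes the RAM-only encryption key a reasonable trade-off: losing the key after a reboot costs you a batch of tokens, not your card.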
On the other hand, I agree that there needs to be a rule requiring officers to turn the cameras on -- but I don't think that arrests without the camera on should be invalid. Police have been making valid arrests without cameras for a long time.
Over time, that may take care of itself. When judges and juries become accustomed to always having footage of the arrest, often from multiple angles, they may begin to consciously or unconsciously discount the officer's statements if not supported by video evidence.
Also, unless they have a very specific reason to turn it off, most cops will realize they're better off having it on: their not recording doesn't mean someone *else* isn't, and that someone else may well produce carefully selected, out-of-context footage that shows the officer in a bad light. In various articles I've read from around the US, police on the street are overwhelmingly in favor of body cameras. They feel like the cameras do more to protect them than to harm them.
I know of several times that the US govt paid for data, but the data wasn't exactly private data, and the purchase wasn't secret. They may also have done it with private data, or have kept their purchase secret, but I don't know about those cases. And it may well depend on which arm of the federal government you are dealing with.
What, you mean like above-board purchase of GIS mapping data or such? What we're talking about is purchase of information about people that would normally require a court order to compel. There's a common belief that companies have been selling user data to government agencies as a secret profit center, but I can't find any example. We know that telcos were giving them huge amounts of data, but there doesn't seem to have been any fee for it.
Third, it's safe to assume Google tracks revisions to their pages, so yes, they would soon know who made the 'mistake'. Also, a letter like this should be shared with extremely few people within the company, so it shouldn't be hard to follow the chain until suspicious activity is found. Punishment for this sort of mishandling would be limited to a fine, however, so the FBI would go after Google's deep pockets rather than try to pin the crime on an individual. The employee should be safe from criminal charges, though not, presumably, from Google discipline.
Also, it's very likely that the set of people with access to the letter and the set of people with access to the systems to publish the letter are disjoint.
You know, Callahan's is a peaceable bar, but if you ask that dog what his favorite formatter is, and he says "roff! roff!", well, I'll just have to...