All right, then. Try it. Let's see what happens.
In particular, I'm interested to see what will happen to TLS-encrypted streams between Europe and the US, most of which pass through London.
The goal was "to develop a voluntary, enforceable code of conduct that specifies how the Consumer Privacy Bill of Rights applies to facial recognition technology in the commercial context." But after a dozen meetings, the most recent of which was last week, all nine privacy advocates who have participated in the entire process concluded that they were thoroughly outgunned. "This should be a wake-up call to Americans: Industry lobbyists are choking off Washington's ability to protect consumer privacy," Alvaro Bedoya, executive director of the Center on Privacy & Technology at Georgetown Law, said in a statement. "People simply do not expect companies they've never heard of to secretly track them using this powerful technology. Despite all of this, industry associations have pushed for a world where companies can use facial recognition on you whenever they want — no matter what you say. This position is well outside the mainstream."
The article's viewpoint is dangerous. We must solve the Friendliness problem before AGI is developed, or the resulting superintelligence will most likely be unfriendly.
The author also assumes an AI would prefer virtual environments and take no interest in the real world. This overlooks that even such an AI needs a physical computing substrate, which would entice any superintelligence to convert all matter on Earth (and eventually the universe) into computronium. Unless the AI is perfectly friendly, humans are unlikely to survive that conversion.