I am actually sympathetic to the idea of an exemption for raw public data sets not meant for human consumption. Today the default is HTTP and you have to have a good reason to go HTTPS. The goal here is to flip that default and get people thinking in terms of HTTPS by default. There is always room for exceptions to the rule, and a use case like this seems like a reasonable one.

But the risk here is that the purpose or scope of the site changes. Maybe next year they're hosting raw data sets about something more politically charged, and a researcher in a country whose government doesn't like that kind of research could find herself with unwanted attention simply for accessing that public data set. Alternatively, someone decides to tamper with the data set in flight. Or someone dual-purposes the site and starts serving content to people, forgetting that it isn't an HTTPS site, in which case we're right back where we are today.
Please read my comment as dealing with this particular axis of tax policy. Obviously, not everyone in the EU wants the EU to be like the US in all respects. It is also wrong to suggest that no one in the EU wants to be like the US in any respect.
Please follow up at https://github.com/WhiteHouse/.... We are keen to understand these issues and find solutions. We also do know a thing or two about web hosting and HTTPS.
On one hand, the EU wants to be more like the US: create an EU internal market (http://en.wikipedia.org/wiki/Internal_market). Open the borders for trade and business. Let companies set up shop in a single EU state and sell to anyone in any other EU member state without having to deal with a mess of paperwork, currency conversions, or taxes (aside from VAT). On the other hand, some EU states see other EU states doing things to attract business, see their own tax revenues going somewhere else, and want to fix that.

The EU seems to be in a situation where it has competing goals and competing feelings about how taxes should work, and I'm really interested to see how they reconcile that. Either each country needs to be able to operate and tax independently, or they need to work together as a single cohesive union and stop trying to perpetuate their pre-union tax schemes. In many respects this feels like a US state getting upset that a company in the next state over is selling to its people while the other state gets all of the income tax revenue. Can you imagine what it would be like if you had to deal with income taxes in every US state in which you did business?
(Granted, this is somewhat independent of the whole Bermuda thing, but usually when people complain about these tax avoidance schemes it's about Ireland or something.)
Hi oneiros27, please take a look at the open issues and provide your feedback at https://github.com/WhiteHouse/...
The additional CPU cost of SSL is fairly trivial nowadays. If you've done experiments that demonstrate a meaningful performance impact, and you can quantify the costs, we'd LOVE your feedback so that we can either help you mitigate it or convince you that the benefits are worth the costs. We'd like to see data here.
Likewise with the caching issue. The use of CDNs can mitigate some of the performance impact you're worried about. If you're working with a specific scientific project or experiment where you need to shuttle around a lot of data, and are presently using HTTP and HTTP caching solutions to do that, I would suggest there are more efficient ways to distribute that data. Again, submit an issue at the link above and someone can work with you to talk through your situation.
The IDS problem can be solved by moving SSL termination to the other side of your IDS, so the IDS inspects decrypted traffic between the terminator and the origin; it's not necessary for the origin server itself to serve HTTPS. It can also be resolved by changing your IDS approach to one that doesn't require inspecting the payload at a distance from where it's served.
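To illustrate the first option, here's a minimal sketch of TLS termination at an edge proxy, assuming nginx and entirely hypothetical hostnames, paths, and addresses; the IDS would sit on the internal segment carrying the decrypted traffic:

```nginx
# Hypothetical sketch: the proxy terminates TLS; traffic to the
# origin is plain HTTP on an internal segment the IDS can inspect.
server {
    listen 443 ssl;
    server_name data.example.gov;              # hypothetical name

    ssl_certificate     /etc/ssl/site.crt;     # hypothetical paths
    ssl_certificate_key /etc/ssl/site.key;

    location / {
        proxy_pass http://10.0.0.10:8080;      # origin behind the IDS
        proxy_set_header Host $host;
    }
}
```

Users still get end-to-end-looking HTTPS at the edge, while your existing payload inspection keeps working unchanged inside the perimeter.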
We routinely see privacy incidents because someone thought "gosh, I didn't expect that would be private" or "I forgot to move that to the https site". We also routinely see ISPs and governments inject ads and tracking mechanisms into HTTP responses. And we are simply concerned about the privacy and safety of people who browse government web sites. Standardizing on HTTPS everywhere removes the opportunity for these mistakes and oversights and ensures a minimum bar for privacy and data integrity. It also makes it much easier to be FISMA compliant without having to spend extra to lock down a particular feature or product.
Please raise your concerns with the link given above and let's chat.
Privacy is in the eye of the individual. Is the location of an AIDS clinic private information? No, but the fact that you're looking for that information could be intensely private. Is the location of a US embassy private? Job postings? Things we think of as non-private information here could get you detained or worse if your Internet connectivity is monitored by an oppressive government. We want the information on government web sites to be useful, and we want people to feel safe and comfortable accessing it.
Who do you trust to make those judgment calls? Every one of a thousand government contractors building your web sites? Or does it make more sense to just standardize on HTTPS everywhere and simplify your world?
And this doesn't even begin to cover the cases of ISPs injecting ads or tracking or worse into your HTTP responses, which happens all the time.
FWIW, just because the NSA does something doesn't mean every other government employee or agency approves or is culturally aligned with that attitude. This effort represents a genuine push by a self-selected group that is privacy-conscious, interested in doing the technically right thing, and for the first time in a position within the government to actually start making the Right Thing reality. Interested in joining us?
If there are specific concerns you have with the memo as it applies to the federal agencies it's talking about, we'd love to get your feedback on how we can achieve these goals while minimizing the issues you allude to.
This isn't about mandating HTTPS everywhere outside of government, and agency sites that might perform worse after losing intermediate caches can always implement the policy using existing CDNs to get the content as close to the user as possible.
Is there something about what the memo proposes that looks to be obsolete soon? We're trying to get ahead of the curve here, because it does take time to change things in the government. We'd love to better understand your "when the government gets involved" concerns.
Do you think you might be interested in participating in things like this on a more ongoing basis?
Developers should grossly outnumber operations. If they don't, your ops people probably aren't doing enough automation. Depending on how important that scalability and automation are to you, you might want more "devops" types in your operations team than other companies have. Truly large tech companies call this SRE and don't have a traditional ops role at all. So I'd say your three-way split would be OK for some companies, while a two-way split between non-ops developers and devops-style operations works well for others. Really, anything that minimizes the rigid wall between the two sides and gives each visibility and influence into the other is good.
I think the idea is to *find* good people that already have interests and skills that encompass the union of the two, and supplement the "good developers doing development" and "good operation guys doing operation stuff".
To be honest, I think a developer that has no interest in infrastructure is a developer that can't design a scalable, supportable service (you need to know how the infrastructure works in order to effectively use it). An ops person that has no interest in programming is an ops person that can't scalably support a service (who's going to build the automation and monitoring?). In my eyes a good balance is to have your "good developers doing development" supplemented with some "developers that know operations" to make sure they're designing things well. On the operations side, supplement "developers that know operations" with "operations people that know how to code" so they can work together to scale up automation, not staff, as a service grows. This is essentially how SRE works at many large tech companies.
For a better idea of why "reversible" matters, and experimental evidence suggesting that if you do reverse the effect of the interaction, you can restore quantum behavior, check out http://en.wikipedia.org/wiki/D....
You're misunderstanding the OP's point, I think. You and I don't think to ourselves, "let's store a history of our journey in our spin!" We just remember it. We perceive ourselves to be macroscopic classical systems. We have learned, however, that quantum effects can apply to macroscopic objects (as the OP points out, the C60 molecule most recently). Since your mind is simply a product of the arrangement of the molecules and energy in your brain, the implication is that while you would perceive yourself behaving classically (moving through one "slit"), if you were sufficiently isolated from outside observers to prevent decoherence, you would actually be behaving non-classically from their perspective. We just can't perceive that because decoherence is a local thing and our brains are a classical arrangement of matter.
Another way to think about it: decoherence is the process of the observer becoming entangled with the system being observed. Since perception is classical, a classical result is observed and the observer reacts accordingly. But if the system + local observer are isolated from a second observer, the pair are just another quantum system and decoherence occurs a second time when the second observer interrogates the first. Until the second decoherence happens, the observer is in a superposition of states--each state being a classical observer who has just observed different things, unaware of the other state.
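The two-step decoherence described above can be sketched in a toy model: take a single qubit S, a first observer A, and a second observer B (the kets and labels here are just illustrative shorthand, not drawn from any particular paper). The first observation entangles A with S, and the second entangles B with the combined S+A system:

```latex
% First decoherence: A becomes entangled with S
\frac{1}{\sqrt{2}}\bigl(|0\rangle + |1\rangle\bigr)\otimes|A_{\mathrm{ready}}\rangle
\;\longrightarrow\;
\frac{1}{\sqrt{2}}\bigl(|0\rangle|A_0\rangle + |1\rangle|A_1\rangle\bigr)

% Second decoherence: B interrogates the isolated S+A system
\frac{1}{\sqrt{2}}\bigl(|0\rangle|A_0\rangle + |1\rangle|A_1\rangle\bigr)\otimes|B_{\mathrm{ready}}\rangle
\;\longrightarrow\;
\frac{1}{\sqrt{2}}\bigl(|0\rangle|A_0\rangle|B_0\rangle + |1\rangle|A_1\rangle|B_1\rangle\bigr)
```

Until the second interaction, each branch contains a classical-looking A who saw a definite outcome; from B's perspective, S+A together are still in superposition.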
Taking this back to the post the OP is responding to: "consciousness" doesn't matter. The nature of the "observer" doesn't matter. Even calling it an "observation" is a concept we made up to relate our perception to the world we perceive. Fundamentally, it's just thermodynamic interaction.
I think for the most part you are right. However, if the customer knows he is exploiting an error on their web site to get a product at an unreasonably low price (bad faith), I believe the merchant would have grounds to contest the transaction and could be entitled to reverse it, even if it's completed and even if the customer has a receipt in hand. That being said, "Merchant Makes Error, Sues Customers" isn't a flattering headline.
He is still obligated to deliver them, at the price he charged
I don't believe this is true; the merchant can issue a refund at pretty much any time and cancel the deal. If the merchant was paid but hasn't performed his obligation, he can't really be *compelled* to perform; that would essentially be slavery. You always have the right to breach a contract, and if the other party is harmed by your breach, they have the right to sue you for compensation. It's unlikely that the average person is going to be harmed for much more than the money they sent the merchant, so a refund is entirely reasonable compensation.
That's not how I read it, but that would make more sense, I suppose. I'm thinking of situations where you have a multi-pronged attack: one prong accesses one set of sensitive data and another prong accesses a different set. One access may be discovered, the clock starts, and 72 hours later the forensics may not even be far enough along to reveal the other prong of the attack. But if you're defining each access as its own "breach", even when it's part of the same larger complex attack, I suppose that's a little more reasonable than my original interpretation.
But what if you're investigating something like this:
1. Breach of data A occurs
2. First breach of data B occurs (small set of data accessed)
3. Second breach of data B, by the same attacker from a different attack vector, occurs (accessing more data)
1 is discovered, its clock starts, and you get a full report out within 72h.
2 is discovered, a separate clock starts, and you get that report out within 72h.
3 is discovered. Should that have been part of (2)? What happens if you don't notice this during your investigation of (2)?
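To make the "one clock per discovered breach" reading concrete, here's a toy sketch with hypothetical timestamps, assuming a flat 72-hour window per discovery (my reading of the rule, not anything the regulation necessarily specifies):

```python
from datetime import datetime, timedelta

NOTIFY_WINDOW = timedelta(hours=72)  # hypothetical regulatory window

def notification_deadline(discovered_at: datetime) -> datetime:
    """Each discovered breach starts its own independent 72-hour clock."""
    return discovered_at + NOTIFY_WINDOW

# The three discoveries from the scenario above (hypothetical dates):
discoveries = {
    "breach of data A":        datetime(2025, 1, 1, 9, 0),
    "first breach of data B":  datetime(2025, 1, 3, 14, 0),
    "second breach of data B": datetime(2025, 1, 6, 11, 0),
}

for name, found in discoveries.items():
    print(f"{name}: notify by {notification_deadline(found)}")
```

The open question in (3) is exactly which dictionary entry the late-discovered access belongs to: a new key with a fresh clock, or a revision to an already-expired report.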