If the protocol sucks, it'll go mostly unadopted.
See also: xhtml and arguably ipv6
I'll bite. While xhtml can be ignored rather safely, IPv6 cannot. IPv6 adoption is like the Y2K problem, but with no clear cut-off date. We know we will run out of IPv4 addresses, but when depends on who you ask or what your analysis is based on. As someone who takes care of infrastructure, I would rather address the IPv4 exhaustion problem with something other than double or triple NATting, and provide a solution that is already working by the time others are screaming about lack of foresight.
To the people suggesting we could have taken an alternative approach to IPv6: any changes to IPv4 would break everything anyway, so you might as well come up with a solution designed for the long term. NATs are tolerable up to a point, but once you double- or n-NAT, you are in territory where doing things properly would have been better.
IPv6 certainly creates new problems of its own, but not ones that can't be solved with the proper tools. For example, you lose the apparent security of a NAT, but firewalls already provide a capable alternative there.
With all these issues, I wonder whether, beyond the firewall facing the external network, internal portions of a corporate network should be firewalled too. For example, HR-related data could sit on a sub-section of the network protected by its own firewall. I would imagine the chances of breaching multiple firewalls are low, unless the penetration is done by an insider or by someone who has managed to lie low on the network for a while?
This may already be the case in many organisations, but I don't know enough about the specifics of security practice.
Found it. For anyone else interested: https://github.com/erlerobot
It runs Linux, but is the controller code open source? I couldn't find any links to the source code and without it, whether it runs Linux or something else becomes irrelevant. If anyone finds the link, please share it.
The problem is that when Google decides something is good for everyone, they don't give us ways to switch back to the old behaviour, even if the change feels like a middle finger. You can have a thousand people open bug reports and Google devs will politely tell you that they know better than everyone else. Sometimes it makes me want to grab a bunch of eager developers and fork Chrome. In the meantime there are still Firefox and Opera to move to.
For the session tokens, their values can be encrypted and they can be tied to an IP address. If the client does not need to do anything special with the cookie values, then the server can do whatever it wants. The session ID cookie may not even need to be encrypted; instead, the server side holds which IP address the session is locked to, so the token can't be reused from elsewhere.
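As a minimal sketch of that server-side approach (an illustrative in-memory store, not any particular framework's API): the cookie value is just an opaque random token, and the IP binding lives entirely on the server.

```python
import secrets

# Minimal in-memory session store. The session ID cookie is an opaque
# random token carrying no meaning the client could decode, so it does
# not need to be encrypted; the IP the session is locked to is held
# server-side only. (Sketch only -- real deployments would add expiry
# and a persistent store.)
class SessionStore:
    def __init__(self):
        self._sessions = {}

    def create(self, client_ip):
        token = secrets.token_urlsafe(32)  # 256 bits of randomness
        self._sessions[token] = {"ip": client_ip}
        return token

    def validate(self, token, client_ip):
        # Reject unknown tokens, or tokens presented from a different
        # IP than the one the session was issued to.
        session = self._sessions.get(token)
        return session is not None and session["ip"] == client_ip

store = SessionStore()
t = store.create("203.0.113.5")
assert store.validate(t, "203.0.113.5")       # same IP: accepted
assert not store.validate(t, "198.51.100.9")  # different IP: rejected
```

The usual caveat with IP binding is that legitimate clients behind carrier-grade NAT or on mobile networks can change IPs mid-session, so some deployments treat a mismatch as a reason to re-authenticate rather than to hard-fail.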
I am just fed up with Google dumbing down the web browser and turning Chrome into "our way or the highway". Cases in point:
- refusal to support APNG
- hiding the protocol in the address bar URL
I am hesitating whether to go back to Firefox.
Don't live next to the freeway if you don't like traffic
Sometimes that is the only place to live. Gating a community is not a better option either.
The solutions I have seen in other places include:
- narrowing the intersections to reduce speed of traffic
- making one-way streets that locals know how to use, but which end up diverting through traffic back onto the main arteries
- introducing speed bumps to slow traffic
- lowering speed limit on these secondary roads
- blocking part of the street with a park, forcing traffic to make more detours
- adding public transport lanes, while sacrificing car traffic lanes.
The solution will depend on the exact location and will probably end up being a hybrid of these.
The problem I see here is a symptom of Europe being run by people who are from another era, at least in their thinking. The reaction by the papers is a natural one, but it is more of a knee-jerk reaction than an attempt to understand the technology and how it works. What we need are younger people getting into politics, at least as technology advisors, so that decisions aren't being made based on a reality that is 40 years in the past.
For journalists, often the best way to be able to write openly about their own country is actually to be based outside of it. The irony is that sometimes a true patriot needs to be outside their own borders to raise the issues that their country would rather sweep under the carpet.
They could probably have achieved the same thing by just having people use their wifi service; no GPS needed. The bonus is that devices such as tablets could be used too. Sure, it would mean needing to sign into wifi, but maybe give people the choice between wifi and GPS?
Maybe as an extension, they could even have someone walk the line in busy locations, taking orders on a tablet equipped with a card reader?
Paper ballots are pretty damn open-source.
The fact that a voting machine is supposedly running open-source software doesn't preclude tampering, in hardware or software.
I remember a wise lecturer in my computer science course giving us a challenge: come up with a system to solve a customer's problem. Being CS students, we designed everything around the use of a computer. At the end he asked us whether we had considered if a non-computer-based system would actually have done a better job. While in that particular case the answer was no, it did show us that sometimes we use technology for technology's sake rather than to solve the problem in the best possible way. Voting machines should be approached in the same way, and the opti-scan approach mentioned by another poster certainly seems to strike the right balance between solving the problem and not throwing the wrong technology into the mix.
Is it time to have a Cameron meme with 1984 on it?
Would changing Tor to use exclusively IPv6 help at any level? Does IPv6 provide any benefits here, other than being 128-bit addresses?