Comment Re:Why isn't this auto-update? (Score 1) 174
And at 3.3M they may as well just push it out rather than delaying it. A couple of checks will be more costly than just letting everyone download it straight away.
Actually we are getting Dr Who simultaneously with the UK if you are willing to get up at that time of the morning to watch. Or you can just record and time shift.
Sunday 7:30pm is a replay of the morning's broadcast.
Not DANE the people; DANE (DNS-Based Authentication of Named Entities) http://tools.ietf.org/html/rfc... Mozilla are in a position to both publish the TLSA record and authenticate the cert.
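As a sketch of what publishing that association involves (the helper, the example domain, and the dummy bytes here are illustrative, not Mozilla's actual tooling), the RDATA for a TLSA record can be built from the server certificate like this:

```python
import hashlib

def tlsa_rdata(cert_der: bytes, usage: int = 3, selector: int = 0,
               mtype: int = 1) -> str:
    """Build TLSA RDATA per RFC 6698: usage 3 (DANE-EE, the cert itself
    is the trust anchor), selector 0 (the full certificate), matching
    type 1 (SHA-256 digest of the certificate)."""
    return f"{usage} {selector} {mtype} {hashlib.sha256(cert_der).hexdigest()}"

# The DER bytes would come from the real server certificate; these
# dummy bytes just show the record's shape.
print("_443._tcp.example.net. IN TLSA", tlsa_rdata(b"dummy-der-bytes"))
```

A validating client then fetches that record (DNSSEC-signed) and checks the presented certificate against the digest.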
Firstly, ICANN had a blacklist of TLD labels that it wasn't going to allow anyone to apply for, because they knew those labels were likely to be in use.
If they looked at every "bad" TLD name that hit the root servers they could never add any new TLDs.
Having awarded contracts for TLDs, they are trying to minimise the impact on those labels that didn't make the blacklist or that they were unaware of.
Actually they do own and run one of the root servers. The company I work for owns and runs another of them. I submitted arguments, as a private individual, to not expand the root zone when this was being mooted. That all being said they are the legitimate party to decide what gets added to the root zone.
This isn't RFC 1535 all over again unless you are using partially qualified names where the end of the partially qualified name just happens to match one of the new TLDs. Partially qualified names have always been dangerous.
I just wish I had been able to convince Paul to break all existing use of partially qualified names back then by not appending search elements to any name with multiple labels. As much as foo.lab is convenient to type, foo.lab.example.net was safe as was foo + lab.example.net as a search list element.
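That stricter expansion rule can be sketched as hypothetical resolver logic (not any actual resolver's code): only single-label names get search-list elements appended, so a multi-label partially qualified name can never silently collide with a new TLD.

```python
def candidate_names(name: str, search_list: list[str]) -> list[str]:
    """Candidate expansion under the stricter rule: search-list elements
    are appended only to single-label names; multi-label names are taken
    as (effectively) fully qualified."""
    if name.endswith("."):           # explicitly fully qualified
        return [name]
    if "." in name:                  # multi-label: no search expansion
        return [name + "."]
    return [f"{name}.{domain}." for domain in search_list]

# Single label: expanded via the search list, as expected.
print(candidate_names("foo", ["lab.example.net"]))
# Multi-label: left alone, so "foo.lab" fails rather than
# resolving in some newly delegated .lab TLD.
print(candidate_names("foo.lab", ["example.net"]))
```

Under this rule `foo` (with `lab.example.net` in the search list) still works, while the dangerous `foo.lab` shortcut is simply refused expansion.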
They don't own you. However they are the authority for which names are added to the root zone. New TLD labels have always been possible and have been added from time to time.
The RDBMS vendors that squatted on a TLD were not rational actors. They knew, or should have known, that new TLDs could be added to the DNS at any time. That new TLDs would be added to the DNS was published as part of the switch from a flat namespace to a hierarchical one. They failed to do due diligence. If they wanted a reserved name they could have requested one or, heaven forbid, registered one.
This is like vendors that squatted on 1.0.0.0 address space.
Firstly ICANN didn't just assert ownership of the root. They inherited it along with the rest of the IANA.
And the administrators gambled that no one else would ever register that TLD. Sorry, they just lost that bet.
The DNS is designed to allow everyone to have their own namespace. To do this you need to register the name so that it can be uniquely yours. If you can't register it, don't use it. Period.
As for those protocols they could have requested a reserved name. They just failed to do so. There have always been processes to get reserved names.
Multiple addresses, source-address routing, and multipath TCP will address many of the reasons people want PI addresses today. IPv6 has enough addresses to make that mix of technologies a viable solution space. IPv4 is too resource constrained to make it viable.
Until there is sufficient IPv6 penetration that continuing to run IPv4 becomes pointless. If you turn on IPv6 on a home network, over half the incoming traffic will be IPv6. Globally, IPv6 is 4-6% of IP traffic, depending upon where you measure it. IP has replaced many networking protocols in the past. IPv6 will replace IPv4. The writing is already on the wall.
Many networks today are IPv6 only internally with protocol translation to talk to the legacy IPv4 Internet.
Others are dual stack, translated to IPv6-only internally, then translated back to dual stack at the Internet edge.
With IPv4 you are only going to get less and less functionality now that many ISPs are getting to the stage of having to deploy CGNAT. For home users, having a publicly reachable address will become a thing of the past.
How much more gradual do you want? I've been running dual stack for over a decade with a tunnel back to HE. At this stage most of your equipment runs fine with IPv6.
Actually you get something that has passed several different analyses.
Silencing "gcc -Wall" is a good thing. Modern gcc versions catch lots of errors. Add clang static analysis and others to that and you get pretty reasonable error detection, which is what they are aiming for.
It saves the government money to consolidate the checking to one place. Otherwise every department would need to do the checking themselves.
By doing this continuously you end up with releases which are free of known errors.
Because it is a change of contract, and if they did it to those still in the minimum contract period, it would let those customers break the contract without having to pay the ETP.
So Verizon made a bet that customers wouldn't use the unlimited data that they sold them and they lost. Tough!
It looks like Verizon should start offering plans that reflect the actual cost to supply. Those that use the most pay the most.
Have data caps. Throttle users once they reach those caps. This puts back pressure on the users in terms of cost.
Provide an incentive to time-shift data transfers to the quieter periods, e.g. only count 1/2 the data between 02:00 and 06:00, and let the customers know.
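That sort of off-peak incentive is trivial to bill for. A minimal sketch (the 02:00-06:00 window and the 50% discount are the example numbers above, not any real tariff):

```python
def billable_mb(transfers, offpeak_start=2, offpeak_end=6, discount=0.5):
    """Sum billable megabytes, counting off-peak traffic at a discount.

    transfers: iterable of (hour_of_day, megabytes) tuples.
    Traffic in [offpeak_start, offpeak_end) counts at `discount`,
    everything else in full.
    """
    total = 0.0
    for hour, mb in transfers:
        rate = discount if offpeak_start <= hour < offpeak_end else 1.0
        total += mb * rate
    return total

# 1000 MB at 21:00 counts in full; 1000 MB at 03:00 counts as 500.
print(billable_mb([(21, 1000), (3, 1000)]))  # 1500.0
```

The back pressure comes from the price signal: the same download costs half as much against your cap if you schedule it for the quiet hours.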
It costs Netflix $X per month to supply the cache, amortised over the lifetime of the cache box. It also costs them $Y to populate the cache, as that traffic still has to go over paid transit, plus the cost of the tail, $Z. Against this is the cost of just sending it all via paid transit, $T. Remember the "cache" isn't a pure cache: movies are pushed to it in multiple formats without there being a request for them.
For small nets $X + $Y + $Z > $T. As the size of the net increases the balance switches to $X + $Y + $Z < $T.
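The break-even is just a comparison of those monthly totals. The dollar figures below are made up to show where the switch happens, not real Netflix costs:

```python
def cache_pays_off(x, y, z, t):
    """True when hosting the cache beats pure transit:
    x = cache box cost, y = cache fill over paid transit,
    z = tail cost, t = cost of serving everything over paid
    transit. All monthly dollar figures."""
    return x + y + z < t

# Small network: demand is modest, so transit t is cheap and the
# fixed cache costs lose.  500 + 300 + 200 = 1000 > 800.
print(cache_pays_off(x=500, y=300, z=200, t=800))   # False
# Larger network: t grows with demand while the cache costs are
# roughly flat, so the balance switches.  1000 < 2000.
print(cache_pays_off(x=500, y=300, z=200, t=2000))  # True
```

Since $X, $Y, and $Z are roughly fixed while $T scales with viewer demand, the cache only makes sense once the ISP's Netflix traffic is large enough.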
Do you really think it is fair to demand that Netflix take a cost hit just to provide you with a cache?
HELP!!!! I'm being held prisoner in /usr/games/lib!