
Comment Re:See no need to go to git (Score 1) 245

If you have more than 1 programmer you have a need for distributed version control.

Bull. Shit.

Programming teams -- LARGE programming teams -- got along just fine without distributed version control for LITERALLY decades. I am on the fence about its usefulness in pure programming applications (like, as someone else mentioned, the kernel), but am absolutely opposed to its use for things like your configuration-management repository, the repo you store your DNS zonefiles in, etc., etc.

I would suggest to you that if you think the only thing people are using version control for is "source code", then you don't understand the enterprise.

Comment Re:SVN will have its place for a long time (Score 1) 245

There are many use cases where people will need a centralized version control system (VCS). SVN was written from the ground up to be a best-in-class centralized VCS, and its developers accomplished this goal while building a very elegant and efficient client-server framework to boot.

this, this, a thousand times this.

I wish I'd seen your comment before I posted my own similar comment.

Comment Git Is Not The Be All End All (Score 3, Insightful) 245

I wish people would stop pretending that the DSCM model is the "only way of the future". There are plenty of completely valid use-cases for monolithic source control models. For instance, I am a firm believer that configuration management repos belong in a strictly monolithic architecture, with a single source of truth, deterministic version numbering, etc., etc....
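
For what it's worth, the "deterministic version numbering" bit is easy to show. A minimal Python sketch (the working-copy paths and the revision threshold are made up for illustration): a central SVN server hands out monotonically increasing revision numbers, so any two checkouts can be ordered by a simple integer comparison, while Git identifies snapshots by content hashes that carry no such ordering.

    import subprocess

    def svn_revision(working_copy):
        """The central server assigns monotonically increasing revision
        numbers, so checkouts can be ordered by plain integer comparison."""
        out = subprocess.run(["svnversion", working_copy],
                             capture_output=True, text=True, check=True)
        return int(out.stdout.strip().rstrip("MSP").split(":")[-1])

    def git_commit(working_copy):
        """Git names a snapshot by a content hash; hashes carry no ordering,
        so 'which checkout is newer?' requires walking the history."""
        out = subprocess.run(["git", "-C", working_copy, "rev-parse", "HEAD"],
                             capture_output=True, text=True, check=True)
        return out.stdout.strip()

    # e.g. a deployment gate for a config-management repo (path and revision
    # are hypothetical): deploy only if the checkout is at least as new as
    # the last known-good revision.
    if svn_revision("/etc/puppet") >= 1234:
        print("config repo is current enough to deploy")
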

Certainly I could see a case for moving people from CVS to something more modern (but in the same basic vein) like SVN, but here's the thing:

If their existing SCM application is working for them, and they're happy with it, then it's perfectly fine.

Comment Re:Lacking Credibility (Score 1) 149

I'm afraid that possibility has been discounted. Netflix has paid up. Didn't you get the memo?

Just because Netflix paid to increase the bandwidth available at their peering point doesn't mean the assertion ("my traffic is being throttled because it looks like Netflix") was accurate.

Nope, it isn't safe to assume that. If that was the case then this traffic would be blocked completely, but it isn't, and what's more it is being modified. Do read the article.

I did read the article, which is how I was able to point out the numerous flaws in it.

Thanks for playing, though.

Comment Re:Lacking Credibility (Score 2) 149

It's really quite simple. If you have a download speed topping out far lower than your maximum and you then connect through a VPN and get more available bandwidth then there is a rabbit away somewhere. Netflix have already now paid up anyway to get rid of this 'issue' for their users, so that debunks this bit of dog shit.

It means you've routed out of your ISP through a peering point that isn't Level3, and that the peering point between your VPN provider and L3 is less saturated than your ISP's. That's all it proves.
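
If you actually want to check that, compare the forward path with the tunnel down and up, not just the speedtest number. Something like this rough Python sketch would do (the destination is a placeholder for a CDN node you've resolved yourself, and it assumes a Linux-style traceroute on the PATH):

    import subprocess
    import sys

    # Print the per-hop forward path so the two runs (VPN down, then VPN up)
    # can be compared by hand to see which peering/transit networks differ.
    def forward_path(host):
        result = subprocess.run(["traceroute", "-n", host],
                                capture_output=True, text=True, check=True)
        return result.stdout

    if __name__ == "__main__":
        target = sys.argv[1] if len(sys.argv) > 1 else "198.51.100.10"  # placeholder address
        print(forward_path(target))

Run it once on the bare connection and once through the VPN; if the congested hop only shows up on the direct path, that points at the saturated peering link, not at anything Netflix-specific being throttled.
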

Connecting to something on port 25 and allowing inbound connections to something you have running on port 25 are two entirely different things. If you don't know that then you really don't know anything at all and frankly aren't qualified to comment.

Port 25 has been set aside for server-to-server (i.e., MTA-to-MTA) communication for quite some time now, and client-to-server (i.e., MUA) submission moved to tcp/587 over a decade ago. Thus, if you are connecting to tcp/25, it is safe to assume, in this day and age, that you *are* an MTA. If you were an MUA, you'd be using tcp/587.
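
To spell out the split, here's a minimal Python sketch of what an MUA does (the hostname, credentials, and addresses are placeholders):

    import smtplib

    # A mail *client* (MUA) hands outgoing mail to its provider's submission
    # service on tcp/587, starts TLS, authenticates, and lets that server
    # relay the message onward.
    with smtplib.SMTP("smtp.example.net", 587) as session:
        session.starttls()
        session.login("user@example.net", "app-password")
        session.sendmail("user@example.net", ["friend@example.org"],
                         "Subject: test\r\n\r\nSent via the submission port.")

    # Only a mail *server* (MTA) should be opening connections to tcp/25 --
    # e.g. smtplib.SMTP("mx.example.org", 25) -- because that port is for
    # server-to-server relay; from a residential or wifi connection, that
    # traffic pattern is exactly what "running an MTA" looks like.
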

If you don't know that, then you really don't know anything at all and frankly aren't qualified to comment.

Comment Lacking Credibility (Score 2) 149

When the original article cites, as its first example of network tinkering, the already thoroughly debunked "faster Netflix through my VPN" video, the article's technical credibility is already set at "abysmal". There's no argument that the VPN tunnel was faster (obviously), but the alleged reason (which many sites, including this fine establishment, jumped on the bandwagon for, even though they should know better) was horseshit.

Second, the article demonstrates the problem with a connection to tcp/25. Unless the customer is running a mail *server* on their residential ISP line, they should be connecting to tcp/587. The wireless provider in question here is absolutely within its bounds to say it doesn't want you running an SMTP MTA on the wifi, while still allowing a normal MUA. Is there any evidence that this problem also exists for connections to tcp/587?
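
That question is easy enough to answer empirically. A quick Python probe along these lines (the mail host is a placeholder) would show whether tcp/587 gets the same treatment:

    import socket

    # Try to open a plain TCP connection to each port; a filtered port
    # typically shows up as a timeout or a refused connection.
    def port_open(host, port, timeout=5):
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    for port in (25, 587):
        state = "reachable" if port_open("mail.example.org", port) else "blocked/unreachable"
        print(f"tcp/{port}: {state}")

A connect test alone won't catch in-path modification, of course, but it answers the narrower question of whether the submission port is reachable at all.
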

Comment Re:Changes require systematic, reliable evidence.. (Score 1) 336

The pipe was more than big enough, but ISPs chose to not allow all the packets through.

You might not be aware of this, but that happens with peering, data centers, etc., all the time: the physical layer is capable of, say, gigabit throughput, but you're only paying for 10Mbps (and they're only reserving 10Mbps of throughput for you through their infrastructure). You pay for the larger "pipe" (i.e., a higher cap), and voila, the valve on the pipe is opened wider.

That's exactly what happens in this Comcast/Level3 situation.
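
Mechanically, that "valve" is just rate policing at the edge. A toy Python sketch of the idea (the rate and burst numbers are made up for illustration):

    import time

    class TokenBucket:
        """Toy model of the valve on the pipe: the link underneath may be
        gigabit, but tokens only refill at the rate the customer pays for,
        so sustained throughput tops out at the contracted rate."""

        def __init__(self, rate_bps, burst_bytes):
            self.rate = rate_bps / 8.0            # refill rate, bytes/second
            self.capacity = burst_bytes
            self.tokens = burst_bytes
            self.last = time.monotonic()

        def allow(self, packet_bytes):
            now = time.monotonic()
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= packet_bytes:
                self.tokens -= packet_bytes
                return True                       # forward the packet
            return False                          # drop or queue it

    # "Paying for a bigger pipe" is just this object being rebuilt with a
    # larger rate_bps; the fiber underneath never changed.
    shaper = TokenBucket(rate_bps=10_000_000, burst_bytes=64_000)
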

If you've got fiber installed, and switch port connections available, lighting up the fiber costs pennies per terabit transferred,

With every post, you look more and more like someone who doesn't understand the fundamental cost centers of large-scale network administration.

Comment Re:Changes require systematic, reliable evidence.. (Score 1) 336

No, I get that. I just don't see the backbone providers changing. Nobody is going to want to be the one who makes such a massive change, one that impacts billions of dollars of revenue (in connected customers), and to be the "trailblazer" who finds the pitfalls that need addressing along the way. This is an area where jumping out to the bleeding edge is rarely, if ever, rewarded, and where there is precious little ability to adequately test, in a lab environment, anything approaching "real world" conditions.

It's an idea that might've taken hold if the Internet were being deployed from the ground up today, but I don't see it being implemented in the current environment.

Perhaps that's me being a 20 year industry-vet curmudgeon at this point. Not sure. :-)

Comment Re:Changes require systematic, reliable evidence.. (Score 2) 336

You invested? like with a prospectus and such?

I've addressed this fallacy elsewhere: the parts of the tax subsidies that had actual contractual, regulatory, or statutory "requirements" tied to them have either been upheld or worked out through the existing oversight processes. What you're asking for is something new, which (frankly) nobody thought to include in the requirements when such things were being done decades ago. That's not the ISPs' fault; it's "ours" collectively, for having something of buyer's remorse about the deal we negotiated with them.

But pretending that we're Darth Vader, telling them we've altered the deal and to pray we don't alter it any further, is not governance but tyranny.

Comment Re:Changes require systematic, reliable evidence.. (Score 1) 336

What you're describing has not really been done "at scale" (in my experience and understanding, anyway) by more than a few companies. Expecting the backbone providers to all change their traffic infrastructure to accommodate the model you describe may not really be practical for "the current Internet".

For something like an "Internet2" type of green-field deployment, if we "had it all to do over again, starting with the tech of today", you might be onto something. But I don't see that change happening in today's environment.

Comment Re:Changes require systematic, reliable evidence.. (Score 1) 336

Oh, I remember dial-up ISPs all too well. I helped build two of them, and then was there for the lighting up of one of the first/few MMDS (wireless cable) ISPs. As you point out: I've got a low ID#, I've been to this dog and pony show for a long long time. :-)

As I said elsewhere in this sub-thread, I'm fine with any of a number of ways of introducing competition:

  • competitive wholesale access
  • electric companies acting as the disinterested neutral third-party fiber carrier for competition
  • communities deciding to install fiber for community networks (provided they also allow competitors to access that fiber)

There are plenty of ways to crack that nut, but attacking the problem from the "neutrality" side of the argument is a misguided tactic, just another piece of duct tape on an already shoddy arrangement.

Comment Re: Changes require systematic, reliable evidence. (Score 1) 336

And that should be mandated on the cable side of the wall as well. The reason it gets no traction on the telco side of the playing field is that the telcos aren't really investing in outside-plant upgrades (FiOS was a dismal, money-losing failure for Verizon, which is why you haven't seen a new community rolled out in years, except in one case where a court ordered it because Verizon had contractually agreed to the rollout and then halted it).

Cable, on the other hand, has a constantly growing need for capacity (for video) and so has been doing plant upgrade after plant upgrade over the intervening time.

Comment Re:Changes require systematic, reliable evidence.. (Score 1) 336

You're making the classic mistake of thinking that transit data rates are the dominant cost factor. They're not: switching and routing equipment, for the types of throughput you're describing, is where the real costs lie. I think your assertions about hardware costs are extremely low-ball, and that comes from someone who's been in charge of spec'ing, approving, and buying network hardware for the last ten-plus years of his career.
