
Comment: Re:Response Bias (Score 1) 413

Surely that's the very question they asked, and they're not hiding it? I mean, that's what the article flat out says, right? People want to both hire and work with the top people regardless of where they're from, and the general US attitude towards issuing foreign visas makes it hard to hire the top foreign guy and practically requires you to hire the mediocre guy just because of where they're from.

Comment: Re:OK, NOW I'm pissed. (Score 1) 413

So, what, I'm supposed to sit back and accept an attitude of 'fuck U.S. workers, they all suck, we'll hire from overseas because they're better'?

That's not what he said. He said the best workers are not ALL from the USA. Guess what? He's dead goddamn right, and who the hell are you to get pissed off because someone who runs a business pointed out the obvious, bleeding truth: America does not have a monopoly on software engineering talent, far from it. That means it's totally expected and understandable that, given a choice between some American workers and some foreign workers, that employer might legitimately prefer the foreign workers because they are better than you are.

If this makes you mad then you need to learn about anger management. If you think it's all about working cheaper (which US law makes illegal anyway) then you need to get your head out of your ass and realise that foreign workers are a hassle, can be expensive, and can still be worth it if they are better than you.

Comment: Re:"Culture in tech is a very meritocratic culture (Score 1) 413

Tech skills are hard to objectively verify. Technical results are hard to objectively verify. We collectively proxy that by having lots of tests, competitions, selection, and other heuristics. But that's not a symptom of us respecting skill more than other jobs (maybe more than other specific office jobs, but not more than lawyers, doctors, manufacturing technicians, similar things), it's a symptom of it being really hard to tell.

How many technical interviews have you done, as an interviewer, in your life?

I have done about 220. Evaluating technical skills is dramatically easier than evaluating many other types of skill; in particular, it's a lot easier than evaluating skills in management, marketing, customer service .... anything with a large component of soft, people skills. You can ask a technical person to achieve a very specific, tightly scoped technical task during an interview and, if you know the question well, quickly get a feel for how good they really are. I wouldn't want a hiring decision to be made based on just one interview, but in the hands of a good interviewer it still yields valuable data. For someone without specific technical skills you end up having to rely on much vaguer and more gameable questions like "Tell me about a time you overcame a problem of type ", the answers to which are both hard to verify and easily manipulated by people who want to make themselves look good.

I'm afraid I must agree with the original statement. The difference between someone who is merely OK and someone who is great, well, that's huge. Someone who is merely OK will come in to work each day and will (probably) resolve the bugs or implement the features you set them. They will probably not come up with a solution that puts you ahead of the pack. They may waste large amounts of time on trivial things, or produce something that sucks because they are only familiar with technology X but that's a poor fit for problem Y. Their technical judgement may be flaky; in the worst case you will have to spend a lot of time double-checking what they're doing, yet they will start demanding more responsibility because they've stuck around for a while. The very best will teach you algorithms and techniques you never knew about. They'll come up with the unique feature that makes you stand out from the competition. They'll be fun to work with and help you recruit other great people. The difference is not to be sneered at.

Comment: Re:Bullshit (Score 1) 413

When Google offered me a job, I could not believe how little they wanted to pay me. 67% of what I was making at a megabank

Er, you could probably replace "Google" in that sentence with any company. You're comparing your salary to one at a fucking bank, companies so famous for absurd compensation packages that they triggered street protests ....

Comment: Re:Feeding the PR engine, (Score 1) 413

Besides, the best techs from other countries are already in demand at home, no need for them to move. "The best" is not someone the US would get from the H1B visa program.

Reality check: tech companies hire all sorts of people in all sorts of places for all sorts of reasons.

Back in 2006 I got a job with Google SRE (at the age of 22) and they gave me a choice of locations. I chose California. But it was 2006 and the economy was booming, and that year they hit the H1B visa cap. I wasn't considered important enough to use up one of the last H1Bs they had (fair enough), so I ended up moving to Switzerland instead. Over the following years I was promoted several times, invented a major new spam filtering technology they now use on all their biggest products, and earned a hell of a lot of money. Which I spent in Switzerland. I left in January to form my own company, although Google wanted me to stay.

Had I obtained an H1B, I would probably have done substantially similar things in the USA, but thanks to attitudes like yours that wasn't possible. I'm not complaining though. Having spent plenty of time in the Valley I came to appreciate my luck in not ending up there. Why would I want to live in a suburban desert like the Bay Area, or San Francisco where it seems the local population viscerally hates tech workers, when I can live ten minutes' walk from a lake so clean people swim in it every day during summer, and the local population still thinks Google is cool?

Looking back, I got lucky that I was denied an H1B. But economically speaking that was Switzerland's gain and America's loss.

Comment: Re:OPSEC (Score 2) 115

by IamTheRealMike (#47730727) Attached to: NSA Agents Leak Tor Bugs To Developers

If you RTFA you'll see that Lewman has zero evidence for this assertion. The headline paints it as a statement of fact but in reality all Lewman knows is there are people who appear to be reading the source code and reporting bugs anonymously. That's it. They could be NSA/GCHQ moles. Or, more likely, they could be anonymity fans who like security audit work. They really have no idea.

Comment: Re:say it again (Score 1) 236

by IamTheRealMike (#47727705) Attached to: Latest Wikipedia Uproar Over 'Superprotection'

Part of this is the much-hated reference requirement -- all facts in a Wikipedia page must have an external source to back them up. This rule alone causes a huge amount of strife among those who don't understand

It causes a huge amount of strife because it's yet another policy that's easily manipulated by people with no common sense.

For a long time the article on Bitcoin stated outright that it was a ponzi scheme, even though Wikipedia's own article on Ponzi schemes had a list of requirements which Bitcoin obviously did not meet. Attempting to get this fixed was a kafkaesque nightmare due to someone camping on the page and immediately reverting any change that removed, or even just qualified, this statement. The reason: the statement had "citations", which turned out to be (a) someone's blog, and (b) an article in The Register, that well known bastion of reasoned and careful analysis.

Wikipedia is a project that manages to work in spite of the absurd management and crazy policies, because the idea of a global encyclopedia is such a compelling one. But it badly, badly, badly needs to be forked by people who find a way to run it better.

Comment: Re:Total BS (Score 1) 227

And your father's knowledge is broader and more accurate than this report's ..... because?

There was certainly a time when wage disparities were truly enormous, though not that big. But the entire premise of this story is that what we knew to be true just ten years ago is now out of date.

I suspect your father was giving you information that was once correct but no longer is.

Comment: Re:just ask carriers. (Score 1) 247

by kasperd (#47701493) Attached to: The IPv4 Internet Hiccups

because we couldn't possibly have good service from an ISP.

Don't most ISPs sell good service at a premium? I think that was the entire point of having poor service in the first place. The only other reason I could imagine would be to drive customers to the competitors, and that doesn't seem to make sense from a business point of view.

I have no imagination, so I have no idea what we might get in the future if we actually had the infrastructure to support it.

I can come up with a couple of additional usages for some /64s. One /64 could be used to harden your recursive DNS resolver against poisoning. The 16 bit transaction ID in DNS is way too small. The entropy you can get from randomizing port numbers helps a lot, but you will still only get a total of 32 bits of entropy that way. Some have gone to great lengths to squeeze extra entropy into a DNS request, for example by mixing lower case and upper case in the domain, but that doesn't give a lot of bits. If you allocate a /64 to the recursive DNS resolver, you can put 64 bits of entropy into the client IP, which instantly gives you more than a doubling of entropy almost for free.
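As a minimal sketch of that idea (the prefix and function name below are made up for illustration, not any resolver's real API): each outgoing query draws a random source address from the resolver's /64, so an attacker has to guess the address as well as the transaction ID and port.

```python
import ipaddress
import secrets

def random_resolver_source(prefix: str) -> ipaddress.IPv6Address:
    """Pick a random source address inside the resolver's prefix.

    With a /64 this adds up to 64 bits of entropy per query, on top
    of the 16-bit transaction ID and 16-bit randomized port.
    """
    net = ipaddress.IPv6Network(prefix)
    host_bits = 128 - net.prefixlen      # 64 free bits for a /64
    offset = secrets.randbelow(2 ** host_bits)
    return net[offset]

# Documentation prefix used purely as an example
addr = random_resolver_source("2001:db8:1:2::/64")
```

The resolver would then need to accept replies on the whole /64, e.g. by binding addresses on demand or using a socket option that allows receiving on a routed prefix.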

A modern OS is a multi-user system; imagine if each user could get their own IP address. You could allow users to use privileged port numbers on their own IP address, and all port numbers on that address would be protected from use by other users. You could do this by responding to neighbor discovery for as many IPs in your link prefix as you have users on the node, but a more secure and more efficient approach would be to route a prefix to each node.
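A rough sketch of the per-user idea (the UID-to-address mapping here is my own invention, not an existing OS feature): with a /64 routed to the node, each user ID maps to a distinct, stable address.

```python
import ipaddress

def address_for_uid(node_prefix: str, uid: int) -> ipaddress.IPv6Address:
    """Derive a stable per-user address from a prefix routed to this node.

    Each user then owns every port, privileged or not, on their own
    address, with no conflicts between users.
    """
    net = ipaddress.IPv6Network(node_prefix)
    if uid + 1 >= net.num_addresses:
        raise ValueError("uid does not fit in the prefix")
    # Skip index 0, the subnet-router anycast address
    return net[uid + 1]

address_for_uid("2001:db8:42:7::/64", 1000)  # → 2001:db8:42:7::3e9
```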

Comment: Re:IPv6 won't fix this problem (Score 1) 247

by kasperd (#47701435) Attached to: The IPv4 Internet Hiccups

a prefix that just got compressed might get split quickly, and vice versa

There is no need to combine routes if there are still free entries in the CAM. Once the CAM is full and another entry needs to be inserted, the pair which has been a candidate for combining the longest can then be merged. That algorithm would keep the number of updates down.

However, as the number of routes approaches the limit of what can be handled even with combined routes, the frequency of updates that combine and split entries will go up. They may already be doing this: some sources say the problem caused reduced performance, which would be consistent with such behavior.
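A toy version of the merge step (the table contents are invented, and the "oldest candidate first" selection is omitted; this just shows what combining two sibling prefixes into their supernet looks like):

```python
import ipaddress

def combine_one_pair(routes: dict) -> bool:
    """Merge one pair of sibling prefixes sharing a next hop into their
    common supernet, freeing one table entry. Returns True on a merge."""
    for net in list(routes):
        parent = net.supernet()
        low, high = parent.subnets()          # the two halves of the parent
        sibling = high if net == low else low
        if sibling in routes and routes[sibling] == routes[net]:
            hop = routes[net]
            del routes[net], routes[sibling]
            routes[parent] = hop
            return True
    return False

table = {
    ipaddress.ip_network("192.0.2.0/25"): "hop-A",
    ipaddress.ip_network("192.0.2.128/25"): "hop-A",
    ipaddress.ip_network("198.51.100.0/24"): "hop-B",
}
combine_one_pair(table)  # merges the two /25s into 192.0.2.0/24
```

Splitting is the reverse operation, performed when a more specific route with a different next hop arrives for half of a combined entry.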

Comment: Re:just ask carriers. (Score 1) 247

by kasperd (#47699217) Attached to: The IPv4 Internet Hiccups

all Comcast needed to do was write "56" in their config files rather than "60"...

One has got to wonder if that's how it happened. Did some admin arbitrarily decide to write 60 in a configuration file, where he could/should have written 56, and then that was how it was going to be? Or did a lot of bean counters get together and decide on a policy (possibly not even based on real data), which admins then had to implement without asking questions?

But that's not what we should be targeting. We should be targeting "enough for pretty much everybody", and "for the foreseeable future" -- including for any new, fun things that become possible because of easily-available address space.

Even in many areas where there is tough competition among ISPs, it is hard to find even one trying to capture those customers who want IPv6. That's how bad it looks today. And that's why I would happily take a /60. Hopefully once IPv6 is the norm (which it likely will be before the end of the decade), the ISPs will start competing on prefix lengths as well.

I can't yet imagine what I would use more than a /60 for. But if I get a /60, I might soon come up with ideas on how to use a /56. All it takes to get that competition among ISPs started is two people independently of each other coming up with something really cool you can do to put your entire /60 to use.

Comment: Re:IPv6 would make the problem worse (Score 1) 247

by kasperd (#47697957) Attached to: The IPv4 Internet Hiccups

Next, IPv6 addresses are of course 4 times larger than IPv4 addresses. Even if your IPv6 routing table has 5 times fewer entries, you're not getting a 5 times saving in memory. You're only getting a 5/4 times saving or tables that are 80% of the IPv4 - nowhere near as dramatic.

In IPv4 all 32 bits are used for routing, though on the backbone you tend to only accept /24s. In IPv6 the first 64 bits are used for routing, though on the backbone you tend to only accept /48s.

Either way, you only need twice as many bits in the CAM to handle an IPv6 route compared to IPv4. So what you call a 20% saving is more like a 60% saving. The picture is a bit more complicated, because two CAM entries at half the size are not the same as one at the full size, so you may have to decide at design time how you are going to use that CAM.
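For concreteness, the arithmetic behind that figure (the route count is an assumed round number, not a measured one): five times fewer entries at twice the width leaves 2/5 of the CAM bits, a 60% saving.

```python
# IPv4: route on all 32 bits; IPv6: 5x fewer routes, 64 routing bits.
ipv4_routes, ipv4_bits = 500_000, 32
ipv6_routes, ipv6_bits = ipv4_routes // 5, 64

ipv4_cam = ipv4_routes * ipv4_bits
ipv6_cam = ipv6_routes * ipv6_bits
saving = 1 - ipv6_cam / ipv4_cam
print(f"CAM saving: {saving:.0%}")  # → CAM saving: 60%
```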

Routing tables growing with the size of the network, in terms of # of entries - even if not at all fragmented.

I'd love to take part in solving that problem. Any realistic solution is going to start with a migration to IPv6. And I don't see how we could expect the solution to be deployed any faster, so if we start now, we could probably have it in production by 2040.

it is possible that IPv6 is actually too small to be able to solve routing scalability.

That algorithm has a major drawback. The address of a node depends on which links are up and which are not. You'd have to renumber your networks and update DNS every time a link changing somewhere causes your address to change. If we assume that issue can be fixed, it doesn't really imply that addresses would have to be larger.

The algorithm in the paper assigns two identifiers to each node. The first one could very well be the IPv6 address assigned to the node. The second is computed from the first and the structure of the network. However, their routing looks awfully similar to source routing, so really the solution might just be to make source routing work.

I can think of a couple of other reasons to consider IPv6 addresses to be too short. That paper isn't one.

Teredo and 6to4 are two "automatic" tunnel protocols. Both embed IPv4 addresses inside IPv6 addresses. Due to the use of NAT, Teredo needs to embed two IPv4 addresses and a port number inside the IPv6 address. That doesn't leave room for a site-level-aggregator or host part. If you wanted one unified protocol which could replace both Teredo and 6to4, you'd need at least 192 bits in the IPv6 address.
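A small decoder makes it concrete how much IPv4 state a Teredo address has to carry (field layout per RFC 4380; the example address is a commonly cited one, with the port and client IPv4 stored XOR-inverted):

```python
import ipaddress
import struct

def parse_teredo(addr: str):
    """Unpack the fields Teredo embeds in a 2001:0::/32 address:
    server IPv4, flags, and the client's NAT-mapped UDP port and
    public IPv4 (the latter two obfuscated by XOR with all-ones)."""
    raw = ipaddress.IPv6Address(addr).packed
    prefix, server, flags, port, client = struct.unpack("!I4sHH4s", raw)
    assert prefix == 0x20010000, "not a Teredo address"
    return (
        str(ipaddress.IPv4Address(server)),
        flags,
        port ^ 0xFFFF,
        str(ipaddress.IPv4Address(bytes(b ^ 0xFF for b in client))),
    )

parse_teredo("2001:0:4136:e378:8000:63bf:3fff:fdd2")
# → ('65.54.227.120', 32768, 40000, '192.0.2.45')
```

With 32 bits of prefix, two embedded IPv4 addresses, flags, and a port, all 128 bits are spoken for, which is why a protocol that also wanted 6to4-style aggregation bits would need a longer address.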

After IPv6 showed up, people realized that it is sometimes convenient to embed cryptographic information inside the IP address. That was unthinkable with IPv4. With IPv6 it is doable, but you have to choose cryptographic primitives that are not exactly state of the art, due to 128 bits being a bit short for cryptographic values, and not all of them even being available for that purpose.
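To illustrate the idea, here is a much-simplified sketch in the spirit of RFC 3972's cryptographically generated addresses. This is not the real CGA algorithm (which also hashes a modifier, collision count, and a Sec parameter); it just shows how little room the 64-bit interface ID leaves for a hash output.

```python
import hashlib
import ipaddress

def cga_like_iid(prefix: str, pubkey: bytes) -> ipaddress.IPv6Address:
    """Bind an address to a public key by hashing the key into the
    64-bit interface ID. Only 62 usable bits of hash survive, which
    is why such schemes can't use full-strength hash outputs."""
    net = ipaddress.IPv6Network(prefix)
    digest = hashlib.sha1(net.network_address.packed[:8] + pubkey).digest()
    iid = int.from_bytes(digest[:8], "big")
    iid &= ~(0x3 << 56)  # clear the u and g bits, as RFC 3972 does
    return net[iid]
```

A verifier can recompute the hash from the claimed public key and prefix, and check it matches the address, without any PKI.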

Comment: Re:IPv6 would make the problem worse (Score 1) 247

by kasperd (#47691329) Attached to: The IPv4 Internet Hiccups

You won't *keep* that nice clean space. The same processes that led to IPv4 fragmentation, ex space, will start to affect IPv6

With address shortage being the main reason for fragmentation, that doesn't sound so likely.


This will not exactly lead to growth in the number of announcements, but it won't lead to a reduction either. Giving incentives to renumber after a merger may help a bit. At least there should be enough addresses that the company can pick which of the two blocks it wants to renumber into, and that block can be extended as needed.

ASes eventually running out of bits in their prefix

Bits are set aside to allow them to grow - for now at least.

that's only 16 in a /48, a lot but not impossible to exhaust either, a /56 would be even easier to exhaust

Don't all the RIRs hand out addresses in /32 or shorter blocks?

Ok, you've got a 5 fold linear reduction compared to IPv4. However it still doesn't fix the problem that current Internet routing leads to O(N) routing tables at each AS

That is true. This problem is going to get even worse if we want end user sites to have access to dual homing. Fixing this is going to require some fundamental change to how routing is done.

But if IPv6 gets deployed soon, the reduction in routing table size should buy us some time that can be used to come up with a more scalable solution, one which allows every site to be dual homed. But of course things will have to break first if ISPs keep waiting for breakage to happen before they start deploying scalable solutions.

That 5x linear reduction is, ultimately, a barely noticeable blip

If the tables grow with each generation of hardware, a 5x reduction can last a while. Not forever, but long enough that a long term solution can be deployed, if ISPs want to.

IPv6 doesn't fix routing table growth problems

Not permanently, but IPv6 can help now, and IPv4 can be expected to get worse as allocations get split and traded. And throwing bigger hardware at the problem may help with this one issue regarding IPv4, but there are other problems with IPv4.
