Comment Re:No problem (Score 1) 761

Just call it in as a bomb scare: somebody has attached a weird device with antennas and whatever to my car and I think it could be a bomb. Ask their advice on what to do.

They will either realize there is a warrant and tell you it is safe, or the bomb squad won't talk to the DEA and will do the destruction for you.

Seriously, I'd be a little freaked out by some random wireless device being attached to my car without knowing what it was. Let the experts decide whether it is safe.

Comment Fatigue (Score 1) 255

I'd put a bet on driver fatigue being the main cause.

Beekeeping is mostly a daytime job, except when you need to move a hive. If you close down a hive during the daytime, lots of bees are still out flying and you take losses.

The beekeeper is working outside his normal shift: he drives for hours to reach the bees, spends half the night closing the hives down and loading them on the truck, and then has to drive for hours to the new site, hopefully before the sun gets hot enough to cook the closed-down hives, which can't really vent themselves.

The beekeeper also isn't a professional driver and doesn't know the roads as well as one, who would be more likely to be familiar with the roadworks. Tired and surprised, he doesn't react appropriately and crashes.

Comment It's happened in other places. (Score 1) 554

Australia stuffed around with its daylight saving dates for the Olympics. Most distributions pushed updates with enough lead time that it wasn't a problem. I saw a few unpatched Windows servers miss the change. A few TV stations didn't make the update in time, but that happens with normal DST changes too. Some Outlook calendar entries misbehaved.

Postgres is always unhappy with countries that stuff around with this, since it causes extra entries in the timezone lookup table.

Mostly it works, mostly it doesn't matter, and it's fixed manually in a day or two when somebody notices.

Comment Re:Being a mathematics undergraduate... (Score 2) 680

I can attest that "true" math is very removed from computation. The computational classes are all regarded as the "easy" classes. This is in contrast to the "hard" classes, real analysis and abstract algebra.

I'm not sure what you mean by computational classes, but the computational mathematics classes I did were some of the hardest I've done. There was a reasonably large choice of "tools" for solving a problem, but proving convergence and error bounds was really hard work. I never quite got the "art" of it, needing a lot of trial and error (with pages of wasted work) to get an answer. Some of the other students had a better eye for it and could make the educated guess about which path was going to give an answer.

Computationally it's easy to get an answer to many difficult problems, but it's hard to work out how good the answer is.
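For what it's worth, the usual practical way to get a handle on "how good is the answer" is to redo the computation at a finer step and compare. A minimal sketch (my own example, not from the coursework described above) using trapezoidal integration:

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n intervals."""
    h = (b - a) / n
    return h * (f(a) / 2 + sum(f(a + i * h) for i in range(1, n)) + f(b) / 2)

# Integrate sin(x) on [0, pi]; the exact answer is 2.
coarse = trapezoid(math.sin, 0.0, math.pi, 64)
fine = trapezoid(math.sin, 0.0, math.pi, 128)

# The trapezoid error shrinks ~4x when the step halves, so the difference
# between the two runs gives a Richardson-style estimate of the remaining error.
error_estimate = abs(fine - coarse) / 3
print(fine, error_estimate)
```

Proving a rigorous bound is the hard part the comment alludes to; this step-halving trick only estimates it.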

Comment Re:he's right (Score 2) 680

I very much disagree. Teaching is a specialty in its own right, and a good teacher can teach almost any subject given appropriate support and resources. Of course some competency in the subject is necessary to provide insight when a lesson isn't hitting the mark, but you don't need to be an expert.

I'll give an example: during high school I had two physics teachers. One was pretty talented at physics and had been teaching it for years. The other hadn't taught physics much before and wasn't that strong (if he had sat the exam, some students would have got better marks than him).

The first taught a pretty neat syllabus. He did lots of well set out problem solving examples but the course was pretty dry. I think most students just got recipes out of the course and no real insight.

The second teacher followed the same syllabus, but his examples weren't as well set out and he didn't necessarily get the right answer. Because he wasn't so confident in his answers, he demonstrated a lot of checking techniques, such as estimation and general sanity checks (the ball doesn't roll up the inclined slope). His efforts to verify his answers, clean up his working, and fix his mistakes taught more students about problem solving, physics, and maths techniques than the rest of the course.

It's a shame I didn't realise this at the time. It must have taken real guts and a lot of homework to get up and teach that course. A brilliant teacher, but not a brilliant physicist.

Comment Re:Not the problem, not the solution! (Score 2) 92

I don't think most operators could do a better job. Every ISP I've dealt with has been pretty anal about what routes they accept from me.

This incident happened at the large ISP level and currently they don't have the information required to do better filtering. In this case China Telecom might legitimately be the shortest path for some of this traffic some of the time and there is no way to tell otherwise.

The PKI-signed advertisements will provide trust that I own the resources, and would probably solve most of the accidental routing incidents, e.g. somebody fat-fingers a route on some "core" router and it starts advertising it under its own AS. The rest of the Internet will ignore that route because that AS doesn't own it.

What I don't see it solving is the malicious case where the attacker strips the AS path and re-advertises the route, e.g. the real path is ME-A-B-C-D-BADGUY, but BADGUY just advertises ME-BADGUY, so anybody closer will send traffic his way. Nobody can tell the difference, because I've signed the advertisement, and they won't know that I'm not actually connected to BADGUY...

unless I sign that my next hop is A, A then signs that his next hop is B, and so on. I could imagine that getting very expensive in the middle, where the tier-1 carriers would have to sign every route multiple times, once for each peer they connect to. Ouch.
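A toy sketch of the gap (all AS names and the prefix are hypothetical): origin validation only checks the AS at the end of the path, which the attacker leaves intact, so the stripped path still validates and, being shorter, wins best-path selection:

```python
# Toy model of origin validation: a signed mapping from prefix to the
# legitimate origin AS, checked against the last AS in the path.
ROA = {"203.0.113.0/24": "AS-ME"}

def origin_valid(prefix, as_path):
    # Accept the route if the originating AS (last element) owns the prefix.
    return ROA.get(prefix) == as_path[-1]

honest = ("203.0.113.0/24", ["AS-D", "AS-C", "AS-B", "AS-A", "AS-ME"])
forged = ("203.0.113.0/24", ["AS-BADGUY", "AS-ME"])  # stripped, re-advertised

print(origin_valid(*honest))  # validates
print(origin_valid(*forged))  # also validates, and the shorter path wins
```

Path validation (every hop signing its neighbour, as described above) is what would catch the forged adjacency, at the signing cost the comment worries about.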

Comment Re:Whatever (Score 1) 460

5. If only the people that designed IPv6 "by committee" had thought a bit about the real world and technology, IPv6 would have been much easier to implement. 128-bit addresses are the *wrong* size. They should have set the size at 64 bits. 64-bit values are now natively manipulated by much of computer hardware, so just as the new protocol came into wider use, it would be conveniently supported by many algorithms relying on hardware. Now go build a radix tree for a routing table of 128-bit IPv6 addresses - let's see how well that works.

6. IPv6 in default implementation wants to use your MAC address as part of the IP. I don't know, perhaps a few of those big companies that like tracking people so much may be interested in that. I am not.

In conclusion - I'll wait till stuff begins crashing down around us. Maybe then someone will come up with a better solution than the stillborn, poorly designed IPv6 we have now.

I think the 64-bit size was planned for: the network part of the address is 64 bits. Anything doing routing isn't going to concern itself with the host part, and anything doing the last-hop processing isn't going to do much with the network part, doing its lookups on the host part instead.
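The 64/64 split is easy to see with Python's standard ipaddress module (the address below is from the 2001:db8::/32 documentation range, chosen for illustration):

```python
import ipaddress

addr = ipaddress.IPv6Address("2001:db8:85a3:8d3:1319:8a2e:370:7344")
value = int(addr)

network_part = value >> 64            # what a router's lookup keys on
host_part = value & ((1 << 64) - 1)   # what last-hop delivery keys on

print(hex(network_part))  # 0x20010db885a308d3
print(hex(host_part))     # 0x13198a2e03707344
```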

I agree that the MAC-based host address is scary, but I wonder how much of a signature they already have from other properties of my computer. I also wonder how long before an IPv6 address is used to try to prove that a specific computer generated some traffic.

Comment Re:MAC Address? (Score 1) 460

Why is IPv6 not based on MAC addresses? I've never understood this. Every piece of electronics capable of connecting to a network already has at least one unique hardware ID. Why do we need a new one?

Is there a reason not to just use this number? Or have I misunderstood, and this actually IS the plan?

A couple of reasons:

V6 addressing often is based on the MAC address (for the host part) when using the auto-addressing methods.

Some network devices don't have MAC addresses, e.g. a serial port running PPP.

Ethernet MAC addresses aren't necessarily unique; I've had to debug a MAC address collision at a medium-sized site. I think vendors are better now, but it probably still happens.

It makes sense to have a stable address even if the hardware has to be changed for some reason, e.g. the router stays on blah::1.

Sometimes you want multiple addresses, maybe for virtual Ethernet interfaces, and only one of them can be the MAC-derived one.

I think the plan is that the network half of the address is allocated as hierarchically as possible, hopefully enabling route consolidation (are we dreaming?). The host part will be allocated based on the MAC address, except when it isn't.
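For the MAC-based case, the auto-addressing in question is modified EUI-64 (RFC 4291, Appendix A); a minimal sketch of the host-part derivation, with an illustrative MAC:

```python
# Modified EUI-64: insert ff:fe in the middle of the 48-bit MAC and flip
# the universal/local bit to form the 64-bit IPv6 host part.
def eui64_host_part(mac: str) -> str:
    octets = [int(x, 16) for x in mac.split(":")]
    octets[0] ^= 0x02  # flip the universal/local bit of the first octet
    eui = octets[:3] + [0xFF, 0xFE] + octets[3:]
    # Pack pairs of octets into the four 16-bit groups of the host part.
    return ":".join(f"{eui[i] << 8 | eui[i + 1]:x}" for i in range(0, 8, 2))

print(eui64_host_part("00:1a:2b:3c:4d:5e"))  # 21a:2bff:fe3c:4d5e
```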

Comment Re:Misread the RFC (Score 1) 123

So then I guess everybody should just skip slow start? If Google and Microsoft can, and are having tremendous results, why shouldn't everybody? Heck, why is slow start even still around? It should be tossed to the wayside like a ColecoVision if it's optional and gets in the way of your performance...

Slow start probably should be skipped for most well-tuned websites. Most HTTP connections are too short-lived to ever ramp up to the available bandwidth or saturate queues, so why use an algorithm designed to keep queues small while trying to use bandwidth efficiently?

I think the slow start concept would still be useful for bulk transfer services. If you are serving a couple of multi-gigabyte ISO images, you probably don't care about a bit of round-trip latency if it means you don't clobber router queues downstream. I could imagine congestion collapse being more likely with that kind of load.

BitTorrent should probably use slow start. Often the competition for a BitTorrent connection is other connections for the same torrent; if we start too fast, we could impact too many of those connections, causing them to back off and hurting overall performance.

I'd guess that the magic numbers picked for slow start when the RFC was written no longer apply. RTTs are shorter; queues are probably longer (near the edges anyway) but shorter in terms of time, i.e. less consequence for a dropped packet, less likelihood of filling a queue, and less of a performance hit if we do fill one.
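As a rough illustration of why the initial window matters so much for short transfers (the segment counts and window sizes below are assumptions for the sketch, not measurements), counting RTT rounds while ignoring losses and delayed ACKs:

```python
# Count the RTT rounds slow start needs to push a short response.
def rtts_to_send(total_segments: int, initial_cwnd: int) -> int:
    cwnd, sent, rounds = initial_cwnd, 0, 0
    while sent < total_segments:
        sent += cwnd   # one congestion window of segments per round trip
        cwnd *= 2      # slow start doubles the window each RTT
        rounds += 1
    return rounds

# A ~45 KB page is roughly 30 segments of 1460 bytes:
print(rtts_to_send(30, 3))   # traditional small initial window
print(rtts_to_send(30, 10))  # a larger initial window, fewer round trips
```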

Google's choice of initial window size would have been well considered. If Google's tuning hurt network performance, it would be causing packet loss on their own connections, pushing latency up due to retries.

Similarly, Microsoft's initial window size seems a bit ridiculous, so I'd bet it is one of:
1) A mistake that is causing overall lower performance for their users.
2) Coarse tuning that helps the front page (so helps in general) but lowers performance for bigger pages.
3) Some sort of window-size caching, with that number cached from previous connections.
I did note that there were no retransmissions in the MS flow, so the last doesn't seem like a bad guess. They also don't support SACK (WTF), which would slow things down if they lost packets.

Comment Re:I don't think the authors understand cryptograp (Score 1) 247

Your step two is flawed. VortexCortex's steps are accurate.

In your step 2, Google thinks they are sending you Google's certificate, but they are really sending it to the MITM. Since it was the MITM who started that connection, the MITM builds the session keys and so can decrypt the session.

In your step 3, they don't need Google's private keys; they can create their own, and because they have a CA trusted by most people, they can sign certificates that most people will trust. (I mostly use Firefox, which ships with the CNNIC CA installed.)

This sort of MITM attack is used all the time by filtering gateways, "McAfee Web Gateway" amongst many others. Since the filtered company controls its desktop operating environment, it can install its own CA. The gateway filter then creates certificates pretending to be the endpoint and makes an outbound connection pretending to be the client.

The only real way for SSL to solve the man-in-the-middle problem is client-side certificates issued by the server's owners, but then you have a distribution problem. And if the server also trusts the CA in the middle, the gateway can intercept both directions.
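A toy model of why the filtering-gateway trick works (all names here are hypothetical): the client only checks that some CA in its trust store signed a certificate for the requested name, so a locally installed gateway CA is indistinguishable from the real thing:

```python
# The corporate desktop build has had the gateway's CA pushed into it.
trust_store = {"HonestCA", "GatewayFilterCA"}

def verify(cert: dict, hostname: str) -> bool:
    # Accept any certificate for the hostname signed by a trusted CA.
    return cert["issuer"] in trust_store and cert["subject"] == hostname

real_cert = {"subject": "www.example.com", "issuer": "HonestCA"}
mitm_cert = {"subject": "www.example.com", "issuer": "GatewayFilterCA"}

print(verify(real_cert, "www.example.com"))  # True
print(verify(mitm_cert, "www.example.com"))  # True - indistinguishable
```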

Comment Re:Clueless (Score 1) 549

You don't have to agree to the contract, but it is the only thing giving you a copyright license to continue viewing their content.

Probably the best example of a copyright license for publicly viewable content is the GPL. You don't have to agree to the GPL, but you lose a lot of rights if you don't.

Comment Re:Already Run Out (Score 1) 442

The limited address space is the real problem, and the shortage has been leading to networking issues for years. IPv6's extra space, the minimum subnet of /64, and the large RFC 1918-equivalent space will make things lots easier.

How often has somebody allocated a /30 for a routing/firewall segment that then needed HA some years later (two real addresses plus a floating one)? The /30 is then wasted, because it's too small to be useful without making a mess of the routing.
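The /30 arithmetic is easy to check with Python's ipaddress module (prefix from the documentation range, chosen for illustration):

```python
import ipaddress

# Network and broadcast addresses leave only two usable hosts in a /30,
# but HA needs three: two real addresses plus the floating one.
net = ipaddress.ip_network("192.0.2.0/30")
usable = list(net.hosts())

print(usable)            # [IPv4Address('192.0.2.1'), IPv4Address('192.0.2.2')]
print(len(usable) >= 3)  # False
```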

In around two years it will be impossible to get portable address space allocated. If I have the next big killer application, I'll be tied to whichever single ISP wants to rent some address space to me. If they have routing problems, I won't be able to fail my routes over to somebody else.

I've seen a client fill a class B network. They built their addressing scheme many years ago, when Bs were still being given out, and in the years since they grew in ways they hadn't predicted. Some of their server farms turned out much bigger, and some of their locations turned out smaller than planned, wasting space. Basically they ended up with a mess that was difficult to clean up. IPv6 won't get to that sort of fragmentation, since /64s will always be useful.

I've seen many situations where large organisations use 10/8 for their internal networks and then merge, split, or form partnerships that require private links between the organisations. I've seen situations where application servers that needed to talk to each other were on the same 10.x.y.z address, and the NAT nightmare between them sucked. Every device had a different address depending on who you talked to.

People talk about how IBM should give back their class A. Without that A there would be a NAT nightmare between everybody IBM connects to for their management services. Imagine how many 10/8 networks IBM's NOC talks to, and imagine what it would be like if IBM were also on 10/8.

I can imagine all the big companies moving to public v6 address space for their private networks but never advertising it to the Internet. The outbound traffic will be NATed/proxied at the gateway. This will make the corporate merge/split/peer cycle easier to deal with.

Cellphones

Nokia Trades Symbian For MeeGo In N-Series Smartphones 184

An anonymous reader writes "Nokia announced that moving forward, MeeGo would be the default operating system in the N series of smartphones (original Reuters report). Symbian will still be used in low-end devices from Nokia, Samsung, and Sony Ericsson. The move to MeeGo is a demonstration of support for the open source mobile OS, but considering the handset user experience hasn't been rolled out and likely won't be rolled out in time for its vague June deadline outlined at MeeGo.com, could the decision be premature?"

Comment Broken by design. (Score 2, Insightful) 154

As seems typical of this government, they haven't thought through the consequences of their laws (or proposed laws). A good law should:
1) Make people feel guilty if they break it (not applicable in this case, because it is a proscriptive law aimed at ISPs).
2) Solve a problem. In this case it will just lead to more offshore services, encryption, and obfuscation in existing communications. That just raises the bar so that a warranted tap will no longer be likely to provide anything useful.
3) Hurt the bad guys more than the good guys. This just raises costs for everybody, and depending on what the ISPs need to do to collect this data, it may affect performance.
4) Be technically possible.

I've got a plan with a static IP, so my ISP doesn't do any transparent proxying and doesn't automatically get my URL history. I'm running my own mail server, so they don't get my email information. I trust them because I know they couldn't afford to be bothered.

So the ISP is going to have to start doing deep packet inspection on all my traffic to pull out these bits of information to log. That starts to get expensive and intrusive to their operations and my bill.

If we start to use more TLS on our smtp connections then they just won't have the information to log.

If they are logging URLs, I'd be tempted to do my backups with encrypted data in the GET request: it can't be compressed and can't be used. This sort of expensive-noise attack could be implemented on a lot of websites. Say Google, with their stance against the Australian government's stupidity, put more hash codes in their URLs; it would make the hard drive manufacturers rich trying to supply the ISPs fast enough.
