Comment: Re:IPv6's day will come, but... (Score 1) 390

by FireFury03 (#49523063) Attached to: Why the Journey To IPv6 Is Still the Road Less Traveled

So, the designers of IPv6 could not conceive that somebody could have less than 2^64 devices and still want to put them in separate networks?

Networks are allocated as /64 chunks because it makes autoconfiguration easy. Newcomers often argue that this is a huge waste, but 128 bits gives you so many addresses that you can stand to do a bit of wasting in order to keep things simple. Generally the "what a waste" crowd severely underestimate just how big 128 bits is.
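
To put a number on just how big that is, here's a quick back-of-the-envelope (the population figure is a rough assumption):

```python
# Back-of-the-envelope: how many /64 networks does a 128-bit address space hold?
total_slash64s = 2 ** 64             # the top 64 bits select the network
world_population = 8_000_000_000     # rough assumed figure

print(f"/64 networks available:  {total_slash64s:.3e}")
print(f"/64 networks per person: {total_slash64s / world_population:.3e}")
# ~1.8e19 networks in total, ~2.3e9 networks per person -
# and each of those networks holds 2^64 individual addresses.
```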

So now my ISP will have a say in how many internal networks I have?

Yes and no. You _can_ allocate networks smaller than a /64, but you can't use SLAAC on such networks. That means you're stuck manually configuring devices or using DHCPv6. I believe Android has no support for DHCPv6, so you're probably very restricted if you choose to use a nonstandard network size.
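
For context on why SLAAC needs the full 64 bits: the host forms its address by appending a 64-bit interface identifier (classically derived from the MAC via modified EUI-64, though modern stacks usually randomise it instead) to the advertised /64 prefix. A rough sketch of the classic derivation, using documentation-range values:

```python
import ipaddress

def eui64_interface_id(mac: str) -> bytes:
    """Modified EUI-64 (RFC 4291 appendix A): split the 48-bit MAC,
    insert ff:fe in the middle, and flip the universal/local bit."""
    b = bytes.fromhex(mac.replace(":", ""))
    return bytes([b[0] ^ 0x02]) + b[1:3] + b"\xff\xfe" + b[3:6]

# Combine an advertised /64 prefix with the interface ID to get the address.
prefix = ipaddress.IPv6Network("2001:db8:1234:5678::/64")   # example prefix
iid = eui64_interface_id("00:11:22:33:44:55")               # example MAC
addr = ipaddress.IPv6Address(int(prefix.network_address) | int.from_bytes(iid, "big"))
print(addr)   # 2001:db8:1234:5678:211:22ff:fe33:4455
```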

And this is supposed to be better than IPV4 with NAT?

Oddly enough, yes - ISPs really shouldn't be restricting your internal infrastructure. If your ISP is being a dick about this, then the answer is pretty obvious - switch to another ISP; it isn't as if ISPs are thin on the ground.

Comment: Re:IPv6 and Rust: overhyped and unwanted! (Score 3, Insightful) 390

by FireFury03 (#49518233) Attached to: Why the Journey To IPv6 Is Still the Road Less Traveled

People who think they need end-to-end connectivity for everything don't understand networking. It's not only not required, it is undesirable in most cases.

It's undesirable in _some_ cases and absolutely required in others. So if you have a single IP address and have to NAT everything, you win in the "some cases" situations and lose in the "others" (even worse with CGNAT). If you get rid of NAT and put in a stateful firewall instead, you get the best of both worlds and can choose whichever suits the situation at hand.

Comment: Re:IPv6 and Rust: overhyped and unwanted! (Score 1) 390

by FireFury03 (#49518221) Attached to: Why the Journey To IPv6 Is Still the Road Less Traveled

As someone who's not really a networking guy, this!

I like the extra layer NAT provides. It's no substitute for a firewall of course, but having your internal boxes not publicly addressable at all adds an extra layer of warm and fuzzy.

Is this attitude wrong? Probably. But it is also pervasive.

That attitude is definitely wrong. The warm fuzziness you're currently feeling is false security - there are lots of ways to trick a NAT into giving access to internal machines that you think are unaddressable. What you need is a stateful firewall - that gives you real security without breaking things the way NAT does.

Comment: Re:IPv6's day will come, but... (Score 1) 390

by FireFury03 (#49518207) Attached to: Why the Journey To IPv6 Is Still the Road Less Traveled

WTF do you need a /48 for? A /64 isn't big enough for you?

A /64 is only big enough for a single network. /48s were quite common for a while, then the recommendation was for ISPs to issue a /56 to end users. There is no specific recommendation these days, but you certainly want more than a /64 if you can get it. I'd argue that a /60 is a pretty reasonable size for a consumer-grade ISP to hand out... maybe a /62 at a push, but that's starting to feel unreasonably stingy.
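
To make those sizes concrete, a quick sketch with Python's ipaddress module (the prefix is from the documentation range):

```python
import ipaddress

# What the common delegation sizes mean in terms of /64 subnets.
for plen in (48, 56, 60, 62):
    print(f"/{plen} -> {2 ** (64 - plen)} /64 networks")
# /48 -> 65536, /56 -> 256, /60 -> 16, /62 -> 4

# The ipaddress module will enumerate them for you:
net = ipaddress.IPv6Network("2001:db8:abcd:10::/60")
print(list(net.subnets(new_prefix=64))[:2])
```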

Comment: Re: Waiting for the killer app ... (Score 2) 390

by FireFury03 (#49517841) Attached to: Why the Journey To IPv6 Is Still the Road Less Traveled

IPv6 would help both enormously.

In the long term, yes. In the short term, going offline for the 93.69% of their users who don't have IPv6 yet would certainly be seen by most as a completely dickish move - I'm pretty sure their investors would be upset, for one thing.

Lower latency on routing means faster responses.

How does IPv6 yield lower latency? If anything, the latency on IPv6 is often slightly higher than IPv4 owing to the prevalence of IPv6-over-IPv4 tunnels where native IPv6 interlinks aren't available, along with larger headers slightly increasing the latency of cut-through routing.
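
The header overhead is easy to quantify: a minimal IPv4 header is 20 bytes against IPv6's fixed 40, so a cut-through forwarder waits for 20 extra bytes per hop before it can start transmitting. A rough estimate:

```python
# Rough estimate of the extra per-hop delay from the bigger IPv6 header
# under cut-through forwarding, where the forwarder must receive the
# whole header before it can start transmitting.
IPV4_HDR = 20   # bytes, minimal IPv4 header (no options)
IPV6_HDR = 40   # bytes, fixed IPv6 header

for gbps in (1, 10, 100):
    extra_ns = (IPV6_HDR - IPV4_HDR) * 8 / gbps   # bits / (Gbit/s) = ns
    print(f"{gbps:>3} Gbit/s link: +{extra_ns:.1f} ns per hop")
# 1 Gbit/s: +160.0 ns, 10 Gbit/s: +16.0 ns, 100 Gbit/s: +1.6 ns
```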

IP Mobility means users can move between ISPs without posts breaking, losing responses to queries, losing hangout or other chat service connections, or having to continually re-authenticate.

Does anyone actually implement IP mobility? It requires support from your ISP, and I've not heard anything about any ISPs implementing it.

Autoconfiguration means both can add servers just by switching the new machines on.

DHCP does pretty much the same under IPv4, so I can't see this being a boon to Google/Facebook. (TBH I wouldn't be surprised if their infrastructure is too complex for any of these protocols - they've probably got some home-baked protocol for doing that stuff.)

Because IPv4 has no native security, it's vulnerable to a much wider range of attacks and there's nothing the vendors can do about them.

So no different from IPv6 then - both protocols have IPsec support (I think it's mandatory for IPv6 whereas the IPv4 version is an optional backport, but all major OSes support it in both cases, so that's neither here nor there). However, IPsec use is currently pretty much reserved for VPNs - you can do ad-hoc IPsec but no one does. About the only thing you get from IPv6 is that IP addresses are much sparser, so scanning/attacking by picking addresses at random isn't effective.
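
On that last point, the sparseness really does kill random scanning; a quick sanity check (the probe rate is an assumption):

```python
# Sanity check on address sparseness: brute-force scanning one /64.
addresses = 2 ** 64
probes_per_second = 1_000_000        # assumed, fairly aggressive rate

years = addresses / probes_per_second / (365.25 * 24 * 3600)
print(f"~{years:,.0f} years to sweep a single /64")   # ~584,542 years
```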

Comment: Re:price? (Score 1) 328

Whilst CFLs worked as a stop-gap until LED lighting became feasible, I do wonder if they have done long-term harm to people's acceptance of efficient lighting - for a long time, "energy efficient lighting" is going to be associated with "takes 5 minutes to get bright enough to see", thanks to CFLs...

That said, I might miss CFLs in my bedside lights if I ever have to replace them with LEDs - that's the one place where a slow start-up is quite nice!

Comment: Re:well (Score 1) 418

Yeah, I never understood that - why try to recover the clock signal from the data stream? If I were designing it, I would have my DAC monitor the stream to calculate what the clock signal is supposed to be, then generate my own damn clock signal.

My guess is:

If you recover the clock from the stream, you just need to roughly control the motor (CD) RPM and stream what you read. If you run your own clock, you need a buffer, and then you either have to dynamically tweak the motor speed based on how fast the buffer is filling/draining, or read the CD a bit too fast and stop/resume every so often. Clock recovery sounds much simpler to me.
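
To illustrate why the local-clock route is fiddlier, here's a toy sketch of the kind of buffer-level feedback loop you'd need (purely illustrative; all names and constants are invented, not from any real drive):

```python
# Toy sketch of the "run your own clock" alternative: a feedback loop
# that nudges the CD motor speed to keep a playout buffer half full.
TARGET_FILL = 0.5    # aim to keep the buffer half full
GAIN = 0.1           # proportional gain for the speed correction

def adjust_motor_speed(nominal_rpm: float, buffer_fill: float) -> float:
    """Buffer draining (fill < target) -> spin faster to read ahead;
    buffer filling up (fill > target) -> spin slower."""
    error = TARGET_FILL - buffer_fill
    return nominal_rpm * (1.0 + GAIN * error)

print(adjust_motor_speed(500, 0.30))   # buffer low  -> 510 rpm
print(adjust_motor_speed(500, 0.70))   # buffer high -> ~490 rpm
```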

Comment: Line of sight? (Score 3, Interesting) 123

Will it have the same line-of-sight limitations as current satellite Internet? I'm in Seattle, and with providers like HughesNet you need a very good line of sight to the south to get service. IIRC, where I used to work we had the dish pointed only 24 degrees above the horizon.

These sats are going into LEO, not GEO, so their position in the sky won't be fixed. I imagine you'll use a phased-array antenna to track them. The good points: lower latency, and no requirement to see the southern horizon specifically. The bad point: you'll need a view of a bigger chunk of the sky to avoid signal dropouts as the satellites move - how big a chunk depends on how many satellites they have up there (and therefore how many are above the horizon at the same time). If they have enough satellites, it may work out better for you.
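
A rough feel for the geometry: a satellite on a shell of radius R+h is above your horizon when it sits in the spherical cap whose half-angle theta satisfies cos(theta) = R/(R+h). A hedged back-of-the-envelope (the altitude and fleet size are illustrative assumptions, not the actual planned figures):

```python
# Rough estimate of how many satellites from a uniformly spread LEO
# constellation sit above the horizon at once. Real constellations
# aren't uniform, and usable links need some elevation margin.
R = 6371.0       # Earth radius, km
h = 550.0        # assumed orbital altitude, km
n_sats = 4000    # assumed constellation size

# Fraction of the shell visible = spherical cap area / full sphere.
cap_fraction = (1 - R / (R + h)) / 2
print(f"visible sky fraction: {cap_fraction:.1%}")           # ~4.0%
print(f"satellites in view:   {n_sats * cap_fraction:.0f}")  # ~159
```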

Comment: Re:Malware (Score 1) 181

by FireFury03 (#48755437) Attached to: Inside Cryptowall 2.0 Ransomware

If a program needs to look at stuff in other file structures then give it read access

Great! $malware got read access to your bank details.

You want it to be able to write to files in those other directories, fine: it reads in a file it isn't allowed to overwrite or change, and then saves its own copy that it can molest in whatever way it wants.

So now, instead of having a single copy of the file, you have a separate copy saved by each application that has been used to process it - creating a mountain of almost-identical files that the user has to keep track of is not a user-friendly way of doing things.

Better is to have a versioned filesystem - each time a file is changed (by any application!) the delta is saved and the filesystem keeps the old data hidden away. Most of the time everything behaves as normal - you have one copy of a file, no matter how many times it is edited. If you need to roll back some changes then you just ask to see previous versions of that file, much like a source control system. And indeed, there are a number of file systems that do exactly this - if you care about such things there's nothing stopping you doing it.
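
A minimal sketch of the idea (a toy version-on-write wrapper; real versioned filesystems do this transparently at the filesystem layer and store deltas rather than full copies):

```python
import shutil
from pathlib import Path

# Toy sketch of version-on-write: before a file is modified, squirrel the
# current contents away so any change (including one made by malware) can
# be rolled back.

def versioned_write(path: Path, data: bytes) -> None:
    history = path.parent / ".versions"
    history.mkdir(exist_ok=True)
    if path.exists():
        n = len(list(history.glob(path.name + ".*")))
        shutil.copy2(path, history / f"{path.name}.{n}")   # preserve old version
    path.write_bytes(data)

f = Path("notes.txt")
versioned_write(f, b"first draft")
versioned_write(f, b"second draft")    # first draft kept in .versions/notes.txt.0
```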

It doesn't stop malware reading your files or modifying them, but it does mean you can recover the unmodified versions... but then doing backups (which everyone should be doing anyway) gives you similar protection.

Comment: Re:Malware (Score 1) 181

by FireFury03 (#48753745) Attached to: Inside Cryptowall 2.0 Ransomware

And, hell, why do applications get the run of every file I use under my account? Should they not have to request such things first? Even on Unix-likes, if you get on as my user, you can trash all my data - why?

Because anything else would require popping up numerous "would you like to allow this application to do $foo" boxes, and then you end up training the user to just hit "yes" on everything because it's too damned annoying to make a decision every time when the vast vast majority of access requests really are legitimate.

Sandboxing based on applications making their own decisions and being relatively trustworthy might not be a bad plan though - i.e. if your web browser has an immutable list of files it needs access to, and you trust your web browser, that provides some level of protection when some malware compromises the browser, so long as the immutable list really is immutable and the malware can't modify it.
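
A toy sketch of what that allowlist idea looks like in-process (the file paths are hypothetical; a real sandbox would enforce the list from outside the process, e.g. via seccomp/AppArmor on Linux or pledge/unveil on OpenBSD, since in-process checks can be bypassed once the process is compromised - which is exactly the "immutable list" caveat above):

```python
import builtins

# Toy sketch of allowlist-based sandboxing: wrap open() so the application
# can only touch files on a fixed list.
ALLOWED = {"/home/user/notes.txt", "/tmp/downloads/page.html"}  # hypothetical paths

_real_open = builtins.open

def guarded_open(path, *args, **kwargs):
    if str(path) not in ALLOWED:
        raise PermissionError(f"not on this application's allowlist: {path}")
    return _real_open(path, *args, **kwargs)

builtins.open = guarded_open   # every later open() goes through the check
```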

I'm sorry, but the very concept of a virus scan happening "at scheduled intervals" or after you've already double-clicked on the file just tells you that it's too late before you start.

Well no, if you can roll back everything that happened between the "all clear" scan and the "you've been cracked" scan then that's certainly much better than nothing.

Fact is, I didn't install it and I have no idea what it ACTUALLY does.

You don't know what most software ACTUALLY does, even if you did install it - most software people use is closed source, but even open source is a black box unless you actually audit it.

Comment: Re:As a former scientist: (Score 1) 287

by FireFury03 (#48744681) Attached to: Should We Be Content With Our Paltry Space Program?

True to a point, but the knowledge gained from the ISS is nothing to sneeze at either. I do agree that a manned Mars mission is a bit silly at this point though; we don't really have the technology yet to make it feasible. More research into alternate energy sources is where most of the money should be going.

I suspect a manned Mars mission will always be "a bit silly" at any point until people start actually doing it. And whilst I can't really point to much tangible return on the investment, "blue skies" projects do have a habit of producing some quite unexpected returns.

To my mind, governments seem to be mostly concerned with themselves at the moment, with nothing to unify those in power towards some common (non-selfish) goal. With the richest few people being as rich as they are now, I wouldn't be surprised if a few of them banded together to put together a manned Mars mission long before any government does (so long as they do so before a revolution comes and redistributes the wealth a bit more fairly).

Comment: Re:ROI (Score 4, Insightful) 287

by FireFury03 (#48744635) Attached to: Should We Be Content With Our Paltry Space Program?

That's not really true. You can look at a research lab and measure the ROI retrospectively quite easily and use this to make forward-looking decisions, and that's what a lot of companies do. They'll close research labs that haven't produced anything useful in the last 5-10 years, but they'll increase funding to ones that have.

And what about research that takes longer than 5-10 years to come to fruition (which actually isn't very long)?

Let's take fusion research as an example - it has spent decades sucking money out of governments and has produced very little return on that investment. It may never produce much return. But if we ever do crack fusion for commercial power generation, that would be a serious game changer - probably a big enough return to justify a couple of hundred years of otherwise fruitless investment.

Comment: Re:No we shouldnt (Score -1, Troll) 287

by FireFury03 (#48744591) Attached to: Should We Be Content With Our Paltry Space Program?

But that doesn't mean that the government should be paying for it, because not all of us agree we should be paying for it. Using tax to pay for something should only happen for things we can only collectively purchase, like National Defense. We should be able to pay for it ourselves and reap the rewards individually.

Umm, I don't agree with my taxes being spent on "National Defence" (when I can sum up the current "defence" ideas as "go into foreign countries and blow up some brown people").

Guess what - you don't get to choose what your tax gets spent on. In theory, it should be apportioned democratically, but even that doesn't happen - a significant number of people objected to the Iraq war and were ignored.

Comment: Re:No we shouldnt (Score 5, Informative) 287

by FireFury03 (#48744539) Attached to: Should We Be Content With Our Paltry Space Program?

Compare NASA to, for example, Xerox PARC (Ethernet, the GUI, laser printers, etc.) or Bell Labs (the transistor, access control lists, UNIX, etc.) and see which produced more inventions that benefitted the economy as a whole per dollar spent.

Each shuttle launch cost, on average, $1.5bn. The cost of one launch would fund over ten thousand PhDs, or several hundred DARPA programs. Do you really think that NASA is the best ROI for taxpayers?

The problem with NASA is largely the senators dictating how the money will be spent, which leads to a huge amount of wastage. The shuttle is a good example - NASA could only get the funding if they made a spacecraft that fitted some fairly mutually exclusive specifications, and the result was a spacecraft that could do none of those things especially well, almost certainly more expensively than building several separate craft tailored to specific jobs.

Look at the A-3 test stand as another example: it was designed for the Constellation programme, and when Obama cancelled the programme the partially constructed test stand was of no use. Congress demanded that NASA keep constructing this useless piece of hardware, and they spent about $200M on it _after_ it was known that there was no use for it. How can you expect NASA to be value for money when it is treated as a jobs-creation programme and forced to waste money like that?

SLS is probably another good example - insanely expensive, not least because Congress is actually dictating the engineering requirements, and no doubt the government will order NASA to scrap it before completion, completely wasting all the money invested in it. Despite its huge cost, I kinda hope SLS doesn't get scrapped, because then at least the money will have gone into something that can be used, instead of yet another useless cancelled project.

Far better would be to just give NASA a lump of money and tell them to do with it as they please - the money would still end up paying people to do jobs (the jobs might not be in the various senators' chosen locations, but they would still happen), and we'd probably have a lot more science at the end of it instead of a huge pile of half-completed scrapped projects.
