I work for a small contracting company. A degree is definitely seen as an asset, but we've seen enough idiots at university that we recognize it's not everything. A technical college is an asset too, as are certs, but honestly people smart enough to figure things out for themselves are hard enough to find that if we see that we'll hire you either way.
OTOH, plenty of people make it through school without developing an aptitude for problem solving, so that's definitely not enough. It's probably going to have an impact on your starting pay though.
> What happens if NAT is used all over the place? You could imagine a bunch of
> subnets that use one address to the outside world but have hundreds or
> thousands of machines internally.
It *is* used all over the place. It's even used on an ISP-wide scale (expect that to become more common in the west). NAT delayed IP address exhaustion by a few years, and that delay has already been spent. The current rate of IP usage is what's happening *with* widespread use of NAT.
> There's a lot to be said for NAT from a security point of view too. Since you
> need to open up holes manually for incoming services, incoming connections
> for anything else will be blocked which makes it impossible for people to
> exploit most security flaws on the machines behind the router.
You can get all of that from a stateful firewall that blocks inbound connections by default.
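To make the comparison concrete, here's a toy model (illustrative only, not real firewall code) of the stateful behavior being described: inbound packets are allowed only when they belong to a connection a host behind the firewall initiated, or when they hit a port that was explicitly opened, exactly the property NAT gives you as a side effect. All names here are invented for the sketch.

```python
# Toy model of a stateful firewall that blocks inbound connections by default.
# Not real firewall code -- just demonstrates the policy described above.

class StatefulFirewall:
    def __init__(self, open_ports=()):
        self.open_ports = set(open_ports)   # manually opened "holes"
        self.state = set()                  # tracked outbound connections

    def outbound(self, src, sport, dst, dport):
        # Remember the connection so replies can be matched later.
        self.state.add((src, sport, dst, dport))

    def inbound_allowed(self, src, sport, dst, dport):
        # Reply to an existing outbound connection? (the "stateful" part)
        if (dst, dport, src, sport) in self.state:
            return True
        # Otherwise, only explicitly opened services get through --
        # analogous to a port forward on a NAT router.
        return dport in self.open_ports
```

Note that nothing here requires the inside and outside address spaces to differ; the security property comes entirely from connection tracking plus a default-deny policy.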
> Reading between the lines it seems like IPv6 was a revolutionary solution to
> running out of address space. NAT was an evolutionary one. As usual the
> market has picked the evolutionary solution and more purist types are whining
> about it.
NAT isn't a solution at all, it's a way to delay the inevitable. It has successfully done that, until approximately 2011-2012. What it doesn't do is change the fundamental problem: it's not possible to use it *enough* to hold off exhaustion indefinitely.
Breaking end-to-end connectivity isn't the primary concern. This has already largely happened with NAT, and will continue to happen to a certain extent with IPv6 because we'll be using stateful firewalls. We can deal with this for most home users.
The problem is that NAT still consumes IPs, and other hosts like servers really do need to be reachable. The market prefers NAT now because exhaustion hasn't happened yet, and as the last few months have demonstrated, the market is remarkably good at ignoring problems for as long as possible.
Purist types *are* whining about it. But pragmatic types like me are also concerned that people like you seem to think NAT is something we can use later as a solution, when we've already been using it for years as a way to buy time.
> But you did not say why, other than "designing a rich API is hard". It is also
> hard to get the last corruption bug out of a program that is more complex than
> necessary, as a combined LVM+filesystem surely must be. So given two hard
> tasks, only one of which yields a component that can be shared, I favor the
> hard API design task.
It's not just designing the rich API, it would also require porting everything to it and dealing with all the corruption bugs that would no doubt result from such an effort -- *nothing* is in a state where it could use the proposed volume manager efficiently. If you really want to reuse BTRFS as a volume manager for other filesystems, make yourself a filesystem image file on top of BTRFS.
> I've repeatedly proposed something, only to find that ZFS already implements
> it: Define one layer which is solely responsible for storing your bare
> primitives, like a sequence of data. It is the FS-level equivalent of
That sounds like the extent allocation mechanism in BTRFS as well.
> Then, implement everything else on top of that layer. Databases could sit
> directly on the layer -- no reason they need to pretend to create files.
> Filesystems would sit on that layer, implementing structures like directories
> and POSIX file permissions.
I am unconvinced of the utility of this idea: Partly because I don't think the performance impact of going through the POSIX API is that bad when you're going to have to hit the extent allocation either way, partly because I think it's dumb to create another persistent storage abstraction with another set of permissions when we've already got two in widespread use (POSIX, SQL).
If you want the FS-level equivalent of malloc/free, make a directory and start putting files in it. I'm not sure what advantages your idea provides over that method that would be worth the complexity of another API.
The next gen filesystems discussed here (ZFS and BTRFS) are both open source (the latter is also released under the GPL for compatibility with Linux), and both are backed by big companies (Sun and Oracle, respectively). Linux would not be where it is today if it weren't for big companies committing big bucks to developing open source software.
> But hey maybe I'm missing something, why not improve or create a replacement for LVM instead of including volume management in the filesystem?
Maybe. But it would be a lot harder.
Think about LVM snapshots for example. LVM allocates a chunk of the disk for your filesystem, and then a chunk of disk for your snapshot. When something changes in the parent filesystem, the original contents of that block are first copied to the snapshot. But if you've got two snapshots, it has to be copied to two places, and each snapshot needs its own space to store the original data. Because ZFS/BTRFS/etc are unified, they can keep the original data for any number of snapshots by the simple expedient of leaving it alone and writing the new data someplace new.
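The difference between the two snapshot strategies can be sketched in a few lines. This is an illustrative model (invented classes, not real LVM or ZFS code): the LVM-style version must copy the original block into every snapshot's private store on first write, while the CoW-filesystem-style version just writes new data to a fresh block and lets any number of snapshots keep pointing at the old one.

```python
# Illustrative model of the two snapshot strategies described above.
# Blocks are modeled as a dict of address -> data.

class LvmStyleSnapshots:
    """Copy-on-write to per-snapshot space: each snapshot gets its own copy."""
    def __init__(self, blocks):
        self.blocks = dict(blocks)      # the live filesystem
        self.snapshots = []             # one private store per snapshot

    def snapshot(self):
        self.snapshots.append({})       # separate space allocated per snapshot
        return len(self.snapshots) - 1

    def write(self, addr, data):
        # Original data must be copied into EVERY snapshot that lacks it.
        for store in self.snapshots:
            if addr not in store:
                store[addr] = self.blocks[addr]
        self.blocks[addr] = data

    def read_snapshot(self, sid, addr):
        return self.snapshots[sid].get(addr, self.blocks[addr])


class CowFsSnapshots:
    """ZFS/BTRFS-style: new data goes to a new block; old blocks are shared."""
    def __init__(self, blocks):
        self.store = dict(blocks)            # physical blocks, never overwritten
        self.live = {a: a for a in blocks}   # logical addr -> physical block
        self.snaps = []
        self.next_free = max(blocks) + 1 if blocks else 0

    def snapshot(self):
        self.snaps.append(dict(self.live))   # a snapshot is just a block map
        return len(self.snaps) - 1

    def write(self, addr, data):
        self.store[self.next_free] = data    # old block is simply left alone
        self.live[addr] = self.next_free
        self.next_free += 1

    def read_snapshot(self, sid, addr):
        return self.store[self.snaps[sid][addr]]
```

With two snapshots and one overwrite, the LVM-style model ends up holding two private copies of the original block, while the CoW model keeps exactly one, shared by both snapshots.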
LVMs can grow/shrink filesystems, but filesystems deal with this somewhat grudgingly. LVM lacks a way to handle dynamic allocation of blocks to filesystems in such a way that they can give them back dynamically when they're not using them. ZFS/BTRFS/etc can do this completely on the fly. LVMs rely on an underlying RAID layer to handle data integrity, but most RAID doesn't do this very well. BTRFS is getting a feature that allows it to handle seeky metadata differently than data (eg, use an SSD as a fast index into slow but large disks).
It is conceivable that an advanced volume manager could be created that does all these things and all the rest (eg checksumming) just as well... but I think the key point is that this isn't something you can do without a *much* richer API for filesystems talking to block devices. They'd need to be able to free up blocks they don't need anymore, and have a way to handle fragmentation when both the filesystem and the volume manager would try to allocate blocks efficiently. They'd need substantially improved RAID implementations, or they'd need to bring the checksumming into the volume manager. I'm not saying it can't be done, but doing it as well as ZFS/BTRFS/etc when you're trying to preserve layering would be very tough. At a minimum you'd need new or substantially updated filesystems and a new volume manager of comparable complexity to ZFS/BTRFS/etc. I understand the preference for a layered approach, but I just don't think it's competitive here.
So either you'll keep getting router advertisements on your network indefinitely, or your computers will have to keep requesting them (instead of eventually giving up, which is what happens now).
Correct. New computers can use router solicitations to get this information immediately; router advertisements are sent once the initial prefix delegation is complete (eg, the public prefix to use is known) and periodically thereafter to keep autoconfig addresses from expiring.
I'm curious why you seem to regard this as a big deal.
Next question: What url does Joe Public enter on his browser to get to the router config page, so that he can enter the username and password in order to get access to the ISP's network?
I like the mDNS method personally.
This could obviously be done with RFC1918 addresses on v4, but it's hard to pick a range there because someone somewhere will end up being incompatible with it.
> Without a NAT, how does a "NoNAT router" know what public IP range to give via DHCP (or other means) to Joe User's WinXP/Mac box, BEFORE it manages to get that public IP range from the ISP?
Before it connects to the ISP you'll be using link-local addresses. The router will then get a prefix from the ISP via DHCPv6 prefix delegation and begin sending router advertisements so internal computers can configure themselves with public addresses (though they retain their link-local addresses).
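The address formation described above can be sketched with Python's standard `ipaddress` module. This assumes the classic modified-EUI-64 interface ID derived from the MAC address (many modern hosts use privacy or stable-random IIDs instead); the MAC and prefixes below are made-up example values, with the global prefix drawn from the documentation range.

```python
import ipaddress

def eui64_interface_id(mac: str) -> bytes:
    """Modified EUI-64: split the MAC, insert ff:fe, flip the universal/local bit."""
    b = bytes(int(x, 16) for x in mac.split(":"))
    return bytes([b[0] ^ 0x02]) + b[1:3] + b"\xff\xfe" + b[3:6]

def slaac_address(prefix: str, mac: str) -> ipaddress.IPv6Address:
    """Combine a /64 prefix with the interface ID, as a host does after an RA."""
    net = ipaddress.IPv6Network(prefix)
    return net[int.from_bytes(eui64_interface_id(mac), "big")]

mac = "00:11:22:33:44:55"                              # example MAC
link_local = slaac_address("fe80::/64", mac)           # works before the ISP link is up
global_addr = slaac_address("2001:db8:1:2::/64", mac)  # after DHCPv6-PD + RA
```

The link-local address exists from the moment the interface comes up, which is why internal hosts can talk to the router before any prefix has been delegated.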
You can get a similar level of security by using a stateful firewall. The main security advantage to NAT is really the property of limiting inbound packets to those that are associated with existing connections, and that's what you get with a stateful firewall. You don't have to have disjoint address spaces to get this feature.