Comment Re:load of wank (Score 2, Interesting) 211

Actually, Ksplice provides live patches. The ones Uptrack distributes are all to the kernel, and obviously not restarting the system requires not restarting the kernel.

The Ksplice technology itself is free software, and it could be ported to userspace (though the Ksplice people haven't implemented that yet). But if your network service is an NFS server or something, or you're fixing a security bug in the kernel, then Ksplice can apply the fix to a running system without affecting existing sessions or connections.

Comment Re:Difference between Linux and Windows (Score 2, Informative) 211

Well, let's look at the issues raised in the article.

Windows actually can replace a DLL that is in use by renaming the original then copying the new file into place. However, the Windows world prefers not to do this.

Ksplice updates the running code of your kernel (by waiting until no thread is using the function to be patched, then calling the kernel's stop_machine_run function -- the same thing it uses when loading a new module -- while it edits the object code); it doesn't touch your /vmlinuz file on disk. If you want the patches next time you reboot, either recompile /vmlinuz, or have an initscript (like Uptrack's) apply the patches at boot.
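
To make that concrete, here is a very rough sketch of the core move in kernel-style C, using the current stop_machine() name. This is not Ksplice's actual code -- the descriptor, the omitted stack check, and the way the jump gets written are all simplified assumptions -- but it shows the shape of "halt every CPU, then atomically redirect the old function to the new one":

    #include <linux/kernel.h>
    #include <linux/stop_machine.h>
    #include <linux/string.h>

    /* Hypothetical descriptor: where the old function lives, and where the
     * replacement (loaded as part of an update module) lives. */
    struct live_patch {
            void *old_addr;         /* function being replaced */
            void *new_addr;         /* fixed replacement function */
    };

    /* Runs with every other CPU halted (that is what stop_machine buys us),
     * so nothing can be executing the code we are about to rewrite.  A real
     * implementation would first check every thread's stack for old_addr and
     * make the text page writable; both are skipped here. */
    static int apply_patch(void *data)
    {
            struct live_patch *p = data;
            unsigned char jump[5];
            long rel = (long)p->new_addr - ((long)p->old_addr + 5);

            jump[0] = 0xe9;                 /* x86 near JMP rel32 */
            memcpy(&jump[1], &rel, 4);
            memcpy(p->old_addr, jump, 5);   /* redirect old -> new */
            return 0;
    }

    static int do_update(struct live_patch *p)
    {
            /* Atomically switch the whole system over to the new code. */
            return stop_machine(apply_patch, p, NULL);
    }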

Even if you're updating just a single DLL with no dependencies, there are still potential problems since the DLL has to interoperate with previous versions of itself.

One reason Ksplice wins here is that it updates the kernel, which is a single thing, but more fundamentally it avoids this problem by atomically patching every piece of affected code at once. You could actually port the Ksplice technology to userspace, provided you do some userspace equivalent of stop_machine and patch every process at the same time.

Even if you haven't changed the structure itself, you may have changed the meaning of some fields in the structure. If the structure has an enumeration and the new version adds a new value to that enumeration, that's still an incompatibility between the old and new.

Again, Ksplice has the advantage of updating everything atomically. But there is explicit support for a hook called at patch time that either updates all existing structures, or does something fancier and marks structures as they are updated, so that any unmarked structure is known to need updating before it is used.
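
As a toy illustration of what such a hook might look like (the struct, the list, and the hook name are all invented for this example; the paper describes the real interface):

    #include <linux/list.h>

    /* Old layout: mode was 0 or 1.  The fix adds a third mode and changes
     * what 1 means, so existing objects must be rewritten at patch time. */
    struct conn {
            struct list_head list;
            int mode;       /* old: 0 = plain, 1 = secure
                             * new: 0 = plain, 1 = legacy-secure, 2 = secure */
    };

    extern struct list_head all_conns;      /* the subsystem's existing list */

    /* Hypothetical transformer: run once, inside the same stop_machine window
     * that applies the code patch, so no thread ever sees a half-converted
     * object. */
    static int conn_transform(void *unused)
    {
            struct conn *c;

            list_for_each_entry(c, &all_conns, list) {
                    if (c->mode == 1)
                            c->mode = 2;    /* old "secure" is the new 2 */
            }
            return 0;
    }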

The Ksplice paper (PDF) outlines how you'd go about writing a data structure transformer to address this (and discusses how to solve a host of other problems). See also the CVE evaluation, which links to some examples.

So it's not that Windows has to restart after replacing a file that is in use. It's just that it would rather not deal with the complexity that results if it doesn't. Engineering is a set of trade-offs.

Which is why this engineering problem is not something Linus Torvalds tackles personally; a separate company, Ksplice Inc., is working on it full-time. :-)

Security

Ksplice Offers Rebootless Updates For Ubuntu Systems 211

sdasher writes "Ksplice has started offering Ksplice Uptrack for Ubuntu Jaunty, a free service that delivers rebootless versions of all the latest Ubuntu kernel security updates. It's currently available for both the 32 and 64-bit generic kernel, and they plan to add support for the virtual and server kernels by the end of the month, according to their FAQ. This makes Ubuntu the first OS that doesn't need to be rebooted for security updates. (We covered Ksplice's underlying technology when it was first announced a year ago.)"

Comment Re:Nice (Score 1) 342

The result is that it's a bitch for proprietary guys to write binary only drivers for linux.

Not true. Think about the technical means by which this is achieved: there is no stable driver API, and you're encouraged to get your code into mainline. This means two things:

1. If you have a Free Software driver that isn't GPL-compatible, you get caught in the collateral damage. This is why OpenAFS isn't in the kernel: before the GPL had the katamari influence it does now, IBM released the code under another Free but GPL-incompatible license, and it's basically impossible to track down everyone relevant now and get them to sign off on a relicensing. OpenAFS is at least as old as Linux and was developed on half a dozen other OSes, so arguments that it's secretly a derivative work because it's a module don't apply.

2. If for whatever reason your code isn't in mainline, be it that you want to be gatekeeper over its development or that the kernel people reject it, then you're also out of luck in the same manner. For instance, the GPL is explicitly okay with a company maintaining a hacked version of Linux internally without having to release the source to the public. The unstable API makes this incredibly difficult. And if you look at the commits to the Linux kernel, most of them are from the "proprietary guys" -- maybe you could have even more of them contributing bug fixes upstream if it were easier for them to customize.

The promise the kernel guys make is not that your code is easy to maintain if it's Free Software -- it's that your code is easy to maintain if and only if it's part of Linux.

Now don't get me wrong; I understand there are excellent technical reasons for not having a stable in-kernel API, such as the ability to rearchitect things when you get a better idea and not have to support compatibility interfaces forever. I'm not at all asking for Linux to grow such an API. But to consider it a worthwhile legal tool in favor of Free Software is to completely fail to understand the Free Software ecosystem.

Comment Re:Git links (Score 2, Informative) 346

Ignore the fanboys. If anything, use them as statistical evidence that there might be something worthwhile here. :-)

Why git for an SVN user? There's nothing better than trying it for yourself (git-svn clone svn://whatever, then hack on it with git, then git-svn dcommit). But until then, two big points:

1. It's distributed. I can make lots of commits without pushing them somewhere public, which is good for the same reasons that hitting "Save" often is good, without being worried that I've broken the build for everyone.

Relatedly, I can put my stupid personal projects in git from the beginning, without bothering to set up a server. But if I find I do want to share it with anyone or add anyone else as a contributor, there's basically no difficulty in doing so.

2. Git lets you rewrite your local history. If I don't like a commit, I can edit it before sending it off. My workflow is often edit-commit-compile-test rather than edit-compile-test-commit, which lets me remember why I thought editing this file was a good idea. And if it wasn't a good idea, I can delete that commit from history rather than adding yet another commit to revert it. Then when I'm done with the task, I can squash all my temporary commits down to two or three, one for each major part of what I did (see the command sketch at the end of this comment). As a side benefit, my commit messages only have to be useful enough for me, since I can edit them before pushing commits.

(Obviously, once you publish a commit, it's a pain to retract that commit from everyone else's repos.)

Another part of rewriting history: rather than doing a merge after you've been hacking for a while, you can save your local commits on another branch, update to upstream, cherry-pick the ones that make sense individually, and edit the ones that need editing, creating a linear logical history rather than a merge between branches. That will make you saner a month later when you're trying to figure out why the code changed the way it did, because you don't have to follow multiple branches and work out how they were resolved.
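
Roughly, with invented branch names, that workflow looks like:

    git checkout -b scratch             # private branch for messy work
    # ...hack, commit, hack, commit...
    git commit -a -m "wip: poke at the parser"
    git fetch origin                    # grab upstream without touching my work
    git rebase -i origin/master         # replay my commits on top of upstream,
                                        # squashing/editing/dropping as I go
    git push origin HEAD:master         # only now does anyone else see them

    # Or the cherry-pick variant:
    git branch saved                    # keep the messy history around, just in case
    git reset --hard origin/master      # move this branch back to upstream
    git cherry-pick <commit>            # replay only the commits worth keeping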

Comment Re:@#$@#$ git! (Score 1) 346

Git is fully distributed (with no "authoritative" source), but it doesn't give you any tools to understand/manage the distribution of files. If you have a work group with more than a few people, you are constantly asking what repo (and what access method to it), what branch, and what (bombastically long) revision.

You should pick a layout and stick with it. It's bad practice if everyone has to keep adding all the other developers' personal repositories. Set up a single central repository somewhere, and permit branching on it if you want to publish experimental features. If a subset of developers working on the same feature wants to make their own sub-central repository, they can do so -- but if you let people push and pull from everyone else, you will naturally get chaos, just as if everyone e-mailed .patch files around.

Git is a toolkit, not a product. It lets you do things, but you have to do them. It's like a programming language; just because you _can_ mkdir((char*)main) doesn't mean it's a good idea.

As for revisions, again, don't publish them until you push them to a central place (so you don't have to worry about which repo, how to reach it, and which branch -- and, more importantly, so people can rebase their personal repositories freely, which they need in order to make sense of the logical history of the project without tracking merges). And make use of git tags and topic branches, so you can say "commit fix-segfault-64bit" or "the last three commits on windows-gui".

And abbreviate commits. Say "4b825", not "4b825dc642cb6eb9a060e54bf8d69288fbee4904". Even with the Linux kernel I rarely have to use more than six characters to identify a commit.
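
For instance (branch and tag names invented):

    git tag fix-segfault-64bit          # name whatever HEAD points at right now
    git show fix-segfault-64bit         # ...and refer to it by that name later
    git log --oneline -3 windows-gui    # "the last three commits on windows-gui"
    git show 4b825d                     # any unambiguous abbreviation of a hash works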

Comment Re:Why did they do it this way? (Score 1) 258

Yes, you'd need to keep NAT, but that's a special case of my proposal not requiring changes to the Internet anywhere. It encourages changes, and two people who make the same change can talk to each other without the routers in between cooperating, but it doesn't mandate them. If you want to keep NAT, nobody's stopping you.

It would increase complexity at the gateways between the two protocols, yes, but IPv6 involves increasing complexity at every gateway -- unless those gateways aren't routing IPv4.

I guess I'm starting from the assumption that we can't ever make IPv4 go away.

Comment Re:Why did they do it this way? (Score 1) 258

*ANY* physical change to IPV4 breaks IPV4. Given that assumption, we may as well start from scratch, and go back to square 1 when designing IPV6.

Well, neither of those is really true. IPv6 addresses are their own namespace and can't communicate with IPv4 addresses automatically. Compare how FAT got long filenames, or how DNS grew extra TLDs like .info, or how MX and SRV records in DNS started showing up while using regular A records for mail and other services still works.

If you extend IPv4 in a clever way, rather than rewriting the whole thing and coming up with a new address space, you can increase adoption because people don't have to get everything upgraded end-to-end to make your system work.

Here's an example of such a scheme. Let's call it IPv4+. I'm going to say that it uses 64-bit addresses (but only because that's convenient). The first 32 bits are an existing IPv4 address, and if you own a single IPv4 address you own all 2^32 IPv4+ addresses that start with that IPv4 address. Allocation works as normal, etc. Maybe for good measure we'll take an IPv4 class A (1.x.x.x?) and reserve it _just_ for IPv4+ allocation.

So the first thing that happens is that everyone who uses NATs already has a convenient address. If my home IP address is 18.242.0.29, and my desktop behind the NAT is 192.168.0.2, then if I want a public IPv4+ address I can just use 18.242.0.29.192.168.0.2.

The next thing that happens is, if I want to reach that machine from a part of the Internet that only supports IPv4, I can tunnel IPv4+ inside IPv4 (I could even use an existing standard like RFC 1853, or something close to it). The routers that haven't been upgraded just see the outer header that says to send it to 18.242.0.29, and they use existing BGP or whatever and send it on its way. Once the packet gets to 18.242.0.29, which does support IPv4+, that host figures out how to reach 18.242.0.29.192.168.0.2. Note that none of the backbone needed to be upgraded: I just need client and server support for IPv4+. End to end, if you look at it as an IPv4 packet, it gets routed correctly by the existing Internet.
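
To make the addressing concrete, here's a toy C sketch of the 32+32-bit split. None of this is a real protocol or API; it only illustrates that the outer 32 bits are an ordinary IPv4 address, which is all a legacy router ever looks at:

    #include <stdint.h>
    #include <stdio.h>

    /* An IPv4+ address: the high 32 bits are a real, routable IPv4 address;
     * the low 32 bits are whatever the owner of that address assigns. */
    typedef uint64_t ipv4plus_t;

    static ipv4plus_t make_addr(uint32_t outer, uint32_t inner)
    {
        return ((uint64_t)outer << 32) | inner;
    }

    static uint32_t outer_part(ipv4plus_t a) { return (uint32_t)(a >> 32); }
    static uint32_t inner_part(ipv4plus_t a) { return (uint32_t)a; }

    static void print_quad(uint32_t x)
    {
        printf("%u.%u.%u.%u", x >> 24, (x >> 16) & 0xff, (x >> 8) & 0xff, x & 0xff);
    }

    int main(void)
    {
        /* 18.242.0.29 owns 2^32 IPv4+ addresses; pick .192.168.0.2 for the desktop. */
        ipv4plus_t desktop = make_addr((18u << 24) | (242u << 16) | 29u,
                                       (192u << 24) | (168u << 16) | 2u);

        print_quad(outer_part(desktop)); printf(".");
        print_quad(inner_part(desktop)); printf("\n");   /* 18.242.0.29.192.168.0.2 */

        /* A legacy router only sees an IPv4 packet addressed to the outer part;
         * the IPv4+ header rides inside as tunnel payload. */
        printf("outer (IPv4) destination: ");
        print_quad(outer_part(desktop)); printf("\n");
        return 0;
    }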

So there are two benefits to this strategy. The first is that you use an existing naming scheme (IANA-assigned IPv4 addresses) and build on top of it. The second is that you use an existing protocol and build on top of it, so only the machines that care about the IPv4+ address space need to upgrade to IPv4+.

IPv6 does neither of these. Dan Bernstein, who was part of the IPv6 discussions, condemns IPv6 much more scathingly than I can, but he basically agrees with me.
