Security

Ksplice Offers Rebootless Updates For Ubuntu Systems 211

sdasher writes "Ksplice has started offering Ksplice Uptrack for Ubuntu Jaunty, a free service that delivers rebootless versions of all the latest Ubuntu kernel security updates. It's currently available for both the 32 and 64-bit generic kernel, and they plan to add support for the virtual and server kernels by the end of the month, according to their FAQ. This makes Ubuntu the first OS that doesn't need to be rebooted for security updates. (We covered Ksplice's underlying technology when it was first announced a year ago.)"

Comment Re:Nice (Score 1) 342

The result is that it's a bitch for proprietary guys to write binary only drivers for linux.

Not true. Think about the technical means by which this is achieved: there is no stable driver API, and you're encouraged to get your code into mainline. This means two things:

1. If you have a Free Software driver that isn't GPL-compatible, you get caught in the collateral damage. This is why OpenAFS isn't in the kernel -- because before the GPL had the katamari influence it has now, IBM released the code under another Free but GPL-incompatible license, and it's basically impossible to track down everyone relevant now and make them sign off on a relicensing. OpenAFS is at least as old as Linux and was developed on half a dozen other OSes, so arguments that it's secretly a derivative work because it's a module don't apply.

2. If for whatever reason your code isn't in mainline, be it that you want to be the gatekeeper of its development or that the kernel people reject it, then you're also out of luck in the same manner. For instance, the GPL is explicitly okay with a company maintaining a hacked version of Linux internally without having to release the source to the public. The unstable API makes this incredibly difficult. And if you look at the commits to the Linux kernel, most of them are from the "proprietary guys" -- maybe you could have even more of them contributing bug fixes upstream if it were easier for them to customize.

The promise the kernel guys make is not that your code is easy to maintain if it's Free Software -- it's that your code is easy to maintain if and only if it's part of Linux.

Now don't get me wrong; I understand there are excellent technical reasons for not having a stable in-kernel API, such as the ability to rearchitect things when you get a better idea and not have to support compatibility interfaces forever. I'm not at all asking for Linux to grow such an API. But to consider it a worthwhile legal tool in favor of Free Software is to completely fail to understand the Free Software ecosystem.

Comment Re:Git links (Score 2, Informative) 346

Ignore the fanboys. If anything, use them as statistical evidence that there might be something worthwhile here. :-)

Why git for a SVN user? There's nothing better than trying it for yourself (git-svn clone svn://whatever, then hack on it with git, then git-svn dcommit). But until then, two big points:

1. It's distributed. I can make lots of commits without pushing them somewhere public, which is good for the same reasons that hitting "Save" often is good, without being worried that I've broken the build for everyone.

Relatedly, I can put my stupid personal projects in git from the beginning, without bothering to set up a server. But if I find I do want to share it with anyone or add anyone else as a contributor, there's basically no difficulty in doing so.

2. Git lets you rewrite your local history. If I don't like a commit, I can edit it before sending it off. My workflow is often edit-commit-compile-test, rather than edit-compile-test-commit, which lets me remember why I thought editing this file was a good idea. And if it turns out it wasn't, I can delete that commit from history, rather than having yet another commit to revert it. And then when I'm done with this task, I can squash all my temporary commits down to two or three, one for each major part of what I did (there's a rough sketch of this below). As a side benefit, my commit messages only have to be useful enough for me, since I can edit them before pushing commits.

(Obviously, once you publish a commit, it's a pain to retract that commit from everyone else's repos.)

Another part of rewriting history is that rather than trying to do a merge when you've been hacking for a while, you can save your local commits in another branch, update to upstream, and cherry-pick the ones that make sense individually and edit the ones that need to be edited, creating a linear logical history rather than a merge between branches. This will make you saner a month later when you're trying to figure out why the code changed as it did, and you don't have to follow multiple branches and see how they were resolved.
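A minimal sketch of points 1 and 2 above, assuming a hypothetical project and a made-up remote URL (the branch and remote names are just examples):

    # 1. Start locally; no server needed
    git init myproject && cd myproject
    # ...hack, then commit as often as you'd hit "Save"
    git add -A && git commit -m "wip: first pass at the parser"

    # Later, when you decide to share it, add a remote and push
    git remote add origin git@example.com:me/myproject.git
    git push -u origin master

    # 2. Clean up local history before publishing
    git commit --amend        # fix up the most recent commit
    git rebase -i HEAD~5      # squash/reorder/drop the last five commits

    # Replay local work on top of updated upstream instead of merging
    git branch wip            # save what you have
    git fetch origin
    git checkout master && git reset --hard origin/master
    git cherry-pick wip~1     # pick the commits that still make sense...
    git cherry-pick wip       # ...one at a time, editing as needed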

Comment Re:@#$@#$ git! (Score 1) 346

Git is fully distributed (with no "authoritative" source), but it doesn't give you any tools to understand/manage the distribution of files. If you have a work group with more than a few people, you are constantly asking what repo (and what access method to it), what branch, and what (bombastically long) revision.

You should pick a layout and stick with it. It's bad practice to have to add all your other developers' personal repositories as remotes. Set up a single central one somewhere, and permit branching on that one if you want to publish experimental features. If a subset of developers working on the same features wants to make their own sub-central repository, they can do so -- but if you let people push and pull from everyone else, you will naturally get chaos, just as if everyone e-mailed .patch files around.
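For example (the repository URL and branch names here are hypothetical), a layout along these lines keeps everyone pointed at one place:

    # Everyone clones the one blessed central repository
    git clone git@example.com:team/project.git

    # Experimental work gets published as a topic branch on that same repo
    git checkout -b windows-gui
    git push origin windows-gui

    # A sub-team that needs its own shared staging area pushes to a
    # sub-central repo it controls, and merges back to the central one later
    git remote add gui-team git@example.com:gui-team/project.git
    git push gui-team windows-gui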

Git is a toolkit, not a product. It lets you do things, but you have to do them. It's like a programming language; just because you _can_ mkdir((char*)main) doesn't mean it's a good idea.

As far as revisions go, again, don't publish them until you push them to a central place (so you don't have to worry about what repo and how to get to it and what branch, and more importantly so people can rebase their personal repositories freely -- which is extremely important so that they can make sense of the logical history of the project without tracking merges). And make use of git tags and topic branches, so you can say "commit fix-segfault-64bit" or "the last three commits on windows-gui".

And abbreviate commit IDs. Say "4b825", not "4b825dc642cb6eb9a060e54bf8d69288fbee4904". Even with the Linux kernel I rarely have to use more than six characters to identify a commit.
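A rough illustration of both ideas -- the tag and branch names are made up, and the long hash is just the example from above:

    git tag fix-segfault-64bit 4b825dc        # give a commit a memorable name
    git push origin fix-segfault-64bit        # tags have to be pushed explicitly
    git show fix-segfault-64bit               # ...so everyone can refer to it by name
    git log --oneline -3 windows-gui          # "the last three commits on windows-gui"
    git rev-parse --short 4b825dc642cb6eb9a060e54bf8d69288fbee4904   # prints the short form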

Comment Re:Why did they do it this way? (Score 1) 258

Yes, you'd need to keep NAT, but that's a special case of my proposal not requiring changes to the Internet anywhere. It encourages changes, and two people who make the same change can talk to each other without the routers in between cooperating, but it doesn't mandate them. If you want to keep NAT, nobody's stopping you.

It would increase complexity at the gateways between the two protocols, yes, but IPv6 involves increasing complexity at every gateway -- unless those gateways aren't routing IPv4.

I guess I'm starting from the assumption that we can't ever make IPv4 go away.

Comment Re:Why did they do it this way? (Score 1) 258

*ANY* physical change to IPV4 breaks IPV4. Given that assumption, we may as well start from scratch, and go back to square 1 when designing IPV6.

Well, neither of those is really true. IPv6 addresses are their own namespace, and can't communicate with IPv4 addresses automatically. Compare how FAT got long filenames ... or how DNS grew extra TLDs like .info ... or how MX and SRV records in DNS started showing up, even though using regular A records for mail and other services still works.

If you extend IPv4 in a clever way, rather than rewriting the whole thing and coming up with a new address space, you can increase adoption because people don't have to get everything upgraded end-to-end to make your system work.

Here's an example of such a scheme. Let's call it IPv4+. I'm going to say that it uses 64-bit addresses (but only because that's convenient). The first 32 bits are an existing IPv4 address, and if you own a single IPv4 address you own all 2^32 IPv4+ addresses that start with that IPv4 address. Allocation works as normal, etc. Maybe for good measure we'll take an IPv4 class A (1.x.x.x?) and reserve it _just_ for IPv4+ allocation.

So the first thing that happens is that everyone who uses NATs already has a convenient address. If my home IP address is 18.242.0.29, and my desktop behind the NAT is 192.168.0.2, then if I want a public IPv4+ address I can just use 18.242.0.29.192.168.0.2.

The next thing that happens is, if I want to reach that machine from a part of the Internet that only supports IPv4, I can tunnel IPv4+ inside IPv4 (I can even use an existing standard like RFC 1853, or something similar). The routers that don't need to be upgraded just see the outer header that says to send it to 18.242.0.29, and they use existing BGP or whatever and send it on its way. Once it gets to 18.242.0.29, which does support IPv4+, it figures out how to reach 18.242.0.29.192.168.0.2. Note that none of the backbone needed to be upgraded: I just need client and server support for IPv4+. End-to-end, if you look at it like an IPv4 packet, it gets routed correctly by the existing Internet.
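IPv4+ is a thought experiment, so nothing speaks it today, but the carrier it would ride on already exists: plain IP-in-IP encapsulation of the RFC 1853 sort, which Linux exposes as "ipip" tunnels. Roughly, with made-up example addresses:

    # Hypothetical IPv4+ destination: outer IPv4 address, then the inner one
    #   18.242.0.29.192.168.0.2
    # On today's Internet the packet just looks like IPv4 addressed to
    # 18.242.0.29, carrying another IPv4 packet inside for the inner host.

    # What an upgraded endpoint might do with existing Linux tooling:
    ip tunnel add v4plus0 mode ipip local 203.0.113.5 remote 18.242.0.29
    ip addr add 10.99.0.1/30 dev v4plus0
    ip link set v4plus0 up
    # The backbone in between only ever routes on the outer 18.242.0.29 header.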

So, there are two benefits to this strategy. The first is that you use an existing naming scheme (IANA-assigned IPv4 addresses) and build on top of it. The second is that you use an existing protocol and build on top of it, so only the machines that care about the IPv4+ address space need to upgrade to IPv4+.

IPv6 does neither of these. Dan Bernstein condemns IPv6 much more scathingly than I can, having been part of the IPv6 discussions, but he basically agrees with me.

Comment Re:Very bad (Score 1) 676

So, if you look at some of the factors that he claims "dirty" the data, one is that a lot of management/refactoring of code is done by Sun employees, and another is that a lot of outside contributors find it hard to get commit access and just leave patches in the bugtracker, which Sun employees pick up.

He claims this means that the graph — which shows Sun responsible for the vast majority of all the commits — is inaccurate.

I claim that if the project isn't getting external contributors, isn't giving them the ability to get commit access, is basically being run by Sun internally, and he has to mention that people leave patches in the bugtracker, then that itself is a sign that the project is profoundly sick, more so than any chart can show.

As further evidence that Sun isn't playing well with external contributors and the community, read the tale of the non-upstreamed Calc solver, which Meeks linked to.

Comment Re:That's because there DONE! (Score 5, Interesting) 676

This is not true at all. Sure, you can type stuff in, mark some stuff bold, spell-check it, and print it out -- but there's no need for an office suite to do that, and if that's all you intend to do, don't call yourself an office suite.

Here's something I ran into yesterday. There's a "Compare Documents" feature under the Edit menu. It doesn't compare the contents of tables. The bug reporting this was opened in July 2003, and nobody has seemed to care yet. In 2007, someone had a patch, which was committed but not added to the next release's codeline because "I don't think that this issue fulfills the criteria for 2.3.1". This May it was retargeted for 3.1, and rejected in November because "There are too many open questions to finish in 3.1." People complained again in 2004 and 2008; I don't think you can say in good faith that "no one cares enough".

It occurs to me that your exact phrasing was "no one cares enough to add it", which is completely right. Nobody cares enough to develop OpenOffice.org to where it should be.

If you ask what more there is to do -- aren't they done? -- then I'll ask the same thing about the Linux kernel: isn't it done? What benefit is there to running the latest 2.6.28 or whatever instead of 2.4, which worked fine for everyone a few years ago? And yet who in their right mind would (all other things being equal) set up a new system with 2.4 instead of some kernel released this year? And you'd laugh if I suggested the Linux 1.x tree, but that can open and close programs and files just as well as any other OS, can't it?
