This diffs the disassembled version of the original against the update on the server, then does the reverse on the client. I couldn't help but think of this as similar to Gentoo's model ... download a compressed diff of the source and then recompile. Both have the same problem: too much client-side CPU usage (though Gentoo's is an extreme case of it). Isn't Google Chrome OS primarily targeting netbooks? Can such machines handle that level of extra client-side computation without leaving users frustrated?
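To make the idea concrete, here's a toy sketch in Python (nothing to do with Chrome's actual implementation; the "instructions", labels and addresses are made up, and the toy cheats by deriving the disassembled view straight from the source triples). Insert one instruction and every absolute address after it shifts, so a raw diff touches nearly every line, while a diff of the normalised form stays tiny:

  import difflib

  def assemble(source):
      # Resolve symbolic jump targets to absolute addresses -- the step that
      # makes raw binaries churn whenever code above them moves.
      addr = {label: 0x1000 + 4 * i for i, (label, _, _) in enumerate(source)}
      out = []
      for label, op, target in source:
          line = f"{addr[label]:#x}: {op}"
          if target in addr:
              line += f" {addr[target]:#x}"
          out.append(line)
      return out

  def disassemble(source):
      # The normalisation: keep symbolic targets, drop absolute addresses.
      return [f"{op} -> {target}" for _, op, target in source]

  def patch_size(a, b):
      return sum(len(l) for l in difflib.unified_diff(a, b, lineterm=""))

  old = [(f"L{i}", "call", f"L{(i + 7) % 40}") for i in range(40)]
  new = old[:5] + [("Lx", "nop", "")] + old[5:]   # one inserted instruction

  print("raw diff:         ", patch_size(assemble(old), assemble(new)), "bytes")
  print("disassembled diff:", patch_size(disassemble(old), disassemble(new)), "bytes")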
I don't think this is really a problem in this case. In the time it takes even a slow computer, by today's standards, to download a kilobyte over a WAN link, it can easily perform millions of CPU operations on that data. The same is true for any kind of compression, really. Since bandwidth through your pipe is orders of magnitude slower than anything that happens within your machine, this added complexity clearly beats the direct approach. That's why it makes sense to compress files on the server (or on the client uploading them to the server), transfer them, and decompress them on the client, even if the client is quite slow.
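To put rough numbers on it (all figures assumed, just for scale):

  # A ~1 GHz netbook CPU downloading over a ~1 Mbit/s WAN link.
  cpu_hz = 1_000_000_000              # cycles per second
  link_bps = 1_000_000                # bits per second
  bits_per_kb = 8 * 1024

  seconds_per_kb = bits_per_kb / link_bps      # ~8 ms to receive one kilobyte
  cycles_meanwhile = cpu_hz * seconds_per_kb   # ~8 million cycles in that window
  print(f"{seconds_per_kb * 1000:.1f} ms per kB, ~{cycles_meanwhile / 1e6:.0f} million cycles meanwhile")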
I'd rather improve the distribution model. Since packages are all signed, SSL and friends aren't needed for the transfer, nor does the download need to come from a trusted host. BitTorrent comes to mind. I'm quite disappointed that the apt-torrent project never went anywhere. It's clearly the solution.
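Roughly what I have in mind, as a sketch (URLs and filenames made up; assumes the vendor's signing key is already in the local keyring):

  import subprocess, urllib.request

  # Fetch the package and its detached signature from any untrusted source
  # (mirror, peer, torrent)...
  urllib.request.urlretrieve("http://untrusted-mirror.example/pkg.deb", "pkg.deb")
  urllib.request.urlretrieve("http://untrusted-mirror.example/pkg.deb.sig", "pkg.deb.sig")

  # ...then refuse the package unless the signature checks out. gpg exits
  # non-zero on a bad or unknown signature, so check=True aborts here.
  subprocess.run(["gpg", "--verify", "pkg.deb.sig", "pkg.deb"], check=True)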
With patches between minor versions at about 80 kB (as stated in TFA), I don't think distribution over BitTorrent would really be the way to go here. Add to that the fact that Google has quite a lot of bandwidth at their disposal, and I don't see this happening anytime soon.
I agree, however, that it may be a good idea to transfer large Linux packages that way. But with a lot of smaller packages, the protocol overhead of BitTorrent might become a limiting factor in its usefulness.
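Back-of-the-envelope, with an admittedly guessed figure for the fixed per-torrent cost (metadata file, tracker round trip, a few peer handshakes/bitfields); the absolute numbers are assumptions, the ratio is the point:

  fixed_overhead = 5_000                            # bytes, assumed
  for payload in (80 * 1024, 700 * 1024 * 1024):    # an 80 kB patch vs a ~700 MB ISO
      print(f"{payload:>11} B payload -> fixed overhead ~{fixed_overhead / payload:.4%}")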