Comment Confused... (Score 3, Insightful) 72

I'm confused, maybe someone can help clear this up for me:

Amazon plans to increase the ratio of individual contributors to managers by 15% by March-end,

and

"The way to get ahead at Amazon is not to go accumulate a giant team and fiefdom,"

seem to be conflicting statements. Am I misunderstanding something here? Wouldn't fewer managers per individual contributor by definition mean that each manager has more people to manage? A quick worked example is below.
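Here's the arithmetic as I understand it (headcounts invented, just to show why the two statements seem to pull in opposite directions):

echo "scale=2; 150/15" | bc # 150 ICs and 15 managers is a 10:1 ratio
echo "scale=2; 10 * 1.15" | bc # raising the ratio 15% gives 11.5 ICs per manager
echo "scale=2; 150/11.5" | bc # with headcount fixed, only ~13 managers remain, each with a bigger team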

Comment Re:Bold strategy, let's see if that pays off (Score 1) 192

I was going to post something similar. If they can argue that downloading copyrighted materials is legal but sharing is infringing, doesn't that mean the last two decades of lawsuits against residential customers by the *AAs would no longer be valid?

I'm too lazy to look it up, but I recall one of the *AAs using the IP addresses of users downloading from a torrent to try to force ISPs to identify the customers so they could sue them. However, I don't recall that seeding was a requirement; just having downloaded the files was enough.

So then if Facebook wins, and someone puts up an archive of copyrighted material in a country that doesn't care about US copyrights, would it be totally legal to use such a service? Like, someone in China could just put up all the movies ever released in the US, charge $20/month, and US customers would be completely on the right side of the law?

Comment Re:The 80s (Score 1) 46

I'm not sure if you're complaining about the waiting or something else with all the phone and bulletproof glass stuff, but we renewed a couple of passports a month ago and it took about 20 minutes, including filling out all the paperwork and getting pictures taken. True, they did not hand us the passports when we walked out, but they mailed them to us and they arrived the following week.

Honestly confused about the negative tone of the post... Is my experience out of the ordinary, or is there something I don't understand about the filling-it-out-by-hand stuff, or something else?

Comment Re:It's about time (Score 1) 62

It has always been my understanding that .local is for mDNS, not regular DNS; i.e., it is in the same class as link-local addresses like 169.254.0.0/16, which is why it's part of things like Bonjour and zeroconf, for cases where there isn't actual infrastructure and clients need to auto-configure. That is different from private IP addresses, e.g. the 10.0.0.0/8, 172.16.0.0/12 and 192.168.0.0/16 subnets. My understanding is that the .internal TLD is for the latter case, not the former, and that .local is primarily for the former. Does anyone have more information or clarifying context? Even the referenced RFC says multicast DNS.
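For illustration, the mDNS case looks roughly like this with avahi-utils (service name and port invented here, and this assumes avahi-daemon is running on the LAN):

avahi-publish -s myprinter _http._tcp 80 # announce "myprinter" on .local via multicast
avahi-resolve -n myprinter.local # resolves via mDNS, no unicast DNS server involved

No DNS server is configured anywhere; the name exists only because hosts on the link answer multicast queries, which is exactly the 169.254.0.0/16-style no-infrastructure case.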

Comment Re:Explanation (Score 1) 252

3. It needs some kind of middle layer so that you can move applications between displays, and displays between consoles. Think something like screen or tmux. Once you launch an app on a display, it is stuck there.

I know I'm late to the party, but you can do this with xpra. It still works around the X display idea, though, so you can't attach/detach individual windows; instead you start applications attached to xpra, then attach your X display to the xpra session. So it's very much like screen, though I think tmux has more advanced functions for moving windows between tmux sessions.
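Roughly like this (display number invented; assumes xpra is installed on both ends):

xpra start :100 --start=xterm # launch xterm against xpra's virtual display :100
xpra attach :100 # show the session on your current X display
xpra detach :100 # detach; xterm keeps running headless
xpra attach :100 # reattach later, much like screen -r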

Comment Re:Beware Google's penchant for auto-updates... (Score 5, Informative) 197

The OP might not be completely wrong: according to dpkg-query -L google-chrome-beta, the package installs /etc/cron.daily/google-chrome, which apparently adds an extra source to your apt sources and then updates Google Chrome based on settings in /etc/default/google-chrome. It also adds the source to /etc/apt/sources.list.d. Seems a bit invasive to me.
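You can check this on your own machine (assuming the google-chrome-beta package is installed; the exact .list filename is a guess):

dpkg-query -L google-chrome-beta | grep -E 'cron|default' # find the hooks the package ships
ls /etc/apt/sources.list.d/ # look for a google-chrome*.list entry the cron job added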

Comment Been wanting something like this for a long time (Score 1) 386

A lot of the time these days I use rsync to do hard-linked backups, which works mostly well but has some shortcomings. For example, duplicate files across different machines' backups don't get hardlinked to each other, and files that are merely similar can't be hard linked at all, such as files that grow, like log files. More specifically, we have some database files that grow with yearly detail information where everything before the newly added records is identical, so every daily backup burns gigs of space when maybe a few megs have actually changed.
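For reference, the hard-linked rsync scheme I mean looks roughly like this (paths invented):

rsync -a --link-dest=/backups/day1 /data/ /backups/day2/
# unchanged files become hard links into day1's tree; a file that changed
# at all, even a log that only grew by a few lines, is stored in full again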

Initially I liked the way BackupPC handled the situation, pooling and compressing all the files so that duplicate files from different backups were automatically linked together. So I wrote a little script that duplicated the core of that functionality, hardlinking duplicate files together regardless of file stat info, running on top of fusecompress to get the compression too. The main problem is the time it takes to crawl thousands and thousands of files and relink them. On top of that, rsync will not use those duplicate files as hardlink targets in the next backup if the stat info (mtime/owner/etc.) doesn't match, which means the next backup contains fresh new copies of files that have to be re-hardlinked by crawling everything again. Plus you don't get any elimination of partial-file redundancy.
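The relinking pass is conceptually just this (a dry-run sketch of the idea, not the actual script, and it breaks on paths with spaces):

find /backups -type f -exec sha256sum {} + | sort |
awk '$1 == prev { print "ln -f", keep, $2 } $1 != prev { prev = $1; keep = $2 }'
# prints the ln commands that would collapse duplicate content into hard links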

So I looked around some more for a system that would let you compress out redundant blocks, and the closest thing I could find is squashfs, but it's read-only. That's a problem because we occasionally need to purge old daily backups to make room for newer ones. We keep the last 6 months of daily backups available on a server and do daily offsite backups from that, so once a month we delete the oldest month's backups from the local backup server. With squashfs you'd have to recreate the whole archive, which would suck for a terabyte archive with millions of files in it.
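In other words, the monthly purge under squashfs would look something like this (directory name invented), re-packing a terabyte just to drop one month:

mksquashfs /backups backups-new.sqsh -e oldest-month
# squashfs can append new files but never delete, so shrinking the
# archive means rebuilding it from scratch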

At this point I knew what features I wanted but couldn't find anything that did it, so I went ahead and wrote a fuse daemon in python that handles block-level deduplication and compression at the same time. I'm still playing around with it and testing different storage ideas. It's available in git if anyone wants to take a look; you can get it by doing:

git clone http://git.hoopajoo.net/projects/fusearchive.git fusearchive

(note: the above command might be mangled by Slashdot's auto-linking; there should be no [hoopajoo.net] in the actual clone command)

Currently it uses a storage directory with two subdirectories, store/ and tree/. Inside tree/ are files that contain a hash identifying the block list for the file contents, so two identical files consume only the size of a hash on disk, plus inodes. That hash points to the block containing the file's data block list, which is itself a list of hashes of the data blocks. This way any files that share identical blocks (on a block boundary) store the redundant blocks only once, at the cost of a hash each. Blocks are currently 5M, which can be tuned, and are compressed using zlib. So a bunch of small files get the benefit of compression and whole-file deduplication, while large growing files at most use up an extra block of data plus the hash info for the rest of the file.

So far this seems to be working pretty well. The biggest issue I have is tracking block references so a block can be freed when no file references it any more. It works fine currently, but since each block contains its own reference counter, a crash could leave the ref counts incorrect, and unfortunately I can't think of a better, more atomic way to handle that. The other big drawback is speed: it's about 1/3 the speed of native file copying, and from profiling the code, 80-90% of the time seems to be spent passing fuse messages in the main fuse-python library, with a little time taken up by zlib and actual file writes.
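As a toy illustration of the block scheme (not the actual fusearchive code; filenames invented, and this flattens the hash-of-block-list indirection by writing the block list into the tree file directly):

mkdir -p store tree
split -b 5M mydb.dat blk. # cut the file into 5M blocks
for b in blk.*; do
  h=$(sha256sum "$b" | cut -d' ' -f1)
  [ -e "store/$h.gz" ] || gzip -c "$b" > "store/$h.gz" # store each unique block once, compressed
  echo "$h"
done > tree/mydb.dat # the tree entry is just the ordered block-hash list
rm blk.*

Yesterday's and today's copy of a growing file share every block except the tail, so only the changed tail costs new store/ space.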

If I could get something like that from a native filesystem that also supported journaling, so you didn't have the refcount mess, that would be pretty sweet. Plus I wouldn't have to waste time developing and supporting it :p
