Comment I had a kernel root exploit on my Mint for 6 months (Score 1) 206

That's why I mostly stay away from Mint.
Last year there was a Linux kernel root exploit. I tried the exploit and it worked: bang, root shell!
So I waited to see when this would be fixed via the usual upgrade path... nothing happened for 6 months.
Until I finally wanted to use my system, so I looked into why I was still vulnerable while every other distribution was fine.
It turned out I had to run apt-get myself to get a new kernel! That's not "ready for the desktop".
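On a stock Mint/Ubuntu setup that boils down to something like this (a minimal sketch; the linux-generic meta-package name is an assumption and varies by kernel flavour):

sudo apt-get update
sudo apt-get install linux-generic   # pull in the latest kernel image for this flavour
sudo reboot                          # the new kernel only takes effect after a reboot
uname -r                             # verify which kernel is now running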

Come on! All distributions are so proud of saying that fixes get spread quickly, and then along comes Mint saying: "I won't even notify the end user that he should upgrade his X or kernel when it is vulnerable". That's dumb. Mint is wrong, Ubuntu is right.
Result: I don't like Ubuntu and I don't like Mint. Is there a Mint derivative which does this correctly, or do I need to go with Apple?

Comment Re:So what did it do all that time? (Score 1) 409

I'm not sure what you mean by failover. For me failover is active-passive: one node just sits there and starts the application when the other node fails.
If it's some sort of always-live setup where the data needs to be replicated in real time to all nodes, I can confirm that the complexity is not worth the gain in uptime. Such constructions often experience more downtime because of their own complexity than because of the failures they are supposed to protect against.
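For illustration, a minimal sketch of the active-passive takeover loop on the standby node (the host name active-node, the service name myapp, and ping as the health check are all assumptions; a real setup also needs fencing against split-brain):

while true; do
    if ! ping -c 3 -W 2 active-node >/dev/null 2>&1; then   # active node unreachable?
        systemctl start myapp                               # standby takes over the service
        break
    fi
    sleep 10                                                # re-check every 10 seconds
done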

Comment My Mac is only doing VNC to my Linux box (Score 1) 965

I never upgraded the PowerBook, as the next versions of the OS felt like a regression.
I kept it only because some software is not available on Linux. But Wine may be a way out of that.
Recently another laptop made its appearance. That one has Windows and Linux, but I only use Linux and have never had the need to boot into Windows.
The smartphone I bought was not iOS but Android.
So yes, the Apple adventure was nice, but I did not get hooked.

Comment fslint's findup deduplicator (Score 1) 440

Well yes, this is a Linux tool, but still I was quite pleased with its results for 800k files. It took some time, but it did finish.
It's basically a shell script doing what others have suggested: sort by size, then checksum files of the same size. The script is /usr/share/fslint/fslint/findup, and its help text reads:
find dUPlicate files.
Usage: findup [[[-t [-m|-d]] | [--summary]] [-r] [-f] path(s) ...]
If no path(s) are specified then the current directory is assumed.
When -m is specified any found duplicates will be merged (using hardlinks).
When -d is specified any found duplicates will be deleted (leaving just 1).
When -t is specified, only report what -m or -d would do.

When --summary is specified change output format to include file sizes.
You can also pipe this summary format to /usr/share/fslint/fslint/fstool/dupwaste
to get a total of the wastage due to duplicates.
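For example (the /data path is an assumption), one might first do a dry run and then merge for real:

findup -t -m /data    # only report which duplicates would be hardlinked
findup -m /data       # actually merge duplicates using hardlinks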

As it's a single command line with dozens of pipes, the stages run concurrently and can use all cores if needed.
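For illustration, a minimal sketch of the same size-then-checksum idea (this is not the actual findup script, and it uses a temporary file instead of one long pipeline):

find . -type f ! -empty -printf '%s\t%p\n' > /tmp/sizes
# keep only files whose size occurs more than once, checksum those,
# then group identical checksums together
awk -F'\t' 'NR==FNR {count[$1]++; next} count[$1]>1 {print $2}' /tmp/sizes /tmp/sizes |
    xargs -d '\n' md5sum |
    sort |
    uniq -w32 --all-repeated=separate   # adjacent equal checksums = duplicate files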
Some text from the source:

Description

      will show duplicate files in the specified directories
      (and their subdirectories), in the format:

              file1
              file2

              file3
              file4
              file5

      or if the --summary option is specified:

              2 * 2048 file1 file2
              3 * 1024 file3 file4 file5

      Where the number is the disk usage in bytes of each of the
      duplicate files on that line, and all duplicate files are
      shown on the same line.
      Output is ordered by largest disk usage first and
      then by the number of duplicate files.
Caveats/Notes:
      I compared this to any equivalent utils I could find (as of Nov 2000)
      and it's (by far) the fastest, has the most functionality (thanks to
      find) and has no (known) bugs. In my opinion fdupes is the next best but
      is slower (even though written in C), and has a bug where hard links
      in different directories are reported as duplicates sometimes.

      This script requires uniq > V2.0.21 (part of GNU textutils|coreutils)
      dir/file names containing \n are ignored
      undefined operation for dir/file names containing \1
      sparse files are not treated differently.
      Don't specify params to find that affect output etc. (e.g -printf etc.)
      zero length files are ignored.
      symbolic links are ignored.
      path1 & path2 can be files &/or directories

And the code has optimizations like this one:
sort -k2,2n -k3,3n | #NB sort inodes so md5sum does less seeking all over disk

Comment unison is bi-directional (Score 1) 153

unison has already been suggested multiple times.

I used unison. It's perfect to sync from A to B (it only transfers the diffs), then modify B and later sync B back to A.
You can also modify A and B at the same time, as long as it's not the same file; after a sync, A and B are identical again.
You can even sync in cycles: A->B->C->A, with modifications on all three directory trees, and it still works.
Unison also handles deletions on both sides fine.
Hint: use the -group -owner -times flags to preserve ownership, group and timestamps.
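A minimal example invocation (the local path and the ssh host are assumptions):

unison /home/me/work ssh://otherhost//home/me/work -group -owner -times

The double slash after the host name makes the remote path absolute.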
