
Comment Re:rsync? (Score 1) 118

BackupPC is exceptional for Linux, but for Windows and its infamous PST files it is better to use the client/server architecture of the BURP software.

The same server can run both BackupPC and BURP, and you save the whole backup filesystem to tape every six months.

You can use Bacula for the last step if you wish; we use a straight copy to LTO tapes.
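On the client side, the BURP part of this setup mostly boils down to pointing each machine at the backup server. A minimal client configuration sketch — the addresses, names, and include path are placeholders, and the exact option names should be checked against the burp.conf man page for your BURP version:

```
# /etc/burp/burp.conf on a client (illustrative values)
mode = client
server = 192.168.1.10      # the machine also running BackupPC
port = 4971
cname = winclient01        # how the server identifies this client
password = changeme
include = C:/Users         # where the infamous PST files live
```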

Comment Re:Software doesn't really matter (Score 2) 259

Hi there,

For archiving purposes, it is best to never touch the original files. This helps when you have thousands of files and, over the years, have made backups in different places and on different disks.

When you consolidate (and sooner or later you either consolidate or lose your photos/memories), photos that differ only in their EXIF tags are a nightmare: it is hard to tell which copies are good and which are not.

Always prefer programs that do not touch your photos. I recently found that one of the programs I used in the past with an old camera (2002-2005) was nuking the EXIF data when rotating images. I still need to find out which one it was... and damn it to hell.

Now it would be great to write .xmp sidecars for JPEGs, but the last time I tried (a few months ago) I could not make it work with Shotwell (there is only an option to alter the file's own metadata... the horror).
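In the meantime, sidecars can be produced outside Shotwell. A minimal sketch, assuming exiftool is installed (`-o` writing an .xmp file is documented in the exiftool man page; `sidecar_path` and `write_sidecar` are just helper names of mine):

```shell
# Derive the sidecar path for a photo: photo.jpg -> photo.xmp
sidecar_path() {
    echo "${1%.*}.xmp"
}

# Copy the photo's metadata out to an .xmp sidecar, leaving the JPEG untouched.
write_sidecar() {
    command -v exiftool >/dev/null || { echo "exiftool not installed"; return 1; }
    exiftool -o "$(sidecar_path "$1")" "$1"
}
```

Run it over a folder with something like `for f in *.jpg; do write_sidecar "$f"; done`.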

In my case, to consolidate the photo collection, I keep the originals in different folders (thematic, chronological, etc.) and then create symlinks in a directory called "history". Here is a work in progress:

#!/bin/bash
# Build a date-organized tree of symlinks under ./history pointing at the
# original photos, driven by their EXIF data; the originals are never touched.
# Set DEBUG=echo for a dry run. DDIR and DEST were not defined in the original
# fragment; the definitions below are a reconstruction chosen to match the
# ./../../../ link target.
#set -x
EXT="jpg JPG jpeg JPEG"
num=0
for exte in $EXT; do
    for file in $(find . -name "*.$exte" | grep -v history); do
        echo "doing $file"
        # previous file's hour, stripped of any ".estimation_N" suffix (fallback)
        OCHOUR=$(echo $CHOUR | awk -F'.estim' '{print $1}')
        INFO=$(exiftool "$file" | tr '\n' '#')
        MAKE=$(echo $INFO | tr '#' '\n' | grep "^Make")
        [ -z "$MAKE" ] && echo "Problem with $file. Skipping" && continue
        CDATE=$(echo $INFO | tr '#' '\n' | grep "Media Create Date" | awk '{print $5}')
        [ -z "$CDATE" ] && CDATE=$(echo $INFO | tr '#' '\n' | grep "Create Date" | awk '{print $4}')
        [ -z "$CDATE" ] && CDATE=$(echo $INFO | tr '#' '\n' | grep "Date/Time Original" | awk '{print $4}')
        [ -z "$CDATE" ] && CDATE=$OCDATE   # fall back to the previous file's date
        [ -z "$CDATE" ] && echo "error querying file $file" && continue
        CHOUR=$(echo $INFO | tr '#' '\n' | grep "Media Create Date" | awk '{print $6}')
        [ -z "$CHOUR" ] && CHOUR=$(echo $INFO | tr '#' '\n' | grep "Create Date" | awk '{print $5}')
        [ -z "$CHOUR" ] && CHOUR=$(echo $INFO | tr '#' '\n' | grep "Date/Time Original" | awk '{print $5}')
        [ -z "$CHOUR" ] && num=$(expr $num + 1) && CHOUR=${OCHOUR}.estimation_$num
        [ -z "$CHOUR" ] && echo "error querying file $file" && continue
        TYPE=$(echo $INFO | tr '#' '\n' | grep "File Type" | awk '{print $4}')   # kept for inspection
        YEAR=$( echo $CDATE | cut -d':' -f1)
        MONTH=$(echo $CDATE | cut -d':' -f2)
        DAY=$(  echo $CDATE | cut -d':' -f3)
        FNAME=$(echo $CHOUR | tr ':' '-')
        DDIR=history/$YEAR/$MONTH
        DEST=$DDIR/${YEAR}-${MONTH}-${DAY}_$FNAME.$exte
        [ ! -d "$DDIR" ] && $DEBUG mkdir -p "$DDIR"
        if [ ! -L "$DEST" ]; then
            $DEBUG ln -s ./../../../$file "$DEST"
            TGT=$(readlink "$DEST")
            [ "$TGT" != "./../../../$file" ] && echo "Error with $file and $DEST" && exit 1
        fi
        OCDATE=$CDATE   # remember for the next iteration
    done
done

In this way, if your data is on a NAS, you can export it to Kodi or other clients without having to re-tag everything all over again.

I guess you could do the same with tags (people, events) in Shotwell: export the associations and build similar symlinks.

This is not very elegant, but it lets you spot problems and it is very portable: if the Shotwell database becomes corrupted, you do not lose anything...
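One way to sketch that tag export: Shotwell keeps its data in an SQLite file, so tag names can be pulled out with sqlite3 and turned into per-tag symlink directories. The database path, the TagTable table, and its columns are assumptions about Shotwell's schema; verify with `sqlite3 photo.db .schema` before relying on this:

```shell
DB="$HOME/.local/share/shotwell/data/photo.db"

# Turn a tag like "Family events" into a safe directory name under ./tags
tag_dir() {
    echo "tags/$(echo "$1" | tr ' /' '__')"
}

if command -v sqlite3 >/dev/null && [ -f "$DB" ]; then
    sqlite3 "$DB" 'SELECT name FROM TagTable;' | while IFS= read -r tag; do
        mkdir -p "$(tag_dir "$tag")"
        # TODO: resolve the tag's photo id list into file paths and symlink them here
    done
fi
```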


FBI: Wiper Malware Has Korean Language Packs, Hard Coded Targets 81

chicksdaddy sends news that the FBI has issued a warning to U.S. businesses over a "destructive" malware campaign using advanced tools. They don't name specific targets, but the information fits with the details from last week's attack on Sony Pictures, which led to the leak of several unreleased movies. A copy of the FBI's recent five-page FLASH alert reveals that the malware alleged to have wiped out systems at Sony Pictures Entertainment deployed a number of malicious modules, including a version of a commercial disk wiping tool on target systems. Samples of the malware obtained by the FBI were also found to contain configuration files created on systems configured with Korean language packs. The use of Korean could strengthen theories that the destructive cyber attacks have links to North Korea, though it is hardly conclusive. It does appear that the attack was targeted at a specific organization. The malware analyzed by the FBI contained a hard coded list of IP addresses and computer host names.

Comment Systemd seems fine to me at this stage (Score 3, Interesting) 522


I have deployed some Fedora 20 machines in the last three or four months, and so far I have not seen anything that would lead me to cry foul against systemd.

Actually, the handling of user sessions for housekeeping purposes seems much simpler now.

So I don't get all this hate. Maybe I have not looked deeply enough; time will tell.
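For what it's worth, the session housekeeping I mean is mostly logind's doing. A sketch of the knobs involved (both options are documented in logind.conf(5); whether you want them enabled is a policy call):

```
# /etc/systemd/logind.conf
[Login]
# Kill a user's leftover processes when their last session ends
KillUserProcesses=yes
# ...but never reap root's
KillExcludeUsers=root
```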


Comment Re:Fleeing abusive companies? (Score 1) 257

Here in Belgium a registered letter is generally sufficient to cancel a service (e.g. cable).
The last time I changed internet provider I waited for the contract to expire, but I think there are now more consumer-friendly laws and you can switch much more easily.

The general idea is to foster competition between companies by making it easier for a customer to jump ship and vote with his or her wallet.

Of course, other government intervention (forcing the old telecom monopoly to lease its infrastructure at a reasonable price, and now trying to do the same for cable) is a godsend.

You can always argue that the incumbent has the advantage (because you may want to avoid the ping-pong between the virtual operator and the incumbent), but it sure as hell looks infinitely better than what people suffer in the USA.

I have friends who went to work there and were flabbergasted by the internet connections and prices...

Comment Re:Good grief (Score 1) 98

Exactly the last point.

What I dislike most are users who take advantage of others' lack of knowledge. This happens either intentionally, or unintentionally when rules are not enforced.

I would like all the students (often coming into contact with Linux, shell programming, and clusters for the first time) to have a fair shot at using the available resources, and not to backstab each other.

Before, everyone could run on the cluster, until I discovered that certain students were giving their logins to others: the first did not really need theirs (e.g. doing theoretical work) and the second would then run twice as many jobs on the cluster as everyone else.

Comment Re:Just deal with problem users individually. (Score 1) 98


The Beowulf clusters we have run either CentOS or SLES. On the development workstations, where newer versions of certain software are needed, I install Fedora.

  This means the developers basically run production on the cluster and develop on the workstations.

Since there is always a gap between the two (i.e. CentOS 5 on the cluster and Fedora 16 on the workstations before; CentOS 6 on the cluster and Fedora 20 on the workstations now), there has been limited breakage when the cluster is updated, at least so far.

I understand those who push a stable distro everywhere; maybe next cycle I will do the same, who knows.

Submission + - Ask Slashdot: Linux Login and Resource Management/Restriction in a Computer Lab

rongten writes: I am managing a computer lab composed of various kinds of Linux workstations, from small desktops to powerful workstations with plenty of RAM and cores. The users' $HOME is NFS-mounted, and they access it via console (no user switching allowed), ssh, or x2go. In the past the powerful workstations were reserved for certain power users, but now even "regular" students may need access to high-memory machines for some tasks.
I ask Slashdot: is there a resource management system that would permit the following: forbidding the same user from logging in graphically more than once (like UserLock); limiting the number of ssh sessions (i.e. no user using distcc and spamming the rest of the machines, or worse, running jobs in parallel); giving priority to the console user (i.e. automatically renicing remote users' jobs and restricting their memory usage); and avoiding swapping and waiting (i.e. all users trying to log into the latest and greatest machine, so logins should be limited in proportion to the capacity of the machine)?
The system being put in place uses Fedora 20 with LDAP PAM authentication; it is Puppet-managed and NFS-based. In the past I tried to achieve similar functionality via cron jobs, login scripts, ssh and NX management, and a queuing system.
But it was not an elegant solution, and it was heavily hacked together.
Since these requirements should be pretty standard for a computer lab, I am surprised that I cannot find something already written for it.
Does any of you know of such a system, preferably open source? A commercial solution would be acceptable as well.
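Part of what is asked for can at least be approximated with stock pam_limits, which the Fedora PAM stack already runs at login. A hedged sketch — the `@students` group is made up, and per limits.conf(5) the `as` value is in KB:

```
# /etc/security/limits.conf
@students   hard   maxlogins   2         # at most 2 concurrent logins per user
@students   hard   nproc       100       # cap on processes (tames distcc spam)
@students   hard   as          4194304   # 4 GB address-space cap, in KB
```

This handles the per-user login and memory caps; per-machine login quotas and console-priority renicing would still need something on top.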
