Comment Re:There are a few options. (Score 1) 212

Getting the network storage itself running without a single point of failure is easy, especially with a distributed, replicated system. Getting versioning to work without a coordination point for the versioning is a much harder problem to solve.

I want to note that while many people voice distrust of, or even disgust with, FUSE, it shouldn't by itself disqualify a solution. Lots of high-performance filesystems get their POSIX FS layer via FUSE.

Also, while versioning plus clustering is a different problem, the related problem of frequent snapshots on a clustered filesystem is well solved. Gluster, MooseFS, CephFS (part of Ceph), and others support snapshotting just fine. No matter how frequently you snapshot, though, it's not quite the same as versioned files. The two are worth comparing, but one is not a true replacement for the other.
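
For a concrete feel, here's roughly what frequent snapshots look like on two of those. The volume name, mount point, and directory below are made up, and Gluster snapshots also assume thin-provisioned LVM bricks underneath:

    # GlusterFS: volume-wide snapshot of a volume named "myvol"
    gluster snapshot create nightly_20150702 myvol

    # CephFS: a per-directory snapshot is just a mkdir inside the magic .snap dir
    mkdir /mnt/cephfs/projects/.snap/nightly_20150702

Run either from cron as often as you like; they're cheap, but again, they're snapshots of a moment in time, not per-file versions.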

Something like git, which is designed to allow branching and merging, is helpful here, but as has been pointed out, using a FUSE-mountable filesystem like gitfs over a git remote is not the cleanest of solutions either. There are application libraries that handle checking things into and out of git pretty transparently, but altering applications is definitely not the same as doing something at the FS level. All the clients fetch changes from the repo, and the repo can be cloned elsewhere for backup or stored on a clustered FS itself. Since gitfs gives every client all the history, every client has a record of all the versions. I'm tempted to set gitfs up and test it heavily, but I've only barely poked at it before now, so I can't say how sturdy a solution it might turn out to be.
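
For anyone curious, mounting it looks roughly like this. This is untested on my end, the option names are from memory of the gitfs docs (double-check them), and the remote URL, mount point, and repo_path are all made up:

    # Mount a bare repo from another box as a FUSE filesystem
    gitfs ssh://git@backup-host/srv/repos/shared.git /mnt/shared \
        -o repo_path=/var/lib/gitfs/shared,branch=master

    # Writes under /mnt/shared/current/ get committed on close and synced
    # to the remote; /mnt/shared/history/ exposes read-only earlier versions.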

Something like NILFS2 or tux3 handles versions but only knows about its one filesystem. You can gain storage reliability by putting it over something like DRBD to replicate the underlying storage, but it still expects to be the single master of that FS frontend.
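
The DRBD piece is the well-trodden part. A minimal two-node resource looks something like the following, where the hostnames, disks, and addresses are placeholders:

    # /etc/drbd.d/r0.res
    resource r0 {
        protocol C;
        on nodeA {
            device    /dev/drbd0;
            disk      /dev/sdb1;
            address   192.168.1.11:7789;
            meta-disk internal;
        }
        on nodeB {
            device    /dev/drbd0;
            disk      /dev/sdb1;
            address   192.168.1.12:7789;
            meta-disk internal;
        }
    }

Then "drbdadm create-md r0 && drbdadm up r0" on both nodes, and "drbdadm primary --force r0" on whichever node will carry the filesystem.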

So, yeah, it's easy to get clustering with no single point of failure, and it's easy to get file versioning. It's a bit more challenging to combine versioning with no single point of failure for the data storage. Getting the versioning itself to have no single point of failure may be a downright difficult problem.

Putting a distributed parallel FS over a local POSIX FS on each node, and then running a versioning filesystem on those nodes, is, as I said in a previous post, not ideal. It makes versions on each of the storage nodes, but then your version browsing and restoration happens on each individual node. A client app that connects directly to one backend or another, lets you pick a version to restore, and then writes it back through the cluster may be a good-enough workaround in some environments.

Comment Re:There are a few options. (Score 1) 212

I doubt my life will ever depend on the OP's need for a versioning distributed filesystem.

If I had to pick a combination out of all that which I've seen work well with my own eyes, I'd share NILFS2 over CIFS or NFS, possibly with DRBD underneath LVM underneath the NILFS2. I've never done that exact combination: I have run NILFS2 on top of LVM, I have run DRBD underneath LVM, and I've shared lots of things over CIFS and NFS, including NILFS2 and various other FSes on LVM. I just haven't stacked all three with NILFS2 on top. I wouldn't be scared to recommend testing it, but I wouldn't rush it into production that way.

Comment There are a few options. (Score 4, Informative) 212

Contrary to popular rumor, there are a number of ways to do what you want. I can't vouch for all of these combinations working, and I wouldn't be too optimistic about tackling some of them; the more advanced stuff can take quite a while to ramp up on.

If you don't mind FUSE as an intermediary, there's gitfs, which uses git as a file system (which it kind of is anyway, beyond being just a VCS). It creates a new version on every file close. You can point it at a git remote on the same machine or across the network, and the remote can live on any filesystem.

You already found that there are some non-mainline kernel modules for filesystems like next3, ext3cow, or tux3 that do versioning on write. NILFS is actually in the mainline kernel these days (since 2.6.30). What's written about NILFS2 suggests it's somewhat slow, but that it is in fact a stable, dependable file system.
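
If you haven't played with it, the versioning side of NILFS2 is pleasantly boring: it takes checkpoints continuously and you browse them with the nilfs-utils tools. Something like this, with the device name made up:

    lscp /dev/vg0/nilfs                # list the automatic checkpoints
    chcp ss /dev/vg0/nilfs 1234        # pin checkpoint 1234 as a snapshot so it isn't GC'd
    mkdir -p /mnt/old
    mount -t nilfs2 -o ro,cp=1234 /dev/vg0/nilfs /mnt/old   # mount that version read-only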

Subversion has a feature where you put WebDAV in front of it with autoversioning turned on, mount that WebDAV share as a filesystem somewhere, and every write creates a new revision of the file in SVN. That gets you networked and versioned, and it works much like gitfs but over WebDAV. You could, if you wanted, use davfs2 in front of that to treat it like a normal file system again.
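
The server side of that is just Apache with mod_dav_svn and autoversioning turned on, something like the following (the paths and repo name are made up):

    <Location /repos/docs>
        DAV svn
        SVNPath /var/svn/docs
        SVNAutoversioning on
    </Location>

and then on a client, with the davfs2 package installed:

    mount -t davfs https://svn-host/repos/docs /mnt/docs

Every save through that mount becomes a new SVN revision you can diff or pull back out later.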

You can then share any of these over SMB with Samba, or via NFS.
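
That part is the easy, boring bit. Roughly, with the share name, paths, and network made up:

    # smb.conf fragment for Samba
    [versioned]
        path = /srv/versioned
        read only = no

    # /etc/exports line for NFS
    /srv/versioned  192.168.1.0/24(rw,sync,no_subtree_check)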

If you need a really high-end, fast, replicated network filesystem, you can run any of the clustered storage systems that use a storage node's underlying files on top of one of these versioning filesystems, but that puts your revisions underneath everything else rather than on top. Alternatively, there's running something like gitfs with its remote sitting on top of, say, DRBD, XtreemFS, or Ceph (even across CephFS, which presents Ceph as a normal POSIX filesystem). The latter option puts your revisions closer to the user, and then each revision gets replicated.

I've personally never used some of the more exotic combinations listed here. You could in theory put NILFS2 on LVM with DRBD as the physical layer (since DRBD supports being an LVM physical volume) and then serve that filesystem via Samba (CIFS) or NFS, which I would expect to work well enough, if slowly.
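
For anyone who wants to try it, building that stack is only a handful of commands once a DRBD resource is up and Primary on the node (the volume group, names, and sizes here are made up, and again, test it before trusting it):

    pvcreate /dev/drbd0
    vgcreate vg_repl /dev/drbd0
    lvcreate -L 200G -n nilfs vg_repl
    mkfs -t nilfs2 /dev/vg_repl/nilfs
    mount -t nilfs2 /dev/vg_repl/nilfs /srv/versioned

Then export /srv/versioned with Samba or NFS as usual.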

Comment Re:This will NOT half the cost of batteries (Score 1) 214

Well, if someone really wants to pay by the minute instead of $10 per month for unlimited calls to Canada and the US, or $37 per month for unlimited calls to almost anywhere in the world, it's either because they almost never call outbound long distance or because they're not very bright.

I wouldn't consider that plan to be the baseline cost. Most people are going to pick unlimited.

Comment Re:Different types of terms (Score 5, Funny) 175

Something other than Node is likely used for the static parts of a site or for caching. Apache or Nginx are likely candidates. There are endless stack names, and they can be as silly as we want and someone could still build something useful on them. LAMP got coined because the stack was so popular together, with the 'P' being /P(erl|HP|ython)/ in many camps. That doesn't mean they'll all catch on as common, popular stacks.

Some people use BAPP -- BSD, Apache, PostgreSQL, Perl/Python/PHP. Some people use specifically FreeBSD: FAPP. Some people use FreeBSD, Apache, Perl, and SQLite...

Here are some other less common web stacks:

MongoDB, ExpressJS, Linux, AngularJS, NodeJS, Groovy, Erlang
MELANGE

Scala, Python, AngularJS, Zope
SPAZ

Clojure, Linux, Oracle DB, WebGL, Nginx
CLOWN

PostgreSQL, io.js, Scala, Solaris, Erlang, D
PISSED

SQLite, Ubuntu, C, korn shell, io.js, TCL
SUCKIT

Lighttpd, io.js, C, Kademlia
LICK

Apache, Mumps, io.js, R, Ingres, Twitter API, Enterprise JavaBeans
AMIRITE

Comment Re:Stop with the cop bashing. (Score 2) 31

Do you mean we should look at the crime rates that took a precipitous fall in the mid-1990s? Those crime rates you want us to look at? The ones that have been falling ever since, even through the worst recession in decades? How much more spending on putting "broken windows" offenders in expensive prisons do we need to stave off this unfortunate fall in crime? Crime needs to look bad and scary so that people will keep letting their governments spend disproportionate amounts of money imprisoning people for victimless crimes. How else will the prison lobbies bring home the bacon for their masters?

AI

NIST Workshop Explores Automated Tattoo Identification 71

chicksdaddy writes: Security Ledger reports on a recent NIST workshop dedicated to improving the art of automated tattoo identification. It used to be that the only place you'd commonly see tattoos was at your local VA hospital. No more. In the last 30 years, body art has gone mainstream. One in five adults in the U.S. has one. For law enforcement and forensics experts, this is a good thing; tattoos are a great way to identify both perpetrators and their victims. Given the number and variety of tattoos, though, how to describe and catalog them? Clearly this is an area where technology can help, but it's also one of those "fuzzy" problems that challenges the limits of artificial intelligence.

The National Institute of Standards and Technology (NIST) Tattoo Recognition Technology Challenge Workshop challenged industry and academia to work toward developing automated, image-based tattoo matching technology. Participating organizations used an FBI-supplied dataset of thousands of tattoo images from government databases. They were challenged to develop methods for identifying a tattoo in an image; identifying visually similar or related tattoos from different subjects; identifying the same tattoo on the same subject over time; identifying a small region of interest contained in a larger image; and identifying a tattoo from a visually similar image such as a sketch or scanned print.

Comment Re:Meh (Score 1) 124

I'm not sure which Sony scandal you're talking about.

The PS3 alternate-boot thing, where it was sold with the feature and then had the feature stolen, was pretty tangential for most people. The Windows rootkit on normal Redbook audio CDs was not an inconsequential thing.

Creating and then killing UMD, rather than using MiniDisc or SD cards or downloads for PSP games, was pretty tangential to the core purpose of the product. Using Memory Stick Pro Duo cards rather than SD for saves was too, for that matter. Putting a rootkit on the PSP media-encoder software CD was not inconsequential.
