
Comment giant machines are US culture, and world culture (Score 3, Interesting) 107

In the US we love big machines. The Queen Mary, the Spruce Goose, the continuous asphalt pavers, the Liebherr T 282 B giant dump truck (although Liebherr is a Swiss company), the Boeing 747-400 and Lockheed L-1011 wide-body passenger jets, the massive Abrams tank, the Nimitz-class aircraft carriers, and the 280mm towed howitzer M65 "Atomic Annie" are all examples.

See how I slipped a Swiss-built monster in there? Well, the US and Japan aren't the only ones. Germany has a 31-million-pound excavator. The largest plane, the An-225, is built by Ukraine's Antonov. South Korea builds some of the biggest cargo ships.

So while, yes, giant robots are a big thing in Japanese art, the urge to build huge machines is found all over the industrialized world. The US and Germany have never been afraid of large engineering feats. The US has a whole industry of using remotely piloted craft for actual combat.

I don't think Japan needs to focus so much pride on this one little competition as a cultural identity issue. It's not like a US firm is going to enter a contest designing and building a robot with the intent of a face-saving loss or an honorable tie.

Comment Re: There are a few options. (Score 1) 212

"Career" is still hyperbole. A project may fail. It may even be one job at stake. It wouldn't end a career.

So, assuming the stakes are a make-or-break project for an employer, here's what I'd do. First, I'd do an initial evaluation of whether doing this on Linux is actually worthwhile given the alternatives on other Unix platforms. Second, I'd pick something for safety over performance. Given the budget, I'd pay for development on one of the OSS versioning filesystems to add clustering, or on one of the OSS clustering filesystems to add versioning.

I'd probably check whether frequent snapshots would suffice rather than per-file, per-write versioning. That turned out to be the case in this thread. That gives many workable and fairly conventional options on Linux.

If per-write versions were really that important, and it really had to be on Linux, and really had to be shared as well, I'd probably alter my application to write through git libraries at the application level. If not git, then maybe Mercurial or Bazaar. If I didn't have control over the application, I'd look into inotify to do commits based on those writes.
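
Roughly, the inotify route would look something like this untested sketch. It assumes the third-party pyinotify package and the git command-line tool are installed, and that the watched directory is already a git working tree; the path and commit message are made up for illustration:

    import subprocess
    import pyinotify  # third-party package, assumed installed

    WATCH_DIR = "/srv/data"  # hypothetical path; must already be 'git init'-ed

    class CommitOnWrite(pyinotify.ProcessEvent):
        def process_IN_CLOSE_WRITE(self, event):
            # Stage everything and record a new version after each completed write.
            subprocess.call(["git", "add", "-A"], cwd=WATCH_DIR)
            subprocess.call(["git", "commit", "-q", "-m",
                             "auto-commit: %s" % event.pathname], cwd=WATCH_DIR)

    wm = pyinotify.WatchManager()
    wm.add_watch(WATCH_DIR, pyinotify.IN_CLOSE_WRITE, rec=True, auto_add=True)
    pyinotify.Notifier(wm, CommitOnWrite()).loop()

In practice you'd want to debounce bursts of writes and handle commits that race each other, which is exactly the kind of thing I'd want to shake out in testing.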

If it really needs to be in the filesystem, really needs to be on Linux, and really needs per-write versions, I'd use something like NILFS2 on a SAN-backed LVM volume, with read-only access shared out over NFS or CIFS.

No matter what I chose as my primary target, I'd choose a couple other alternatives and test the hell out of all three. I wouldn't greenlight anything for production until I was happy.

Really, if your employer expects anything less stringent than a full testing and development cycle for their production infrastructure, and blames the implementers when overspecified, undertested software ordered by management fails, then you want a new employer anyway.

Security

Amazon's New SSL/TLS Implementation In 6,000 Lines of Code 107

bmearns writes: Amazon has announced a new library called "s2n," an open source implementation of SSL/TLS, the cryptographic security protocols behind HTTPS, secure SMTP, and many other protocols. Weighing in at about 6k lines of code, it's just a little more than 1% the size of OpenSSL, which is really good news in terms of security auditing and testing. OpenSSL isn't going away, and Amazon has made clear that they will continue to support it. Notably, s2n does not provide all the additional cryptographic functions that OpenSSL provides in libcrypto; it only provides the SSL/TLS functions. Furthermore, it implements a relatively small subset of SSL/TLS features compared to OpenSSL.
Privacy

Surveillance Court: NSA Can Resume Bulk Surveillance 161

An anonymous reader writes: We all celebrated back in May when a federal court ruled the NSA's phone surveillance illegal, and again at the beginning of June, when the Patriot Act expired, ending authorization for that surveillance. Unfortunately, the NY Times now reports on a ruling from the Foreign Intelligence Surveillance Court, which concluded that the NSA may temporarily resume bulk collection of metadata about U.S. citizens' phone calls. From the article: "In a 26-page opinion (PDF) made public on Tuesday, Judge Michael W. Mosman of the surveillance court rejected the challenge by FreedomWorks, which was represented by a former Virginia attorney general, Ken Cuccinelli, a Republican. And Judge Mosman said that the Second Circuit was wrong, too. 'Second Circuit rulings are not binding' on the surveillance court, he wrote, 'and this court respectfully disagrees with that court's analysis, especially in view of the intervening enactment of the U.S.A. Freedom Act.' When the Second Circuit issued its ruling that the program was illegal, it did not issue any injunction ordering the program halted, saying that it would be prudent to see what Congress did as Section 215 neared its June 1 expiration."
Safari

Is Safari the New Internet Explorer? 311

An anonymous reader writes: Software developer Nolan Lawson says Apple's Safari has taken the place of Microsoft's Internet Explorer as the major browser that lags behind all the others. This comes shortly after the Edge Conference, where major players in web technologies got together to discuss the state of the industry and what's ahead. Lawson says Mozilla, Google, Opera, and Microsoft were all in attendance and willing to talk — but not Apple.

"It's hard to get insight into why Apple is behaving this way. They never send anyone to web conferences, their Surfin' Safari blog is a shadow of its former self, and nobody knows what the next version of Safari will contain until that year's WWDC. In a sense, Apple is like Santa Claus, descending yearly to give us some much-anticipated presents, with no forewarning about which of our wishes he'll grant this year. And frankly, the presents have been getting smaller and smaller lately."

He argues, "At this point, we in the web community need to come to terms with the fact that Safari has become the new IE. Microsoft is repentant these days, Google is pushing the web as far as it can go, and Mozilla is still being Mozilla. Apple is really the one singer in that barbershop quartet hitting all the sour notes, and it's time we start talking about it openly instead of tiptoeing around it like we're going to hurt somebody's feelings."
Canada

Quebec Government May Force ISPs To Block Gambling Websites 60

New submitter ottawan- writes: In order to drive more customers to their own online gambling website, the Quebec government and Loto-Quebec (the provincial organization in charge of gaming and lotteries) are thinking about forcing the province's ISPs to block all other online gambling websites. The list of websites to be blocked will be maintained by Loto-Quebec, and the government believes that the blocking will increase government revenue by up to $27 million (CAD) per year.

Comment Re:There are a few options. (Score 1) 212

Well, if you want frequent filesystem or directory snapshots rather than per-file, per-change versioning, then your options are much broader.

LVM can do snapshots in addition to disk pooling, and can do a sort of clustering. Btrfs pools disks and does snapshots without the clustering. XFS has indirect support for snapshots -- it lets you freeze the filesystem and take the snapshot with the volume manager. Ceph is a highly available clustered storage system with a POSIX FS face, and it does snapshots. Lustre allows taking LVM snapshots of its MDT and OST filesystems, although doing so often could be a bottleneck. Gluster (since 3.7 or so) actually wraps around LVM's snapshot tools, so long as each "brick" is on a thinly provisioned LVM volume that contains no data besides the brick.
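
For the frequent-snapshot case, the moving parts are tiny; here's a toy loop (cron or a systemd timer would do the same job in practice) that assumes a Btrfs subvolume mounted at /data with a /data/.snapshots directory, both of which are placeholder paths:

    import subprocess
    import time

    SOURCE = "/data"               # hypothetical Btrfs subvolume
    SNAP_DIR = "/data/.snapshots"  # hypothetical destination for snapshots
    INTERVAL = 15 * 60             # snapshot every 15 minutes (arbitrary)

    while True:
        name = time.strftime("%Y%m%d-%H%M%S")
        # Read-only Btrfs snapshot; the LVM equivalent would be something like
        #   lvcreate -s -n <name> -L 1G /dev/vg0/data
        subprocess.check_call(["btrfs", "subvolume", "snapshot", "-r",
                               SOURCE, "%s/%s" % (SNAP_DIR, name)])
        time.sleep(INTERVAL)

Expiring old snapshots is the part that actually takes some thought.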

ZFS does pooling and snapshots. It's supported-ish on Linux both as a kernel module and as a FUSE filesystem. It has much better support on the BSDs and of course on OpenSolaris/Solaris.

It's notable that the Ceph folks recommend Btrfs as the local filesystem underneath Ceph for development and testing, but XFS (or optionally ext4) under it for production at the moment.

Comment Re:There are a few options. (Score 1) 212

Getting the network storage enabled without a single point of failure is easy, especially if using a distributed replicated system. Getting versioning to work without a coordination point for the versioning is a much more difficult problem to solve.

I want to note that while many people voice distrust or even disgust with it, FUSE by itself shouldn't disqualify a solution. Lots of high-performance filesystems get their POSIX FS layer via FUSE.

Also, while versioning plus clustering is a different problem, the similar issue of frequent snapshots plus clustering is well solved. Gluster, MooseFS, CephFS (part of Ceph), and others support snapshotting just fine. No matter how frequently you snapshot, though, it's not quite the same as versioned files. It is often worth comparing the two, but one is not a true replacement for the other.

Something like git that's designed to allow branching and merging is helpful here, but as has been pointed out, using a FUSE-mountable file system like gitfs over a git remote is not the cleanest of solutions either. There are application libraries that handle checking things into and out of git pretty transparently, but altering applications is definitely not the same as doing something at the FS level. All the clients fetch the changes from the repo, and the repo can be cloned elsewhere for backup or stored on a clustered FS itself or whatever. Since gitfs gives every client all the history, all clients have a record of all the versions. I'm tempted to set gitfs up and test it heavily, but I've only barely poked at it before now, so I can't say how sturdy a solution it might turn out to be.
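
To be concrete about the application-library route, here's the shape of it with GitPython (a third-party package; the repo path, file name, and commented-out remote are invented, and I haven't battle-tested this):

    from git import Repo  # GitPython, third-party

    repo = Repo("/srv/data")  # hypothetical existing working tree

    def versioned_write(relpath, data):
        """Write a file and immediately record it as a new git version."""
        with open("/srv/data/" + relpath, "w") as fh:
            fh.write(data)
        repo.index.add([relpath])
        repo.index.commit("app write: " + relpath)
        # Optionally push so another clone holds the history as a backup:
        # repo.remotes.origin.push()

    versioned_write("reports/latest.txt", "hello, versioned world\n")

The obvious cost is that every write path in the application has to go through a wrapper like this.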

Something like NILFS2 or tux3 handles versions but only knows about the one filesystem. You gain storage reliability by putting it on top of something like DRBD to replicate the storage, but it expects to be the single master of that FS frontend.

So, yeah, it's easy to get either clustering with no single point of failure or to get file versioning. It's a bit more challenging to combine versioning with no single point of failure for the data storage. It may be a downright difficult problem to get the versioning itself to have no single point of failure.

Putting a distributed parallel FS that uses a local POSIX FS on each node, and then running a versioning file system on those nodes, is, as I said in a previous post, not ideal. It makes versions on each of the storage nodes, but then your version browsing and restoration happens on each individual node. A client app that connects to one backend or another directly, lets you pick a version to restore, and then writes that version back through the cluster may be a good enough workaround in some environments.

Comment Re:There are a few options. (Score 1) 212

I doubt my life will ever depend on the OP's need for a versioning distributed filesystem.

If I had to pick some combination out of that which I've seen work well with my own eyes, I'd share NILFS2 over CIFS or NFS, possibly with DRBD underneath LVM underneath the NILFS2. I've never done that exact combination. I have run NILFS2 on top of LVM, and I have run DRBD underneath LVM. I've shared lots of things, including NILFS2 and various other FSes on LVM over CIFS and NFS. I've never done DRBD under LVM with the NILFS2 filesystem on top, though. I wouldn't be scared to recommend testing it, but I wouldn't rush it into production that way.

Comment There are a few options. (Score 4, Informative) 212

Contrary to popular rumors, there are a number of ways to do what you want. I can't vouch for all of these combinations working and wouldn't be too optimistic about tackling some of them. The more advanced stuff can take quite a while to ramp up to speed.

If you don't mind FUSE as an intermediary, there's gitfs, which uses git as a file system (which it kind of is anyway, beyond being just a VCS). It creates a new version on every file close. You can point it at a git remote on the same machine or across the network, and that remote can live on any filesystem.

You already found that there are some non-mainline kernel modules for filesystems like next3, ext3cow, or tux3 that do versioning on write. NILFS is actually in the mainline kernel these days (since 2.6.30). More reading about NILFS2 shows that it's somewhat slow, but that it is in fact a stable, dependable file system.
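
For reference, browsing NILFS2's checkpoints is done with the nilfs-utils tools; a rough helper (the device path and checkpoint number are placeholders, and I'm going from the documented tool syntax rather than a setup I've tested) looks like:

    import subprocess

    DEVICE = "/dev/vg0/data"  # hypothetical NILFS2 device

    def list_checkpoints():
        # lscp prints one checkpoint per line: number, date/time, mode, etc.
        print(subprocess.check_output(["lscp", DEVICE]).decode())

    def mount_snapshot(cno, mountpoint):
        # A checkpoint has to be promoted to a snapshot before mounting it.
        subprocess.check_call(["chcp", "ss", DEVICE, str(cno)])
        subprocess.check_call(["mount", "-t", "nilfs2", "-r",
                               "-o", "cp=%d" % cno, DEVICE, mountpoint])

    list_checkpoints()
    mount_snapshot(42, "/mnt/old-version")  # made-up checkpoint number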

Subversion has a feature where you can put WebDAV (mod_dav_svn with autoversioning) in front of it, mount the WebDAV share as a filesystem somewhere, and every write creates a new revision of the file in SVN. That gets you networked and versioned. This works similarly to gitfs but uses WebDAV. You could, if you wanted, use davfs2 in front of that to treat it like a normal file system again.

You can then share any of these over SMB with Samba, or share them via NFS.

If you need a really high-end, fast, replicated network filesystem, you can run any of the clustered storage systems that use a storage node's underlying filesystem with one of these beneath it, but that puts your revisions underneath everything else rather than on top. Alternatively, you can run something like gitfs with its remote on top of, for example, DRBD, XtreemFS, or Ceph (even across CephFS, which presents Ceph as a normal POSIX filesystem). This latter option puts your revisions closer to the user, and then each revision gets replicated.

I've personally never used some of the more exotic combinations listed here. You could in theory put NILFS2 on LVM with DRBD as the physical layer (since DRBD supports that) and then serve that file system via Samba (CIFS) or NFS, which I would expect to work well enough, if slowly.

Comment Re:This will NOT half the cost of batteries (Score 1) 214

Well, if someone really wants to pay by the minute instead of $10 per month for unlimited calls to Canada and the US, or $37 per month for unlimited calls to almost anywhere in the world, it's either because they almost never call outbound long distance or they're not very bright.

I wouldn't consider that plan to be the baseline cost. Most people are going to pick unlimited.
