



Best Format For OS X and Linux HDD?
dogmatixpsych writes "I work in a neuroimaging laboratory. We mainly use OS X but we have computers running Linux and we have colleagues using Linux. Some of the work we do with Magnetic Resonance Images produces files that are upwards of 80GB. Due to HIPAA constraints, IT differences between departments, and the size of files we create, storage on local and portable media is the best option for transporting images between laboratories. What disk file system do Slashdot readers recommend for our external HDDs so that we can readily read and write to them using OS X and Linux? My default is to use HFS+ without journaling but I'm looking to see if there are better suggestions that are reliable, fast, and allow read/write access in OS X and Linux."
UFS. (Score:2, Informative)
UFS would be the best option. Linux has supported it read/write since kernel 2.6.30 (AFAIK), and OS X mounts UFS natively.
Re:UFS. (Score:5, Informative)
Unless you're using Tiger or earlier, UFS is not an option. The last two versions do not support UFS at all. However, HFS+ support in Linux is pretty good. Otherwise you're looking at MacFUSE for ext2/3, which IME is pretty slow and buggy. I think Jobs has gone out of his way to make OS X incompatible with OSes other than Windows. Maybe he's afraid of what will happen if everyone becomes aware they have other choices.
Re: (Score:2)
Just because people regularly bash the company for dumb reasons doesn't mean that every time someone bashes the company it's for a dumb reason.
ObTopic: Just wipe all the OSXen and replace them with Ubuntu 10.04. Install AWN and Compiz, configure keybindings and themes, and no one will know the difference. Then you can use ext4.
4GB per file limit (Score:5, Insightful)
OS X's UFS has a very unfortunate limit: it doesn't support files over 4 GB. Were it not for that, I would format everything (especially USB drives) as UFS.
The lack of commercial-quality disk tools like DiskWarrior, for when a true catastrophe happens, is a problem too. Of course, fsck can do good things, but after a truly catastrophic filesystem failure, DiskWarrior is a must. That was one of the things the professional Mac community had a hard time explaining to the ZFS community.
Since Apple was wise enough to document it completely, to the point that you can even write a full-featured defragmenter (iDefrag), HFS+ without journaling seems to be the best option. I am in the video business and I have seen it handle files way beyond 80GB without any issues. In fact, lots of OS X users who image their drives see that every day too.
I don't know why journaling support hasn't been implemented elsewhere; the journal format is open and documented too. Even if it's a bit of a hassle, it would be worth it, since he is dealing with external drives, which are exactly the kind of media journaling is made for.
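For anyone setting this up, here's a minimal sketch of creating a non-journaled HFS+ external from the OS X side (the disk identifier and volume name are assumptions; check yours with diskutil list first):
diskutil eraseDisk "HFS+" MRIDATA /dev/disk2
diskutil disableJournal /Volumes/MRIDATA
The plain "HFS+" personality is the non-journaled variant, so the second command only matters if the volume was originally created journaled.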
Re: (Score:2, Funny)
iThis and iThat, blah blah blah
How about iJustpukedalittlebitinmymouth?
Re: (Score:2)
It's definitely a perfectly capable, full-featured, modern filesystem.
Re:UFS. (Score:4, Informative)
It's the default filesystem in *BSD, so it's very well maintained, etc. It has journalling (or does it call it "soft updates"?), auto-defrag, etc. You fsck it if you power off without unmounting, but otherwise you won't need to.
It's definitely a perfectly capable, full-featured, modern filesystem.
All the things you write are perfectly true... on *BSD variants where UFS is the native, default FS. That is not the case on either Linux or OS X, to the extent that in OS X v10.6 UFS is now a read-only FS because it's barely maintained.
Most people who think OS X is truly 'native' on UFS because it has BSD heritage haven't tried to actually use it. When Apple bought NeXT in 1997 the UFS implementation was already behind the times because at that time NeXT hadn't been updating its operating system for a few years. Since Apple wanted OS X to be a MacOS upgrade, development resources went into making a robust and high performance HFS+ implementation. Very little was done to modernize UFS. From the outside, it seems to have been just enough effort to make sure it worked and was still bootable over the first few versions, for those who wanted native UNIX FS semantics (mostly case sensitive file names). Then they added case sensitive filename support to HFS+ (it's a format-time option), and since then there has been even less reason for Apple to maintain UFS, hence its transition to a read-only legacy format.
The other piece of this picture is that UFS != UFS. The UFS in MacOS X is a mildly upgraded version of mid-1990s NeXT UFS (which, in fine BSD tradition, wasn't quite the same as the UFS found in other BSDs). It's almost certain it has few of the features you associate with modern versions of UFS.
Re: (Score:3)
Every filesystem warrants the occasional check. If you never check, there are lots of errors that can accumulate and burn your ass.
Re: (Score:3, Funny)
...storage on local and portable media is the best option for transporting images between laboratories. What disk file system do Slashdot readers recommend?
Every filesystem warrants the occasional check. If you never check, there are lots of errors that can accumulate and burn your ass.
Methinks you may be plugging your portable media into the wrong place... then again, I've never tried that, so I could be wrong. ;-)
Re: (Score:2)
But I thought they were Plug and Play!
(oh god, did I just say that?)
Re: (Score:2)
But I thought they were Plug and Play!
(oh god, did I just say that?)
LOL!!!!! Gives the phrase a whole new meaning... and I think I will be handling my customers' external hard drives (or should I have put "external hard drives" in quotes?) with gloves from now on... ;-)
Re: (Score:2)
MR scanners usually produce individual files that are smaller than a MB. I think the poster was referring to the total size of the dataset.
It's quite possible that when they analyze the images they put them in a format where individual files are considerably larger though. It's a pain to do 3D, 4D or 5D analysis on a set of 2D files.
Re: (Score:2)
Which one? FAT12 and its 32MB limit? One of the FAT16 versions? FAT32 with its 4GB file-size limit? FATX? exFAT? TFAT? TexFAT?
Followup question... (Score:3, Informative)
I have a similar problem, albeit on a smaller scale. I use unjournalled HFS+.
However, the problem is that HFS+, being a proper Unix filesystem, remembers UIDs and GIDs, which are usually inappropriate when the disk is moved.
Is there any good way to get Linux to mount the filesystem and give every file the same UID and GID, like for non unix filesystems?
Re: (Score:3, Informative)
Many filesystems support uid= and gid= options in their mount command (including HFS). Just add that to a mount script or set it up in fstab.
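For illustration, a rough sketch of both approaches on the Linux side (device node, mount point, and IDs are placeholders; adjust for your setup):
sudo mount -t hfsplus -o uid=1000,gid=1000,umask=022 /dev/sdb2 /mnt/mri
or the equivalent /etc/fstab line:
/dev/sdb2  /mnt/mri  hfsplus  uid=1000,gid=1000,umask=022  0  0
With those options, every file on the volume appears to be owned by the given UID/GID regardless of what the disk itself recorded.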
Re:Followup question... (Score:5, Informative)
Non-native filesystems usually let you set UID, GID, and permission masks. Check the "mount" manpage and look for the filesystem you want. You might also try "man filesystem"
Re: (Score:2)
On my computer, at least, I think he meant "man filesystems" (plural) since man filesystem is about the signaling event, which isn't very helpful at all.
HIPAA Constraints? (Score:5, Interesting)
By "HIPAA Constraints" I assume you mean the privacy rule. I would think that this rule would prevent you from using sneakernet to transmit files. Unless you're encrypting your portable disks, and somehow it doesn't sound like you are.
Fun reading:
http://www.computerworld.com/s/article/9141172/Health_Net_says_1.5M_medical_records_lost_in_data_breach [computerworld.com]
Re: (Score:2)
That was my first thought as well. And as much as I hate to say it, FAT32 might be the best option. Either that or UFS.
FAT32 is a nightmare waiting to happen (Score:2)
Most of the files they produce involve an actual patient, sometimes in critical condition, who has had to lie in something like a grave for an hour at a time.
If one of the known issues with that filesystem (archaic junk which should never have been released) strikes, it will be a nightmare to restore the data, while it is easy on journaled HFS+ or even NTFS.
I own a Symbian phone, and trust me on this: if there were a $50 utility just to get rid of the FAT32(!) junk putting the data on my memory card at risk, I would happily buy it.
Re: (Score:2)
Better to just use RAR and compress them with a recovery record, multipart volumes, and a password.
Re: (Score:2)
No problem, that won't take any time at all...
Re: (Score:2)
Yeah, it certainly would.
I found an eSATA bracket that plugs into a normal SATA port on the motherboard. It fit just fine on the back of my server's Supermicro 1U chassis.
My external drive has eSATA as well. I've tested sustained data rates over 100MB/s; definitely superior to USB in that respect.
Re: (Score:2)
I prefer Stacker, and their memory compressor for Windows 3.11 was awesome too. Still, this wouldn't solve the maximum file size issue that RAR's multipart volumes do solve.
Re: (Score:2)
Oh yeah, don't put it on one 80GB disk drive, put it on forty 2GB thumb drives. Nothing can go wrong with that.
If these guys are going to be transferring a lot of 80GB files, they have to find a way to do it over the network securely and reliably. Not easy, I admit, but using sneakernet for this kind of data (even without the huge file sizes) is asking for trouble.
NTFS is undocumented and read only on OS X (Score:2)
As Apple didn't want to deal with "OS X deleted my NTFS drive" complaints, they support NTFS read-only by default. NTFS-3G and other utilities can of course read/write, but that doesn't change the real reason NTFS is unreliable to support: it is _not_ documented.
HFS+, on the other hand, is completely documented. Apple wins in this case because of openness and its genuine discipline in keeping things backwards/forwards compatible, with complete documentation.
Nobody had to, or has to, reverse engineer it.
Re: (Score:2)
FAT32 was pretty fat when it came out 15 years ago. Nobody even had 2GB drives, never mind 2GB files.
Re:HIPAA Constraints? (Score:5, Informative)
HIPPA data is often encrypted when placed on tape or transported across systems, but that's because such activities may involve the data being visible to unauthorized people.
IMHO, wise use of sensitive data on laptops requires encryption at the filesystem level. It's neither difficult nor time-consuming, but given how much sensitive data has been exposed by folks losing or misusing laptops, it ought to be a no-brainer. Sadly, too few places bother.
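As a concrete sketch of filesystem-level encryption for a portable disk on Linux, here's the LUKS route (the partition /dev/sdb1 and the names are placeholders). Note that OS X can't read LUKS natively, so for this thread's cross-platform scenario something like TrueCrypt, mentioned elsewhere here, would be the usual pick:
sudo cryptsetup luksFormat /dev/sdb1
sudo cryptsetup luksOpen /dev/sdb1 mri_secure
sudo mkfs.ext3 /dev/mapper/mri_secure
sudo mount /dev/mapper/mri_secure /mnt/secure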
Re: (Score:2)
HIPPA mandates who can and should have access to the files. The method of storage (disk, tape, SSD, paper, whatever) is largely irrelevant.
Say what? You've never heard of data breaches from lost or stolen portable hardware? See the link in the post you replied to.
Re: (Score:2)
Even though you speak as someone knowledgeable and authoritative about HIPAA, I have a hard time believing you since you apparently don't know how to spell it.
Well, as someone who is knowledgeable about it, he's pretty much right. But the sad part is, any encryption suffices to be HIPAA compliant. I've run into some pretty lame-ass setups where such data was being stored on ancient Windows Server machines behind a "firewall" that qualified as meeting HIPAA requirements. The whole setup probably did (the part I saw did) but, in realistic terms, it was still highly insecure.
HIPAA seems to be part "let's make an attempt - it doesn't matter if it's a good one" and part
Re: (Score:3, Interesting)
Maybe instead of using a portable disk, they could whip up a nettop running Linux and transfer files over gigabit Ethernet...
Then they could do transfers via Samba or rsync+SSH, and the nettop could transparently take care of encrypting the underlying FS, whatever that may be.
Performance wouldn't be great: maybe 20MB/s instead of 60MB/s for an eSATA drive, and they'd have to work out a consistent network port / IP across all the sites it travels to. But it might confer some advantages.
Along similar
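If they went the rsync+SSH route, the transfer step might look like this sketch (hostname, user, and paths are all made up):
rsync -avP -e ssh /data/scans/ labuser@nettop:/srv/scans/
rsync only resends what changed, which helps a lot when an 80GB transfer gets interrupted partway through (see also --partial).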
Re: (Score:2)
I completely understand the red tape.
Our scientists have been having similar problems. I believe that the real solution here is to stop these guys from working on their local machines with the full sized datasets. We've provided a centralised HPC system that is connected via infiniband (and others) to multiple architectures of storage.
There is the standard /home which is DMF'ed with the top tier being 50T of total 650MB/s write (not sure of the read stat - I'm the software guy not the hardware guy). This
Re: (Score:2)
In case you need help convincing the hierarchy and you need a little ammunition to get a decent, scalable, centralised solution, you will find allies in:
Engineering: find those who teach and apply for grants doing any kind of FEA work, the robotics people,
Physics: the medical imagers, users of Geant4 and BEAM, the biomechanical people,
Comp Sci: talk to anyone related to the document searching/indexing areas, machine learning, etc.
Chemistry: search your local paper repository for those that have someone from your math
Re: (Score:3, Interesting)
Yeah, then it sounds like you're pretty much doing the best you can under the circumstances... I was just trying to think out of the box a bit and turn your filesystem compatibility problem into a file server compatibility problem, since cross-platform compatibility is a much bigger deal in the latter scenario.
One last consideration you might want to try benchmarking is storing your data in an image file, like a zip or tgz or, more likely, a dmg archive... that way you could probably get transparent compression
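For the dmg route, a minimal sketch (folder and file names are assumptions); UDZO is hdiutil's zlib-compressed read-only image format, which gets you transparent compression on the Mac side:
hdiutil create -srcfolder scans -format UDZO scans.dmg
hdiutil attach scans.dmg
The catch is that Linux can't mount a dmg nearly as readily, so this mostly helps for Mac-to-Mac transport.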
Re: (Score:2)
By "HIPAA Constraints" I assume you mean the privacy rule. I would think that this rule would prevent you from using sneakernet to transmit files. Unless you're encrypting your portable disks, and somehow it doesn't sound like you are.
Fun reading:
http://www.computerworld.com/s/article/9141172/Health_Net_says_1.5M_medical_records_lost_in_data_breach [computerworld.com]
You would be surprised at how outdated parts of HIPAA are (from the day they were written), and what things they fail to cover. Heck, there are sections that indicate the requirement for data encryption for certain uses/storage/etc., but that's about the extent of it. ANY encryption will do to pass muster; a simple substitution key would pass the required criteria. Then there are sections that are very specific in specifying methods that are useless... while others at least seem to have been thought out. There are
Re: (Score:2)
The biggest issue comes in dealing with multiple IT departments and setting up network access to our materials. Plus our images are so large that for these processed files (not the originals) we are opting for local storage instead of storage managed by our IT staff (who are wonderful but not cheap; we just purchased 4TB of local storage for 1/4 the cost of 1TB from IT).
Dude, there's a reason network storage is more expensive than local storage: it comes with the infrastructure that allows lots of people to access it. If you try to serve up these large files from your local network, you'll slashdot the thing, and wackiness will ensue.
Getting back to the privacy issue: I hope your privacy officer did due diligence, and isn't some overworked functionary who just said, "The data is anonymized? Well, that's OK then." You wouldn't be the first people to distribute data they thought was anonymized.
Re: (Score:2)
We opted to go with local, portable storage because only 4 people need or have access to these particular image files on three computers (we have 2 more collaborators that might need access but we
Re: (Score:2)
I use TrueCrypt to transport patient data to/from doctor's offices.
Re: (Score:2)
But.... but it involves COMPUTERS! It's completely different, we need new rules!
(that said, it's far easier to pocket a USB drive (or just copy it) and run than a folder full of files or some X-ray prints)
X-Ray and iPod? (Score:2)
I heard it is almost standard procedure in the X-ray community to use iPods for X-ray format images, and that is why there are several OS X utilities supporting it.
I guess the first reason was the gigantic (for the time) storage size of the iPod, plus you can also use it for music.
NTFS (Score:4, Interesting)
There is NTFS-3G for Linux and Mac OS X [sourceforge.net]
There is also an EXT2 Fuse FS (for Mac OS), and probably many other options.
Having said that, I have never had a problem with Linux's HFS+ write support.
Re:NTFS (Score:4, Funny)
Windows doesn't play in here, it's OSX and Linux. Tossing NTFS into that would just be... wrong somehow.
Re:NTFS (Score:4, Informative)
Windows doesn't play in here, it's OSX and Linux. Tossing NTFS into that would just be... wrong somehow.
Flamebait mod or not, there is a valid point. Though various NTFS drivers do allow read/write, their success isn't carved in stone. There are better alternatives in the Linux/OS X world. Keep in mind that losing this data becomes either costly (as in time=money; let's go make another set of copies to run to whatever office) or very bad (as in someone moved the files to the external instead of copying them) or both.
So, as good as the NTFS R/W drivers are getting, it's safer to use a file system that is known to be more stable and less error-prone, such as HFS+ or UFS or one of the other suggestions. "Really good" shouldn't be an option in the medical world when "even better than 'really good'" is available, compatible, and easy to install on all the systems involved.
Re: (Score:2)
Woa woa, calm down. I don't need an AC freaking out on my behalf :)
I can handle the karma hit. On this post too.
(thanks though)
ext2 works. ntfs works. (Score:2)
Mac OS and Linux both have support for NTFS through NTFS-3G [tuxera.com]. Mac OS has support for ext2 through fuse-ext2 [sf.net].
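Basic usage of both is a one-liner; these are sketches with made-up device nodes and mount points:
ntfs-3g /dev/sdb1 /mnt/ntfs (Linux, or OS X with MacFUSE)
fuse-ext2 /dev/disk2s1 /Volumes/ext2 -o rw+ (OS X; rw+ turns on its still-experimental write support)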
Re:ext2 works. ntfs works. (Score:4, Informative)
If it's Mac OS X 10.6.x, you don't even need NTFS-3G, as the native NTFS driver has read / write capability. You just need to change the /etc/fstab entry for the volume to rw, and remount.
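For reference, the commonly cited fstab entry looks like this (the volume label is a placeholder, and note the corruption warning in the reply below):
LABEL=MRIDATA none ntfs rw
Unplug and replug the drive (or remount) afterwards and the volume comes up read/write.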
Re:ext2 works. ntfs works. (Score:5, Informative)
This is dangerous advice. There are numerous reports of instability and NTFS volume corruption when forcing 10.6 to mount NTFS volumes R/W. Apple seems to have turned NTFS write off by default for a good reason, it's not done yet.
HFS+ unjournaled is best; MacFUSE also works (Score:2)
I have a similar scenario and I think unjournaled HFS+ is best for yours. FAT32 would be even worse. You are fortunate not to have to support Windows. Ideally I would use NFS and file sharing instead of external disks, but shipping a disk is always better than transferring large amounts of data over the net.
Another option is to install MacFUSE [google.com] and then mount other file systems. This is what I do when NTFS is required. For my Linux systems I love ext4; if you need an older file system, use XFS; ext3 is stable
NTFS (Score:2)
It sucks, but NTFS might just be the best option. OS X and Linux have both had stable enough support for years. The main pluses over FAT32 are journaling and support for files > 4GB. Using UFS is dangerous (or at least has been until very recently) because there are so many different variants of it (Solaris, BSD, OS X, etc.) that Linux support is notoriously troublesome. An extra plus of NTFS is that you can easily use it on Windows machines as well.
Reiser? (Score:5, Funny)
I would have recommended ReiserFS, but the data might get buried somewhere and the system would not remember where it was....
Re:Reiser? (Score:4, Funny)
That's pure FUD. ReiserFS can recover anything, even something it allegedly never stored. "Oakland homicide detective Lt. Ersie Joyner recalled that Reiser led them directly to the exact site, without any hesitation or confusion."
the question of our age (Score:5, Insightful)
who will wooosh the woooshers?
No Filesystem (Score:5, Informative)
Re: (Score:3, Informative)
With that said, tar is a bad solution because it doesn't include any type of CRC or encryption. But it's a good idea, and certainly a million times better than a file system of some type.
True, but simply hashing the file at both ends solves that. Both linux and mac support shasum.
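A quick sketch with an assumed file name:
shasum -a 256 scan.tar > scan.tar.sha256 (on the sending machine)
shasum -a 256 -c scan.tar.sha256 (on the receiving machine)
Both ends ship with shasum: OS X includes it, and it comes with the Perl installed on essentially every Linux distro.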
Re: (Score:3, Informative)
As to encryption, you just encrypt the file before you tar it. In fact, with gpg you get both encryption and integrity checking.
Gnupg is available in Mac Ports and comes with just about every linux distro.
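For example (file name assumed), using symmetric encryption so the labs only have to share a passphrase rather than exchange keys:
gpg --symmetric --cipher-algo AES256 scan.tar (produces scan.tar.gpg)
gpg --output scan.tar --decrypt scan.tar.gpg
gpg checks the built-in integrity protection (the MDC) automatically on decryption.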
Rubbish (Score:5, Informative)
You're storing it in the wrong format: there are all sorts of tools to convert to Analyze or DICOM format, which give you a manageable frame-by-frame set of images rather than one huge one. Most tools that manipulate MRI data expect DICOM or Analyze anyhow (BrainVoyager, NISTools, etc.).
If you really want to keep it all safe, use tarfiles to hold the structured data, although if you do that you've made it big again.
Removable media are a daft long-term storage choice; use removable media only as an ad-hoc solution (or, more ideally, scp) to move the data.
Who cares? (Score:2)
No, seriously, who cares? This is a process designed to save files that are then transferred via sneakernet. While moderately large at 80GB, they're not huge by modern standards. If you have a current solution that works, stick with it.
If, however, there are other constraints that are affecting you (transfer speed, decades-long retention on local media, security, etc.), then by all means let us know. Until then, to use the obligatory car analogy, it's as if you'd said:
Due to the distance between my house and work, I currently use an automobile to go between the two locations and to perform various other services. Currently I use a Honda Accord. What would you suggest?
NFS over SSH (Score:3, Interesting)
Just tunnel NFS over SSH. I can't imagine sneakernetting files around the office ever being secure. If you need to encrypt the data at rest, then either encrypt on the client or leverage an encrypted filesystem or a Decru-type appliance.
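A rough sketch of the tunnel, assuming NFSv4 (which multiplexes everything over a single TCP port) and made-up hostnames and paths:
ssh -f -N -L 3049:localhost:2049 user@fileserver
sudo mount -t nfs4 -o port=3049 localhost:/export /mnt/scans
NFSv3 is messier to tunnel because mountd and the lock daemons live on separate ports.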
UDF (Score:2, Informative)
I'm using a USB disk formatted under Linux with UDF (yep, it's not limited to DVDs; there is a profile for hard disks). It can be used without problems under OS X (even Snow Leopard).
Re:UDF IS ACTUALLY A SOLUTION (Score:5, Informative)
OK, everybody's occupied with surreal suggestions, but anyway:
*UDF* is quite awesome as an on-disk format for Linux/OS X data exchange, because it has a file size limit around 128TB and supports all the POSIX permissions, hard and soft links, and whatnot. There is a nice whitepaper summing it all up:
http://www.13thmonkey.org/documentation/UDF/UDF_whitepaper.pdf
If you want to use UDF on a hard disk, prepare it under Linux, assuming the disk is /dev/sdb (that's right, UDF takes the whole disk, no partitions):
1) Install udftools
2) Wipe the first few blocks of the hard disk, i.e. dd if=/dev/zero of=/dev/sdb bs=1k count=100
3) Create the file system: mkudffs --media-type=hd --utf8 /dev/sdb
If you plug this into OS X, the drive will show up as "LinuxUDF". I have been using this setup for years to move data between Linux and OS X machines.
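Side note: if you'd rather the drive not show up as "LinuxUDF" everywhere, mkudffs can set the label at creation time; the label here is a placeholder:
mkudffs --media-type=hd --utf8 --vid=MRIDATA --lvid=MRIDATA /dev/sdb
(Setting both --vid and --lvid covers whichever identifier a given OS decides to display.)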
Re: (Score:2)
Give the man a cigar. I was struggling through all the other suggestions, every single one of them involving unacceptably horrible tradeoffs, and finally get to this post, the only idea that is not just mind numbingly brain dead. I don't even use OSX any more (finally cured that brain disease), and I'm gonna check this out.
Re: (Score:3, Insightful)
That is an excellent solution, and arguably the best for the OP's problem posted here. UDF works on Windows, OS X, and Linux. Even AIX is happy with it and can write to it. So an external drive with this on it should definitely solve the problem.
NAS device (Score:3, Insightful)
A simple NAS enclosure or NAS device might be what you are looking for. You can get a single drive NAS enclosure, and add a drive, that you can carry around just like a regular portable drive. You can move it between networks and use any connection method the NAS device happens to implement (SMB, FTP, NFS, etc). Some even let you optionally connect it directly via USB or eSATA to access the file system directly, and some may have encryption or other security features as well.
Of course, check to make sure you have permission and that connecting things to your network does not violate any policies. If connecting a network device directly to your network is not permitted, then perhaps you can add a second, dedicated network card to the computers.
do not use a filesystem (Score:2)
Treat the disk as if it were a tape, and use the GNU version of cpio.
You can install GNU cpio via macports on your Macs, and people with Linux should find it either already installed or available in their distribution's package system.
You need to use GNU cpio instead of the BSD cpio that ships with OS X because there are incompatibilities between the two, and I was unable to find a set of settings that would make them compatible. (There are settings that should, but they did not work, so there's a bug in there somewhere.)
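A sketch of the idea with GNU cpio (directory and device names are placeholders). One caveat worth checking before trusting it with scans: the common newc archive format stores member sizes in a fixed-width field that caps individual files at 4GB, so for single 80GB files tar or a raw dd is the safer serializer:
find scans -depth -print0 | cpio --null -o -H newc > /dev/sdb
cpio -i -d -H newc < /dev/sdb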
filesystem is largely irrelevant (Score:2)
I do my fair share of transferring large neuroimaging datasets around from time to time, although I don't do it regularly. If you want to use hard drives that aren't connected to anything in transit, then I have to agree with whoever suggested doing it without a filesystem. I've always found that to be the easiest way to get around filesystem (and sometimes operating system) idiosyncrasies, whether you're writing to a DVD or a hard drive or whatever. If you can (de)serialize your data easily (using tar),
not a question of file system as such - use a NAS! (Score:2)
I have a similar problem with backups in my paperless medical practice: I always need a working system off-site for emergency replacement, and here in rural Australia doing it via the Internet is impossible due to the lack of networking infrastructure and ridiculous bandwidth costs.
I use a QNAP NAS (TS659). They also come as tiny handy cubes with 2.5" disks instead of the 3.5"
That makes the question of the file system irrelevant, since it communicates with just about any operating system through standard protocols
NFS (no wait, I DID read it) (Score:2)
The best cross-platform (Linux+MacOS) filesystem is NFS, wh-- stop hitting me, I DID read the whole question. OK? So, as I was about to say, use NFS. When the techno-ignorant HIPAA people watch what you're doing, just send 80 gigs of /dev/random (bonus: it looks encrypted; the HIPAA guys will love that) to the removable drive, and when you're copying off that drive, send to /dev/null. Meanwhile, as the drive's contents are going to your lame software-emulated null device, also be reading the file off the e
Compression? (Score:2)
Just curious. Is the 80GB after applying lossless compression to the image set? If not, there's no good reason to store it uncompressed.
As for your question, I agree with those that say to skip the filesystem. Just use tar and a block device.
-Randall
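Something like this, with device and directory names assumed; GNU tar's default format handles files well past the old 8GB ustar limit:
tar -cvf /dev/sdb scans/ (write the archive straight onto the raw device)
tar -tvf /dev/sdb (list the contents back to verify)
tar -xvf /dev/sdb (restore on the other end)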
Been There, Fixed That (Score:4, Interesting)
We had almost exactly the same problem. Our fMRI work was done at the University of Virginia on a Linux machine. Naturally, you don't want to tie up a $1500/hour data collection machine doing analysis. Our data was transferred immediately to a multiboot machine at the Neurological Institute. No patient data was included at this point, so no HIPAA problems. The receiving box ran Linux initially, since the analysis programs from NIH (primarily AFNI) were Linux based. Patient data got added here, so HIPAA became an issue.
The machine had multiple hard drive bays, all of which were removable, plug-and-play drives made from a kit that provided slide-in rails and a locking mechanism, but otherwise were common commercial drives. Externals would have been easier, but the guy who devised this had a rilly rilly good reason. I remember it was good, but not what it was. Anyway, the machine could boot other OSes, prep the drives, go back to the native Linux HFS+, and transfer/translate the data; once it was transferred, the drive was removed, packaged, and FedEx'd to the other analysis sites at Virginia Tech, NIH, and UVa Wise. We were strictly experimental, no direct medical treatment, and so time was not an issue.
With OS X being *nix, there's not a lot of reason to go with one over the other except for convenience with respect to what your data collection and analysis are running under. Unless yours run fine under OS X, I'd say stick with HFS+, and of course moderate that according to whether you have to share out the data and what those people are running. I wouldn't bother with supporting Windows, as it continually finds new problems to have with large files. One comparison test showed no difference in analysis results, but they did have problems with Windows choking on the data files, and their test files were only 1.5 GB. Ref: J Med Dent Sci. 2004 Sep;51(3):147-54. Comparison of fMRI data analysis by SPM99 on different operating systems. PMID: 15597820. My experience agreed with their results. As I said, we had little call for Macs, so we didn't run enough of that to give a good test of whether they had the same kind of problems.
Bottom line: we used what we needed to according to where the data was going and what they needed it to be, but for our own use it made no sense to transfer it out of the filesystem that collection and analysis used, HFS+. The system met with the approval of the biophysicist we worked with at UVa, and he had been a grad student under Peter Fox when the latter developed SPM.
OH YEAH: the good reason. If anyone else wanted to work with us, they didn't have to dig too deeply into techie stuff, either hardware or software. We could send them a removable-drive kit to install, then send them a drive with bootable Linux, AFNI, and data, all plug and play. If that might be useful to you (using externals instead of removables doesn't matter here), that's probably another vote for HFS+.
Don't know if broke (Score:3, Informative)
Re: (Score:2, Informative)
And the 4GB file size cap...
FAT32 is a fucking horrible idea in his case. (Score:3, Informative)
How the fuck is he supposed to store 80 GB files on a filesystem that maxes out at 4 GB?
Re: (Score:3, Funny)
Sorry man, but your use of the "f" word is totally inadequate in this conversation. Let me correct you:
How the fsck is he supposed to store 80 GB files on a filesystem that maxes out at 4 GB?
Much more in-context, eh?
Re: (Score:2)
Indeed. You completely missed the 80GB file part.
Do *nix and OS X support exFAT at all? If they do, then that -should- work. But it's not really a good solution.
Re:NTFS (Score:5, Funny)
Hah! In my company we call it "NoTeFíeS" (for non Spanish-speaking people: "Don'tTrustIt").
Re:NTFS (Score:5, Funny)
Roses are red,
Violets are blue,
All of my mod points
Would belong to you.
Re: (Score:2)
NTFS is actually not a bad option. Ubuntu 10.04 supports it out of the box (sic); OS X supports read by default but not write. Several companies supply cheap RW drivers for NTFS on OS X.
Re: (Score:2)
With 10.6, the supplied NTFS driver can do read / write, but it's not supported by Apple. You just need to change it to RW in /etc/fstab.
Re: (Score:2)
Have you got any source to confirm this or are you just pulling it out of your ass? Why would that not have been made wider knowledge?
Re: (Score:2)
Quote from fstab.hd:
"IGNORE THIS FILE.
This file does nothing, contains no useful data, and might go away in
future releases. Do not depend on this file or its contents."
...which leaves me wondering, WTF?
Is the rest of the file there like in Linux? Is there a "rest of the file"?
Re: (Score:2)
I have heard good things about that, and I think the last college I went to had that in use so the Windows boxes would be able to grok HFS+.
I say slap that on the Mac and call it done. Since it is a commercial product, if there are any glitches, you can blame them on that product, so the $40 pays for some CYA.