Comment Re:Why I use ZFS/Solaris in production for Postgre (Score 1) 235
It's a good blend of both reads and writes.
We have tables with as many as 100 million records, and where Solaris/ZFS seemed to help massively was the big reads for reporting. We've indexed pretty aggressively, even going so far as to index expressions, and managed to pull amazing performance considering the concurrency we see, from a free database. (Which, for the record, has never given us any problems... Postgres has been rock-solid.)
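For anyone who hasn't used them, an expression index in Postgres looks roughly like this. This is just a sketch; the database, table, and column names (mydb, orders, created_at) are made up for illustration:

```shell
# Hypothetical example: an expression index so reporting queries that
# filter on a computed value can hit the index instead of scanning.
psql -d mydb -c "CREATE INDEX orders_created_month_idx
                 ON orders (date_trunc('month', created_at));"

# A reporting query written against the same expression can then use it:
psql -d mydb -c "SELECT count(*) FROM orders
                 WHERE date_trunc('month', created_at) = '2009-01-01';"
```

The catch is that the query has to use the exact same expression as the index for the planner to pick it up.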
For the most part it ran "ok" on Linux, but the bump we got from testing on Solaris with ZFS on identical hardware and similar configs was nothing short of amazing.
One of the big differences between the two configs: we disabled the RAID controller (a Dell PERC 6/i) and ran JBOD instead of RAID 1+0, letting ZFS handle the striping and mirroring itself. I haven't tried a striped configuration on Linux with a similar setup, even without compression. To be fair to the Linux numbers, I really need to set up and test a comparable config to make sure my results weren't hardware-related.
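With the controller in JBOD mode, the pool setup looks something like the following. A rough sketch only; the pool name and Solaris device names are assumptions, not our actual layout:

```shell
# Hand the 8 raw disks to ZFS as four mirrored pairs (ZFS's equivalent
# of RAID 1+0: data is striped across the mirrors automatically).
zpool create pgpool \
  mirror c1t0d0 c1t1d0 \
  mirror c1t2d0 c1t3d0 \
  mirror c1t4d0 c1t5d0 \
  mirror c1t6d0 c1t7d0

# A dedicated dataset for the Postgres data directory.
zfs create pgpool/pgdata

# Sanity-check the layout.
zpool status pgpool
```

The point of JBOD here is that ZFS sees the individual disks, so it can do its own checksumming and redundancy instead of trusting the RAID controller.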
A friend had told me that where Solaris and ZFS really give the big performance bump is that they're not reading every byte from the disk: they read a compressed block and decompress it on the fly, which, if you have the CPU cycles to spare, makes the I/O transfers a lot quicker (at times 2-3x faster than a raw read of uncompressed data).
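Turning that on is a one-liner per dataset. A sketch, assuming the hypothetical pgpool/pgdata dataset from above:

```shell
# Store compressed blocks on disk; reads decompress on the fly,
# trading spare CPU for fewer bytes pulled off the spindles.
zfs set compression=on pgpool/pgdata   # "on" meant lzjb on Solaris of that era

# After some data has been written, see how well it actually compresses:
zfs get compressratio pgpool/pgdata
```

Only blocks written after the property is set get compressed, so existing data has to be rewritten to benefit.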
I'm guessing we could probably get similar results with Linux on XFS or ext4 using solid-state drives, which are a little more affordable now than they were a few years ago.
Again, we're not a large shop with lots of money to throw around at the project, we're a startup just trying to get by in a brutal economy.
You're right though about the default configuration. I've gone through and tuned the work memory and index cache settings, and matched the memory settings to my hardware. (Currently 32 GB on an array of 8 disks in an 8-core Xeon server)...
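For a box like that, the postgresql.conf changes end up looking something like this. These are common rules of thumb, not our exact values, and the right numbers depend on the workload:

```shell
# Hypothetical postgresql.conf fragment for a 32 GB / 8-core server.
cat >> postgresql.conf <<'EOF'
shared_buffers = 8GB          # roughly 25% of RAM is a common starting point
effective_cache_size = 24GB   # what the OS (or ZFS ARC) will likely cache
work_mem = 64MB               # per sort/hash operation, so watch concurrency
maintenance_work_mem = 1GB    # speeds up index builds and vacuums
EOF
```

The main trap is work_mem: it's allocated per sort or hash per query, so a high value times high concurrency can blow past your RAM.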