At least now people who would not have known about a potential attack vector can take precautions and be safer without having to wait for Microsoft to introduce more vulnerabilities when they come up with a "fix" for this one.
I'm fairly certain that past a certain volume, keeping a significant fraction of requests resident in RAM is not actually possible. Consider sites serving up large media files that are constantly changing: even with a fairly large budget, buying enough RAM to cache all of that content is not reasonable. And it's not much of a leap from there to cases where having sufficient spindles for naïve algorithms is also not an option.
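To put rough numbers on that claim, here's a back-of-envelope sketch. Every figure in it is an assumption I made up for illustration (library size, RAM per node, node count), not a measurement from any real site:

```c
/* Back-of-envelope: how much of a large, churning media library
 * can a cache cluster actually hold in RAM? All figures below are
 * illustrative assumptions, not measurements. */
#include <stdio.h>

int main(void)
{
    double library_tb  = 500.0;  /* assumed media library size   */
    double ram_gb_node = 256.0;  /* assumed RAM per cache node   */
    int    nodes       = 40;     /* assumed cluster size         */

    double ram_tb = ram_gb_node * nodes / 1024.0;
    printf("cluster RAM: %.1f TB, library: %.1f TB\n", ram_tb, library_tb);
    printf("fraction cacheable: %.1f%%\n", 100.0 * ram_tb / library_tb);
    /* ~2% here. The resulting hit rate depends entirely on how
     * skewed the access pattern is, and constant churn keeps
     * evicting whatever hot set you manage to accumulate. */
    return 0;
}
```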
At this point, having intelligent algorithms that understand non-uniform memory access is essential, whether between L1 and L2 cache, cache and SDRAM, RAM and disk, or even local disks and network storage. This is where data structures like the B*-tree and the B-heap start to perform much better, since they're designed around exactly that non-uniformity. Database engines have also been building indexes around these principles for decades, although in a specialized way that's much more difficult to apply generally.
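To make the non-uniform-access point concrete, here's a minimal sketch of my own (not code from any of the systems discussed): it counts how many distinct 4 KB pages a root-to-leaf walk touches in a classic binary heap stored as a flat array. Once the walk leaves the first page, nearly every level lands on a new page, which is exactly the cost a page-aware layout like the B-heap amortizes by packing small subtrees into a single page:

```c
/* Count the distinct 4 KB memory pages touched by a root-to-leaf
 * walk in a binary heap laid out as a flat array. With 8-byte
 * entries, each page holds 512 entries, yet past the first few
 * levels every step of the walk lands on a different page. */
#include <stdio.h>

#define PAGE_BYTES       4096
#define ENTRY_BYTES      8
#define ENTRIES_PER_PAGE (PAGE_BYTES / ENTRY_BYTES)

int main(void)
{
    /* Walk from the root (index 1) down to a deep leaf in a heap
     * of ~64M entries, recording which page each index falls on. */
    unsigned long i = 1, n = 1UL << 26;
    unsigned long pages_touched = 0, last_page = (unsigned long)-1;

    while (i < n) {
        unsigned long page = i / ENTRIES_PER_PAGE;
        if (page != last_page) {
            pages_touched++;
            last_page = page;
        }
        i = 2 * i + 1;          /* descend to the right child */
    }
    /* 26 levels, but only the first 9 share a page: after that it's
     * one new page per level, i.e. one fault per level once the
     * heap outgrows RAM. */
    printf("levels: 26, distinct pages touched: %lu\n", pages_touched);
    return 0;
}
```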
So while it may be comforting to throw out the RAM card and give up when budget realities put billion-dollar RAM-caching systems out of reach, actually trying to solve the underlying problem is much more interesting.
I use them on ext3 with no problems. It's true that very early on there was a problem with them and journaled filesystems, but that has long since been solved.
They are not at all related. Seriously. And if you are getting requirements you think Hibernate can't fulfill, you have done something so fundamentally wrong somewhere that coding the database layer yourself will leave you completely screwed. So find out what the real problem is and what you need to change to work with Hibernate.
It is easier to write an incorrect program than understand a correct one.