Mainframes are like really big industrial vehicles where everything is hugely expensive. They cost a fortune, but they're far cheaper than trying to do the same massive amount of work with thousands of pickup trucks.
It's like the crawler-transporter NASA uses to move the space shuttle, stacked with its boosters and ready to go:
http://en.wikipedia.org/wiki/C...
It goes 1 MPH, which sounds pretty wuss-tastic in car terms, until you consider how much it can carry at that speed. It would be basically impossible to accomplish the same thing with any number of VW Beetles without spending years disassembling and reassembling everything for every launch attempt.
That's where mainframes make sense - problems that are really massive but need to run on one computer. Any problem that can be broken down into smaller chunks can be solved much more efficiently by a network of smaller computers.
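To make the "break it into chunks" point concrete, here's a toy Python sketch of the scatter/gather shape: one big job split across a pool of workers. A process pool on a single box stands in for the network of smaller machines (in real life the workers would be separate servers behind something like MapReduce), but the structure is the same.

    # Toy sketch: one big job, split into chunks, farmed out to
    # many small workers, partial results gathered at the end.
    from multiprocessing import Pool

    def work(chunk):
        # Each "small computer" handles its own slice independently.
        lo, hi = chunk
        return sum(i * i for i in range(lo, hi))

    if __name__ == "__main__":
        N = 10_000_000
        step = 1_000_000
        chunks = [(lo, min(lo + step, N)) for lo in range(0, N, step)]
        with Pool() as pool:
            partials = pool.map(work, chunks)   # scatter
        print(sum(partials))                    # gather

The catch, of course, is the "can be broken down" part: jobs where every step depends on the last one don't split this way, and those are exactly the jobs that keep one huge machine attractive.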
As smaller computers keep getting more capable, and as the techniques for breaking down problems and high-speed interconnects become more common, the jobs that genuinely run better on a mainframe get rarer and networks of servers take over more of the work.
Mainframes do have one cool thing going for them that doesn't get the same respect on smaller machines - backward compatibility. There's code that's been in use for several decades on mainframes, running in a stack of emulators. Each new mainframe ships with an emulator that makes it act just like the old one, so the customer can keep running their code unchanged instead of tweaking it to work on the new hardware. For jobs that justify mainframe costs, downtime is very expensive, so avoiding conversion work is a huge win. Also, it's entirely possible that the last person who knew how some mission-critical code worked died 40+ years ago, and business people aren't big proponents of paying someone to figure out and rewrite legacy stuff.
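To show what that emulator stack buys you, here's a toy Python sketch of the idea: a made-up "legacy" instruction set and a tiny interpreter for it. The opcodes and program are invented for illustration; the point is that the old program runs completely unchanged as long as each new host ships an emulator for it.

    # Toy sketch: a tiny interpreter for a made-up legacy
    # instruction set. New hardware only has to provide this
    # loop; the old program itself never gets touched.
    def run_legacy(program):
        acc, pc = 0, 0
        while pc < len(program):
            op, arg = program[pc]
            if op == "LOAD":
                acc = arg
            elif op == "ADD":
                acc += arg
            elif op == "PRINT":
                print(acc)
            pc += 1

    # A "40-year-old" program nobody dares rewrite: it just works.
    run_legacy([("LOAD", 2), ("ADD", 3), ("PRINT", None)])

Stack a few of these (an emulator running inside an emulator) and you get exactly the situation described above: decades-old code still in production, untouched.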