A single point of failure is a big problem. The biggest advantage of a distributed system is that the main repo doesn't have to take a variable client load that might interfere with developer pushes. You can replicate the main repo to secondary servers: developers commit/push to the main repo, while all readers (including web services) simply access the secondaries. This works spectacularly well for us.
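A minimal local sketch of that topology, with made-up paths standing in for separate hosts: developers push to the one writable main repo, and readers hit a mirror that was cloned from it.

```shell
# Sketch of push-to-main / read-from-mirror. Paths are illustrative;
# in a real deployment main.git and mirror.git live on different hosts.
set -e
tmp=$(mktemp -d)

# The writable "main" repo that developers push to.
git init --bare -q "$tmp/main.git"

# A developer clone: commit and push to main.
git clone -q "$tmp/main.git" "$tmp/dev"
cd "$tmp/dev"
git -c user.email=dev@example.com -c user.name=dev \
    commit -q --allow-empty -m "first commit"
git push -q origin HEAD

# A read-only mirror that web services and other readers access.
git clone -q --mirror "$tmp/main.git" "$tmp/mirror.git"

# Readers see the commit on the mirror without ever touching main.
git --git-dir="$tmp/mirror.git" log --oneline
```

The point is that reader load lands entirely on the mirror; the main repo only ever sees developer pushes and the mirror's fetches.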
The second biggest advantage is that backups are completely free. If something breaks badly, a repo will be out there somewhere (and readers can simply fail over to another secondary server or use a local copy).
For most open source projects... probably all open source projects, frankly, and probably 90% of in-house commercial projects, a distributed system will be far superior.
I think people underestimate just how much repo searching costs when one has a single distribution point. I remember the days when FreeBSD, NetBSD, and other CVS repos would be constantly overloaded due to the lack of a distributed solution. And the mirrors generally did not work well at all because cron jobs doing updates would invariably catch a mirror in the middle of an update and completely break the local copy. So users AND developers naturally gravitated to the original and subsequently overloaded it. SVN doesn't really solve that problem if you want to run actual repo commands, versus grepping one particular version of the source.
That just isn't an issue with git. There are still lots of projects not using git, and I had a HUGE mess of cron jobs that had to try very hard to keep their cvs or other trees in sync without blowing up and requiring maintenance every few weeks. Fortunately most of those projects now run git mirrors, so we can supply local copies of the git repo and broken-out sources for many projects on our developer box that developers can grep through on our own I/O dime instead of on other projects' I/O dime.
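The git equivalent of those fragile cvs mirror jobs is essentially a one-liner that is safe to re-run and safe to interrupt. A sketch, with a local repo standing in for the remote upstream (names and paths are made up):

```shell
# Keep a local read-only mirror of an upstream project fresh.
set -e
tmp=$(mktemp -d)

# Stand in for the upstream project with a local repo.
git init -q "$tmp/upstream"
git -C "$tmp/upstream" -c user.email=a@example.com -c user.name=a \
    commit -q --allow-empty -m "v1"

# One-time setup: a bare mirror clone.
git clone -q --mirror "$tmp/upstream" "$tmp/mirror.git"

# Upstream moves ahead...
git -C "$tmp/upstream" -c user.email=a@example.com -c user.name=a \
    commit -q --allow-empty -m "v2"

# ...and the periodic job (the part you'd put in crontab) resyncs.
# Unlike a cvs mirror, each ref updates atomically, so a reader never
# sees a half-updated tree even if this runs mid-fetch.
git --git-dir="$tmp/mirror.git" remote update --prune >/dev/null
```

Developers then grep a checkout made from the local mirror, so the search I/O stays on our own box.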
-Matt