In regard to the placement of business logic, I think the truth - as always - lies somewhere in the middle, with a disclaimer to the effect of "not applicable when implemented by morons" and "common sense not included".
Sometimes the application is all about data, and most of the business logic can be summed up as maintaining data integrity. For example, a typical bulletin board updates various counters and lists every time someone posts something: the post counter for the user, the topic, the forum and the category, information about the latest posts for those entities, maybe lists of unread posts for the users (depending on whether those lists are inclusive or exclusive), and loads of other things. Most of those can be moved into triggers, maybe even views with rule-based updates of the actual data, with a great benefit for interoperability. In fact, I co-authored one such bulletin board, and the decision to move as much business logic into the database as sensibly possible (and not a thing more) turned out to be correct. Right now we're implementing some auxiliary functionality in Python (the original code is PHP) and it couldn't be easier. Just about everything is selected through views (which will get a dramatic speedup once I move everything to Pg9, which supports join removal - not that it's slow right now, just noticeable on the server load graphs), and most complex updates are performed by triggers and rules, so the frontend code can be kept simple, and even major changes to the business logic and data structures can be contained within the database, completely invisible to the outside world.
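As a minimal sketch of the counter-maintenance pattern (using SQLite from Python purely so the example is self-contained - the real board runs on PostgreSQL, and the table and column names here are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT,
                    post_count INTEGER NOT NULL DEFAULT 0);
CREATE TABLE posts (id INTEGER PRIMARY KEY,
                    author_id INTEGER REFERENCES users(id), body TEXT);

-- The counter is maintained by the database, not by the frontend:
CREATE TRIGGER posts_count_ins AFTER INSERT ON posts
BEGIN
    UPDATE users SET post_count = post_count + 1 WHERE id = NEW.author_id;
END;

CREATE TRIGGER posts_count_del AFTER DELETE ON posts
BEGIN
    UPDATE users SET post_count = post_count - 1 WHERE id = OLD.author_id;
END;
""")

conn.execute("INSERT INTO users (id, name) VALUES (1, 'alice')")
conn.execute("INSERT INTO posts (author_id, body) VALUES (1, 'hello')")
conn.execute("INSERT INTO posts (author_id, body) VALUES (1, 'world')")
# The frontend just inserts posts; the counter is correct regardless
# of which language or codebase performed the insert.
print(conn.execute("SELECT post_count FROM users WHERE id = 1").fetchone()[0])
```

The point is interoperability: the PHP frontend, the Python tools, or an ad-hoc psql session all get correct counters for free, because the invariant lives next to the data.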
There were, however, things that I kept out of the database on purpose, generally due to the poor state of interoperability between database engines and programming languages (which is kind of strange, considering that relational databases are such an old and mature technology), primarily when it comes to error handling. The most prominent is access control logic. I could have engineered the database so that it simply refused to accept a post in a forum where the author had no write access (while retaining the ability to revoke access he already had and keep the posts written to date without any data integrity problems, of course), but I couldn't think of any elegant, sensible and clean way to turn complex database-generated errors, such as those thrown by constraints or raised manually in PL/pgSQL code, into appropriate exceptions in the application while keeping everything in line with transaction handling. I tried, and what I came up with was a sorry kludge, so I gave up and kept those bits outside the database. Of course, that meant reimplementing some of the logic in Python, and now I have to remember to update two codebases, but the code is simple enough that I can put up with it.
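To illustrate the translation problem, here is a sketch of the kind of kludge I mean: catching the driver's flat constraint error and re-raising it as an application-level exception. The names (PermissionDenied, add_post) and the schema are invented, and SQLite stands in for PostgreSQL; a foreign-key violation plays the role of the complex constraint:

```python
import sqlite3

class PermissionDenied(Exception):
    """Application-level error the frontend knows how to display."""

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE forums (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE posts (
    id INTEGER PRIMARY KEY,
    forum_id INTEGER NOT NULL REFERENCES forums(id),
    body TEXT
);
""")
conn.execute("PRAGMA foreign_keys = ON")

def add_post(forum_id, body):
    try:
        with conn:  # commit on success, roll back on error
            conn.execute(
                "INSERT INTO posts (forum_id, body) VALUES (?, ?)",
                (forum_id, body))
    except sqlite3.IntegrityError as e:
        # All the driver hands us is a flat message string; deciding
        # *which* application exception to raise means parsing it,
        # which is exactly the kludge described above.
        raise PermissionDenied(str(e)) from e

try:
    add_post(42, "post in a forum that does not exist")
except PermissionDenied:
    print("translated to an application exception")
```

The try/except layer works, but it throws away the structure of the error: one IntegrityError might mean "no such forum", another "no write access", and telling them apart from the message text is fragile, which is why I kept this logic in the application instead.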
Of course, everything is documented, both in text and in diagrams, in case someone inherits the code down the road. It has already proven helpful: some parts Just Work, and since the last time I even looked at that code was a few years ago, it is practically new and unknown even to me when changes need to be made.
In short, I don't think this is about some holy rule that is unconditionally true and that you have to "get" or else. IMHO it's about common sense, practical knowledge in neighboring fields (you're a DBA and you don't know how a CPU cache, a memory controller or a physical hard drive works? Then you're not going to be a good DBA for high-performance systems, regardless of your knowledge of databases) and experience - all of those together, not just one or two. Only then do rules become guidelines that you can step over when appropriate.
Besides, with Pg9 and its streaming replication and hot standby mode, scalability shot up through the roof and stopped at roughly the actual limits of the hardware, where contraptions such as the battery-backed RAM "disk" for the WAL store I mentioned earlier come into play to push it even further.
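For the record, a minimal sketch of what enabling streaming replication with a hot standby looks like on Pg 9.0 (host name and user are placeholders, and the segment count is just an illustrative value):

```ini
# postgresql.conf on the primary
wal_level = hot_standby        # emit enough WAL detail for a queryable standby
max_wal_senders = 3            # allow standbys to connect and stream WAL
wal_keep_segments = 32         # retain segments for standbys that fall behind

# postgresql.conf on the standby
hot_standby = on               # accept read-only queries during recovery

# recovery.conf on the standby
standby_mode = 'on'
primary_conninfo = 'host=primary.example.com port=5432 user=replication'
```

Read-only queries (which, on a bulletin board, are the vast majority) can then be spread across standbys, which is where the "limits of the hardware" remark comes from.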