Stored procedures can be very efficient sometimes, but often, they just add additional load to the DBMS server that could have been distributed among application servers...
This, however, is mostly nonsense. It's true that stored procedures may add load to the DBMS, but it's equally true that they may significantly reduce the overhead. What's more efficient - transferring 1M rows to an application server and handling them there (often submitting the result back to the DBMS as a parameter for the next query), or just processing them locally within a stored procedure? Not to mention that the "application servers" usually manipulate the data in languages that are a very poor match for such operations (which is basically everything except SQL).
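To make the contrast concrete, here's a minimal sketch of the in-database approach - the table and column names (`sales`, `status`, `amount`) are hypothetical, not taken from any real system:

```sql
-- Hypothetical schema: SALES(id, branch_id, amount, status).
-- In-database: one round trip, and no rows ever leave the server.
BEGIN
  UPDATE sales
     SET status = 'SETTLED'
   WHERE status = 'PENDING'
     AND amount > 0;
  COMMIT;
END;
/
-- The app-server alternative: SELECT all pending rows over the network,
-- loop over them in Java/C#, then push the results back row by row -
-- the same work, plus the data transfer and a round trip per row.
```

The point isn't that the PL/SQL block is clever - it's that the work happens where the data already sits.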
The first line of defense against loading the DBMS server is to start with intelligent queries and organization. Logic should avoid chewing through millions of rows in a programmed manner, no matter which machine does the chewing. The DBMS is optimized for information retrieval and updating, and stored procedures can potentially interfere with that optimization. So rule 1 is to reduce what gets acted on. If you do that, the relative overhead of pushing work out to client machines is less of an issue, and even the fact that the client machines are probably less powerful won't matter as much, because there will typically be more of them to share the computing load. Because they're also likely to be less expensive machines, the net cost per computation is often lower as well if the DB server's CPUs are being used efficiently.
Intelligent queries and organization - sure, no argument here. However, there are cases when you really need to chew through millions of rows, and doing that in SQL alone, without imperative code, is nearly impossible. And I wasn't saying it's a good idea to replace queries with stored procedures - that would be the dumbest thing you could do in a relational database (which is all about running queries). If you can write a query that solves your problem well, don't write procedures. No doubt about that.
I think we're both speaking about different cases, though, so I'm missing your points and you're missing mine. I'll illustrate my reasoning by an example of an actual project that I had in mind ...
About 2 or 3 years ago I was asked to help with a project in serious trouble - many things had gone wrong during the development, from starting it on MySQL+Tomcat and then switching to Oracle+JBoss, to heavy usage of Hibernate mixed with crazy "optimized" SQL queries.
The largest problem was the monthly balancing they needed to perform (it was a system for managing a retail network). The business process behind this is rather complex, and the process steps need to work with large sets of rows (chew through all the transactions for every sale, salesman, branch, ... and handle each transaction according to product type, whether it's a cancellation or not, what the previous transactions for that person were, ...). Doing this in a single SQL query might be possible, but it would be incomprehensible, unmaintainable, ... a bit of imperative programming is a better match here.
So the developers decided to do this on the application server, and to mess it up as much as possible, they used the Hibernate definitions optimized for the front-end (lazy fetches everywhere, ...). The result? Gigabytes and gigabytes of data were moved back and forth, the process was running for unpredictable amounts of time - sometimes 12 hours, sometimes 60 hours (which is rather unfortunate as it's longer than a weekend). Even if they optimized the Hibernate mappings heavily for batch processing, it would probably run for hours (although probably more predictably), the application server would scream in pain etc.
This is the exact place where stored procedures are a good match - with a bit of PL/SQL and Oracle magic, the whole closing ran in 3 minutes, with no load on the app server and only a light load on the database.
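For flavor, the kind of PL/SQL involved is nothing exotic - a sketch along these lines (the `transactions` and `balances` tables and their columns are made up for illustration, not the actual project's schema):

```sql
-- Hypothetical monthly-closing sketch: walk the period's transactions
-- in order per salesman, branching on cancellations - the imperative
-- part that would be painful as one giant SQL query.
DECLARE
  v_period VARCHAR2(7) := '2011-06';  -- illustrative period key
  CURSOR c_tx IS
    SELECT t.salesman_id, t.product_type, t.is_cancellation, t.amount
      FROM transactions t
     WHERE t.period = v_period
     ORDER BY t.salesman_id, t.tx_date;
BEGIN
  FOR r IN c_tx LOOP
    IF r.is_cancellation = 'Y' THEN
      -- reverse the effect of the earlier transaction for this person
      UPDATE balances
         SET total = total - r.amount
       WHERE salesman_id = r.salesman_id;
    ELSE
      UPDATE balances
         SET total = total + r.amount
       WHERE salesman_id = r.salesman_id;
    END IF;
  END LOOP;
  COMMIT;
END;
/
```

The real process had far more branching per product type, of course - but all of it ran next to the data, instead of streaming gigabytes through Hibernate.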
If you had poor development practice (and the fact that you say "stored procedure code is also not as likely to be version-controlled" strongly suggests this), then sure - it might be a nightmare. But that's not the fault of stored procedures - it's purely yours.
I'm not going to accept fault on that, because I don't write stored procedures except in cases of compelling need. But whereas applications in C# and Java are typically managed by an IDE and bound to a version control system, every database system I've had to clean up after basically had someone editing the procedure in the equivalent of Windows Notepad and jamming the code straight into the database with no local project management at all. A few DBMSs have the ability to retrieve earlier generations of database objects, but that's the exception rather than the rule.
In short, I'm making my recommendations based on what I commonly come in and find has been going on, not on what would be the case in an ideal world.
Which is exactly what I was talking about ... a reasonable policy means you have all the source code stored somewhere and versioned properly (Git, Subversion, VSS, ...). Even a C# IDE just stores files on disk, and those files then go into the version control system. If you're jamming code directly into the database without also storing it in a file and versioning it along with the rest of the source code, you're doing it wrong. While some DBMSs have built-in versioning of database objects, that's no replacement for proper versioning of the source code. How can you even do basic QA in such an environment? How do you build a testing instance for unit testing?
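Concretely, each procedure can live as a plain .sql file in the repository, and deployment just replays it - which is also exactly how you stand up a clean testing instance. The file and procedure names here are illustrative:

```sql
-- procs/close_month.sql - versioned in Git like any other source file.
-- CREATE OR REPLACE makes the script re-runnable, so the very same file
-- builds production, a test instance, or a developer's private schema.
CREATE OR REPLACE PROCEDURE close_month(p_period IN VARCHAR2) AS
BEGIN
  -- ... body lives here, reviewed and diffed like Java/C# code ...
  NULL;
END close_month;
/
```

Spinning up a test instance is then just "run every script in procs/ against a fresh schema" - the same thing your build already does for application code.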