My personal mileage varies significantly. I still prefer Ada, which is a language that you'd probably characterize as having a lot of "boilerplate". An experienced Ada programmer learns how to use that to his advantage in several ways:
1. When you're on a large or long-lived project, readability of code (even your own, years later) is more important than writability.
2. The compiler checks consistency, and as you get better with the language you learn how to maximize what the compiler can check. (This is particularly true for strong typing, where in my experience most of the bugs caught by type checking involve scalar types. You're a lot more likely to add 'count-of-apples' to 'count-of-oranges' than you are to actually try to add apples and oranges.) Thus as a designer, I'd concentrate on the algorithm, logic flow, etc., and let the compiler check things like parameter names/types. When the compiler and I both agreed that the program was right, it usually was.
3. Syntactic error recovery. This is a big deal when first learning a language, and later when doing significant changes (e.g. refactoring). On a lot of compilers, a single syntax error made all the subsequent error messages both numerous and confusing/worthless (usually because the compiler made an incorrect assumption). Ada compilers, particularly the hand-crafted GNAT parser, got really good at providing meaningful error messages for the rest of the compilation after detecting (and recovering from) a syntax error.
4. Better optimization. The more info the compiler can get and depend on, the better job the optimizer can do, mostly because it can rule out possibilities about how data and control flow are actually used.
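To make the apples-and-oranges point in item 2 concrete, here's a minimal sketch using Ada derived types (the type and variable names are mine, purely illustrative). Both counts are plain natural numbers underneath, but the compiler treats them as incompatible:

```ada
procedure Inventory is
   --  Two distinct scalar types with the same representation.
   type Apple_Count  is new Natural;
   type Orange_Count is new Natural;

   Apples  : Apple_Count  := 3;
   Oranges : Orange_Count := 4;
   Total   : Apple_Count;
begin
   Total := Apples + 1;             --  Fine: literal takes the expected type.
   --  Total := Apples + Oranges;   --  Rejected at compile time:
   --                                   expected Apple_Count, found Orange_Count.
end Inventory;
```

The "boilerplate" here is the pair of type declarations; in exchange, a whole class of mixed-unit bugs never survives compilation.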
"boilerplate" can be your best friend, when you and the compilation system take advantage of it.