As far as the example you give, this is because C++ always adheres to the "zero-cost principle"
Also describable as the "zero-flexibility principle". In Objective-C, extending existing code is trivial. In C++, unless the author of your library *expected* what you're about to do, you're pretty much SOL. For example, implementing a native RPC mechanism in Objective-C is about as complicated as implementing -forwardInvocation: and ferrying the serialized NSInvocation object between sockets. In C++, it just can't be done (well, unless you're willing to write some sort of custom VM!). Also, in practice, the overhead of Objective-C's dynamism is pretty much negligible (speaking from experience, having written some fairly heavily real-time code for video streaming in Objective-C that's running the server-facing portion of a few tens of thousands of STBs in the field right now). 90% of the time you're running 10% of your code. Write that portion in highly optimized pure C and the rest in objects, and you'll get the best of both worlds.
For programmers like me who value C++ primarily for its runtime efficiency, this is absolutely the correct design decision.
TBH, I've never seen the logic of this statement. Given that most code written in pretty much any app is just high-level scaffolding around a few really core high-perf algorithms, why try to shoehorn the same language into these two completely different usage scenarios? That's IMO asinine, and C++'s extreme complexity and sheer volume of features are a testament to that fact.
BTW, you're a bit out of date regarding C++ and allocation. Modern C++ now has several built-in smart pointers (including ref-counted versions), which makes modern C++ feel a lot closer to C# with its garbage collection than to C-style manual memory management.
I stopped paying attention to C++ by the time the spec document collapsed under its own gravity to form a black hole.