Even worse, then: you must be paying for the automation with speed.
If you pay attention, you might notice that the languages in question have strong metaprogramming support (and one of them has native immutable data structures with structural-sharing updates and typical O(log32 n) performance, plus transactional memory baked in deep).
The metaprogramming support means that there's lots of room to do compile-time analysis and optimization, and the native transactional memory support means that the cell abstractions aren't doing much that the language's designers haven't already put a lot of effort into making fast.
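To make the structural-sharing point concrete, here's a minimal Clojure sketch: an "update" to a persistent vector returns a new vector that shares almost all of its internal tree with the original, so the operation costs roughly O(log32 n) rather than a full copy.

```clojure
;; Persistent vectors: assoc returns a *new* vector sharing structure
;; with the old one; the original is untouched.
(def v (vec (range 1000)))
(def v2 (assoc v 500 :changed))

(nth v 500)   ;=> 500       (original unchanged)
(nth v2 500)  ;=> :changed  (only the path to index 500 was rebuilt)
```

Because neither vector can ever be mutated, both can be handed to concurrent readers without any locking at all.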
Sure, there's performance overhead -- but it's overhead that's built to parallelize well. Whereas traditional locking can deadlock, Clojure's STM guarantees that useful work is always happening somewhere; the worst case is that another thread's work gets thrown away and retried on conflict. That's a model that scales a lot better to the highly parallel hardware of a decade from now than today's conventional approaches.
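A small sketch of that retry-instead-of-deadlock model, using Clojure's built-in refs (the `transfer!` function and the amounts are my own illustration, not from the original post):

```clojure
;; Two refs updated atomically inside one transaction.  If another
;; transaction commits a conflicting change first, this whole dosync
;; body is discarded and automatically re-run -- no locks to order,
;; no deadlock to reason about.
(def a (ref 100))
(def b (ref 0))

(defn transfer! [amt]
  (dosync
    (alter a - amt)
    (alter b + amt)))

(transfer! 25)
[@a @b] ;=> [75 25]
```

The cost of a conflict is wasted work on the losing thread, but the winning thread always commits, which is exactly the "always useful work going on somewhere" property described above.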