For instance, concurrent code can be fun to develop, but in practice, all the interlocks required to make it work can reduce many tasks to near-serial performance. Sometimes, though, a better approach is to look for ways to split the task into subtasks that can run in separate processes that rarely interact. I've done this on occasion to produce huge increases in speed. Of course, this isn't really a question of programming, but rather a question of reanalyzing the task and finding a way to handle it with minimal coupling of a set of independent subtasks.
True. However, multiple processes are simply one form of concurrency where the OS handles the isolation for you. If you can divide the work into separate processes, then you can also do it multi-threaded with minimal, if any, "interlocks required to make it work."
Further, multiple threads have less overhead than multiple processes (especially on one particular prominent platform) and may be preferable. Or, if the problem does lend itself easily to multiple processes, that may be good enough, or sometimes even better (e.g., Python with the GIL).
So the real problem could simply be that the guy struggles with concurrent code of any variety.