Yet on machines like home computers it is simply not practical. Only with the relative advance of hardware can a microkernel actually get close to a monolithic kernel in terms of performance.
This is not true. The L4 kernel ran Linux as a guest OS with 5% overhead. This was the birth of hypervisors like Xen, which are really just a sort of microkernel. Virtualization is everywhere now, and no one seems to be complaining about overhead. If a virtualized guest can run with so little overhead, native execution on that same kernel is at least as fast as the guest.
It's perhaps too easy to introduce performance problems via a poor choice of decomposition, but that doesn't entail that decomposed systems must necessarily perform poorly.
It is this performance consequence that drove Windows NT, originally designed around a microkernel architecture, toward a hybrid kernel.
Poorly designed systems perform poorly. The NT kernel might have been a good design from some people's standpoint, but then people thought Mach was a good microkernel too. Both opinions are incorrect.
There are far more Mac and Windows installations than all Linux distributions combined. These are all microkernel or hybrid-kernel architectures.
There is no such thing as a hybrid. You are either on fire, or you are not. You are either a microkernel, or you are a monolithic kernel. Macs may use the Mach microkernel, but it is a poor kernel, and the BSD subsystem provides most of the system calls, all of which execute in kernel space. That is a monolithic kernel that happens to have a poorly designed microkernel as its ancestor.
Systemd is all about marketing, and nothing about engineering. It too will fail and be replaced, just as PulseAudio was by ALSA.
Except PulseAudio hasn't been replaced; it's still used by most distributions.