I'm not sure about Go, but .NET has some interesting deadlock situations with async when not all of your code is 100% async. Which is annoying, because most open-source .NET libraries are not async. I had to help a co-worker with some GCC Go pseudo-deadlock issues many years back. I found it rewarding to have solved a deadlock issue in a language I had zero experience with. It turned out to be an implementation detail of how GCC handled "async" at the time, via threads: when the thread pool ran out of threads to handle goroutines, the producer and consumer could end up on the same thread and block each other. Took me about 15 minutes. I say "pseudo" because if a routine blocked too long, the scheduler would change which routine was running, which "fixed" the issue after some hesitation. You'd get this strange jittering that got worse as more routines were running, quickly reaching tens of seconds in our test.
Once you have a mental model of how concurrency works, it doesn't matter which platform you're on. There are only a few good ways to implement it.
Async allows for high scaling when dealing with lots of "messages" moving through the system. Context switching is crazy expensive, roughly 10,000 cycles on a modern CPU, and that's not counting the other contention it creates in the kernel. To put it simply, if you want a single server handling 100 Gb/s of traffic with millions of network states, you HAVE to use async. And yes, a single server can handle 100 Gb/s of traffic across 100,000 short-lived connections per second.