Nope (also [citation needed]). The go compiler is fast because it doesn't use modules/header files.
There are three compilers for Go: one based on the Plan 9 toolchain, one a GCC front end, and one an LLVM front end. True, none of them uses header files, but header parsing is largely a non-issue for C-family languages too if you use precompiled headers. The Plan 9 implementation is fast because it does a tiny subset of the optimisations that GCC or LLVM would do.
The GCC and LLVM-based compilers have similar compile-time performance to C or Objective-C. They're only faster in comparison to C++ because they don't do any compile-time specialisation (which, by the way, a .NET CLR or a JVM will do in the JIT, but which Go never does). In C++, you pay a price at compile time for better run time[1] if you use templates, or pay it at run time if you use virtual functions; in Go you pay the price at run time and have no alternative. Unless you're the person implementing the generic Map type (though since the Map can't be usefully parameterised, you often end up paying it as a user of this type too).
Go does a lot of nice things (channels, interfaces, and so on), but it is frustrating when a new language reintroduces problems that other languages fixed decades ago. Sharing by communicating is a sensible pattern, but a new language for parallelism that doesn't make it trivial to enforce shared xor mutable is embarrassing. Erlang had this right from the start and Pony does it in a very nice way.
[1] Unless you end up blowing away your i-cache. It's true that a lot of C++ programmers overuse templates and sacrifice compile time for no measurable run-time benefit, but at least, when you actually want to retain most of the source-level flexibility of dynamic dispatch without the run-time overhead, you can.