I suffer the same incomprehension, except I've been using other interpreted environments throughout my career, but the idea is the same.
Compiled languages, though sometimes necessary, often substantially increase the difficulty of programming for no benefit whatsoever.
I understand this looks like flamebait, but I'm _only_ basing this on forty years of personal experience, so what the hell do I know? Since an example is worth a lot, here's one: I recently offered to help a colleague who's taking a C++ class and was reminded of all the unnecessary crap it takes to get even a very simple program to run at all. The assignment was to build a Fahrenheit-to-Celsius (and vice-versa) temperature converter. It was friggin' painful - all the scaffolding we had to assemble just to produce a simple, crappy program that is, at best, capable of doing
one
conversion
at
a
time
(and only a hard-coded little subset of them in an initial version).
The result was multiple source files, comprising a couple dozen lines of code, compiling to megabytes of peripheral files (in the debug version) - you know how this goes. In contrast, I write the Fahrenheit-to-Celsius conversion in a short, single line of my favorite interpreted environment (J), and am able to test it on multiple values at a time, instantly - taking seconds instead of hours. Moreover, J is smart enough that it has a built-in inverse construct to allow me to write the inverse function with another few seconds of effort.
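For the curious, the J version can look roughly like this (one of several possible spellings, written as a sketch rather than gospel):

```
NB. Fahrenheit to Celsius: subtract 32, then multiply by 5/9
ftoc =: (5r9&*)@(-&32)
ftoc 32 98.6 212       NB. works on a whole list at once: 0 37 100
ftoc^:_1 ] 0 37 100    NB. ^:_1 is the built-in inverse: 32 98.6 212
```

The `^:_1` power conjunction is the "built-in inverse construct" mentioned above: J derives the Celsius-to-Fahrenheit function automatically from the forward definition.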
I already hear the compiler-lovers muttering darkly about run-times and "large projects" - completely ignoring the first rule of optimization: find the bottleneck. Most code is - and should be - written as small pieces that benefit from being tested quickly and independently. The metric we should care about is "total time to completion," but that's harder to measure and more subjective than "run time," so we keep optimizing the latter to the detriment of productivity.