Except that then you remove the basic principle of RP: that updates happen live. Freeze things like that and you've reduced it to the current model, where you just dump values into variables and then run calculations on them to set the values of other variables, which then don't change until you go and recalculate them yourself.
Not quite. As I understand it, the big idea of RP is not that everything has to be live, but that everything is capable of being live. If you have pervasive liveness given to you by an underlying framework in a safe and cheap manner, it becomes a lot simpler for the programmer to turn it off in the cases where they want to freeze a value than to manufacture it out of nowhere in the places where they finally realise they need constant event-driven updates.
The former could take just one keyword in a reactive-declarative language; the latter (in current imperative languages) takes a whole lot of complicated, unsafe gymnastics: spawning a new process/thread, passing a callback (which might involve security vulnerabilities, since you're exposing a raw memory address), arranging your own concurrency synchronisation primitives, and so on. It's a world of pain, and it really shouldn't be up to the programmer to solve this thorny problem, since they'll inevitably get it wrong, and in very nasty ways.
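To make the asymmetry concrete, here's a minimal sketch (all names hypothetical, not any real RP library) of a reactive "cell" where liveness is the default and opting out is a single call — the "one keyword" end of the trade-off:

```python
# Hypothetical sketch: a reactive cell where derived values stay live
# by default, and freezing is one trivial operation.

class Cell:
    """A value that dependents can watch; changes propagate live."""
    def __init__(self, value=None):
        self._value = value
        self._listeners = []   # recompute thunks for dependent cells
        self.frozen = False    # the 'one keyword' escape hatch

    def get(self):
        return self._value

    def set(self, value):
        self._value = value
        for listener in self._listeners:
            listener()         # push the update downstream immediately

def derive(fn, *sources):
    """A cell whose value is fn(sources), kept up to date automatically."""
    out = Cell(fn(*(s.get() for s in sources)))
    def recompute():
        if not out.frozen:     # frozen cells simply ignore upstream changes
            out.set(fn(*(s.get() for s in sources)))
    for s in sources:
        s._listeners.append(recompute)
    return out

# Usage: total stays live until we freeze it.
price = Cell(10)
qty = Cell(3)
total = derive(lambda p, q: p * q, price, qty)
qty.set(4)          # total is now 40, with no manual recalculation
total.frozen = True # turn liveness off...
qty.set(100)        # ...and total stays 40
```

Everything after `derive` is the framework's job; the programmer only ever touches `frozen`. Building the reverse — bolting this propagation onto plain variables with threads and callbacks — is where the gymnastics come in.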
In the Internet age, it becomes a whole lot easier to see the entire Net - and in fact your own server/desktop - as looking less like a von Neumann computer with one 'processor' doing one thing at a time than like a network of components operating on streams of messages. So the theory is, if you had a programming language which looked more like operations on event streams, you'd have something that could work nicely with a single paradigm at all three levels: at the CPU level, it could let compilers scale your code out nicely across arbitrary numbers of cores; at the desktop level, it could provide really simple, clean descriptions of windows and UIs as time-varying functions over the state of other windows/datasources, making it a lot faster and safer to write and change UIs; and at the Internet level, where the world really does look very much non-von, it could do the same thing for entire networks of processes running on servers. Heck, maybe we could even describe our hardware network topologies as programs? Wouldn't that be something? Especially if more and more our 'networks' aren't physical patch cables in ports, but virtual topologies running on VM environments?
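What "operations on event streams" might look like can be sketched in a few lines. This is a toy (the `Stream`, `map`, and `merge` names are assumptions, loosely in the style of Rx-like libraries, not any specific one): a UI label is described once as a function over two event sources, and it updates itself as events arrive:

```python
# Toy event-stream sketch: programs as transformations of streams,
# not sequences of assignments. Names are illustrative only.

class Stream:
    def __init__(self):
        self._subscribers = []

    def subscribe(self, fn):
        self._subscribers.append(fn)

    def emit(self, event):
        for fn in self._subscribers:
            fn(event)

    def map(self, fn):
        """A new stream carrying fn(event) for each upstream event."""
        out = Stream()
        self.subscribe(lambda e: out.emit(fn(e)))
        return out

    def merge(self, other):
        """A new stream interleaving events from both sources."""
        out = Stream()
        self.subscribe(out.emit)
        other.subscribe(out.emit)
        return out

# Usage: a status label declared as a function over two datasources.
clicks = Stream()
keys = Stream()
status = clicks.merge(keys).map(lambda e: f"last event: {e}")

shown = []                  # stand-in for a widget redrawing itself
status.subscribe(shown.append)

clicks.emit("click@10,20")
keys.emit("key:a")
# shown == ["last event: click@10,20", "last event: key:a"]
```

The point isn't this particular API; it's that the same merge/map vocabulary reads the same whether the events are keystrokes on a desktop or messages between server processes.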