This. If everything is done in the client, the application will lag whenever it does anything processor-intensive. Likewise, if the client has to call back to the server for everything, it will lag on high-latency connections or when the server is overloaded. There's a balance there; the trouble is that most developers don't know how to find it.
I think part of the problem is that web app developers tend to fall into one of two camps: do everything locally for the best chance of availability, or do everything remotely for the best performance. The first camp is almost right: they do achieve better availability by doing everything locally, right up to the point where the app becomes unusable due to processing lag (in other words, they're wrong). The second camp is also almost right: they do achieve the best performance in their local development environment, running in a VM on their workstation, where they're the only user; that falls apart the moment they add thousands of users and wildly variable latency between client and server (in other words, they're wrong).
What I prefer to do is provide all capabilities in the client (à la the first camp), then identify the ones that cause the application to lag and implement those on the server as well (à la the second camp). Once a function exists in both places, the client can run a job locally and on the server simultaneously. If the local job finishes first, the client notifies the server and the server-side job is terminated; if the server returns its result first, the local job is terminated. I find that this structure gives the best of both performance and availability: whichever resource is faster supplies the result that gets used, and if the server is unavailable, times out, or returns an error, the application still works. Most users will accept occasional slowness during server outages and upgrades, especially when the application is generally very responsive under normal conditions.
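That race can be sketched in a few lines. Everything here is illustrative, not a real API: `runLocally()` and `callServer()` are stand-ins with simulated delays; in a real web app `callServer()` would be a `fetch()` to your job endpoint, the `AbortSignal` would cancel the HTTP request (with the server killing the job when the request drops), and the local job would run in a Worker you can actually terminate.

```javascript
// Stand-in for the in-client implementation (pretend 50 ms of work).
function runLocally(n) {
  return new Promise((resolve) => setTimeout(() => resolve(n * 2), 50));
}

// Stand-in for the server round trip (pretend 200 ms); honours the abort signal.
function callServer(n, signal) {
  return new Promise((resolve, reject) => {
    const t = setTimeout(() => resolve(n * 2), 200);
    signal.addEventListener("abort", () => {
      clearTimeout(t);
      reject(new Error("aborted"));
    });
  });
}

async function runRaced(n) {
  const controller = new AbortController();
  const local = runLocally(n).then((result) => ({ source: "local", result }));
  const remote = callServer(n, controller.signal)
    .then((result) => ({ source: "server", result }))
    .catch(() => local); // server aborted/errored/unavailable: fall back to local

  // First finisher wins; abort the server job if it lost (a no-op if it won).
  const winner = await Promise.race([local, remote]);
  controller.abort();
  return winner;
}

runRaced(21).then(({ source, result }) => console.log(source, result));
// With the simulated delays above the local job wins: prints "local 42"
```

Note the `.catch(() => local)` on the remote branch: that is what keeps the app working through server outages, since a failed remote attempt just resolves to whatever the local job produces.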
That, I'm sure, can be built upon to predict (e.g., based on bandwidth, latency, and local vs. server load and performance) which job will finish first, and to start the remote job only when it will be the clear winner (still starting the local job just in case). That would reduce server requirements without impacting application performance (unless you take it too far and don't keep any spare compute capacity online). I haven't yet run into a case where this was necessary or where the savings would be worth the effort (as evidenced by the ratio of cancelled jobs to completed jobs), but I'm sure such a case exists.
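For what it's worth, the prediction step could be as simple as the cost model below. All the names, units, and the 20% margin are assumptions I've made up for illustration; a real implementation would feed the estimates from observed timings rather than hard-coded numbers.

```javascript
// Predicted wall-clock time for the in-client job.
function estimateLocalMs(job, clientLoad) {
  return job.cost * clientLoad.msPerUnit;
}

// Predicted wall-clock time for the server job: transfer both ways plus compute.
function estimateServerMs(job, net, serverLoad) {
  const transferMs = job.payloadBytes / net.bytesPerMs + net.latencyMs * 2;
  return transferMs + job.cost * serverLoad.msPerUnit;
}

// Start the remote job only when it's the clear winner; the local job is
// always started regardless, as the fallback.
function shouldDispatchRemote(job, clientLoad, net, serverLoad) {
  const margin = 0.8; // remote must be predicted at least 20% faster
  return (
    estimateServerMs(job, net, serverLoad) <
    estimateLocalMs(job, clientLoad) * margin
  );
}

// Example: a heavy job on a busy client, decent link, idle server.
shouldDispatchRemote(
  { cost: 1000, payloadBytes: 10000 },
  { msPerUnit: 2 }, // busy client: 2 ms per unit of work
  { latencyMs: 30, bytesPerMs: 100 }, // 30 ms one-way latency, ~100 B/ms
  { msPerUnit: 0.1 } // idle server
); // → true: ~260 ms remote vs 2000 ms local
```

The margin is the part that controls the cancelled-to-completed ratio mentioned above: widen it and you dispatch fewer doomed remote jobs, at the risk of occasionally not starting a remote job that would have won.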