Merely shoving some algorithm that is done by hand onto a computer is nothing novel. This isn't to say a particularly clever and non-intuitive software implementation couldn't be patented. But just doing it in software is not novel; it is obvious.
Sure, if the algorithm is already done by hand. But what if you come up with some novel, nonobvious algorithm, like a way to calculate interstellar warp coordinates? Then doing it either by hand or on a computer is novel... The question is whether it would still be patent eligible or not.
Under Bilski and CLS and current jurisprudence, a claim just to the algorithm would not be patentable... because someone could do it by hand. But if the claim had some additional limitations that specifically recited the computer, such that while you could do the algorithm by hand, you couldn't do the algorithm in the claim by hand, then it would be patentable (e.g. say it included a step of transmitting the data to a cloud service for distributed processing - that particular step may not be novel, but remember that the rest of the claim includes your novel, nonobvious algorithm).
I keep recommending these rules:
1. If it's already being done in the real world, doing it on a computer is not patentable per se.
That's currently the rule: computers are known, and if your method is known, then simply doing it on a computer is not patentable.
However, what if you have to do additional steps to make it work on a computer? For example, in the real world, we can look at someone and easily recognize their face as belonging to a friend... but machine vision and facial recognition are really, really difficult. There's a whole bunch of processing that has to be done, because computers don't inherently recognize faces. So, while the broad concept of "recognizing a face, on a computer" wouldn't be patentable, "detecting a first location corresponding to a first eye; identifying a second location corresponding to a second eye; determining an approximate facial width based on the inter-eye distance; identifying a mouth shape in a third location; etc., etc.," would be.
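To make the point concrete, here's a toy sketch of the claim steps just quoted. Everything in it is illustrative: the `detect_*` functions stand in for trained detectors and just return fixed points, and the 2.5x face-width multiplier is a made-up rule of thumb, not anything from the claim language above.

```python
from dataclasses import dataclass

@dataclass
class Point:
    x: float
    y: float

def detect_first_eye(image) -> Point:
    # Placeholder for a trained eye detector (hypothetical).
    return Point(40.0, 60.0)

def detect_second_eye(image) -> Point:
    return Point(80.0, 60.0)

def approximate_face_width(eye1: Point, eye2: Point) -> float:
    # "Determining an approximate facial width based on the inter-eye
    # distance" - the 2.5 multiplier is an illustrative assumption.
    inter_eye = ((eye2.x - eye1.x) ** 2 + (eye2.y - eye1.y) ** 2) ** 0.5
    return 2.5 * inter_eye

def locate_mouth(eye1: Point, eye2: Point) -> Point:
    # "Identifying a mouth shape in a third location" - here, a fixed
    # offset below the midpoint of the eyes (placeholder).
    return Point((eye1.x + eye2.x) / 2, eye1.y + 45.0)

image = None  # stand-in for pixel data
e1, e2 = detect_first_eye(image), detect_second_eye(image)
width = approximate_face_width(e1, e2)
mouth = locate_mouth(e1, e2)
```

None of these steps exists when a human glances at a friend's face; they only exist because a computer is doing the recognizing, which is exactly why a claim reciting them is narrower than "recognizing a face, on a computer."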
2. Doing a simulation of a real-world item is similarly not patentable per se.
Again, same as above - if the real-world item is known, then simply simulating it isn't patentable... unless you have to do other things, or make approximations that don't exist in the real world. For example, the real world has a sky, and clouds, and changes smoothly from dark blue to light blue as you get near the sun... but doing volumetric lighting simulations and simulated Rayleigh and Mie scattering in a way that doesn't kill your GPU is really difficult. Why shouldn't a narrower claim to those be patentable?
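For a feel of what the simulation actually computes, here's a tiny sketch of the physics behind the blue sky: Rayleigh scattering intensity scales as 1/λ⁴, so shorter (blue) wavelengths scatter far more than red ones. The wavelengths and reference value are illustrative; a real-time renderer replaces the full volumetric integral with much cheaper approximations, and those approximations are the hard, claimable part.

```python
def rayleigh_relative_intensity(wavelength_nm: float,
                                reference_nm: float = 550.0) -> float:
    # Rayleigh scattering intensity is proportional to 1/wavelength^4;
    # this returns intensity relative to a green reference wavelength.
    return (reference_nm / wavelength_nm) ** 4

blue = rayleigh_relative_intensity(450.0)  # blue light
red = rayleigh_relative_intensity(650.0)   # red light
ratio = blue / red  # blue scatters roughly 4x as strongly as red
```

The real world just *has* a sky that does this for free; the approximations needed to fake it at 60 frames per second don't exist in nature, which is the point of the rule.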
3. Doing wirelessly something formerly done over a wired network, or remotely something formerly done locally, or on a lil' phone or tablet or tricorder, is also not patentable per se.
And again, that's how it works. You can't get a claim to "transmitting data over a network, wherein the network is wireless" but you can get one directed to some of the steps you have to do with wireless communications that you don't have to do with wired communications, like the additional signal/noise processing, frequency heterodyning, burst interference avoidance, spread spectrum broadcasting, etc.
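One of those wireless-only steps, spread spectrum broadcasting, can be sketched in a few lines. This is a toy direct-sequence version: each data bit is XORed against a faster pseudo-noise "chip" sequence before transmission, and the receiver de-spreads with the same sequence, using a majority vote so that a burst flipping a few chips doesn't flip the bit. The chip sequence and bit values here are made up for illustration.

```python
CHIPS = [1, 0, 1, 1, 0, 0, 1]  # pseudo-noise chip sequence (illustrative)

def spread(bits):
    # Each bit is expanded to len(CHIPS) chips by XOR with the sequence.
    return [b ^ c for b in bits for c in CHIPS]

def despread(chip_stream):
    n = len(CHIPS)
    bits = []
    for i in range(0, len(chip_stream), n):
        block = chip_stream[i:i + n]
        # XOR with the chip sequence recovers the bit in every chip;
        # majority vote tolerates a few corrupted chips.
        votes = [c ^ k for c, k in zip(block, CHIPS)]
        bits.append(1 if sum(votes) > n // 2 else 0)
    return bits

tx = spread([1, 0, 1])
tx[3] ^= 1  # simulate one chip corrupted by interference in the air
rx = despread(tx)  # still recovers [1, 0, 1]
```

A wired link with a clean channel never needs this step, which is why a claim reciting it is doing more than "the old method, but wireless."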
4. This is not to say particularly clever implementations (the "machine" part of "virtual machine") could not be patented.
Or stuff that you only have to do with virtual machines, like dynamically provisioning them based on load, or having dozens of virtual machines sharing a single hardware network interface and single memory bus, and transparently distributing packets to them in such a way that each machine doesn't realize there are others using the card.
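The "dynamically provisioning based on load" idea can be sketched as a toy autoscaler: size the VM pool so each VM sits at a target utilization. The capacity and utilization numbers are invented for illustration, and real schedulers add hysteresis, warm-up delays, and limits that this sketch omits.

```python
import math

def target_vm_count(total_load: float, per_vm_capacity: float,
                    target_utilization: float = 0.7) -> int:
    # Provision enough VMs that each runs at about target_utilization
    # of its capacity; never drop below one VM.
    return max(1, math.ceil(total_load / (per_vm_capacity * target_utilization)))

# e.g. 1000 requests/s, each VM handling 100 requests/s, kept at ~70%:
n = target_vm_count(1000, 100)
```

There's no physical-world analogue to spinning up a dozen extra machines in seconds and tearing them down an hour later, which is why claims to that mechanism aren't just "doing it on a computer."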