Thanks for the link. My problem is that there isn't any one bit you can point to and say "that's the slow bit" -- unless you count telling the code which parameters to use, varying them, and graphing the results when done. I currently do those parts with bash and Octave, and to be fair I'd probably be better off doing both in Python.
The main work is the simulation, and there the amount of data is trivially small (say a 20x20 lattice of sites, each holding counts of susceptible and infective animals). I need arrays to store the numbers of individuals; the birth, death, infection, recovery, and dispersal rates for each site; and one that keeps track of which sites need updating.
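To make that concrete, here's a minimal sketch in C of what per-site state like that might look like -- the struct and array names (`Site`, `Lattice`, `needs_update`, the 20x20 dimensions) are my own illustration, not the actual code:

```c
/* Hypothetical layout for the lattice state described above.
 * All names and the exact grouping of fields are illustrative. */
#include <stdbool.h>

#define NX 20
#define NY 20

typedef struct {
    int susceptible;  /* susceptible animals on this site */
    int infective;    /* infective animals on this site */
    /* per-site event rates */
    double birth, death, infection, recovery, dispersal;
} Site;

typedef struct {
    Site site[NX][NY];
    bool needs_update[NX][NY]; /* sites whose rates must be recomputed */
} Lattice;
```

Whether the rates live inside the site struct or in separate parallel arrays is a cache-layout choice; the point is just that the whole state is a few kilobytes.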
The bits you might think would be the slow bits (summing arrays, checking there are no groups with negative numbers of individuals, converting rate matrices into cumulative distribution functions and using a binary search to select an event) just don't seem to have much effect on performance. The only part that significantly stands out is calculating dispersal across the entire lattice, rather than just nearest-neighbour dispersal. The rest is lots of small things that need to be done randomly and frequently.
I have profiled the model quite a bit, and the C code is over 300 times faster than the prototype written in Octave (which already took advantage of vectorisation whenever possible and used fast algorithms). But the non-performance-critical bits are either deeply embedded in the code (which is horrifically loopy by the nature of the problem) or necessary for the rest to work, so Python isn't really going to help.