Apart from having to rewrite existing code, one issue is that in some fields (HPC and supercomputing) the facilities can be very conservative about what they install. In many places, they install whatever the safe, conservative default is when the system is built, then never update it. If they do offer newer versions of something, they add them alongside the existing software rather than replacing it. When the system is upgraded or replaced, they make very sure to add (or backport) the old versions to the new system.
A major reason is that research projects, which can run for 5-10 years, don't want to switch software versions mid-stream. If you are comparing an analysis of your current data with an analysis of data from five years ago, you want to be sure any differences are due to the data, not to a software version change somewhere along the line.
This means that many systems may not offer Python 3 at all; or if they do, they still point to Python 2.7 as "the" python, and will keep doing so until the system is decommissioned years from now. And that in turn means a lot of scientific software still primarily (and sometimes only, though that's becoming rare) targets Python 2, since that's where a lot of its users are.
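To illustrate (this is my own sketch, not tied to any particular package): maintainers in that position typically straddle both versions from a single codebase, leaning on __future__ imports so the same file runs unchanged on Python 2.7 and Python 3:

    # Make Python 2.7 behave like Python 3 for printing and division,
    # so one codebase runs identically on both interpreters.
    from __future__ import print_function, division

    import sys

    def describe_interpreter():
        """Return a short description of the running interpreter."""
        # sys.version_info and str.format() exist on both 2.7 and 3;
        # 3 / 2 gives 1.5 on both thanks to the division import above.
        major, minor = sys.version_info[0], sys.version_info[1]
        return "Python {0}.{1}, 3/2 = {2}".format(major, minor, 3 / 2)

    if __name__ == "__main__":
        print(describe_interpreter())

That kind of dual-version support is extra maintenance work, which is exactly why so many projects keep Python 2 as their primary target as long as that's where their users are.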
Python 2 will in practice live on for at least another decade, and quite likely longer than that. I do agree that new projects should probably give serious consideration to Python 3, but Python 2 is not going to disappear.