Modern mobile devices have faster CPUs than they used to, but the CPUs still aren't "fast". And they have very limited RAM, with no swap.
I spent the last week implementing, profiling, and improving disk-backed image caching with a front-end LRU memory cache for the iPhone, and experimenting with offloading batch image processing to an OpenGL FBO. Image interpolation while scaling is so expensive, even on the iPhone's relatively fast CPU, that caching thumbnails is absolutely necessary for me.
The cache implementations themselves had to be highly optimized to pull images off disk fast enough to run inside a tight animation loop, while also supporting a background thread that renders not-yet-cached thumbnails and saves them to the disk cache.
I can't even fathom writing this in Python. Any spare CPU I have, I put to good use -- there's absolutely none available to spend on a slow interpreter, even for non-"performance-critical" parts. If a code path isn't performance-critical, I can always use the spare CPU time to do more background work and improve perceived UI performance.