Most high-level scripting languages (can't speak for PHP, but it's true for Perl, Ruby, and Python) implement simple user-defined objects as dictionaries. That said, the lookup cost, while obviously much higher than pre-compiled vtable dispatch, is not as expensive as you might imagine. Attribute access uses interned strings, and strings cache their hash code the first time they're hashed; if the hash never has to be recomputed, and equality checks (for attribute lookup) reduce to a simple reference identity test, the CPU cost is basically nil. What's left is the risk of cache misses from "random" access into the hash table, and CPython optimizes for that too: it uses open addressing rather than chaining, the set implementation probes within the same cache line before taking "long steps" through the table, and the dict probe sequence mixes in the high bits of the hash to avoid clustering from consecutive hash codes.
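As a quick illustration of both points, here's a minimal sketch (the `Point` class is made up for illustration, and the timings will vary by machine):

```python
import timeit

class Point:
    def __init__(self, x, y):
        self.x = x
        self.y = y

p = Point(1, 2)

# Instance attributes live in an ordinary dict; p.x is (roughly) sugar
# for p.__dict__['x'] once the type's descriptors have been checked.
print(p.__dict__)              # {'x': 1, 'y': 2}
print(p.x == p.__dict__['x'])  # True

# Strings cache their hash on first use: the first hash() of a big
# string pays the full cost, a repeat call returns the cached value.
s = "x" * 10_000_000
print(timeit.timeit(lambda: hash(s), number=1))  # computes the hash
print(timeit.timeit(lambda: hash(s), number=1))  # cached: near zero
```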
Stuff that kills Python performance includes:

- Minimal optimization by the bytecode compiler, and essentially none by the bytecode interpreter: while each individual hash table lookup is cheap, a loop performs it over and over, even when you're accessing the same attribute on the same object, because neither the compiler nor the interpreter is sophisticated enough to recognize what's happening (see the first sketch below).
- Inability to parallelize CPU-bound tasks using threads, thanks to the GIL.
- Lack of "primitive" types: even basic math operates on boxed heap objects, so it carries substantial memory allocator overhead and causes memory fragmentation (see the second sketch below).
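To make the repeated-lookup point concrete, here's a classic micro-benchmark sketch (names like `Tally` are hypothetical): hoisting the attribute and method lookup out of the loop is a well-known Python micro-optimization precisely because the interpreter won't do it for you.

```python
import timeit

class Tally:
    def __init__(self):
        self.items = []

def slow(t, n):
    for i in range(n):
        t.items.append(i)    # attribute + method lookup on every pass

def fast(t, n):
    append = t.items.append  # do the dict lookups once, keep a local
    for i in range(n):
        append(i)            # locals use indexed slots, not a hash table

print(timeit.timeit(lambda: slow(Tally(), 100_000), number=10))
print(timeit.timeit(lambda: fast(Tally(), 100_000), number=10))
```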
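And a small sketch of the boxed-number overhead, using only the standard library:

```python
import sys
from array import array

# Every Python int is a full heap object with a refcount and type pointer,
# so even 0 costs far more than a machine word:
print(sys.getsizeof(0))      # typically 28 bytes on 64-bit CPython

# A million ints in a list means a million pointers to a million boxed
# objects; array('q') packs the same values as raw 8-byte machine ints.
boxed = list(range(1_000_000))
packed = array("q", range(1_000_000))
print(sys.getsizeof(boxed))   # just the pointer array; the int objects are extra
print(sys.getsizeof(packed))  # ~8 MB total, contiguous, no per-element objects
```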
TL;DR: Python's performance problems aren't primarily a result of hash tables.