The OP's complaint was the lack of a 64-bit Firefox for Mac. Apple doesn't do x32, so it's not even an option there.
Data starvation can be mitigated (or manually controlled) on 64-bit by understanding your data. High-performance code uses contiguous blocks of memory, is very aware of data layout, and avoids pointer chasing. A trivial example: allocate arrays of int32_t instead of int64_t where the values fit. So on a 64-bit architecture you still generally win performance-wise.
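Here's a quick sketch of what I mean (my own toy example, array size is arbitrary): on a 64-bit build you still choose your data widths, and a contiguous int32_t array packs twice as many elements per cache line as int64_t, so the wider pointers don't cost you anything on the hot path.

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    enum { N = 1000000 };  /* arbitrary size for illustration */

    int main(void) {
        /* On x86-64 pointers are 8 bytes, but the data doesn't have to
         * be: int32_t packs 16 values into a 64-byte cache line, int64_t
         * only 8, so the narrower type halves memory traffic. */
        int32_t *a = malloc(N * sizeof *a);   /* ~4 MB instead of ~8 MB */
        if (!a) return 1;

        for (int i = 0; i < N; i++)
            a[i] = i;

        int64_t sum = 0;                      /* wide accumulator: the sum
                                                 would overflow int32_t */
        for (int i = 0; i < N; i++)
            sum += a[i];                      /* contiguous, prefetch-friendly
                                                 scan, no pointer chasing */

        printf("sum=%lld element=%zu bytes pointer=%zu bytes\n",
               (long long)sum, sizeof a[0], sizeof(void *));
        free(a);
        return 0;
    }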
x32 failed for a lot of reasons.
From the article you cited, here's a quote:
"they just really don't see the [x32] ABI as being worthwhile ... to make maintaining an extra ABI worthwhile."
As the quote highlights, the real crux of the problem is how much of a pain it is to maintain yet another ABI.
The performance gains from x32 didn't justify it over true 64-bit, which means data starvation from larger pointer sizes wasn't the dominating factor. Comparing i386-to-x32 against i386-to-x64, the latter wins: it gets the speed of the extra registers, a negligible performance hit from wider pointers, and the bonus of large addressable memory if you need it.
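To make the ABI differences concrete, here's a tiny check you can compile under each target (the gcc -m32/-mx32/-m64 flags are real, but -mx32 only works where your toolchain and libraries were actually built for the x32 ABI, which most distros never shipped):

    #include <stdio.h>

    int main(void) {
        /* Expected results:
         *   i386 (-m32):  pointer=4, long=4 -- 8 GPRs, 32-bit registers
         *   x32  (-mx32): pointer=4, long=4 -- 16 GPRs, 64-bit registers,
         *                                      but only 4GB address space
         *   x64  (-m64):  pointer=8, long=8 -- 16 GPRs, full 64-bit
         *                                      addressing */
        printf("sizeof(void *) = %zu\n", sizeof(void *));
        printf("sizeof(long)   = %zu\n", sizeof(long));
        return 0;
    }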
To show the pain of another ABI, here is a simplified example: you are using a video editor in GNOME under x32, and you suddenly need more than 4GB of addressable memory. That means you need an x64 build of the editor. But then you need GNOME libraries built for x64 as well, so you have to load an entirely separate instance of GNOME, all of its dependencies, and the video editor. Either you shut down your current environment and boot into another, or you load both simultaneously. (This is what happens now when you run 32-bit i386 apps on a modern Mac, hence a large RAM hit, not to mention that every binary also takes double the disk space.)