Nginx 1.0+ supports backend keepalives via a patch and module, though neither is in an official release yet. The code comes from the principal nginx author, though, so it should make it into a release soon.
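For anyone curious, the config ends up looking roughly like this. This is a sketch based on how the keepalive directive behaves in later stock nginx; the upstream name and addresses are made up, and proxy_http_version only showed up in later releases, so check against whatever patch/module you actually apply:

    upstream app_backends {
        server 10.0.0.11:8080;   # hypothetical JBoss/Tomcat boxes
        server 10.0.0.12:8080;
        keepalive 32;            # cache up to 32 idle upstream connections per worker
    }

    server {
        location / {
            proxy_pass http://app_backends;
            proxy_http_version 1.1;         # needed so upstream connections can be reused
            proxy_set_header Connection ""; # don't forward "Connection: close" to the backend
        }
    }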
That said, your back-ends are usually very close network-wise to the nginx proxies, and connections to them can be established and torn down in less than 1 ms. Since the back-ends are usually thread-based, keeping those connections short-lived is a good idea anyway (which is why everybody has to turn off HTTP keepalives in Apache when they start to scale). Disabling HTTP keepalives toward the client, on the other hand, SUCKS for their experience, especially if they are on wireless/mobile connections or on another continent.
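With nginx terminating the client connections, keeping client-side keepalives on is just a couple of directives and costs almost nothing under the event model (the values below are generic examples, not our settings):

    http {
        keepalive_timeout  65;    # keep idle client connections open for reuse
        keepalive_requests 100;   # and let each connection carry many requests
    }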
I manage a medium-sized SaaS application with about 0.7M users, and we front dozens of honking physical JBoss/Tomcat boxes with a single-core Linux VM (1 GB of RAM) running nginx, with a hot standby of course. Nginx is only proxying to back-ends, not serving static files (except for a small 512 MB set of really hot files served via proxy_cache, which stays in the filesystem cache). Nginx itself uses only about 100 MB with 8 worker processes. This isn't surprising: even the biggest $50K F5 load balancers have very wimpy CPU and RAM specs, but like nginx they use an event-driven model to keep RAM usage and context switching to a minimum.
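The proxy_cache part is only a few lines; something along these lines (the paths, zone name, and sizes here are placeholders, not our real config):

    worker_processes 8;

    http {
        proxy_cache_path /var/cache/nginx/hot levels=1:2
                         keys_zone=hotfiles:16m max_size=512m inactive=60m;

        server {
            location /static/ {
                proxy_pass  http://app_backends;
                proxy_cache hotfiles;               # cache the hot static set on disk,
                proxy_cache_valid 200 301 302 10m;  # which then lives in the FS cache
            }
        }
    }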
One problem with running nginx on Linux is that asynchronous disk IO on Linux is horribly broken by design: it only works with direct, uncached IO (O_DIRECT), which is really only useful for databases. So we are looking at moving nginx to FreeBSD so we can take advantage of asynchronous disk IO on top of the non-blocking network IO nginx already does by default.
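To make the constraint concrete, here is roughly what it looks like in nginx config (the location and threshold are made up): on Linux, aio only kicks in together with directio, i.e. bypassing the page cache, whereas on FreeBSD `aio on;` alone is enough.

    location /big-files/ {
        aio      on;
        directio 512k;   # Linux: native AIO requires O_DIRECT, so reads >= 512k
                         # bypass the page cache; on FreeBSD this line isn't needed
    }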
The one-thread/process-per-connection model of Apache really just doesn't cut it for web-scale workloads. We were able to re-purpose our dedicated Apache front-end boxes as application servers instead because of the RAM savings. So nginx saves us about $2k per month in colo costs.