Think of how most developers use JavaScript nowadays: it's a target language for their compilers.
Whether the source was Java (the GWT compiler) or JavaScript itself (the YUI Compressor, the Google Closure Compiler), the fact remains that what browsers are given to run is not what the developers wrote. That is standard practice in the software business (it's called compilation), and for good reasons.
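To make that concrete, here is a sketch of the gap between what a developer writes and what a minifying compiler ships. The function and its compiled form are invented for illustration; real Closure Compiler output will differ, but the flavor is right:

    // What the developer writes:
    function distanceBetween(pointA, pointB) {
      const dx = pointA.x - pointB.x;
      const dy = pointA.y - pointB.y;
      return Math.sqrt(dx * dx + dy * dy);
    }

    // Roughly what the browser actually receives after minification:
    function d(a,b){var c=a.x-b.x,e=a.y-b.y;return Math.sqrt(c*c+e*e)}

The browser never sees the readable version, only the compiler's output.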
Now, JS makes for a poor machine language. So we could either beat around the bush with an intermediate bytecode (Java went there, and so did Python and all the others, with varying results) or go for the real thing and come up with a good x86 sandboxing and code-verification standard.
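To give a feel for what "code verification" means here, below is a toy sketch in the spirit of Google's Native Client validator. Everything in it is invented for illustration (the opcode table, the 32-byte bundle size, the function name); a real validator decodes actual x86 instructions and enforces many more rules:

    // Toy static verifier: accept a code blob only if every instruction
    // is on a whitelist and no instruction straddles a bundle boundary,
    // so that every bundle-aligned address is a safe jump target.
    const BUNDLE_SIZE = 32; // NaCl-style alignment unit (assumed)

    // Pretend opcodes with fixed lengths, for the sketch's sake.
    const SAFE_OPCODES = new Map<number, number>([
      [0x90, 1], // nop
      [0x01, 2], // add r/m32, r32 (opcode + ModRM byte)
      [0x89, 2], // mov r/m32, r32 (opcode + ModRM byte)
    ]);

    function verify(code: Uint8Array): boolean {
      let offset = 0;
      while (offset < code.length) {
        const length = SAFE_OPCODES.get(code[offset]);
        if (length === undefined) return false; // unknown or unsafe (e.g. int, ret)
        if (offset + length > code.length) return false; // truncated instruction
        if ((offset % BUNDLE_SIZE) + length > BUNDLE_SIZE) return false; // straddles a bundle
        offset += length;
      }
      return true;
    }

The point is that safety gets checked statically, once, before the code runs, instead of being enforced instruction by instruction in an interpreter.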
Remember, x86 is currently in use by 99% of desktop machines. When other architectures gain momentum, websites will just offer two or more compiled versions of their code. In the meantime, machines on those architectures will just have to emulate or translate the x86 instruction set, a task for which a large open-source code base has already been developed, and which would still beat parsing plain JavaScript by several orders of magnitude.
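How a site might actually offer those versions is left open; one hedged possibility is plain content negotiation. The header name and file paths below are hypothetical (no such standard header exists today):

    // Hypothetical negotiation of multi-architecture binaries.
    // "X-CPU-Arch" is invented for this sketch; a real deployment
    // would need an agreed-upon standard.
    const BINARIES: Record<string, string> = {
      "x86": "/app/main.x86.bin",
      "x86_64": "/app/main.x86_64.bin",
      "arm": "/app/main.arm.bin",
    };

    function pickBinary(archHeader: string | undefined): string {
      // Fall back to x86: per the argument above, any client
      // can emulate or translate it.
      return BINARIES[archHeader ?? "x86"] ?? BINARIES["x86"];
    }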
So what's the problem with that, again?