SSH connections take For. Eh. Ver., relatively speaking:
% time ssh localserver exit
ssh localserver exit 0.02s user 0.02s system 2% cpu 2.061 total
Subsequent requests using the same connection are quick enough:
% time ssh localserver exit
ssh localserver exit 0.00s user 0.00s system 20% cpu 0.039 total
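The speedup between the first and second ssh above comes from OpenSSH connection sharing, which isn't on by default. A minimal ~/.ssh/config sketch (the ControlMaster feature is the real mechanism; the socket path and persist timeout are just example values):

```
Host localserver
    ControlMaster auto
    ControlPath ~/.ssh/cm-%r@%h:%p
    ControlPersist 10m
```

With this, the first ssh sets up a master connection and later invocations multiplex over its socket instead of doing a fresh handshake.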
But compare to an HTTPS connection to a remote host:
% cat curlcfg
verbose
trace-time
url = "https://www.google.com/"
output = "/dev/null"
head
url = "https://www.google.com/"
output = "/dev/null"
head
% curl -K curlcfg
...
A brand new request to a remote server takes just 263ms, and a second request over the same connection only 81ms. Considering that the server is 25ms away, that makes it a bit faster than a cached SSH connection to a local machine.
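The second request is cheap because HTTP/1.1 keeps the TCP (and TLS) connection open between requests. A minimal local sketch using only the Python standard library (the throwaway server here is a hypothetical stand-in for the remote host, without TLS, to keep it short):

```python
# Demo: HTTP/1.1 keep-alive lets a second request skip the TCP handshake.
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"        # HTTP/1.0 would close after each reply

    def do_HEAD(self):
        self.send_response(200)
        self.send_header("Content-Length", "0")
        self.end_headers()

    def log_message(self, *args):        # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
statuses = []
for _ in range(2):                       # both requests share one TCP connection
    conn.request("HEAD", "/")
    resp = conn.getresponse()
    resp.read()                          # drain so the connection can be reused
    statuses.append(resp.status)

conn.close()
server.shutdown()
```

The same reuse is what curl is doing above: one TCP+TLS setup, then two requests over it.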
But even more than that, SSH in this context is a transport, not a protocol. It allows you to build and manage secure connections, but you still have to write a protocol on top of it ("I'll send this command, and you reply with..."). Even if you "cheat" and use SFTP, you're still missing out on fixes to the thousands of little issues people have worked out with HTTP over the years. What's the SFTP equivalent of If-Modified-Since? How will redirects to remote servers work? What's your cross-domain scripting policy? How are you going to handle anonymous connections?
Use SSH for SSH. Use HTTP for HTTP. They're separate things for good reasons.