All new equipment has supported IPv6 for a long time, but why purchase new when you can buy gear a generation or two old at liquidation prices? When my ISP rolled out all-new gigabit fiber and replaced their core router with a shiny new one that has more 10Gb and 100Gb ports than they'll need for a long time, I'm sure the new box supported IPv6, but there are bound to be a few pieces of old gear that need to go away first. Then they need training and planning before they attempt the rollout.
"Built in caching proxy server" - Doesn't help with HTTPS, and with 10Gb internet just around the corner, I challenge you to make a cheap device that can proxy data at 10Gb/s.
"The device needs a real time virus scanner that is automatically updated" - Not so much a virus scanner as an IDS. You can't virus scan at 10Gb/s.
"Must of course include basic traffic shaping and other useful stuff" - Even professionals get traffic shaping wrong most of the time. An AQM like CAKE or fq_codel is all that is needed: fair queueing and flow isolation to combat bufferbloat.
"You could even use VPNs to link two homes together" - More features!
You have a lot of great ideas, but as it is, even $400 consumer-grade routers are riddled with security holes that never get fixed. They can't even get NAT or UPnP right; what makes you think they can implement the more complicated features securely? Remember, most of these devices are EoL by the time you can buy them. Supporting devices is a cost, and most companies don't want that.
Either people need to take responsibility for their own security, or we need a better open-source security framework and support model that lets companies make the devices while the open-source community handles the software side of things. We cannot trust companies to maintain bug fixes for their devices.
My example is a little bit simplified: once a TCP stream gets moving the packets are spaced apart, and only the initial transfer bursts a full window of segments at line rate. A bit of trivia: Google modified their TCP stack to increase the number of initially bursted segments, because most responses are quite small, and if you can fit the entire response in the initial burst the client only waits one RTT. If there is even one more segment to send, the client now waits at least 2 RTTs.
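To put rough numbers on that, here's a back-of-the-envelope sketch of how many round trips a response takes given an initial congestion window, assuming a typical 1460-byte MSS and classic slow-start doubling (the specific numbers and the IW4/IW10 comparison are illustrative assumptions, not Google's exact implementation):

```python
# Sketch: RTTs a client waits for a response, given an initial
# congestion window (in segments) and slow-start doubling per RTT.
# MSS and window values are illustrative assumptions.
import math

MSS = 1460  # bytes per segment; typical for Ethernet paths

def rtts_to_deliver(response_bytes, initial_window_segments):
    segments = math.ceil(response_bytes / MSS)
    window = initial_window_segments
    rtts = 1  # the first burst costs one RTT
    while segments > window:
        segments -= window  # this round trip drains one window's worth
        window *= 2         # slow start roughly doubles the window
        rtts += 1
    return rtts

# A ~14 KB response fits in an initial window of 10 segments
# (10 * 1460 = 14600 bytes), but not in one of 4 segments.
print(rtts_to_deliver(14_000, 10))  # 1 RTT
print(rtts_to_deliver(14_000, 4))   # 2 RTTs
```

That extra round trip is exactly the cost being avoided: one straggler segment past the initial window doubles the wait for a small response.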
10ms to Chicago, 30ms to New York City, Atlanta, and Washington (AWS), 40ms to Texas and Florida, and 60ms to Cali. Short RTTs help a lot with TCP.
Bandwidth isn't everything; ping, jitter, and loss are also important. Jitter typically indicates congestion, and so does loss. I can reach every major datacenter in the world with under a 250ms RTT. That includes Moscow, India, China, Japan, South Korea, New Zealand, and Australia. Also, all under 1ms of jitter.
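A quick way to see why RTT matters as much as raw bandwidth: a single TCP flow can never exceed window / RTT, no matter how fast the link is. A small sketch, using the classic 64 KB window (the limit without window scaling) purely as an illustrative assumption:

```python
# Sketch: per-flow TCP throughput ceiling = window / RTT.
# The 64 KB window is the classic no-window-scaling limit,
# used here only as an illustrative assumption.
def max_throughput_mbps(window_bytes, rtt_seconds):
    return window_bytes * 8 / rtt_seconds / 1e6  # bits per second -> Mb/s

# Same window, different RTTs: fine at 10 ms, painful at 250 ms.
print(max_throughput_mbps(65536, 0.010))  # ~52 Mb/s
print(max_throughput_mbps(65536, 0.250))  # ~2 Mb/s
```

So a 10ms RTT to Chicago versus a 250ms RTT to Australia is roughly a 25x difference in what one flow with a fixed window can push, before loss and jitter make it worse.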
The idea is one machine, one address
More like 5 IPs per computer. Each for different usages.