It's unlikely that we will ever run out of IPv6 addresses, as there are enough to give each person living on Earth roughly 5×10^28 of them. That is quite likely going to be enough for anybody who could possibly follow in the future. So an IPv7 is technically possible, but for reasons related to the speed of light and the physical size of the Earth and solar system, it would be very difficult to ever get to the point where you need one.
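For scale, here's the back-of-envelope arithmetic behind that figure (assuming a population of about 8 billion; the original estimate likely used a slightly smaller one):

```python
# Rough check: total IPv6 address space divided among everyone on Earth.
total_ipv6 = 2 ** 128
population = 8_000_000_000  # assumed; pick your favorite estimate
per_person = total_ipv6 // population
print(f"{per_person:.3e}")  # about 4.25e+28 addresses per person
```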
You seem to grossly underestimate the power of human stupidity.
Here is just a hint of one of the millions of possible things that might go wrong:
(time: current) In IPv4, many web sites want redundancy and so are multihomed. For that, they get an AS number and a provider-independent IPv4 block, and then they BGP-peer with (at least) two ISPs. However, there's a catch: due to routing table sizes, many routers won't accept BGP prefixes longer than /24, which makes a web site that would be content with a /30 request a /24 instead (or its multihoming redundancy won't work most of the time). So we're wasting 6 bits here, or over 18% of the IPv4 prefix bits.
(time: near future) In IPv6, many web sites will still want redundancy and so be multihomed. While there are some other ideas, none of them are currently really working or deployed, except PI addressing plus BGP, the same as in IPv4. And while the intent is for most companies to get a /48 in IPv6, BGP routers still have the same problem, only worse: the routes themselves are larger. So they'd probably do in IPv6 what they did in IPv4 and only accept BGP routes no longer than /32. Which makes every SME want at least a /32... Net result?
While some small company might need much less than a /64, it now grabs a /32. That's 32 bits wasted in the blink of an eye. And it probably wouldn't stop there...
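The waste figures above are simple prefix arithmetic (the function name here is just for illustration):

```python
# Each bit by which the announced prefix is shorter than the needed one
# doubles the size of the allocated block.
def wasted_bits(needed_prefix: int, announced_prefix: int) -> int:
    return needed_prefix - announced_prefix

ipv4_waste = wasted_bits(30, 24)  # 6 bits: a 2**6 = 64x larger block
ipv6_waste = wasted_bits(64, 32)  # 32 bits: a 2**32 (~4.3 billion) x larger block
print(ipv4_waste / 32)  # 0.1875, the "over 18%" of IPv4's 32 prefix bits
```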
Problem is, BGP doesn't scale. It hasn't with IPv4, and it will get worse with IPv6 once IPv6 catches on.
But it is familiar and it works, so people will go with it out of inertia.
There have been attempts at fixing this. For example, SCTP allows many-to-many bindings, so if HTTP were modified to use SCTP instead of TCP, a web server could have one IP from one ISP and another IP from another ISP, and SCTP would take care of the redundancy. No site-specific routes would have to be kept on routers, so the routing cost would be O(1) no matter how many multihomed web sites you had.
But you would first have to convert the whole of the Internet (or at least stuff like WWW, IMAP, SMTP, ...) to use SCTP instead of TCP, all over the world. FAIL.
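And deployment is awkward even at the application level: this Linux-oriented sketch only probes whether the OS exposes SCTP at all. The multihomed binding itself would need sctp_bindx(), which Python's stdlib does not expose (only third-party bindings like pysctp do):

```python
import socket

# Sketch: probe for kernel-level SCTP support. Creating the socket is as far
# as the Python stdlib goes; binding a *set* of local addresses (the
# multihoming part) needs sctp_bindx() from third-party bindings.
def sctp_available() -> bool:
    if not hasattr(socket, "IPPROTO_SCTP"):
        return False  # platform doesn't even define the protocol number
    try:
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM,
                             socket.IPPROTO_SCTP)
    except OSError:
        return False  # kernel module missing or protocol unsupported
    sock.close()
    return True

print(sctp_available())
```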
There is an easier way. Let's say we modify the web browser's behavior when the site it accesses has multiple A/AAAA records. It would try to connect to one of the IPs, but if the 3-way TCP handshake did not complete within some predefined time (say 2 seconds?), it would initiate connections to all the other IP(s) and pick the one that was fastest (RSTing all the others), and remember that decision for some time (say 15 minutes? Or an hour?).
That way, if everything worked, the behavior would be the same as the current one.
For sites with only one A record, the behavior would always be identical to the current one.
However, if a site had multiple A/AAAA records and one IP failed (timed out), there would be a one-time small burst of several connection attempts (depending on the number of DNS records), the user would experience a one-time-only 2-second delay, and then the working address would be cached in the browser and things would work automagically. The same effect as SCTP, but much easier to implement, and much easier to roll out gradually (which is the only way, really).
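A rough sketch of that fallback logic (all names here are hypothetical; a real browser would also abort the losing connections and tie the cache to its DNS state; modern "Happy Eyeballs", RFC 8305, works along broadly similar lines):

```python
import concurrent.futures
import socket
import time

FALLBACK_TIMEOUT = 2.0   # the "say 2 seconds?" handshake deadline
_decision_cache = {}     # host -> (winning address, expiry time)

def _timed_connect(addr, timeout):
    # Returns (elapsed, socket); raises OSError on failure or timeout.
    start = time.monotonic()
    sock = socket.create_connection(addr, timeout=timeout)
    return time.monotonic() - start, sock

def connect_with_fallback(host, port, addrs, remember=900):
    # If we made a decision recently, prefer the remembered address.
    cached = _decision_cache.get(host)
    if cached and cached[1] > time.monotonic():
        addrs = [cached[0]] + [a for a in addrs if a != cached[0]]
    # Normal path: one attempt, exactly like today's behavior.
    try:
        _, sock = _timed_connect((addrs[0], port), FALLBACK_TIMEOUT)
        return sock
    except OSError:
        pass
    if len(addrs) == 1:
        raise OSError("all addresses failed")
    # Fallback path: race the remaining addresses, keep the fastest winner.
    pool = concurrent.futures.ThreadPoolExecutor(len(addrs) - 1)
    futures = {pool.submit(_timed_connect, (a, port), FALLBACK_TIMEOUT): a
               for a in addrs[1:]}
    try:
        for fut in concurrent.futures.as_completed(futures):
            try:
                _, sock = fut.result()
            except OSError:
                continue
            _decision_cache[host] = (futures[fut], time.monotonic() + remember)
            # A real browser would now RST the slower connections; this
            # sketch just leaves them to finish in the background.
            return sock
    finally:
        pool.shutdown(wait=False)
    raise OSError("all addresses failed")
```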