You could just as easily have retained the public IPs while putting a firewall in front of them. NAT just added complexity, providing no benefit other than reducing the number of legacy addresses required.
By hiding vulnerable machines behind a firewall you haven't actually solved the problem: the moment someone introduces a single infected machine inside the perimeter, everything behind it becomes instantly infected.
In these days of mobile devices and wifi it is actually FAR more common for this to happen - totally unrelated devices find themselves on the same public wifi network. All it takes is for one employee to travel somewhere, connect to public wifi where an already infected machine sits, then bring their laptop back to the office. A public wifi network might use NAT for outside access for cost reasons, but that doesn't prevent other users of the same network from connecting to each other. Nor does it prevent users from opening arbitrary ports via UPnP (see the sketch below), or from tunneling to outside networks and thereby providing a route inside. If you're connecting to someone else's wifi you have absolutely no control over what the network manager does, or what the other users do.
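To illustrate how little that takes: any program on the LAN can ask a UPnP-enabled gateway to forward a port, no admin involvement required. A minimal sketch using the miniupnpc Python bindings (the port numbers and description string are arbitrary examples, and it assumes the gateway actually exposes UPnP IGD):

    import miniupnpc

    upnp = miniupnpc.UPnP()
    upnp.discoverdelay = 200      # ms to wait for SSDP responses
    upnp.discover()               # broadcast a search for gateways on the LAN
    upnp.selectigd()              # pick the first Internet Gateway Device found

    # Ask the router to forward external TCP port 8022 to this host's port 22.
    upnp.addportmapping(8022, 'TCP', upnp.lanaddr, 22,
                        'mapping opened from inside the NAT', '')
    print('reachable from outside at', upnp.externalipaddress(), 'port 8022')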
The reason worms don't commonly propagate this way any more is not down to NAT; it's down to more sensible defaults (e.g. the Windows firewall being enabled by default).
Nowadays most end-user malware does not rely on inbound connections; it exploits outbound connections made by the user (e.g. phishing, browser exploits). There is still malware that makes inbound connections, but it tends to target servers (which by their very nature need to have services open) and embedded devices. The vast majority of this kind of malware exclusively uses legacy IP.
Meanwhile this method of propagation is not actually practical with v6 due to the huge address space, so even if machines were vulnerable the chance of them being discovered and exploited is extremely slim.
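Back-of-the-envelope, assuming a (generous) sustained probe rate of one million addresses per second against a single /64:

    addresses = 2 ** 64                  # hosts in one IPv6 /64 subnet
    rate = 1_000_000                     # probes per second (generous assumption)
    seconds_per_year = 60 * 60 * 24 * 365

    years = addresses / rate / seconds_per_year
    print(f'{years:,.0f} years to sweep one /64')   # ~584,942 years

For comparison, the entire legacy IPv4 address space (2^32) sweeps in about 72 minutes at the same rate, which is why v4-wide scanning is routine.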
When doing enumeration against v6 networks you have to rely on public information such as DNS records and certificate transparency logs, or on access logs if you can convince a user to visit a site under your control. The former will typically only turn up servers that are meant to be public anyway; the latter only yields temporary addresses of end-user devices, which (as mentioned above) don't have listening services for you to attack these days, and if you've already convinced a user to visit your site, that connection is a far more useful attack vector irrespective of network configuration. If someone happens to have a random embedded device exposing default credentials over SSH, good luck finding it on a /64.
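To make the certificate transparency route concrete, here is a sketch that pulls logged hostnames for a domain from crt.sh, one public CT search front end (the query format reflects its JSON output as I understand it, and example.com is a placeholder):

    import json
    import urllib.request

    domain = 'example.com'   # placeholder target domain
    url = f'https://crt.sh/?q=%25.{domain}&output=json'

    with urllib.request.urlopen(url) as resp:
        entries = json.load(resp)

    # name_value can hold several newline-separated names per certificate
    names = sorted({n for e in entries for n in e['name_value'].split('\n')})
    for name in names:
        print(name)

Note that everything this finds is a name someone deliberately obtained a certificate for, i.e. a host that was meant to be reachable in the first place.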
I agree with you that IPv6 should be an option for people who want a public-facing IP without NAT, specifically for ease of self-hosting. But most people not only don't self-host anything; they don't even know what that means.
Even users who don't self-host use things that benefit from p2p (voice/video calls, gaming, etc.).
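That p2p dependence is exactly where NAT hurts: with public addresses two peers simply connect, whereas behind NAT they need a rendezvous server plus hole punching. A rough sketch of the punching step, assuming each peer has already learned the other's public IP:port out of band (the signalling server that would provide that is not shown):

    import socket

    def punch(local_port, peer_addr, payload=b'hello'):
        # Sending outward makes our NAT create a mapping; if both peers
        # do this towards each other, subsequent packets pass through.
        # Works for many NATs, but not for symmetric ones.
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(('0.0.0.0', local_port))
        sock.sendto(payload, peer_addr)
        sock.settimeout(5)
        try:
            data, addr = sock.recvfrom(1024)   # the peer's punch arriving
            print('reachable:', addr, data)
        except socket.timeout:
            print('no reply - NAT too strict, or peer not punching')
        return sock

    # Hypothetical usage: each peer calls this with the other's public endpoint.
    # punch(40000, ('203.0.113.5', 40001))

This is the complexity that STUN/TURN/ICE exist to paper over; with end-to-end v6 addressing none of it is needed.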
If self-hosting were more accessible, more users would do it. There are plenty of things you might want to run at home and access remotely - CCTV and NAS appliances, for instance. Because of widespread NAT, users are instead steered towards cloud-based services, with all the privacy, security and longevity implications thereof, and you will find many stories here about breaches, or about shutdowns turning devices into bricks.
And even if only a few users benefit from it, v6 needs to be ubiquitous or those benefits are limited. What use is someone being able to self-host via v6 if other users can't reach their site (and have no idea why they can't, because they get a generic error message instead of one explaining the problem)?
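Diagnosing that failure is easy to show: a v4-only visitor never even gets an address to try. A small sketch that tests a name over each address family separately (the hostname is a placeholder):

    import socket

    def reachable(host, family, port=443):
        try:
            infos = socket.getaddrinfo(host, port, family, socket.SOCK_STREAM)
        except socket.gaierror:
            return 'no address'   # e.g. no A record for a v6-only host
        try:
            with socket.create_connection(infos[0][4][:2], timeout=5):
                return 'connected'
        except OSError as exc:
            return f'failed: {exc}'

    host = 'myserver.example.net'   # placeholder for a v6-only self-hosted site
    print('v6:', reachable(host, socket.AF_INET6))
    print('v4:', reachable(host, socket.AF_INET))   # 'no address' if v6-only

A v4-only visitor hits the 'no address' case before a connection is even attempted, and the browser just renders a generic "site can't be reached" page.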