The issue with NVIDIA's RTX cards, particularly the x90 enthusiast range, is threefold.
1. The 12VHPWR connector represents an unwelcome increase in power demand that is not matched by the Moore's Law-style increase in performance you would expect to pay for it. The 5090 is not a massive improvement in raw power, which makes the headline improvement claims questionable. The 12VHPWR also suggests that chasing Moore's Law gains will eventually push total system draw past what a consumer 15-amp breaker can safely supply, and therefore past the limit of a stable system (see the first sketch after this list for rough numbers). The connector's early flammability problems only add to the risk carried by the consumer. California may move to restrict 12VHPWR as a non-green A.I./robotics-class component, and insurers may decline to cover fire damage if a 12VHPWR connector was present and connected, or was cited as the cause of the fire, whether directly or through the electrical cabling. Even if none of that proves true of 12VHPWR itself, it plausibly will of whatever successor demands more than 15 amps in order to sustain Moore's Law gains.
2. The RTX line's upscaling and frame-generation capabilities age quickly compared to native rendering. Latency and responsiveness are what actually matter, and frame rate was only ever the old proxy for responsiveness. 50-series frame generation on its own is a red herring; the real goal is to unlock a genuine 60-90 FPS (a minimal latency sketch follows this list). Real compute headroom can instead be spent on resolutions above 1080p, on higher rasterization detail up to the limits of the chosen resolution, or on compute-heavy features such as ray tracing or PhysX. If the performance gains through DLSS are as "massive" as claimed, why do you need an x90 at all? Above roughly a 90 FPS baseline of responsiveness, DLSS trends toward self-defeating, especially if you skip ray tracing precisely to get higher raw rasterization and lower latency. Why pay for an x90's raw rasterization if, by that logic, you no longer need raw rasterization?
3. GPUs with adequate power are making substantial rasterization gains, unlocking 1440p and 4K resolutions as well as high refresh rates of 200+ FPS in 8th-generation-level games. 9th-generation-level releases, which try not to be "held back" by the CPU, are nonetheless becoming increasingly CPU-bottlenecked below 60 FPS. NVIDIA, meanwhile, has a track record of not supplying enough VRAM to let you trade horizontal performance (frame rate and lower latency) for vertical performance (higher resolution and texture detail); a back-of-the-envelope VRAM estimate follows this list. Microsoft, Intel, and AMD cannot keep up with NVIDIA, and NVIDIA cannot keep its VRAM up with its own rasterization.
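To make point 1's breaker concern concrete, here is a rough back-of-the-envelope sketch. The circuit parameters follow common North American residential wiring, and every component wattage below is an illustrative assumption rather than a measurement.

```python
# Rough headroom check for a single North American 15 A / 120 V circuit.
# All component wattages are illustrative assumptions, not measurements.

CIRCUIT_VOLTS = 120          # typical North American outlet
BREAKER_AMPS = 15            # common residential breaker
CONTINUOUS_DERATE = 0.80     # rule of thumb: plan for ~80% of rated load

usable_watts = CIRCUIT_VOLTS * BREAKER_AMPS * CONTINUOUS_DERATE  # 1440 W

# Assumed system draw under load (hypothetical figures for illustration):
gpu_watts = 575          # roughly the rated board power of a 5090-class card
rest_of_system = 450     # CPU, motherboard, drives, fans, PSU losses
monitor_and_desk = 150   # display, speakers, peripherals on the same circuit

total = gpu_watts + rest_of_system + monitor_and_desk
print(f"Usable on circuit: {usable_watts:.0f} W, estimated draw: {total} W")
print(f"Headroom left: {usable_watts - total:.0f} W")
# A future GPU generation that adds a few hundred watts would erase this margin.
```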
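For point 2, here is a minimal sketch of why frame generation raises the displayed frame rate without restoring responsiveness. The base frame rate, the 2x generation multiplier, and the simplified latency model are all assumptions chosen for illustration.

```python
# Frame generation raises the displayed frame rate, but input latency is
# still tied to the rendered (base) frame rate. The latency model here is
# deliberately simplified and illustrative.

def frame_time_ms(fps: float) -> float:
    return 1000.0 / fps

base_fps = 45             # what the GPU actually renders
generated_multiplier = 2  # e.g. one interpolated frame per rendered frame

display_fps = base_fps * generated_multiplier
# Interpolation has to wait for the next rendered frame before it can show
# the in-between frame, so perceived input latency does not drop; it
# typically grows by roughly one base frame of buffering.
approx_latency_ms = frame_time_ms(base_fps) * 2

print(f"Display rate: {display_fps} FPS (looks smooth)")
print(f"Base render rate: {base_fps} FPS")
print(f"Approx. input latency: {approx_latency_ms:.1f} ms, "
      f"comparable to or worse than native {base_fps} FPS")
```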
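For point 3, a back-of-the-envelope estimate of how render-target memory scales with resolution. The buffer count, bytes per pixel, and the remark about texture budgets are assumptions; real engines vary widely.

```python
# Back-of-the-envelope VRAM pressure at higher resolutions.
# Buffer counts and formats are illustrative assumptions.

def render_target_mib(width: int, height: int, bytes_per_pixel: int = 8,
                      num_targets: int = 6) -> float:
    """Rough size of a G-buffer / post-processing chain in MiB."""
    return width * height * bytes_per_pixel * num_targets / (1024 ** 2)

for name, (w, h) in {"1080p": (1920, 1080), "1440p": (2560, 1440),
                     "4K": (3840, 2160)}.items():
    print(f"{name}: ~{render_target_mib(w, h):,.0f} MiB of render targets")

# Render targets roughly quadruple from 1080p to 4K, and high-resolution
# texture packs grow on top of that, so a card with strong rasterization but
# a fixed VRAM pool can run out of memory before it runs out of compute.
```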
Even if you think the Mac is trash for high performance, the argument stands that Apple's M1 silicon can "catch up" to the x86 "endgame". There are indicators that x86 can no longer maintain its lead due to real physics limitations, a zenith or plateau for growth that Apple has not yet hit. SoC architecture looks to be the final evolution, and Apple has a head start.
Finally, Ubisoft's "quadruple A" label has some justification in that multiple studios were involved in the game's development. But the real discussion is about budgets: if a "AAA" single-studio budget is already unsustainable, then a "AAAA" budget is in an even worse position to recoup its development costs from a single game.
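To put that in concrete terms, here is a hypothetical break-even sketch. Every figure below (the budgets, the price, the platform cut) is an assumption chosen only to show how the required sales scale with budget, not a reported number for any title.

```python
# Hypothetical break-even sketch. All figures are illustrative assumptions,
# not reported numbers for any specific game.

def units_to_break_even(budget_usd: float, price_usd: float = 70.0,
                        net_share: float = 0.70) -> float:
    """Units needed to recoup the budget after a ~30% platform/store cut."""
    return budget_usd / (price_usd * net_share)

aaa_budget = 200e6   # a plausible single-studio "AAA" budget (assumed)
aaaa_budget = 500e6  # a multi-studio "AAAA" budget (assumed)

print(f"AAA:  ~{units_to_break_even(aaa_budget):,.0f} units to break even")
print(f"AAAA: ~{units_to_break_even(aaaa_budget):,.0f} units to break even")
# If roughly 4 million full-price sales is already hard to hit, needing
# roughly 10 million from a single game is a much riskier bet, which is the
# point about "AAAA" budgets.
```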