Comment Re: Yes, so? (Score 1) 43
I'm not a chip expert, but it seems that being optimized for gaming and being optimized for AI should be different enough to justify splitting chip models.
Let's see.
One of them relies heavily on raw parallel compute power.
The other...relies heavily on raw parallel compute power.
While those chips may originally have been designed for 3D gaming computation, the nice thing about CUDA is that it is generic enough that you can easily leverage that compute power in other applications as well.
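As a toy illustration of that genericity (nothing here beyond the standard CUDA runtime API; the kernel and variable names are just made up for the example), the same silicon that rasterizes frames will happily run an arbitrary data-parallel kernel:

    #include <cstdio>
    #include <cuda_runtime.h>

    // A generic data-parallel kernel: element-wise vector addition.
    // Nothing about it cares whether the data is pixels, vertices, or weights.
    __global__ void vecAdd(const float *a, const float *b, float *c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) c[i] = a[i] + b[i];
    }

    int main() {
        const int n = 1 << 20;
        size_t bytes = n * sizeof(float);
        float *a, *b, *c;
        // Unified memory keeps the example short; production code would
        // more likely use cudaMalloc/cudaMemcpy explicitly.
        cudaMallocManaged(&a, bytes);
        cudaMallocManaged(&b, bytes);
        cudaMallocManaged(&c, bytes);
        for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

        int threads = 256;
        int blocks = (n + threads - 1) / threads;
        vecAdd<<<blocks, threads>>>(a, b, c, n);
        cudaDeviceSynchronize();

        printf("c[0] = %f\n", c[0]);  // expect 3.0
        cudaFree(a); cudaFree(b); cudaFree(c);
        return 0;
    }

Swap the addition for a matrix multiply or a convolution and you have the core of an AI workload; the hardware doesn't care.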
The not-so-nice reality about CUDA is that NVidia deliberately drops compatibility with older CUDA versions in each new GPU generation, so you have to regularly re-engineer your CUDA applications for the new NVidia cards. Or stick with the old cards. I confess to hoarding the Super version of a specific RTX card because I can't be bothered to re-engineer some of my AI apps every two years just to keep up with the current CUDA specs.
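For anyone wondering what that re-engineering is about: nvcc bakes machine code for specific compute capabilities (sm_75, sm_86, ...) into the binary, and a card from a newer generation won't run it unless PTX was also embedded for JIT compilation. A minimal sketch (standard CUDA runtime API again) that at least reports which generation you're talking to:

    #include <cstdio>
    #include <cuda_runtime.h>

    int main() {
        cudaDeviceProp prop;
        cudaError_t err = cudaGetDeviceProperties(&prop, 0);
        if (err != cudaSuccess) {
            fprintf(stderr, "cudaGetDeviceProperties failed: %s\n",
                    cudaGetErrorString(err));
            return 1;
        }
        // major.minor is the device's compute capability, e.g. 7.5 on the
        // Turing-era RTX 20xx Super cards; a binary built only for other
        // capabilities (and without PTX) may refuse to run here.
        printf("%s: compute capability %d.%d\n", prop.name, prop.major, prop.minor);
        return 0;
    }

Building fat binaries with -gencode for several architectures postpones the pain, but only for the architectures that existed when you compiled.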
But AMD is a non-starter in this context. (They may still be good for gaming.)