Yes, look at software requirements first. FEA and CFD software can be extremely hardware-specific. Can the packages make use of powerful GPGPUs? Most server chassis have great CPU/RAM but crap in the way of PCIe slots, and especially GPU power plugs. What OS will the software need to run? HP doesn't even certify "consumer grade" OSes on much of their rackmount lineup, and if you use Windows Server 20XX you often can't get the latest certified GPU drivers for the Server OSes, so you may well lose product support one way or the other. Ask me how I know.
Where are these servers/workstations going to be located? Servers are NOISY and belong in a climate-controlled server room, and then you'll need some sort of remote-access mechanism to reach them. Depending on latency and distance requirements, that can get pretty expensive.
If these are just headless number-cruncher units, by all means use AWS (they also have GPU instance types if your software can leverage GPGPU compute). Then you can scale out the wazoo and pay only for what you need, when you use it. Do your development work on your own mini-cluster (could be just a bunch of VMs on a workstation) if you want to keep standing operational costs down, then farm all of the big jobs out to AWS and automatically shut those instances down after they're done doing their thing. HPC clusters are a lot of work to design and keep running (something somewhere is always breaking once you get up past a dozen nodes or so). Unless what you're doing is classified, I doubt it's worthwhile getting into operating your own server farm, especially if you don't have one already.
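To make the "farm it out and shut it down" part concrete, here's a rough sketch using the AWS CLI. The AMI ID, instance type, key name, and `run_solver.sh` script are all placeholders you'd swap for your own; the point is the `trap`, which terminates the instance (and stops the billing) even if the job blows up halfway through:

```shell
#!/bin/sh
# Sketch: farm one solver job out to an on-demand EC2 instance,
# then kill the instance the moment the job finishes.
# AMI ID, instance type, key name, and solver script are placeholders.
set -e

# Launch from a pre-baked image with the solver already installed.
# Pick a GPU instance type instead if the code can use CUDA.
INSTANCE_ID=$(aws ec2 run-instances \
    --image-id ami-0abcdef1234567890 \
    --instance-type c5.24xlarge \
    --key-name my-keypair \
    --query 'Instances[0].InstanceId' \
    --output text)

# Never leave the meter running, even if a step below fails.
trap 'aws ec2 terminate-instances --instance-ids "$INSTANCE_ID"' EXIT

aws ec2 wait instance-running --instance-ids "$INSTANCE_ID"
HOST=$(aws ec2 describe-instances --instance-ids "$INSTANCE_ID" \
    --query 'Reservations[0].Instances[0].PublicIpAddress' \
    --output text)

# Push the model up, run the solve, pull the results back.
scp model.inp "ec2-user@$HOST:"
ssh "ec2-user@$HOST" './run_solver.sh model.inp'
scp "ec2-user@$HOST:results.out" .
```

In practice you'd wrap something like this in whatever job scheduler or batch tooling you already use, but even this crude version beats paying for idle hardware 24/7.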