Not Yet
I don't think you know enough to know that yet. Or at least, if you do know enough, you haven't told us enough to justify it. As clichéd as it is, I'm going to quote Knuth: "We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%."
I suggest you just do a prototype in whatever's easiest, fastest, and most flexible. Python, Ruby, C#, whatever you like. Get it working. Then see if it's as slow as you think it is, and go from there.
I see lots of hostile responses here (either way), but as a practical person who has to use C++, Python, bash, C#, and (argh) DOS batch every week, I would start with C# or Java (since you know those), and once you get the logic working the way you want, see where the bottlenecks are and replace the critical sections with C++ as needed. That's worked really well for us. In our experience, the parts we thought would need to be low level rarely were. Smart algorithms are much more useful than brute force. Or use libraries that do the brute force for you smartly. After a while you get a feel for it, but you don't have that yet.
The key bit is that you'll be 10x more productive in an expressive, high-level language like C# or Python than in a restrictive, fine-detail language like C++, so it makes sense to bang out the framework in something flexible and then optimize only the smallest of the key performance-critical sections. Experience says that's less than 5%, usually 1%, so again, Knuth is right on the nose with his 3% estimate.
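To make the workflow concrete, here's a minimal sketch (my own illustration, not from the parent post) of the "prototype first, then profile" step in Python using the standard-library cProfile module. The `naive_pairs` function is a stand-in for whatever brute-force prototype code you'd write first; the profiler report is what tells you which function is actually the critical section worth rewriting in C++.

```python
# Sketch of "prototype, measure, then optimize": write the naive version,
# profile it, and let the report point at the real hotspot.
import cProfile
import io
import pstats


def naive_pairs(values):
    """Deliberately brute-force O(n^2) step -- the prototype version."""
    total = 0
    for a in values:
        for b in values:
            total += a * b
    return total


def main():
    data = list(range(500))

    profiler = cProfile.Profile()
    profiler.enable()
    result = naive_pairs(data)
    profiler.disable()

    # Summarize the top entries by cumulative time.
    stream = io.StringIO()
    pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
    report = stream.getvalue()

    # The report names naive_pairs as the hotspot -- that's the
    # "critical 3%" you might later rewrite in C++.
    return result, report


if __name__ == "__main__":
    res, rep = main()
    print(rep)
```

Only if a function like `naive_pairs` dominates the profile would rewriting it in C++ (or swapping in a smarter algorithm, which is usually the bigger win) be worth the effort.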