My bad for forgetting the "link connection" information. Your computation is likely an overestimate, though. The problem looks like a large sparse matrix problem. Since memory is the issue here, you do not store both "from" and "to" for each link: you use a Compressed Sparse Row (CSR) type of data structure, which only needs to store either "from" or "to" (you need an extra row-pointer array, but at that scale it basically does not count).
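To make that concrete, here is a minimal sketch of what a CSR-style layout could look like (in C; the names are illustrative, not from any particular simulator). The point is that the "from" side becomes implicit in the row index, so you pay for only one id per link plus a small row-pointer array:

```c
#include <stdint.h>

/* Minimal CSR-style adjacency sketch (hypothetical names).
 * Instead of storing a (from, to) pair per link, store one row-pointer
 * array of n_neurons + 1 entries plus one target id per link: the
 * outgoing links of neuron i live in col_idx[row_ptr[i] .. row_ptr[i+1]-1],
 * so the "from" side is implicit in the row index. */
typedef struct {
    uint64_t  n_neurons;  /* number of source neurons (rows)            */
    uint64_t  n_links;    /* total number of connections                */
    uint64_t *row_ptr;    /* n_neurons + 1 entries; negligible at scale */
    uint32_t *col_idx;    /* one target id per link (see 32-bit note)   */
} csr_graph;

/* Apply f to every (src, dst) pair without ever materializing "from". */
static void visit_targets(const csr_graph *g,
                          void (*f)(uint64_t src, uint32_t dst)) {
    for (uint64_t i = 0; i < g->n_neurons; ++i)
        for (uint64_t k = g->row_ptr[i]; k < g->row_ptr[i + 1]; ++k)
            f(i, g->col_idx[k]);
}
```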
Also, you do not need 64 bits to identify a neuron; 40 bits are certainly enough. Moreover, if you use a 2D partitioning of the link information, 32 bits are probably enough (you basically encode which part of the matrix you are storing in your MPI rank).
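Here is a rough sketch of that encoding, under assumed grid dimensions (GRID_ROWS, GRID_COLS and the row-major rank mapping are made up for illustration): each rank owns one block of the matrix, local indices stay below 2^32, and the missing high bits of the 40-bit global id are recovered from the rank's block coordinates:

```c
#include <stdint.h>

/* Sketch of 2D block partitioning (grid dimensions and the rank
 * mapping are assumptions). The global connection matrix is split
 * into a grid of blocks, one per MPI rank; inside a block, indices
 * are offsets from the block origin, so 32 bits suffice as long as
 * a block spans fewer than 2^32 neurons. The high bits of the global
 * id come from the rank itself. */
#define GRID_ROWS 64
#define GRID_COLS 64

typedef struct { uint32_t brow, bcol; } block_coord;

/* Block owned by a given MPI rank (simple row-major layout). */
static block_coord rank_to_block(int rank) {
    block_coord c = { (uint32_t)(rank / GRID_COLS),
                      (uint32_t)(rank % GRID_COLS) };
    return c;
}

/* Recover the 40-bit global id of a target neuron from its 32-bit
 * local column index and the (uniform) block width. */
static uint64_t global_col_id(block_coord c, uint32_t local_col,
                              uint64_t cols_per_block) {
    return (uint64_t)c.bcol * cols_per_block + local_col;
}
```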
So in this computation, you are good with 8 bytes per connection, which boils down to "only" 800 TB. Sequoia already has about twice that much memory.
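For the record, the arithmetic behind that figure, assuming the ~$10^{14}$ connections that 800 TB at 8 bytes each implies:

$$10^{14} \text{ connections} \times 8 \text{ B} = 8 \times 10^{14} \text{ B} = 800 \text{ TB}$$

Sequoia's published memory is roughly 1.6 PB (98,304 nodes at 16 GB each), i.e. about twice that.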
I do not know the details of these algorithms, and you seem to believe there is more data involved than meets the eye. (And you are certainly right: I do not know anything about this type of application.) If you need to run a genetic algorithm on each synapse, that is going to take a while; genetic algorithms are slow. That slowness makes them suitable for out-of-core computing, since the computation leaves plenty of time to hide the storage latency.
Anyway, the project is definitely a leadership-class project, but I do not see it as infeasible. Sequoia is an "old" machine (plugged in in 2011); a custom-built machine will certainly have enough memory. It might take a couple of design iterations to get there, but we are already in the right ballpark.