On the west coast there's Shenoy's group at Stanford, which is doing similar things. Much of their work is currently with monkeys, and I'm not sure whether they have anything clinically available yet, but at least you can get an idea of what's currently possible:
You can build a fairly large simulation that runs on modern inexpensive graphics hardware. Learning how to program those is where I'd start.
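To give a feel for it, here's a toy leaky integrate-and-fire network in plain NumPy; the same vectorized array operations are what you'd hand off to a GPU library like CuPy for a much bigger network. All the constants here are made up for illustration, not biologically tuned:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy network parameters (illustrative values only).
n_neurons = 1000
dt = 1e-3            # timestep (s)
tau = 20e-3          # membrane time constant (s)
v_thresh = 1.0       # spike threshold
v_reset = 0.0        # reset potential
steps = 500

# Dense random recurrent weights; on a GPU you'd allocate these
# with an array library like CuPy instead of NumPy.
w = rng.normal(0.0, 0.05, size=(n_neurons, n_neurons))
v = np.zeros(n_neurons)
spikes = np.zeros(n_neurons)
spike_count = 0

for _ in range(steps):
    i_ext = rng.random(n_neurons) * 2.0     # random external drive
    i_syn = w @ spikes                      # recurrent synaptic input
    v += dt / tau * (-v + i_ext + i_syn)    # leaky integration
    spikes = (v >= v_thresh).astype(float)  # threshold crossing
    v = np.where(spikes > 0, v_reset, v)    # reset fired neurons
    spike_count += int(spikes.sum())

print("total spikes:", spike_count)
```

Every step here is a handful of elementwise array ops plus one matrix-vector product, which is exactly the workload graphics hardware is good at.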
I'm not so sure. You already need to be a domain expert to evaluate the equivalence of two ideas, and equivalence is what's needed to prove prior art. Why not go the extra step and evaluate the documented effort expended to get from the previous state of things to the new claimed state, along with all reasonable missteps along the way? An expert could easily spot bullshit claimed steps or useless missteps.
All I'm basically pointing out here is that the current system doesn't provide rewards in proportion to the work expended; instead it prioritizes being there first. Land grabs make sense when effective sharing of a common resource is difficult, but an idea can be copied and shared with little penalty. Why would you want to limit that advantage, except of course to make sure idea generation is properly incentivized?
I'm not quite sure why the focus is so much on prior art. Sure, an idea that has prior art can serve as proof of the obviousness of something, but at some point in time someone is, in fact, the first to propose some blend of existing ideas and call it new. Do we really need to grant that person a patent? For most low-hanging fruit this is basically the equivalent of a land grab.
The way I see it, patents should be granted in proportion to the amount of work required to explore the possible parameter space and find that new, unique, useful combination. It shouldn't just be about being first to something; it should be about expending a lot of effort to get there. And the reward should be temporary and in proportion to that amount of work. Shouldn't it be fairly easy to offer objective proof of spending that effort (and have it peer reviewed)?
The process of figuring this out isn't going to occur magically. You need to test your models at the systems level, with all the components working together. The more powerful the hardware we have to do this, the more we can test and refine our models of how the brain achieves the same thing. This is true both if you're trying to model existing neuro architectures (like BU is) and if you're modeling evolutionary approaches like you describe above.
These memristive neuromorphic architectures promise orders of magnitude more processing speed while also keeping power levels low.
It can be and is being designed for that use, but I believe there have been problems with the reliability of individual memristor units. However, in a neuromorphic (non-Von Neumann) design you only need a certain percentage of the units to be reliable, because the information is highly distributed and fault tolerant. Think of the massive cell death that occurs in Alzheimer's disease: patients are still fairly normal well into that process.
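As a toy illustration of that fault tolerance (my own sketch, not any published model): encode one value redundantly across thousands of noisy units, kill a random 30% of them, and the readout barely moves:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy population code: a single value is stored redundantly across many
# noisy units, so the readout (the mean) survives losing a big fraction.
true_value = 0.7
n_units = 10_000
units = true_value + rng.normal(0, 0.1, size=n_units)  # each unit: noisy copy

estimate_full = units.mean()

# "Kill" roughly 30% of the units at random, like diffuse cell death.
alive = rng.random(n_units) > 0.3
estimate_degraded = units[alive].mean()

print(abs(estimate_full - true_value), abs(estimate_degraded - true_value))
```

Because no single unit carries the value, the error after losing 30% of the population is nearly the same as with the full population; reliability of any individual unit matters much less than in a conventional memory cell.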
The other main advantage is that you can represent a single synapse with a single memristor, which can be smaller than a single transistor.
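For a rough sense of how one memristor plays the role of one synapse, here's a crude toy model where the device conductance (the synaptic "weight") drifts with applied voltage pulses and is clipped to device limits. All the constants are invented for illustration, not taken from any real device:

```python
import numpy as np

# Toy memristive synapse: conductance drifts with applied voltage pulses.
# Bounds and drift rate below are illustrative, not from a datasheet.
G_MIN, G_MAX = 1e-6, 1e-4   # conductance limits (siemens)
ETA = 1e-2                  # drift rate per volt-second (made up)

def apply_pulse(g, v, dt=1e-6):
    """Return the new conductance after a pulse of amplitude v, width dt."""
    return float(np.clip(g + ETA * v * dt, G_MIN, G_MAX))

# Potentiate with positive pulses, then depress with negative ones.
g = G_MIN
for _ in range(100):
    g = apply_pulse(g, v=2.0)      # positive pulses raise the weight
g_potentiated = g
for _ in range(50):
    g = apply_pulse(g, v=-2.0)     # negative pulses lower it
g_depressed = g

print(g_potentiated, g_depressed)
```

The point is that the weight lives in the physics of one two-terminal device, rather than in a transistor-based memory cell plus the circuitry to read and write it.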