This is a horrible question to ask. Software is a tool to lower hardware requirements.
Compile your algorithm to the simplest RISC architecture that's reasonable; for most people, something along the lines of ARM or MIPS works. Then take note of all variables and add up how much RAM they'll need. Count every bit (yes, bit, not byte) as a D flip-flop, and convert every instruction (post-compile, in assembly) into a corresponding set of logic gates. A bit of googling should get you gate counts for both.
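As a rough sketch of the bookkeeping, here is what that tally might look like. Every number below (gates per flip-flop, gates per instruction, the variable sizes and instruction counts) is an illustrative assumption, not a measured value; you'd substitute figures from your own compiled binary and whatever gate-level references you find.

```python
# Back-of-the-envelope gate-count estimate for a compiled algorithm.
# ALL figures below are illustrative assumptions, not measured values.

# Storage: each bit of program state is held in one D flip-flop.
GATES_PER_DFF = 6  # a common NAND-based D flip-flop uses ~6 gates (assumption)

# Hypothetical variable sizes in bits, read off the compiled program.
variables_bits = {
    "counter": 32,
    "accumulator": 64,
    "lookup_table": 256 * 8,
}

# Hypothetical gate cost per instruction class (assumed values; a real
# estimate would come from datapath designs for your chosen ISA).
gates_per_instruction = {
    "add": 150,    # e.g. a 32-bit adder plus control logic
    "mul": 3000,   # multipliers are far larger than adders
    "load": 50,
    "store": 50,
}

# Hypothetical instruction counts from the disassembled binary.
instruction_counts = {"add": 40, "mul": 5, "load": 60, "store": 30}

storage_gates = GATES_PER_DFF * sum(variables_bits.values())
logic_gates = sum(gates_per_instruction[op] * n
                  for op, n in instruction_counts.items())

print(f"storage: {storage_gates} gates, logic: {logic_gates} gates")
```

Even with toy numbers like these, the storage side alone (a few KB of variables) runs to tens of thousands of gates, which is the point of the exercise.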
If your algorithm is reasonably complicated, chances are you'll get a number that seems absurdly high compared to the state-of-the-art hardware that's actually available.
In practice, it's probably best to just pick an off-the-shelf CPU and run the software on it. There might be some parts that are better done in hardware than in software, but you should get someone who knows what they're doing for that.