This seems to say that by interlinking subsets of binary decision modules you can simulate (or create?) a silicon-based system that approximates the decision tree of a biological entity. Toss in an additive memory system that enhances pattern recognition based on past experience (machine-learning compatible?) and you have a system that grows in discernment the way biological systems do. The trick would seem to be modeling the appropriate type and number of these underlying modules, designing them to revise their output based on relevant remembered experience, and then assigning priority to the outputs from those modules, giving self-preservation and threat detection precedence, for instance.
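To make that concrete, here's a toy Python sketch of what I mean by memory-revised binary modules under a fixed priority ordering. Everything in it (the module names, the bias threshold, averaging past outcomes as "experience") is made up for illustration, not any real API:

```python
from dataclasses import dataclass, field


@dataclass
class DecisionModule:
    name: str
    priority: int                   # lower number = higher precedence
    bias: float = 0.5               # initial firing threshold (arbitrary)
    memory: list = field(default_factory=list)  # past (decision, outcome) pairs

    def decide(self, signal: float) -> bool:
        # Revise the effective threshold by the average outcome of past
        # decisions: good experiences make the module quicker to fire.
        if self.memory:
            experience = sum(outcome for _, outcome in self.memory) / len(self.memory)
        else:
            experience = 0.0
        return signal + experience > self.bias

    def record(self, decision: bool, outcome: float) -> None:
        # Additive memory: every decision/outcome pair nudges future thresholds.
        self.memory.append((decision, outcome))


def arbitrate(modules, signals):
    """Return the name of the highest-priority module that fires."""
    for m in sorted(modules, key=lambda m: m.priority):
        if m.decide(signals[m.name]):
            return m.name
    return None


modules = [
    DecisionModule("threat", priority=0, bias=0.3),  # self-preservation first
    DecisionModule("food", priority=1),
    DecisionModule("shelter", priority=2),
]
# Threat wins despite the stronger food signal, because priority is checked first.
print(arbitrate(modules, {"threat": 0.4, "food": 0.9, "shelter": 0.2}))  # -> "threat"
```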
(Half-cocked speculation) I can see custom cores designed to evaluate input within their own narrow realm of specialization (food, friend/foe, threat/non-threat, shelter, etc., each analogous to some machine-relevant input), each with its own memory store of experiential reference material. These feeder cores would process input with regard to their specialization and then hand off their individual results to a coordinating core designed to integrate them. The coordinating core would have a prioritization system to weight the inputs and handle conflicts. It would also build an experiential database composed of the inputs from the other core modules, the decisions made from those inputs, and how viable those decisions turned out to be.
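Roughly, the feeder/coordinator split might look like the sketch below. Again, all names, weights, and the "viability" field are hypothetical placeholders; the point is just the shape of the data flow, where specialized feeders each score the same raw input against their own reference memory and the coordinator weights those scores, resolves conflicts toward the highest weighted result, and logs the episode:

```python
from dataclasses import dataclass, field


@dataclass
class FeederCore:
    specialization: str                 # e.g. "threat", "food", "shelter"
    references: list = field(default_factory=list)  # experiential reference store

    def evaluate(self, observation: float) -> float:
        # Score the observation by similarity to the closest remembered
        # example (a crude stand-in for real pattern recognition).
        if not self.references:
            return observation
        closest = min(self.references, key=lambda r: abs(r - observation))
        return 1.0 - abs(closest - observation)


@dataclass
class CoordinatingCore:
    weights: dict                        # per-specialization priority weights
    experience_log: list = field(default_factory=list)

    def integrate(self, observation: float, feeders: list) -> str:
        # Weight each feeder's result; conflicts resolve toward the
        # specialization with the highest weighted score.
        scored = {
            f.specialization: f.evaluate(observation) * self.weights[f.specialization]
            for f in feeders
        }
        decision = max(scored, key=scored.get)
        # Experiential database: the inputs, the decision, and a viability
        # slot to be filled in once the outcome of the decision is known.
        self.experience_log.append(
            {"input": observation, "scores": scored,
             "decision": decision, "viability": None}
        )
        return decision


feeders = [FeederCore("threat"), FeederCore("food"), FeederCore("shelter")]
coordinator = CoordinatingCore(weights={"threat": 3.0, "food": 1.0, "shelter": 0.5})
print(coordinator.integrate(0.7, feeders))  # -> "threat", on weight alone here
```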
Emergent phenomena and complexity would seem to be a logical result of combining a large array of interacting modules, provided the output space is varied and robust.