Assuming that the obsolete compute modules are of standard size/pinout (or, more likely, that compute chassis are only produced for phones that ship in sufficiently massive volume to assure a supply of board-donors), this scheme would work; but I have to imagine that a phone SoC would make a pretty dreadful compute node: Aside from being a bit feeble, there would be no reason for the interconnect to be anything but abysmal.
the nice thing about a modular system is that just as the modules may be discarded from the phones and re-purposed (in this case the idea is to re-purpose them in compute clusters), so may the modules being used in the compute clusters *also* be discarded and re-purposed when better, more powerful processors become available... and so on down a continual chain until they break.
now, you may think "phone SoC equals useless for compute purposes", but this simply is *not true*. you can, for example, colocate raspberry pi's (not that i like broadcom, but for GBP 25 who is complaining?) at http://raspberrycolocation.com - cost per month: EUR 3. that's EUR 36 per year, because the power consumption and space requirements are so incredibly low.
another example: i have created a modular standard, it's called EOMA68. it re-uses legacy PCMCIA casework (which you can still get hold of if you look hard enough). the first CPU Card is a 2GB RAM dual-core 1.2GHz ARM Cortex-A7, which as you know is architecturally compatible with the A15, so it may even do Virtualisation. i did a simple test: i ran Debian GNU/Linux on it, installed xrdp, libreoffice and firefox. i then ran *five* remote sessions from my laptop, fired up libreoffice and firefox in each, and that dual-core CPU Card didn't even break a sweat.
so if you'd like to buy some compute modules *now* rather than wait for google project ara (which will require highly specialist chipsets based on an entirely new and extremely uncommon standard called MIPI UniPro), the crowdfunding campaign opens very shortly:
once that's underway, i will have the funding to finish paying for the next compute module, which is a quad-core CPU Card. after that, we can see about getting some more CPU Cards developed, and so on and so forth for the next 10 years.
to answer your question about "interconnect", you have to think in terms of "bang-per-buck-per-module": space and power used as well as CPU. a 2.5 watt module like the EOMA68-A20 only takes up 5mm x 86mm x 54mm. i worked out once that you could fit something like 5,000 of those into a single full-height 19in cabinet - something mad, anyway. at 2.5 watts each that's about 12.5kW for the modules alone (somewhere north of that once you add power conversion and switching), and you get such a ridiculous amount of processing power in such a small space that it's power delivery and the backbone interconnect that become the bottlenecks, *not* the Gigabit Ethernet on the actual modules.
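to make that back-of-envelope concrete, here's a quick sketch of the power budget. the 5,000-module count is the rough figure from above; the PSU efficiency and switch-per-48-ports overhead are my own assumptions, not measured numbers:

```python
# back-of-envelope power budget for a rack full of EOMA68-A20 compute modules.
# the 5,000-module figure comes from the post; the PSU efficiency and
# switching overhead below are assumptions for illustration only.
modules = 5000
watts_per_module = 2.5               # EOMA68-A20 draw
psu_efficiency = 0.85                # assumed AC->DC conversion efficiency
switch_watts = 50 * (modules / 48)   # assumed: one ~50W 48-port GbE switch per 48 modules

module_kw = modules * watts_per_module / 1000
wall_kw = (modules * watts_per_module / psu_efficiency + switch_watts) / 1000
print(f"modules: {module_kw:.1f} kW, at the wall: {wall_kw:.1f} kW")
```

the point the arithmetic makes: even with generous overheads, per-module Gigabit Ethernet is nowhere near the limiting factor - delivering tens of kilowatts and aggregating thousands of GbE ports is.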
bottom line: there's a lot of mileage in this kind of re-usable modular architecture. help support me in getting it off the ground!