The MIT solution, as described, appears to do away with the clock-based synchronization that RSA's tokens use; instead, the server and the chip stay in lock-step as transactions occur.
What happens when the two drift out of synchronization will be the key to disrupting the technology.
If the server and chip simply stop talking to each other when they get out of sync, then the whole system is vulnerable to a wide-scale DoS: just corrupt the server's database of keys.
Imagine an industrial plant manager's reaction when 1000 different devices brick themselves due to a hacker's attack. If it takes a day to replace and reset everything so it all works again, that manager will rip out the technology rather than ever risk that much downtime again.
On the other hand, if the server and chip can re-synchronize after a glitch, then a hacker can target that resynchronization process.
I wonder if a man-in-the-middle attack would work, where the MitM and the server exchange one set of keys while the MitM and the chip exchange a second set. Would either side know that it was talking to a fraudulent data source?