>an immobile supercomputer would pretty much have to maintain its own profitable corporation to ensure its survival.
Great point among many.
>once you have enough processing power to, say, engage in multiple simultaneous conversations at once, how should context-dependent things like emotional state behave?
Insightful dilemma, but consider that emotions were intelligence, as far as Darwinian evolution was concerned, right up until we invented grammar. Look around: animals all have emotional states that motivate and de-motivate their goals. If you're not starting with emotions, you're not doing it the way evolution did.
The simplest AI will not speak English; it will have a light that flashes "happy/unhappy". (The unhappy light may even be blue and fill the whole screen.)
>so you could, in theory, simply keep on adding memory and processors to simulate more and more neurons.
Not quite. Did you see that each neuron has an average of 7,000 connections? "Adding" is not going to be so simple when every new unit has to be wired into thousands of existing ones. And I guarantee you that no two of those wires serve the same purpose.
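To put rough numbers on that: the 7,000-connections-per-neuron figure is from the post above, and the 86-billion-neuron count is the usual estimate for a human brain (both are assumptions here, not measurements). A quick Python sketch of how brain-like sparse wiring compares to naive all-to-all wiring:

```python
# Back-of-envelope sketch: why "just keep adding neurons" gets hard.
# Assumptions: ~7,000 connections per neuron (from the post above),
# ~86 billion neurons (common human-brain estimate).

NEURONS = 86_000_000_000
AVG_CONNECTIONS = 7_000

# Real brains are sparse: total synapses grow roughly linearly
# with neuron count.
sparse_synapses = NEURONS * AVG_CONNECTIONS

# Naive "wire everything to everything" hardware grows quadratically:
# one wire per pair of units.
dense_pairs = NEURONS * (NEURONS - 1) // 2

print(f"sparse wiring: {sparse_synapses:.1e} connections")
print(f"all-to-all:    {dense_pairs:.1e} connections")
```

The sparse case lands around 6 x 10^14 connections; the all-to-all case is millions of times larger, which is the intuition behind "adding is not going to be so simple."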