Not exactly... A neural net is just a function that takes an input and produces an output. At training time the weights are adjusted (via gradient descent) to minimize the error between the actual and desired output for examples in the training set. The weights are what define the function (via the way data is modified as it flows through the net), rather than being storage per se.
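To make the "weights define the function" point concrete, here's a rough sketch in plain NumPy (the layer sizes, learning rate and names are made up for illustration): the same `net` function computes a different input-to-output mapping each time gradient descent nudges the weights.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(1, 8)), np.zeros(8)   # hidden layer weights
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # output layer weights

def net(x):
    """The function the weights define: input -> hidden -> output."""
    h = np.tanh(x @ W1 + b1)
    return h @ W2 + b2

# Toy training set: learn y = sin(x) from 64 samples
x = rng.uniform(-3, 3, size=(64, 1))
y = np.sin(x)

lr = 0.05
for step in range(2000):
    # forward pass
    h = np.tanh(x @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - y                        # actual minus desired output

    # backward pass: gradients of mean squared error w.r.t. each weight
    g_pred = 2 * err / len(x)
    gW2, gb2 = h.T @ g_pred, g_pred.sum(0)
    g_h = g_pred @ W2.T * (1 - h**2)      # tanh derivative
    gW1, gb1 = x.T @ g_h, g_h.sum(0)

    # gradient descent: adjusting the weights changes the function net() computes
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

print("final training MSE:", float(np.mean((net(x) - y) ** 2)))
```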
The goal when training a neural net is to learn the desired data transformation (function) and be able to generalize it to data outside of the training set. If you increase the size of the net (number of parameters) beyond what the training set supports, you'll just end up overfitting - memorizing the training set rather than learning to generalize, which is undesirable even if you don't care about the computing cost.
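Here's a crude illustration of that trade-off, using polynomial degree as a stand-in for parameter count (not a neural net, just the same overfitting effect in miniature):

```python
import numpy as np

rng = np.random.default_rng(1)
x_train = np.sort(rng.uniform(-1, 1, 10))
y_train = np.sin(3 * x_train) + rng.normal(scale=0.1, size=10)   # 10 noisy samples
x_test  = np.sort(rng.uniform(-1, 1, 200))
y_test  = np.sin(3 * x_test)                                      # the true function

for degree in (3, 9):                        # modest vs. "too many" parameters
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err  = np.mean((np.polyval(coeffs, x_test)  - y_test)  ** 2)
    print(f"degree {degree}: train MSE {train_err:.4f}, test MSE {test_err:.4f}")
```

The oversized fit typically nails the training points (training error near zero) while doing worse on the held-out data than the modest one.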
The use of external memory in a model such as Google's DNC (Differentiable Neural Computer) isn't an alternative to having a larger model; rather, it lets the model be trained to learn a function that utilizes external memory (e.g. as a scratchpad) rather than being a pure flow-through computation.
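Very roughly, the read/write mechanics look something like this toy NumPy sketch of content-based addressing in the NTM/DNC family (the real DNC adds usage-based allocation, temporal links, and learned controller outputs; the names and sizes here are invented):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def address(M, key, sharpness=20.0):
    """Content-based addressing: soft weights over memory slots by similarity to key."""
    sims = M @ key / (np.linalg.norm(M, axis=1) * np.linalg.norm(key) + 1e-8)
    return softmax(sharpness * sims)

def read(M, key):
    return address(M, key) @ M           # weighted sum of memory rows

def write(M, key, value):
    w = address(M, key)[:, None]
    return M * (1 - w) + w * value       # erase old content, blend in new

rng = np.random.default_rng(2)
M = rng.normal(scale=0.1, size=(16, 8))  # 16 external memory slots of width 8

item = np.array([1., 0, 0, 0, 0, 0, 0, 1])
M = write(M, key=item, value=item)       # stash something in the scratchpad

noisy = item + rng.normal(scale=0.2, size=8)
print(np.round(read(M, key=noisy), 2))   # recall by approximate content: ~ the stored item
```

The point is that the memory matrix `M` sits outside the trained weights: the weights only determine what keys and values get emitted, while the contents of `M` change at run time as the model uses it.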