'A person (or computer) possessing a (theoretical) book containing every possible response to every conceivable question and statement in, say, Chinese, would be considered to understand Chinese.'
I've always had a problem with this definition of the Chinese Room scenario. It's the following:
To be successful in a conversation, that 'book' of responses has to model not just the language, but also the subject domain and the personality of the simulated Chinese speaker. That means that not only does the book have to be huge - we're talking a giant library - but it also has to have a representation of a personality inside, along with the ability to store new knowledge and alter that personality, depending on the depth of the conversation to be modelled.
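To make that concrete, here's a minimal sketch (Python, with toy rules invented purely for illustration; none of this is Searle's formulation) of the gap between the lookup-table book people imagine and the stateful book the argument actually requires:

    # A naive reading of the Room: a pure question -> answer lookup.
    # This is the version people imagine fitting on index cards.
    LOOKUP_BOOK = {
        "Hello.": "Hi there. How can I help you?",
    }

    def lookup_reply(question):
        return LOOKUP_BOOK.get(question, "I don't understand.")

    # What the paragraph above demands instead: the reply depends on a
    # persisting conversation state (persona, knowledge, history), and
    # every turn rewrites that state.
    def stateful_reply(question, state):
        state.setdefault("history", []).append(question)
        if question == "Hello." and len(state["history"]) > 1:
            # A second greeting can't reuse the canned first response.
            return "Hello again - did we get cut off?"
        return lookup_reply(question)

    state = {}
    print(stateful_reply("Hello.", state))  # Hi there. How can I help you?
    print(stateful_reply("Hello.", state))  # Hello again - did we get cut off?

The second version is no longer a static book at all; it's a program with mutable memory, which is exactly the point.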
I think most people thinking about the Chinese Room really, really underestimate the amount of knowledge you'd need to store in that book to have even the simplest natural-language conversation without it quickly falling off the rails. For example:
Q: 'Hi there. How can I help you?'
A: ... -- if the book's rule here is just 'output the same canned Hello response', you've already failed.
Q: 'Say, how do you feel about the Occupy Central protests in Hong Kong? Are any of your family affected?'
A: ... -- right here the room has to track a model of current news, which requires real-time processes for fetching news, parsing it, and building models of world events, as well as maintaining a simulated persona with a political alignment, a simulated family with emotional connections to each member, and a backstory for all of these relationships in case it's asked... there's a nearly infinite number of possible conversational branches. And even if the actual person 'operating' the room isn't aware of this information, there still has to be an algorithm doing all of it.
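To put a rough number on 'nearly infinite', here's a back-of-envelope estimate; the branching factor and turn count are deliberately lowball guesses of mine, not anything from the thought experiment itself:

    # Toy estimate of how many distinct transcripts the book must cover.
    # Assume a mere 1,000 plausible utterances per turn (a huge lowball
    # for natural language) over a short 20-turn conversation:
    branching_factor = 1_000
    turns = 20
    transcripts = branching_factor ** turns
    print(f"{transcripts:.1e}")  # 1.0e+60 distinct conversations to cover

Even at one index card per conversation, that's unimaginably more cards than there are stars in the observable universe, which is why any workable 'book' has to generate responses from state rather than look them up.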
So most people look at the description of the Room, go 'this could be trivially represented with a bunch of index cards, which obviously aren't intelligent/aware', but that's not an actual solution to the Room. A solution that does work would have to be doing such an enormous amount of work that it actually *would* be 'aware' of a lot of things, even if it doesn't have a 'feeling' of that awareness. But even then it would need a model of what such a 'feeling' is like; and we currently have no idea what that model would look like.