Fundamentally, the issue is simple: given some identifier and a series of properties attached to it, once you have enough dimensions of detail you narrow the sample so far that you end up with a population of one -- the person the identifier "hides". It's just that simple.
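A minimal sketch of that narrowing effect, using entirely made-up records and attribute names -- each additional known property shrinks the candidate set until only one match remains:

```python
# Hypothetical "anonymized" records: no names, just quasi-identifiers.
records = [
    {"zip": "90210", "age": 34, "gender": "F"},
    {"zip": "90210", "age": 34, "gender": "M"},
    {"zip": "90210", "age": 41, "gender": "F"},
    {"zip": "10001", "age": 34, "gender": "F"},
]

def candidates(records, **known):
    """Return the records consistent with every known attribute."""
    return [r for r in records
            if all(r.get(k) == v for k, v in known.items())]

print(len(candidates(records, zip="90210")))                      # 3 matches
print(len(candidates(records, zip="90210", age=34)))              # 2 matches
print(len(candidates(records, zip="90210", age=34, gender="F")))  # 1 match
```

Three dimensions of fairly generic detail, and the "anonymous" record is unique.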
We go through the same basic process when finding information through a search engine: we look for ways to narrow the data so that what we are looking for exists within a sufficiently limited set.
I'm sure we both have experience searching for relatively generic information, where the number of possible matches unrelated to our target is so great that the information is effectively unavailable. Anonymization works on the same basic principle, run in reverse: genericize the data to the point that it is useful in aggregate but valueless for targeting individual users, because any given query matches too many candidates.
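A rough sketch of what "genericizing" can look like in practice (the coarsening rules and records here are illustrative assumptions, in the spirit of k-anonymity-style generalization):

```python
# Coarsen attributes so each published record matches many people.
def generalize(record):
    return {
        "zip": record["zip"][:3] + "**",        # keep only the zip prefix
        "age": f"{record['age'] // 10 * 10}s",  # bucket age into decades
    }

records = [
    {"zip": "90210", "age": 34},
    {"zip": "90213", "age": 38},
    {"zip": "90218", "age": 31},
]

generalized = [generalize(r) for r in records]
# All three records collapse to the same tuple ("902**", "30s"),
# so no individual can be singled out, yet aggregate counts survive.
print(generalized)
```

The data is still useful for answering "how many 30-somethings in the 902 area?", but useless for pinpointing any one of them.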