That's right, and even at the block level data may be swapped around between blocks or otherwise obfuscated in ways that protect individuals while still keeping the data accurate at an aggregate level. It is easy to be concerned about this when looking at it for the first time, but Census has been working seriously for years on how to protect confidentiality while releasing quality data at as low a level as possible.
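To make the swapping idea concrete, here is a minimal toy sketch (not the Census Bureau's actual procedure, which pairs whole household records under strict matching rules): a sensitive attribute is exchanged between randomly paired records, so file-level totals are unchanged while the link between a specific block and a specific value is broken.

```python
import random

# Hypothetical toy records: (block, age) rows. Real swapping operates on
# full household records and only between carefully matched pairs.
records = [
    {"block": "A", "age": 34},
    {"block": "A", "age": 71},
    {"block": "B", "age": 29},
    {"block": "B", "age": 55},
]

def swap_pairs(recs, attr, rng):
    """Swap the sensitive attribute between randomly paired records.
    Totals over the whole file are preserved; block-level links are broken."""
    out = [dict(r) for r in recs]
    idx = list(range(len(out)))
    rng.shuffle(idx)
    for i, j in zip(idx[::2], idx[1::2]):
        out[i][attr], out[j][attr] = out[j][attr], out[i][attr]
    return out

rng = random.Random(0)
swapped = swap_pairs(records, "age", rng)
# The aggregate (sum of ages) is identical even though rows were altered.
print(sum(r["age"] for r in records) == sum(r["age"] for r in swapped))  # True
```

The point of the sketch is just the invariant: any statistic computed over the whole file is untouched, while an attacker who knows "the 71-year-old on block A" can no longer be sure that row is really theirs.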
The Census site has a little info about this:
http://www.census.gov/privacy/data_protection/statistical_safeguards.html
But more relevant is this link to the American Statistical Association, which goes into significant depth on the techniques used to protect confidentiality:
http://www.amstat.org/committees/pc/index.html
On this page
http://www.fcsm.gov/working-papers/spwp22.html
we find a working paper from the Federal Committee on Statistical Methodology, which has deeper details on actual operations.
From that page, the "Statistical Disclosure Limitation: A Primer" document has an interesting section defining inferential disclosure, which "occurs when individual information can be inferred with high confidence from statistical properties of the released data."
And the "Current Federal Statistical Agency Practices" document describes the multi-dimensional linear programming used to prevent that, along with other techniques including geographic thresholds, population thresholds, and coarsening.
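The threshold and coarsening ideas are simple enough to sketch. This is an illustrative toy, not the agencies' actual rules (the threshold value and band width here are made-up parameters): cells with too few people are suppressed outright, and exact values are coarsened into bands so rare exact values can't single anyone out.

```python
def coarsen_age(age, width=10):
    """Coarsen an exact age into a band, e.g. 34 -> '30-39'."""
    lo = (age // width) * width
    return f"{lo}-{lo + width - 1}"

def threshold_suppress(counts, min_count=3):
    """Suppress (None out) any cell whose count falls below the threshold.
    min_count=3 is an illustrative choice, not an official rule."""
    return {k: (v if v >= min_count else None) for k, v in counts.items()}

counts = {"30-39": 12, "70-79": 2}
print(threshold_suppress(counts))  # {'30-39': 12, '70-79': None}
print(coarsen_age(34))             # 30-39
```

The linear-programming part is the hard step the paper covers: once you suppress one cell, you must also suppress enough complementary cells that the hidden value can't be recovered by subtracting the published ones from the row and column totals.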
So the summary is: Yes, it is a serious issue to be concerned about, but Census is taking it seriously, applying some real science and math to it, and it looks like they are doing a good job.