Your reasoning is simplistic and flawed. Let me walk through a few key issues with it.
I cannot predict what third-party integrations a given site uses before visiting it. Even though I use a myriad of plugins that block third-party origins, such as RequestPolicy, when I visit websites that pull content from googleapis and other Google widgets, that content simply isn't available, because it relies on Google serving it. There are a multitude of other origins, such as *.amazonaws.com, with nothing in the URL clearly identifying the content owner. Some such sites are linked from /. articles every day, including today's. Furthermore, some sites are completely unusable until you enable a dozen or more third-party origins and widgets.

Are you suggesting I stick to the few sites that do not attempt to feed my browsing data to the collectors? Maybe I'll just stick to my LAN and cut the cord? That would be 'safe', no?
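To make the point concrete: the only way to learn which third-party origins a page pulls in is to fetch and parse it, and by then a normal browser has already fired those requests. Here's a minimal sketch (hypothetical page markup, standard library only) of scanning a page's src/href attributes for hosts other than the first party:

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class ThirdPartyScanner(HTMLParser):
    """Collects hosts referenced by src/href that differ from the first party."""
    def __init__(self, first_party):
        super().__init__()
        self.first_party = first_party
        self.third_party_hosts = set()

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name in ("src", "href") and value:
                host = urlparse(value).hostname
                if host and host != self.first_party:
                    self.third_party_hosts.add(host)

# Hypothetical page: you cannot know these origins exist until you have the
# HTML in hand, and the URLs alone don't tell you who owns the content.
page = """
<html><head>
<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.7.1/jquery.min.js"></script>
<link rel="stylesheet" href="https://example-cdn.s3.amazonaws.com/site.css">
</head><body><img src="/local/logo.png"></body></html>
"""

scanner = ThirdPartyScanner("example.com")
scanner.feed(page)
print(sorted(scanner.third_party_hosts))
# ['ajax.googleapis.com', 'example-cdn.s3.amazonaws.com']
```

Note that the jQuery script is functional, not decorative: block ajax.googleapis.com and the page's scripts break, which is exactly the "content not available" problem described above.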