He's talking about targeted advertising, not traditional advertising.
He's saying that if you have so much information about a person that you know they're diabetic, and actually use that as a factor in deciding to show them stuff that statistically they'll go for even though you know it's proven to be harming them... that should be an actionable offense.
I think there's a better, less politicized example: it's like figuring out that someone attends a gambling support group and then intentionally serving them a bunch of ads for casinos in Vegas.
That's way different than just showing ads to the public. It's even quite different from having the information somewhere else in the company and not using it in the advertising algorithms.
I actually agree with his point of view to an extent... although it should be easy to avoid doing that sort of thing deliberately. Targeted advertising algorithms that make automatic inferences might end up there anyway, though, and may eventually need some kind of 'moral guidance' constraints.
What I do not agree with (though the OP might) is that it's wrong merely to have so much information that you "should" have known the Vegas ad was a bad one to show the gambler, when that information wasn't actually used in the decision process. Right now we're in a glut of data, but the analysis and understanding of that data is not mature, and I don't think the state of the art makes that negligence. I do think we might get to the point where the algorithms are so advanced that it WOULD be wrong... much like it would be wrong for a human advertiser to go through that thought process and decide to show the ad anyway.