An anonymous reader writes: Companies collect and buy personally identifiable information (PII) and device-specific information about consumers. This information can be used to guess a consumer's racial identity. While the accuracy of such guesses today is unknown, wider adoption of facial recognition technology can only make them better.
But are companies online in fact guessing a consumer's race when deciding what to offer that consumer, or on what terms? Even if not, do the automated decision-making models those companies use exhibit racial bias? That is, do those models systematically make decisions based on information suggestive of racial identity but immaterial to the transaction at hand?
How can we tell?
In the physical world, civil rights organizations would send white testers and then black testers to businesses to test for disparate treatment. Can virtual identities be created that are comparable, from a company's point of view, in all material respects except race? Can a testing organization possibly control for the many other factors that shape the online experience?
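A virtual version of that paired-testing method might look like the sketch below. Everything here is hypothetical: `quote_price` is a toy stand-in for a company's pricing model (a real audit would query the live site), and the zip codes are illustrative proxies that can correlate with race. The point is only the structure of the test: build pairs of profiles identical in every material respect except one race-correlated attribute, then measure the gap in what each is offered.

```python
import statistics

def quote_price(profile):
    # Hypothetical stand-in for a company's pricing model; a real audit
    # would submit each synthetic profile to the live service instead.
    base = 100.0
    # Toy bias for illustration: the model surcharges certain zip codes,
    # a proxy that can correlate with race.
    return base + (15.0 if profile["zip"] in {"60624", "48205"} else 0.0)

def paired_audit(pairs):
    """Compare offers for profile pairs matched on everything material
    except a race-correlated attribute; return the mean price gap."""
    gaps = [quote_price(a) - quote_price(b) for a, b in pairs]
    return statistics.mean(gaps)

# Matched pairs: identical except for the zip-code proxy.
pairs = [
    ({"income": 55000, "zip": "60624"}, {"income": 55000, "zip": "60614"}),
    ({"income": 72000, "zip": "48205"}, {"income": 72000, "zip": "48009"}),
]

gap = paired_audit(pairs)
print(f"mean price gap: ${gap:.2f}")  # a persistent nonzero gap suggests disparate treatment
```

The hard part, of course, is the matching itself: cookies, browsing history, device fingerprints, and network location all vary between real visitors, so holding "everything else" constant online is far harder than sending two equally qualified testers through the same door.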
Or, when it comes to online interactions, must we simply take businesses at their word that they don't discriminate?