But the people writing the algorithm and choosing the input data *can* be racist. And even in the absence of malice, you can create racist outcomes.
If your training set has many photos of white people and few photos of black people, the resulting model won't be good at recognizing black people: having seen far fewer examples of what black faces look like, it's bound to misclassify them more often than white ones.
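The mechanism can be sketched with a toy simulation. Everything here is made up for illustration: each "face" is reduced to a single number, the two groups are modeled as Gaussians with different means, and the "detector" simply learns what a typical face in its training set looks like. The point is only that an imbalanced training set skews what the model considers normal.

```python
import random

random.seed(0)

# Hypothetical feature values: group A faces cluster around 0.0,
# group B faces around 1.5 (arbitrary numbers for illustration).
def sample_faces(mean, n):
    return [random.gauss(mean, 0.5) for _ in range(n)]

# Imbalanced training set: 950 group-A faces, only 50 group-B faces.
train = sample_faces(0.0, 950) + sample_faces(1.5, 50)

# The "detector" models a face as anything close to the training mean.
mu = sum(train) / len(train)
sigma = (sum((x - mu) ** 2 for x in train) / len(train)) ** 0.5

def detects(x):
    return abs(x - mu) <= 2 * sigma

# Evaluate on balanced test sets, one per group.
test_a = sample_faces(0.0, 1000)
test_b = sample_faces(1.5, 1000)
rate_a = sum(map(detects, test_a)) / len(test_a)
rate_b = sum(map(detects, test_b)) / len(test_b)
print(f"detection rate, group A: {rate_a:.2f}")
print(f"detection rate, group B: {rate_b:.2f}")
```

Because group A dominates the training data, the learned mean sits near group A's cluster, so group A is detected far more reliably than group B. No malice anywhere in the code, yet the outcome is lopsided.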
Anecdotally: a while back, I noticed that Microsoft's "how old are you" site recognized me (a white person) in every picture, but detected my (black) partner in only about a third of the pictures I fed it. In one case, in a screenshot of a video chat, it recognized my little 100x100 picture in the bottom right but failed to detect my partner's face in the center of the frame.
Your real-world performance can only be as good as your test/training data.