Making decisions like this requires consideration of the consequences, which is the very definition of sapience.
If the robot is non-sapient, but simply has a configured list of users whom it may or may not serve alcohol, then the decision was made by the person who configured it. This would be an acceptable solution, although cumbersome and inflexible. It probably wouldn't work well enough for public bartending, but a robo-butler could work this way.
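To make the "decision belongs to the configurer" point concrete, here's a minimal sketch of such a policy. Every name here (`ServingPolicy`, `may_serve`, the example users) is invented for illustration; no real robot API is implied.

```python
class ServingPolicy:
    """A human-configured deny list; the robot just looks answers up."""

    def __init__(self, denied):
        # The moral judgment lives in this list, made by whoever
        # configured the robot -- not by the robot itself.
        self.denied = set(denied)

    def may_serve(self, user):
        return user not in self.denied


policy = ServingPolicy(denied={"junior", "uncle_bob"})
print(policy.may_serve("alice"))   # True
print(policy.may_serve("junior"))  # False
```

Note that the robot never weighs consequences; it only performs a lookup, which is why this design sidesteps the sapience question entirely.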
If the robot is sapient, it would be capable of making such decisions on its own. In fact, you might see robots refuse to serve alcohol at all, claiming moral reasons. On the other hand, you might see libertarian robots refuse to *not* serve someone alcohol, if they value people's right to self-determination. This would also be acceptable, but we are nowhere near this level of AI.
If the robot is non-sapient, but still expected to identify children and alcoholics on its own, problems will result. Detecting children is possible, with some false positives (it's hard to tell a 20-year-old from a 21-year-old by appearance) and false negatives (short-statured adults who read as children), but how do you detect an alcoholic by their appearance?
Since alcohol is already a controlled substance, the obvious solution for non-sapient robots needing more flexibility than simple whitelists/blacklists is to have them require you to present ID before serving alcohol, and perhaps to add a field to IDs meaning "recovering alcoholic, do not serve alcohol" if we decide that matters. Then again, we've not felt the need for such a field with human bartenders, so maybe this whole debate is over something we've already, as a society, decided isn't an issue.