NYC Passes Bill Requiring 'Bias Audits' of AI Hiring Tech (protocol.com)
A year after it was introduced, the New York City Council passed a bill earlier this week requiring companies that sell AI technologies for hiring to obtain audits assessing the potential of those products to discriminate against job candidates. The bill requiring "bias audits" passed with overwhelming support in a 38-4 vote. Protocol reports: The bill is intended to weed out the use of tools that enable already-unlawful employment discrimination in New York City. If signed into law, it will require providers of automated employment decision tools to have those systems evaluated each year by an audit service and to provide the results to companies using those systems. AI for recruitment can include software that uses machine learning to sift through resumes and help make hiring decisions, systems that attempt to decipher the sentiments of a job candidate, or even tech involving games to pick up on subtle clues about someone's hiring worthiness. The NYC bill attempts to encompass the full gamut of AI by covering everything from old-school decision trees to more complex systems operating through neural networks.
The legislation calls on companies using automated decision tools for recruitment not only to tell job candidates when they're being used, but to tell them what information the technology used to evaluate their suitability for a job. If signed, the law goes into effect January 2023. Violators could be subject to civil penalties. Notably, the bill "fails to go into detail on what constitutes a bias audit other than to define one as 'an impartial evaluation' that involves testing," reports Protocol. It also doesn't address how well automatic hiring technologies work to remove phony applicants.
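The bill leaves "bias audit" largely undefined, but a common starting point in employment-discrimination analysis is the EEOC's "four-fifths rule": flag any group whose selection rate falls below 80% of the highest group's rate. A minimal sketch of that check in Python (hypothetical data layout; a real audit would involve far more than this):

```python
from collections import defaultdict

def selection_rates(records):
    """Per-group selection rates from (group, was_hired) pairs."""
    hired, total = defaultdict(int), defaultdict(int)
    for group, was_hired in records:
        total[group] += 1
        hired[group] += int(was_hired)
    return {g: hired[g] / total[g] for g in total}

def adverse_impact_ratios(records):
    """Each group's selection rate relative to the best-off group.
    Ratios below 0.8 flag possible adverse impact under the EEOC
    four-fifths rule -- a heuristic, not a legal determination."""
    rates = selection_rates(records)
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

# Hypothetical example: 50/200 group-A applicants hired vs. 20/200 group-B.
records = [("A", i < 50) for i in range(200)] + [("B", i < 20) for i in range(200)]
print(adverse_impact_ratios(records))  # {'A': 1.0, 'B': 0.4} -> B flagged
```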
ban the personality tests & race questions as (Score:5, Interesting)
ban the personality tests & race questions as well!
Re: (Score:2)
While sadly a lot of people don't have a choice, if you are a skilled worker then you should ban personality tests and exams yourself.
If the hiring company asks you to do one, just withdraw from the process. If they ask why, tell them that personality tests and exams are red flags that indicate a poor work environment and flawed hiring strategy that doesn't create good teams.
Problem is it must discriminate. (Score:4, Interesting)
Given that 60% of college graduates are women, an algorithm which selects 50% male candidates is likely discriminating against women.
Or, it could be discriminating against men, in STEM fields where women are far less than 50% of the candidate pool.
Since this is politics, it won't be possible for the evaluators to consider that as groups, men and women often choose very different career paths. Granted, I don't want algorithms making hiring decisions, but I doubt this law will curb the inhumane practice - instead, it will end up giving women and minorities a preferential advantage in fields where they're "underrepresented". And in fields like Nursing and Teaching, may actually end up making it more difficult to hire them.
End result - politics working as designed - nobody is happy.
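The parent's point, that "biased against whom" depends entirely on which baseline you compare against, can be made concrete with a simple significance test. A sketch assuming SciPy is available, with hypothetical numbers: does hiring 50 women out of 100 hires look consistent with an applicant pool that is 60% women?

```python
from scipy.stats import binomtest

# Hypothetical numbers: 100 hires, 50 of them women, drawn from an
# applicant pool that is 60% women.
result = binomtest(k=50, n=100, p=0.60, alternative="two-sided")
print(f"p-value: {result.pvalue:.3f}")
# A small p-value means the split is unlikely under the pool's base rate,
# but it cannot tell you whether the pool, the algorithm, or the job
# requirements are the cause -- that is the evaluator's judgment call.
```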
Re: (Score:2)
ban categories like
RACE
AGE (other than over/under 18/21 checks for jobs with minimum-age rules)
SEX
DISABLED status
personality types
Re: Problem is it must discriminate. (Score:1, Insightful)
Sure, because all Americans are born equal?
Of course not. Some are born with genes that code for height, while others will forever be short. Some are born with bodies which will grow up to be muscular, fit, and healthy, while others are destined to be weak and sickly. Some are born inherently gifted with superior abstract thinking abilities, while others are born with underdeveloped or damaged brains which will result in them having difficulty mastering anything more complex than shoelace tying.
This, of course, is why we need to bring about the O
Re: Problem is it must discriminate. (Score:3)
Your Harrison Bergeron references are unfortunately wasted here
Re: (Score:2)
Americans are born significantly better off than a large part of the 3rd world. If people truly cared about equality, they would stop complaining and make those people equal first.
We need inequality. If everybody ends up equal no matter what, why bother trying? Why should I work hard for a better life for my children when everybody is equal? I want them to have an advantage. That is why I believe communism failed: if there is no benefit to trying, then people don't try.
But there is a limit to how much advantage my chil
Re: (Score:2, Interesting)
ban categories like
RACE
That doesn't work. Race is excluded from many AI models. The results are still highly skewed by race because other data strongly correlate with race.
For instance, an AI that made pre-trial release recommendations quickly learned that certain zip codes had much higher rates of failing to appear at trial. Those zip codes were predominantly black. It was also able to look at criminal history. Blacks commit different crimes. They use crack cocaine while whites use powder. Blacks sell drugs on street corn
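The parent's proxy problem is easy to demonstrate with synthetic data: exclude race from the model entirely, and a correlated feature like zip code reproduces the skew anyway. A toy simulation assuming NumPy and scikit-learn are available (illustrative only):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
race = rng.integers(0, 2, n)                  # never shown to the model
flip = rng.random(n) < 0.2                    # 20% noise
zipcode = np.where(flip, 1 - race, race)      # ~80% correlated with race
outcome = (rng.random(n) < np.where(zipcode == 1, 0.3, 0.6)).astype(int)

model = LogisticRegression().fit(zipcode.reshape(-1, 1), outcome)
prob = model.predict_proba(zipcode.reshape(-1, 1))[:, 1]

# Predicted rates differ sharply by race even though race was excluded,
# because zip code carries most of the same information.
print(f"mean prediction, race=0: {prob[race == 0].mean():.2f}")
print(f"mean prediction, race=1: {prob[race == 1].mean():.2f}")
```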
Re: (Score:2)
If there is indeed a correlation between selling drugs on the street and not appearing at trial, why is that not fair game for the AI?
I'm going to go out on a limb and say failing to appear at trials in the past and not appearing at trial in the future are correlated. What if it just so happens more black people fail to appear at trial in general? Should we ignore that data point too in the name of racial equality?
Re: (Score:1)
Re: (Score:2)
If you've failed to appear at trial before, it should certainly count against you, but should it count against you if your neighbor down the block did? There probably *is* a statistical correlation, but I don't feel it would be fair to act on that correlation.
Re: (Score:2)
One is something a person can control (just show up in court); the other is being born black. It's just like charging higher insurance premiums to males because they crash more: it is unfair that I should pay more regardless of my own actions. But that is how people think, and it generally works, even though it can be unfair to individuals.
It can also be self-perpetuating: if you assume a person is a criminal and treat them as such, they are more likely to become a criminal. I think this may be a reason a higher proportion black peo
Re: (Score:2)
It depends why they are asking for those things. If it's to give HR some anonymous stats so they can see if their job ads are appealing to a wide audience, and to check that there are no systemic biases in their system, then it's fine.
Of course that information should not be fed to hiring managers, it should be kept confidential and stripped from any material given to them.
Re: (Score:2)
This has been tried, many times. Here is an example [abc.net.au] from Australia. Canada and the UK have tried similar things.
It usually results in more men and white/asian people being hired. Which the people running these programs see as a failure so they end the program. Right now companies and governments are very sensitive about diversity and so they often have quotas or targets to meet. But if you remove things like race, age, sex, etc. from applications then HR departments have no way to ensure diversity in their
Re: (Score:2)
You’re assuming it takes the gender of the name into account.
Re: (Score:2)
You're assuming it takes the gender of the name into account.
No, the assumption is that the algorithm will be successful in removing all racial and gender bias.
The goal is not to remove bias, it is to create it.
Because the name carries inherent clues about race and gender, it is certainly the first thing to be removed from the algorithm's inputs. Once the name is removed, it will be very difficult for an algorithm to have any bias on race or gender. The city won't know when there is bias, because it will know nothing about the job or the applicants. Companies hire ind
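A minimal sketch of the stripping step the parent describes (hypothetical field names). The caveat from the zip-code discussion above still applies: structured fields are easy to drop, but free-text resumes leak proxies.

```python
# Hypothetical field names -- a real pipeline would also scrub free text.
PROTECTED_FIELDS = {"name", "gender", "date_of_birth", "photo", "address"}

def redact(application: dict) -> dict:
    """Drop fields that directly reveal protected attributes."""
    return {k: v for k, v in application.items() if k not in PROTECTED_FIELDS}

app = {"name": "Jane Doe", "gender": "F", "years_experience": 7,
       "skills": ["python", "sql"], "address": "123 Main St"}
print(redact(app))  # {'years_experience': 7, 'skills': ['python', 'sql']}
```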
Re: (Score:2)
I don't want algorithms making hiring decisions
Why not? Algorithms are objective and their biases can be measured and corrected.
I doubt this law will curb the inhumane practice
How is algorithmic hiring "inhumane"?
The same number of people will be hired, so an algorithm may reject someone a human would choose, but an equal number will have the opposite happen.
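One concrete reading of "measured and corrected" is post-processing: pick a per-group score cutoff so every group is selected at the same rate (demographic parity). A sketch with synthetic scores, assuming NumPy is available; note that this is only one of several mutually incompatible fairness criteria:

```python
import numpy as np

def parity_thresholds(scores, groups, hire_rate=0.25):
    """Per-group cutoffs that give every group the same selection rate.
    The trade-off: equal rates can mean a different bar for each group."""
    return {g: float(np.quantile(scores[groups == g], 1 - hire_rate))
            for g in np.unique(groups)}

rng = np.random.default_rng(1)
scores = np.concatenate([rng.normal(0.0, 1, 500), rng.normal(0.5, 1, 500)])
groups = np.array(["A"] * 500 + ["B"] * 500)
print(parity_thresholds(scores, groups))  # B's cutoff sits above A's
```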
Re: (Score:1, Offtopic)
You are clearly not an English major and need to take some classes. I assume English is your second language.
1) These words contradict themselves:
"Algorithms are objective"
and
"and their biases can be measured..."
If they have measurable biases then by definition they are NOT objective.
[Also, you are wrong: we can NOT measure them. We can prove they exist, but not how much.]
2) Anything that uses algorithms is by definition inhumane, because that word means lacking pity or compassion, and no algorithm has
Re: (Score:2)
If they have measurable biases then by definition they are NOT objective.
That is not what "objective" means.
Here is Google's definition:
Objective: Not influenced by personal feelings or opinions in considering and representing facts.
An algorithm does not have "feelings or opinions".
Re: (Score:2)
And you are clearly not a statistics major. In statistics bias is an objective measurement.
This doesn't mean it should determine the decision, but it is an intrinsic component of every evaluation of samples extracted from populations. All samples are biased, and if you know the entire population you can measure that bias. Otherwise you need to use various techniques to estimate it. And your decision should take the bias into account, to the extent possible.
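A concrete instance of bias as an objective, measurable quantity: the sample variance computed with a divisor of n systematically underestimates the population variance, and simulation recovers the exact amount. A minimal sketch assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(42)
true_var = 4.0
# 100,000 small samples (n=5) from a population with known variance.
samples = rng.normal(0.0, true_var ** 0.5, size=(100_000, 5))

biased = samples.var(axis=1, ddof=0).mean()    # divides by n
unbiased = samples.var(axis=1, ddof=1).mean()  # divides by n - 1

# Bias = E[estimator] - true value; here roughly -0.8 vs. ~0.
print(f"ddof=0: mean {biased:.3f}, bias {biased - true_var:+.3f}")
print(f"ddof=1: mean {unbiased:.3f}, bias {unbiased - true_var:+.3f}")
```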
You know it will be the outcome. (Score:1, Insightful)
They're going to push the you-know-what agenda and they're going to look at it, disregard how it works entirely, and ask, "How does this ensure equality of outcome based on discriminating factors?" They won't care if it gives them good employees who make the company profitable. They won't care if it looks at education, qualifications, or experience. They're going to ask, "How does this ensure enough feminists, trans activists, and other special interest groups are hired equally regardless of skill, but wi
Re: (Score:1)
How you gather all that from a CV I’ll never know.
Re: (Score:1)
Yeah I mean, you can't pull data from the rest of the actions that have been going on, historical behavior or anything.
I don't get the concept that you have to evaluate each new thing as if everything done before didn't happen, and as if they aren't going to exhibit the exact same behavior as every other time. I recall a narcissist I was dealing with who did a particular thing multiple times but demanded more chances, because /this/ time, /this/ time would be different, and their argument that I should
Re: (Score:1)
No, how it will go is "how many SJW profs/graduates do I need on the board to convince the female judge the audit is legit," and then they'll rubber-stamp it ... except for the occasional sacrificial lamb.
Don't be the small company using the auditor for big companies, it will massively increase your chances of being the sacrificial lamb.
Re: (Score:2)
Your pilot for this segment of your flight to Hawaii direct from NYC will be a person chosen for their 13 minority-equalization points rather than their qualifications to sit in a cockpit in any capacity, let alone pilot. Nonetheless you are in the safe hands of WOKE airlines, falling out of the skies for the last 10 months....
{O,o}
Re: (Score:2)
It's an anti-snake-oil law. No hiring AI company can seriously claim that their product is objective unless they can back that up with evidence. If they don't have the evidence then they haven't looked for it, which means their product is snake-oil.
Re: (Score:1)
That doesn't negate what my OP says at all, though. They will redefine "objective" unless it fits the narrative that it's racist, anti-feminist, anti-trans, etc. Even if the final pin is "Well, no one who identifies as (insert special interest group here) has even applied," they'll turn around and go, "That's because they know you use this program to discriminate against them," with no proof. This is NYC we're talking about.
Take the hypocrisy elsewhere, one agenda doesn't require proof of anything or any evide
Re: (Score:2)
Seems unlikely. Most of the criticism is based on peer reviewed studies of the behaviour.
Who pays for this? (Score:2, Insightful)
Another sign New York City doesn't want your business. This costs money, and it will have to be paid for by higher prices and/or higher taxes. Not only that, but all businesses are assumed to be guilty of biased hiring processes until they prove innocence by providing audit results. It's unlikely anyone will challenge this in court as unconstitutional, since it would be a public relations nightmare to explain how protecting white heterosexual men in the workplace is a good thing.
What will happen is NYC w
Re: (Score:2)
Not only that but all businesses are assumed to be guilty of biased hiring processes until they prove innocence by providing audit results.
Complete nonsense. This law regulates AI, and AI is sold as a product to laypeople (HR staff) to feed CVs into. So the proof will be provided by the manufacturer who will have done some tests to prove that their product is not biased, and any users challenged about it will refer back to the manufacturer's certification.
That's how it works for most stuff. Construction companies aren't required to prove that the materials they use are fire safe, they rely on the manufacturer testing and certifying them, for e
In before you read anything! (Score:5, Insightful)
Wow, look at all these neckbeards who don't understand what an "audit" does. Hint: It is neither a prescription nor a proscription.
And who do the results of the audit get reported to? The companies doing the hiring! So they can understand WTF the tool they're using did.
This is information for the companies that use the software, telling them how the software made decisions affecting them. If you're against that, I have to wonder what your true motivation is. Well, not really; we all know.
Re: (Score:1)
Is there anything to suggest these audits will be unbiased themselves? I can foresee a bunch of SJW types being hired to do that and pronouncing everything that doesn't discriminate against white men as "biased".
Alternatively, if there's no mechanism to say who could do these audits, the companies will simply find someone to rubber stamp their software as "unbiased", which makes it just another pointless regulation.
Re: (Score:1)
SJW
*rofl*
You'll whine and cry about the scary wimens either way, so who cares?
Re: (Score:1)
Ah, I forgot you're one of those people.
Re: (Score:1)
*ROFLCOPTER*
Re: (Score:2)
In Europe under GDPR rules the individual has a right to ask how automated decisions about them were made. So they would be able to query why the AI made the decision it did, and the answer "it's a black box, computer says no" isn't acceptable.
Re: (Score:2)
This sounds like what it would mean to the layperson, but you may actually just get a complicated technical "reason" that doesn't really help you.
Europeans love to talk about how great their laws are, and GDPR is a good law that helps a lot of situations.
But it won't help this situation at all; they can just tell you the answer. The AI scored you __ on ___ and ___ on ___ and ___ on ___. That's why the humans who made the decision made the decision. It's not a bottomless pit of "why" where they have to
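The parent's point is easy to illustrate: a linear scorer can emit a per-feature breakdown that is technically a complete "explanation" yet tells a rejected candidate very little. A toy sketch with hypothetical features and weights:

```python
# Hypothetical weights for a toy linear scoring model.
WEIGHTS = {"years_experience": 0.5, "degree": 1.0, "employment_gap": -0.8}

def explain(candidate: dict) -> dict:
    """Itemize each feature's contribution to the final score."""
    return {f: w * candidate.get(f, 0) for f, w in WEIGHTS.items()}

c = {"years_experience": 6, "degree": 1, "employment_gap": 1}
parts = explain(c)
print(parts)                                # the "scored you __ on __" answer
print(f"score: {sum(parts.values()):.1f}")  # 3.2 -- still not actionable
```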
Re: (Score:2)
And who do the results of the audit get reported to? The companies doing the hiring! So they can understand WTF the tool they're using did.
I imagine that whenever a discrimination lawsuit comes up, the companies will get subpoenaed for their audit data.
Re: (Score:2)
Indeed, that's the idea; get the companies to know what they're doing, so that they can choose what to do!
Surely this protects them in a lawsuit... right? I mean, for companies trying to comply with the law it will, for sure. If they know what they did, they know how much to compensate to correct it, and where to compensate.
And if not... that means the lawsuit might have a better chance of correcting the behavior!
And for the software companies selling the AI, it rewards correctness on things they're already
Re: (Score:2)
Wow, look at all these neckbeards who don't understand what an "audit" does. Hint: It is neither a prescription nor a proscription.
And who do the results of the audit get reported to? The companies doing the hiring! So they can understand WTF the tool they're using did.
This is information for the companies that use the software, telling them how the software made decisions affecting them. If you're against that, I have to wonder what your true motivation is. Well, not really; we all know.
Oh give me a break. Yes, I'm sure it will just stay advisory. Because that's so historically what government does, in these matters.
"Nice little business you got there ... shame if something happened to it ... "
Pay no attention to the 250 lb bully leaning over your shoulder ... he's just looking ...
Re: (Score:1)
Yes, I'm sure it will just stay advisory.
In before the Masons or the Lizard People!
A good start - But. (Score:2)
Re: (Score:2)
Disclosure would be nice, but like you say, it would reduce the value of the tool by a lot. And most companies don't want that. So it would be harder to implement.
Hopefully this moves the ball in that direction. Right now, the average person has no idea that most companies are already using these tools.
Once there are audits, then companies who get sued will have to turn over their audit results, and public education will slowly increase.
Re: (Score:2)
Been running a corporate subsidiary for nearly 30 years.
Every single audit I've had has
- measured performance against AN EXPECTED BASELINE
- at the end, provided recommendations about how we can meet the rules/performance/results expected.
I think you have a stunningly naive view of what audits are, as if they're some sort of objective process that just pulls data together. They're 3rd-person reviews from someone who ostensibly doesn't have a vested interest in the result - that's their value. But to suggest
Re: (Score:2)
In finance, audits often don't measure performance at all, or use baselines, but instead compare what you wrote down to what your records say.
In IT, audits are the same; what do the logs say happened?
What sort of audits are you even talking about? The lack of key information in your claims really undercuts them. You either don't run anything and made that up, or you don't have any idea what the audits are; you just know they show you a PowerPoint about them once in a while. And it talks about benchmarks, be
Re: (Score:2)
WTF "key information" would you be looking for?
I'm going to identify neither myself nor my business on Slashdot, thanks.
And yes, very MUCH our audits (about every 2 years) from our EU parent company
- compare our books to reality
- review our safety and IT security procedures
- review our performance vs targets financially, productivity, etc.
- ALWAYS include recommendations about processes, functions, accounting, IT, and safety procedures to "help" us conform to benchmarks and targets (in all those categories)
Re: (Score:2)
What a maroon. You don't even understand the words, but you claim to be some V I P.
So I nailed it. You're saying that in the meeting where they show you the powerpoint about the audits, they also review your performance in relation to that data.
And you can't figure out which part is the audit.
And you don't imagine that in the case in the story, there are very specific pieces of information involved, that have already been specified. You don't even comprehend enough about legislation to understand that if th
Re: (Score:2)
And you're a moron if you think the 'audit' is somehow magically distinct from the analysis and review.
Re: (Score:2)
You can't comprehend the meaning of words.
You just make up random other bullshit, and say, "It could mean anything!"
You didn't even understand the PowerPoint they showed you.
They should pass a similar law for medical AI (Score:2)
Re: (Score:2)
Yeah, this will probably be used as a template if they can implement it well.
For the love of ... (Score:3, Insightful)
We all know what this really is. NYC is requiring companies to be biased, and there is a danger that an automated process won't know that; it didn't get the memo.
This is meant as a corrective to the danger that some unbiased hiring might break out.
Difficult to do correctly (Score:2)
Of course humans have all the same sorts of failings when hiring.
One o
Weird reporting (Score:2)
Anyone else notice the last line of the summary?
Anyone who has a web form or email address or any way to receive input from the public inevitably gets bombarded with nonsense, spam, and scam messages.
Removing phony applicants is not even a public concern, so it's a good thing that the bill doesn't address that. Software vendors already have an incentive to include such features as a selling point.
AI hiring? (Score:2)
It would surprise me greatly if AI selection of candidates is any more predictive of their success than the multiple other biased ways in which hiring happens.
Bias is the point (Score:2)
Wouldn't the only way for something to be truly unbiased be to pick candidates completely at random?
next step: the voters get audited (Score:2)
This is like when the bond raters downgraded the US credit rating and Obama audited them until they put it back up.