Microsoft AI

Microsoft Plans To Eliminate Face Analysis Tools in Push for 'Responsible AI' (nytimes.com)

For years, activists and academics have been raising concerns that facial analysis software that claims to be able to identify a person's age, gender and emotional state can be biased, unreliable or invasive -- and shouldn't be sold. From a report: Acknowledging some of those criticisms, Microsoft said on Tuesday that it planned to remove those features from its artificial intelligence service for detecting, analyzing and recognizing faces. They will stop being available to new users this week, and will be phased out for existing users within the year. The changes are part of a push by Microsoft for tighter controls of its artificial intelligence products. After a two-year review, a team at Microsoft has developed a "Responsible AI Standard," a 27-page document that sets out requirements for A.I. systems to ensure they are not going to have a harmful impact on society.

The requirements include ensuring that systems provide "valid solutions for the problems they are designed to solve" and "a similar quality of service for identified demographic groups, including marginalized groups." Before they are released, technologies that would be used to make important decisions about a person's access to employment, education, health care, financial services or a life opportunity are subject to a review by a team led by Natasha Crampton, Microsoft's chief responsible A.I. officer.
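
For context, the attributes being phased out are the demographic and emotion inferences offered by Microsoft's Azure Face service. The sketch below (Python, using the requests library) shows roughly what a pre-retirement attribute request looked like; the endpoint host, key, and image URL are placeholders, and the parameter and field names reflect the pre-retirement REST API as commonly documented, so treat them as illustrative rather than authoritative.

    # Hedged sketch of the kind of face-attribute request being retired.
    # ENDPOINT, KEY, and the image URL are placeholders, not real values.
    import requests

    ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
    KEY = "<subscription-key>"  # placeholder

    resp = requests.post(
        f"{ENDPOINT}/face/v1.0/detect",
        # 'age', 'gender', and 'emotion' are the inferred attributes Microsoft
        # says it is removing; face detection itself is not going away.
        params={"returnFaceAttributes": "age,gender,emotion"},
        headers={
            "Ocp-Apim-Subscription-Key": KEY,
            "Content-Type": "application/json",
        },
        json={"url": "https://example.com/photo.jpg"},  # placeholder image URL
        timeout=10,
    )
    resp.raise_for_status()

    # Each detected face comes back with the soft-biometric guesses at issue.
    for face in resp.json():
        attrs = face["faceAttributes"]
        print(attrs.get("age"), attrs.get("gender"), attrs.get("emotion"))

It is exactly these returned guesses, rather than the underlying face detection, that critics call unreliable and that the new standard pulls from the service.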



  • And still making it to the headlines.
    Cool!

    • Maybe they know they're miles behind the competition and will never catch up.

      Solution? Pretend all the competitors are evil for pursuing it.

      • by N_Piper ( 940061 )
        It doesn't have to be one or the other; it can be both. Maybe the brass finally realized that the tech could be a "Me" problem instead of exclusively a "You" problem, like maybe they realized it would be really hard to have inconspicuous sex with whatever taboo group they preferred if every single door was equipped with a Facial ID capable camera. Heck, maybe it IS entirely a realization of the drawbacks of the tech and Microsoft is years ahead in facial recognition and they just hit a milestone in cutting t
    • by N_Piper ( 940061 )
      Microsoft spin doctoring? NEVER!
  • In other words (Score:4, Interesting)

    by skovnymfe ( 1671822 ) on Tuesday June 21, 2022 @01:05PM (#62639936)

    In other words they're making the tools available only to select clientele.

    Imagine if some... undesirable... went and created a face app that detects important people.

    • It's more like they're way behind in the market and have given up. Every company has its own approach to AI.

      Google's AI team creates groundbreaking (though exaggerated) research.
      Facebook's AI team doesn't do much original work, but it builds nice tools for the rest of us to use (PyTorch).
      OpenAI isn't open, but it takes Google's ideas and scales them up with tons and tons of data.
      Microsoft's AI turns out to be racist, gets shut down, and then the company buys the tech off OpenAI.

  • It'll be interesting to see if business owners and management continue to demand these tools, despite evidence that they don't work, simply because they believe they do, and end up going to sketchier and sketchier service providers to obtain them... Sound too irrational to happen? Remember the pre-pandemic age of ubiquitous unnecessary commuting?

    • I went through airport security last month. The photo recognition system recognized me, gave them my name, and I didn't have to show them my passport.

      Facial recognition has gotten a lot better in the last five years, which is a bit scary.

  • I still wish they'd bring back Tay.

  • I was promised a face-recognizing slaughterbot apocalypse, and by Jove I want my face-recognizing slaughterbot apocalypse. I've dreamed and prepped far too long to be denied my chance to shine by forward-thinking policy eggheads.
    • Don't worry, it'll still happen. This is just Microsoft making a public denial before the slaughterbots get turned loose.

  • Do you think Chinese companies will be doing the same as MS? Not hardly. US and EU companies and governments will have nowhere else to turn, and turn they will. And copies of all that data will go back to China too.

  • by SomePoorSchmuck ( 183775 ) on Tuesday June 21, 2022 @04:55PM (#62640422) Homepage

    The requirements include ensuring that systems provide "valid solutions for the problems they are designed to solve" and "a similar quality of service for identified demographic groups, including marginalized groups." Before they are released, technologies that would be used to make important decisions about a person's access to employment, education, health care, financial services or a life opportunity are subject to a review by a team led by Natasha Crampton, Microsoft's chief responsible A.I. officer.

    Translation: "We were full steam ahead to develop and make money off dozens of technologies that are invasive, privacy-destroying, anonymity-eradicating, and of dubious accuracy/reliability despite being promoted as security/authentication/prosecutorial tools. We already had our marketing strategists handling those problems and had no intention of slowing down. But there's no amount of marketing money that can make us cancel-proof when it turns out that some of the hundreds of situations where this technology would be inaccurate or abusive involve traits that lend themselves to an outrage tweetstorm."

    Which, hey, data biases surrounding the sampling of race/gender/age/etc. are totally valid concerns. Glad something slowed down the "why are we in this handbasket and where are we going" juggernaut. But packaging this up as a social-justice issue is just another way for The Man to make an end-run around us. Rather than acknowledge the fact that the racial biases are simply specific examples of why the biometric profile-ization of civilization is full of shit and prone to data corruption and false positives from top to bottom, they will spend a year or two making a big show of caring about "marginalized groups" as an isolatable implementation problem. Which then becomes their way of rehabilitating and, uhhh, whitewashing the technology's inescapable dystopian consequences.

  • Anyone able to say "Microsoft" and "responsible" together? Now that is funny.
  • Just because something is biased or has flaws doesn't make it useless.

    In cases where facial recognition image matching was used to find a suspect and it misidentified the person, so the wrong person was arrested, that wasn't so much a failure of an imperfect technology to do its job as a total failure and abdication by law enforcement, who easily could have LOOKED and seen whether the evidence actually matched, as any reasonable human being would.

    If I go search for something on Google, I don't go making life changi
