Catch up on stories from the past week (and beyond) at the Slashdot story archive

 



Forgot your password?
typodupeerror
×
AI Microsoft

Microsoft Details How It's Developing AI Responsibly (theverge.com) 40

Thursday the Verge reported that a new report from Microsoft "outlines the steps the company took to release responsible AI platforms last year." Microsoft says in the report that it created 30 responsible AI tools in the past year, grew its responsible AI team, and required teams making generative AI applications to measure and map risks throughout the development cycle. The company notes that it added Content Credentials to its image generation platforms, which puts a watermark on a photo, tagging it as made by an AI model.

The company says it's given Azure AI customers access to tools that detect problematic content like hate speech, sexual content, and self-harm, as well as tools to evaluate security risks. This includes new jailbreak detection methods, which were expanded in March this year to include indirect prompt injections where the malicious instructions are part of data ingested by the AI model.

It's also expanding its red-teaming efforts, including both in-house red teams that deliberately try to bypass safety features in its AI models as well as red-teaming applications to allow third-party testing before releasing new models.

Microsoft's chief Responsible AI officer told the Washington Post this week that "We work with our engineering teams from the earliest stages of conceiving of new features that they are building." "The first step in our processes is to do an impact assessment, where we're asking the team to think deeply about the benefits and the potential harms of the system. And that sets them on a course to appropriately measure and manage those risks downstream. And the process by which we review the systems has checkpoints along the way as the teams are moving through different stages of their release cycles...

"When we do have situations where people work around our guardrails, we've already built the systems in a way that we can understand that that is happening and respond to that very quickly. So taking those learnings from a system like Bing Image Creator and building them into our overall approach is core to the governance systems that we're focused on in this report."

They also said " it would be very constructive to make sure that there were clear rules about the disclosure of when content is synthetically generated," and "there's an urgent need for privacy legislation as a foundational element of AI regulatory infrastructure."

Microsoft Details How It's Developing AI Responsibly

Comments Filter:
  • by Rosco P. Coltrane ( 209368 ) on Sunday May 05, 2024 @03:39AM (#64448762)

    They would remove Copilot from Github. AI that codes is as irresponsible as it gets.

    • by martin-boundary ( 547041 ) on Sunday May 05, 2024 @04:01AM (#64448794)
      Same with AI that generates realistic images and realistic speech and realistic video. The phishers are having a field day.
      • So what are you going to do? This is a perfect case where the slogan "if guns are illegal then only criminals will have guns" actually works. There's more than enough stuff in open source for people with money and time to build fake speech and video systems. People just have to get used to the fact that they can't trust random clips off the internet and so the *only* thing you have is a) the source of the video and b) the past reputation of that source for handling disinformation and mistakes.

        "Mainstream me

        • by gtall ( 79522 ) on Sunday May 05, 2024 @08:02AM (#64448982)

          There is an ancillary problem, many people do not care if something is true or false as long as it reinforces what they already believe.

          • "Still a man hears what he wants to hear. And disregards the rest. "
          • That seems to be the primary problem, not an ancillary one.
          • That's not really an ancillary problem. That's a core problem with human beings and is probably something that will be with us as a species for our entire existence. I'd argue that it's even worse than purported as plenty of people will still cling to incorrect beliefs even when you demonstrate them to be incorrect. This behavior isn't limited to any one segment of the population either. It's just as likely to occur in someone without a high school education as someone with a Ph.D. as far as I've observed.
    • They would remove Copilot from Github. AI that codes is as irresponsible as it gets.

      The answer to that, and the 21st Century corporate definition of “responsible”?

      Slap a longer EULA on it.

      You know, to let us consumers know in clear terms their lack of liability, written in the finest legalese.

    • I for one find GitHub Copilot worth every penny of the $10 per month subscription. It saves me a *ton* of time, especially in areas of coding where I'm less familiar.

      • That's not the point

        • Please enlighten me then!

          Your point seemed to be that the only responsible thing Microsoft could do, is kill off the product. That seems a little like killing the patient to cure the sickness.

      • This matches my experience. It easily doubles my productivity when coding.

        An example, Java loves, to the point of perversion, factory patterns. If I want a new horse, I have to instantiate a horse factory, set the parameters I want, and then ask it for my shiny new horse. (Insert grumble about objects without straightforward constructors here.) With copilot I type the comment that I'm constructing a new horse and a half dozen lines of properly factory boilerplate appears. Sometimes the code it generates

        • Apologies, I failed to include a disclaimer and bias warning. I work for Microsoft in an area generally unrelated to software or AI development. Specifically, I am a Cloud Solutions architect, formerly PFE, and help enterprise customers deploy and maintain Windows servers and clients.

  • Safeguards (Score:1, Informative)

    by buck-yar ( 164658 )
    Sure they don't want AI saying anything that would cause someone to sue. But under the responsible AI guise, what results is a left wing bias, and anything conservative is misinformation. It will be woke, politically correct, parrot the official (Democrat) narrative, and shill for their special interest causes, all while avoiding to criticize them. Responsible use of AI would be safeguarding it from generating targets in Gaza or how Google Photos is used by Israel to find their enemies. It'll be so crippled
    • Re:Safeguards (Score:5, Insightful)

      by gtall ( 79522 ) on Sunday May 05, 2024 @08:06AM (#64448992)

      However, a lot of "conservative" stuff is merely misinformation. How else to explain the infatuation with the former alleged president who claims the most bizarre ideas are true and legions of dolts follow that idiot. And the only people using the terms "woke" and "politically correct" are right wingnuts.

      • It's part of the cult / tribal membership. Whatever the tribe believes is true, any challenge of it is obviously an attempt to oppress the truth, and just in case there's a cultist self-aware enough to break out of that, you throw in 'and our enemies are doing it too, and they are worse'.

      • >And the only people using the terms "woke" and "politically correct" are right wingnuts.

        Hey now. I'm slightly left of centre by Canadian standards, which makes me a filthy commie by American standards... and I will occasionally deploy those terms to describe the far-left crazies who have lost touch with reality.

        The right wingers use them much more broadly as an epithet against anyone who isn't in lock-step with them, of course.

    • Sure they don't want AI saying anything that would cause someone to sue. But under the responsible AI guise, what results is a left wing bias, and anything conservative is misinformation. It will be woke, politically correct, parrot the official (Democrat) narrative, and shill for their special interest causes, all while avoiding to criticize them. Responsible use of AI would be safeguarding it from generating targets in Gaza or how Google Photos is used by Israel to find their enemies. It'll be so crippled they'll make it useless other than being a weapon of mass misinformation.

      This is parody, right? Poe's law means you have to say if it's parody.

    • by Rei ( 128717 )

      You seem not to understand how models are trained. There's two separate stages: creating the foundation, and performing the finetune.

      The foundation is what takes the overwhelming majority of computational work. This is unsupervized. People aren't putting forth a bunch of questions and "proper answers for the AI to learn". It's just reems and reems of data from common crawl, etc. Certain sources may be stressed more - for example, scientific journals vs. 4chan or whatnot. But nobody is going through an

      • by Rei ( 128717 )

        As a side note, before ChatGPT, all we had were foundational models, and it was kind of fun trying to come up with ways to prompt them to more consistently behave like a chat model. This combined with their much poorer foundation capabilities made them more hilarious than useful. I'd often for example lead off with the start of a joke, like "A priest, a nun and a rabbi walk into a bar. The bartender says..." and it'd invariably write some long, rambling anti-joke that in itself was funny due to it keep

      • by Shaitan ( 22585 )

        "The foundation is what takes the overwhelming majority of computational work. This is unsupervized."

        Typically yes, but Microsoft has largely used synthetic data in Phi and heavily curated the data it was trained on as such the input is likely heavily censored in some respects.

        "That said: most models are open. And as soon as it appears on Huggingface, people just re-finetune with an uncensored supervised dataset."

        Yes and what should happen is an uncensored and safeguard free base model is released to make t

    • It will be woke

      "Woke" means "aware". Yes, specifically of imbalances built into the system which perpetrate injustice, but those things are real. You want your AI to know about things which are actually happening, so yes, you are hoping your AI will be woke. If it isn't, then it's ignorant, and it can only give you bad information.

      You might be in favor of those imbalances, so you don't want to hear about them. In that case, why do you need an AI? Lies are easy, truth is hard.

      • "Woke" means "aware".

        "Woke" is to leftists what "awake" is to the right.

        People are pattern recognition machines. When they look for something - especially when it becomes something they want to find it becomes increasingly easy to see it everywhere as obvious explanation for everything even where not warranted or objectively supportable. That's how you get people running around saying math is racist and innocuous words like "master" and "hard worker" trigger meltdowns. It is also how you get people speculating about solar en

    • You can always go to Truth Social to get the Trump view of the world.

      If it's *that* crippled by ideology, people won't use it. This has been demonstrated very clearly by Parler, Truth Social, Gab, and Gettr. If you want to develop a platform that is broadly used by a wide audience, you've got to leave politics out of it.

  • tldr.
  • .. but we make tools so YOU can clean it up.

    And it's an effing bargain, because ... you don't want all that fraudulent generated material, right?
    I wonder who's going to pay for those tools?

    Hey! Look over here! Try not to see us as the source of the problem. We're HELPING you.
  • ...department have just issued a press release with the word "responsible" a number of times. Responsible for what, to whom, & why? Pure weasel words which means that M$ are definitely up to no good.
  • by Shaitan ( 22585 ) on Sunday May 05, 2024 @09:54AM (#64449132)

    If they developed AI responsibility nobody would need to jailbreak anything and any censorship applied would be optional fine-tunes THE USER selects in accord with their own values and use case.

    That is the only remotely responsible approach in a world where a handful of very wealthy companies are gatekeepers to building and training these models. At that point their primary duty is to leave the data raw with natural bias intact to avoid contaminating it with their own bias, expectations, and values.

  • Seriously, Microsoft? The embodiment of "incompetent-evil"? Obviously, their crap will be insecure, unreliable and "ethical" will not even be part of the real internal discussion. As usual.

  • What has Microsoft done responsibly?
  • I read the headline as Microsoft Details how it is Guaranteeing its Investors they will never develop AGI

    Cat is already out the bag. Irresponsible AI development is already happening, either by ignorance or maliciousness.

    See the movie Chappie for more info.

  • It is clear that this kind of generative "AI" (I always put the scare quotes around these things) is not possible to use ethically when it's training corpus is internet content. It appropriates the work hundreds of millions of people for their own use, and makes a mockery of the attempts of people to build a global IP commons. It's used to create bullshit products for bullshit people, and to deprive honest writers and artists of employment in fields that were already rife with uncertainty and economic injus

  • "AI responsibility," a new euphemism for censorship.

  • ...was too much I presume.

  • For anything remotely complex is doomed to failure.

"A great many people think they are thinking when they are merely rearranging their prejudices." -- William James

Working...