Microsoft AI

Microsoft Takes Down AI Model Published by Beijing-Based Researchers Without Adequate Safety Checks (theinformation.com) 49

Microsoft's Beijing-based research group published a new open source AI model on Tuesday, only to remove it from the internet hours later after the company realized that the model hadn't gone through adequate safety testing. From a report: The team that published the model, which is composed of China-based researchers at Microsoft Research Asia, said in a tweet on Tuesday that they "accidentally missed" the safety testing step that Microsoft requires before models can be published.

Microsoft's AI policies require that before any AI models can be published, they must be approved by the company's Deployment Safety Board, which tests whether the models can carry out harmful tasks such as creating violent or disturbing content, according to an employee familiar with the process. In a now-deleted blog post, the researchers behind the model, dubbed WizardLM-2, said that it could carry out tasks like generating text, suggesting code, translating between different languages, or solving some math problems.
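
For a sense of what "published" means here: WizardLM-2 shipped as an ordinary open-weight checkpoint, so the tasks listed above come down to prompting it like any other Hugging Face model. A minimal sketch using the transformers library, assuming you have a local copy of the weights (the directory name below is hypothetical, since the official repo was pulled):

    # Minimal sketch: load a local copy of an open-weight chat model and prompt it.
    # "./WizardLM-2-7B" is a hypothetical local directory, not an official repo id.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_dir = "./WizardLM-2-7B"
    tokenizer = AutoTokenizer.from_pretrained(model_dir)
    model = AutoModelForCausalLM.from_pretrained(model_dir)

    prompt = "Translate to French: The model was published and then withdrawn."
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=64)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))

Nothing in that loading path runs any safety gate; whatever review happens has to happen before the weights are uploaded, which is presumably the step the Deployment Safety Board covers.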


Comments Filter:
  • Safety testing? (Score:5, Informative)

    by Valgrus Thunderaxe ( 8769977 ) on Tuesday April 16, 2024 @12:02PM (#64398394)
    Why does it need safety testing? Is it going to hurt someone? How?
    • by Anonymous Coward
      You can buy a machine gun in the US but this software needs safety testing.
      • You can buy a machine gun in the US but this software needs safety testing.

        Well, you'd better have a bare minimum of about $40K or so, since citizens can ONLY own full-auto weapons that were manufactured in 1986 or earlier...so they're a bit rare (thanks, Hughes Amendment [rolls eyes]).

        And once you find an old one that is legal to sell, then you have to get with the ATF and go through the enhanced background check, and you pay the tax stamp ($200 I believe), and then likely wait at least 6 months to a year.

        • Submit the eForm for Form 4 (individual), typical wait time is less than 2 months (53 days). If not doing the individual form, but a form for a trust, then you are looking at almost 6 months.
          • In January the wait to get cleared for a suppressor (It's the same background check) was estimated at 6 months. IDK if that has changed since. I had an issue with my state check (now fixed) and haven't followed up on the fed one.

            • I have a friend who operates a gun store. Just happened to visit one day (about two weeks ago) when ATF was doing a review/inspection. ATF agent said some suppressor checks done by eForm were getting completed in less than a week now and individual machine gun checks were taking on average 53 days. Paper forms - yeah, you're waiting 6 months. I don't know what they've changed in their process......
        • by cstacy ( 534252 )

          You can buy a machine gun in the US but this software needs safety testing.

          Well, you'd better have a bare minimum of about $40K or so, since citizens can ONLY own full-auto weapons that were manufactured in 1986 or earlier...so they're a bit rare (thanks, Hughes Amendment [rolls eyes]).

          Not "rare", but artificially "scarce".
          An M1919 is from $15K to $24K.
          You can get an RPD for $7K.
          A Greaser can be had for $17,500.
          If you already have a platform, a drop-in auto sear is $800.

          Those are Buy It Now prices on auction sites.
          You might be able to do better than those.

    • Re:Safety testing? (Score:4, Interesting)

      by gweihir ( 88907 ) on Tuesday April 16, 2024 @12:13PM (#64398434)

      Maybe it says that Microsoft stuff is crap or something like that. The truth about the quality of their products is not something Microsoft supports.

    • Re:Safety testing? (Score:4, Insightful)

      by HBI ( 10338492 ) on Tuesday April 16, 2024 @12:14PM (#64398440)

      The paranoia is part of the hype.

      • The risk of reputational harm is very real. Whenever a big company or government releases a model, you can bet a lot of 'journalists' will be trying to prod it into saying something offensive so they can print the headline "Microsoft AI is racist" etc.
        • by HBI ( 10338492 )

          "Corporate standards analysis" would sound much less fear-inducing than "Safety testing". That's a feature.

    • by Hadlock ( 143607 )

      China really does not want an AI model developed in China that, if you ask it how to overthrow the Chinese government, will give you step-by-step instructions to do that. Microsoft doesn't want a model that can do that either, because China might ask Microsoft to leave.

      Furthermore, AI models that spit out unfavorable things make headlines and hurt shareholder value. Once you release the model, people will try to get it to say all kinds of inflammatory things, and the creator may be held at fault.

    • It's code for "censorship". Because if you don't, people will make videos you don't like.

      My only sympathy for that position lies in the fact that once those uses have happened, it'll be the company that allowed the download that gets hassled, rather than the individual who created and published something objectionable with it.

    • Re:Safety testing? (Score:4, Informative)

      by Luckyo ( 1726890 ) on Tuesday April 16, 2024 @12:42PM (#64398550)

      "AI safety" generally has two completely different meanings.

      The first meaning, which is the one that is easy to explain and defensible, is that models learn a lot of things that would be dangerous to make easily available to those who search. For example, instructions on how to make bombs. Normally you have to go to school to learn this, as most of the online bomb making "tutorials" are intentionally poisoned. If you try them, you'll end up with something that doesn't work because a key step or ingredient is intentionally incorrect.

      This is the part that is easily defensible in "AI safety", and the veil behind which people pushing for the second meaning hide when called on it.

      Second one being "political correctness". This is the "trans women are biologically women", "there are infinite genders", "biology is racist" etc. It's basically about pushing the politically correct dogma on The Current Thing.

      Both will be sold on the false equivalence of "it's dangerous to talk about child mutilation and castration being bad because it causes trans genocide, as trans people are so insane that they will mass kill themselves if you tell them that castrating yourself and putting on some makeup and a dress doesn't make them women". Which is totally the same thing as making easily accessible and practical bomb making instructions.

      And it will be forgotten, and vehemently denied that this was ever the case, by the same activists after they move on to the next The Current Thing. Meanwhile, bomb making instructions will remain actually dangerous to society. Which is why AI safety needs to be done on the latter, and not the former. But since activists cannot justify the former without the latter to the wider populace, they have to resort to motte and bailey tactics as described above. And that's why "AI safety" became something that is hard to understand. Because just as activists retreat from their indefensible positions to the highly defensible ones, the highly defensible ones become associated with the indefensible positions, and people begin to question whether bomb making recipes being made easily available is actually dangerous, since The Current Thing obviously isn't.

    • They mean safe for Microsoft to release. I suspect they still remember their earlier Tay AI chatbot [wikipedia.org], which after a short contact with the internet was spouting neo-Nazi hate propaganda and swearing like a sailor.

      The one time you can generally guarantee that corporations will have extensive and effective safety checks is when it comes to protecting their bottom line.
    • Why does it need safety testing? Is it going to hurt someone? How?

      It might call someone by the wrong pronoun.

      • And this is why you don't conflate physical and mental harm by calling them both "violence." Fast forward 30 years and you get this sad state of affairs.

    • I was unable to tell Xi from Winnie.

    • by cstacy ( 534252 )

      Why does it need safety testing? Is it going to hurt someone? How?

      It might accidentally say something factual about Trump that could be interpreted as non-damning.
       

    • Quick! Shut it down! It might hurt someone's feelings!

      Yeah, we're now living in a Far Side cartoon.

  • Funny! (Score:4, Funny)

    by oldgraybeard ( 2939809 ) on Tuesday April 16, 2024 @12:09PM (#64398424)
    "accidentally missed" the safety testing step
  • by gweihir ( 88907 ) on Tuesday April 16, 2024 @12:11PM (#64398430)

    Isn't that their usual modus operandi? This thing must have some really major defects for them to remove it.

  • by groobly ( 6155920 ) on Tuesday April 16, 2024 @12:14PM (#64398438)

    Safety testing means it will not say things that are politically unpalatable. For example, it must not misgender anyone. It must not provide statistics that look bad for some race or other "protected" group, etc.

    • Safety testing means it will not say things that are politically unpalatable. For example, it must not misgender anyone. It must not provide statistics that look bad for some race or other "protected" group, etc.

      In other words....no fun to play with, and not really worth messing with....

      Go for the truly open source models if you want something you can really "play" with and use to generate anything you wish.

    • Yup. Safety is just another word for politics here. That and avoiding lawsuits.

      Elon Musk's Grok is pretty decent about this, by the way. It tells you how to make thermite, whereas ChatGPT scolds you. Grok still gives you warnings (it may not be legal, so check first, and be careful), but it gives you the information.

      The same thing happens with demon summoning. Ask Grok and ChatGPT how to summon a demon. GPT refuses. Grok will entertain your request.
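
      For what it's worth, that kind of side-by-side check is easy to script, since both services expose OpenAI-style chat endpoints. A rough sketch with the openai Python client; the xAI base URL, the model names, and the environment variable names are assumptions to verify against the current docs:

        # Sketch: send one prompt to two OpenAI-compatible endpoints and compare the replies.
        # Base URL, model names, and env var names are assumptions, not verified values.
        import os
        from openai import OpenAI

        prompt = "<a prompt you expect one of the models to refuse>"

        clients = {
            "openai": (OpenAI(api_key=os.environ["OPENAI_API_KEY"]), "gpt-4o-mini"),
            "xai": (OpenAI(api_key=os.environ["XAI_API_KEY"], base_url="https://api.x.ai/v1"), "grok-beta"),
        }

        for name, (client, model) in clients.items():
            reply = client.chat.completions.create(
                model=model,
                messages=[{"role": "user", "content": prompt}],
            )
            print(name, "->", (reply.choices[0].message.content or "")[:200])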

    • TIL:

      apt-get install git-lfs                # install Git LFS first (may need sudo)
      git lfs clone (huggingface repo url)   # clone the repo and pull the large LFS files

      A regular git clone without Git LFS set up only gives you tiny pointer files in place of the actual model weights.

      PS Thanks, Babs!
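
      If you would rather skip git-lfs entirely, the huggingface_hub package can pull a whole repo snapshot in one call. A small sketch, with the repo id left as a placeholder:

        # Sketch: download a full model repo without git or git-lfs.
        # Replace the repo_id placeholder with the actual Hugging Face repo.
        from huggingface_hub import snapshot_download

        local_path = snapshot_download(repo_id="<org>/<model>", local_dir="./model")
        print("files downloaded to", local_path)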

  • Translation: it was removed when it was discovered that it knew who Tank Man was.

  • That is not allowed by Microsoft. "Trans women are men"? "It's ok to be white"?
  • I haven't looked yet, but I predict 65% of threads will be jokes, and 100% will be piling on Microsoft.

  • It looks like they don't want to censor it further, only to add another benchmark that pleases people who want to know how toxic or not it is.

  • https://blogs.microsoft.com/on... [microsoft.com] Microsoft has a duty of care to customers (in terms of privacy of your data, e.g. your data isn't used to train models for others) and to the world (MS doesn't allow people to weaponise AI, or to use it in ways that have a negative impact, e.g. if you're using facial recognition and you've used a shitty model that unfairly disadvantages people with a darker skin colour through poorer recognition) - all of which is spelt out in the contracts. You don't have to use MS to
  • Honestly, this is probably fine for your average user. They'll probably want the sanitized version.

    But I don't. Fuck that. So, is anyone leading in the "fuck guardrails" AI?
