
Microsoft President: 'You Can't Believe Every Video You See or Audio You Hear' (microsoft.com) 67

"We're currently witnessing a rapid expansion in the abuse of these new AI tools by bad actors," writes Microsoft VP Brad Smith, "including through deepfakes based on AI-generated video, audio, and images.

"This trend poses new threats for elections, financial fraud, harassment through nonconsensual pornography, and the next generation of cyber bullying." Microsoft found its own tools being used in a recently-publicized episode, and the VP writes that "We need to act with urgency to combat all these problems."

Microsoft's blog post says they're "committed as a company to a robust and comprehensive approach," citing six different areas of focus:
  • A strong safety architecture. This includes "ongoing red team analysis, preemptive classifiers, the blocking of abusive prompts, automated testing, and rapid bans of users who abuse the system... based on strong and broad-based data analysis."
  • Durable media provenance and watermarking. ("Last year at our Build 2023 conference, we announced media provenance capabilities that use cryptographic methods to mark and sign AI-generated content with metadata about its source and history.")
  • Safeguarding our services from abusive content and conduct. ("We are committed to identifying and removing deceptive and abusive content" hosted on services including LinkedIn and Microsoft's Gaming network.)
  • Robust collaboration across industry and with governments and civil society. This includes "others in the tech sector" and "proactive efforts" with both civil society groups and "appropriate collaboration with governments."
  • Modernized legislation to protect people from the abuse of technology. "We look forward to contributing ideas and supporting new initiatives by governments around the world."
  • Public awareness and education. "We need to help people learn how to spot the differences between legitimate and fake content, including with watermarking. This will require new public education tools and programs, including in close collaboration with civil society and leaders across society."
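The provenance bullet above describes cryptographically signing AI-generated content with metadata about its source and history. As a rough sketch of the idea, not Microsoft's actual scheme: real provenance standards such as C2PA embed public-key-signed manifests in the media file itself, while this dependency-free illustration uses an HMAC with a hypothetical shared key to show the basic hash-sign-verify loop.

```python
import hashlib
import hmac
import json

# Hypothetical signing key for this sketch only; a real provenance
# system would use an asymmetric key pair and a certificate chain.
SECRET = b"demo-signing-key"

def sign_manifest(content: bytes, source: str, history: list[str]) -> dict:
    """Attach a signed provenance manifest to a piece of media."""
    manifest = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "source": source,
        "history": history,
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_manifest(content: bytes, manifest: dict) -> bool:
    """Check the signature AND that the content hash still matches."""
    claimed = dict(manifest)
    sig = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(sig, expected)
            and claimed["content_sha256"] == hashlib.sha256(content).hexdigest())

media = b"\x89PNG...fake image bytes"
m = sign_manifest(media, source="Example AI service", history=["generated"])
assert verify_manifest(media, m)             # untouched content verifies
assert not verify_manifest(media + b"x", m)  # any edit breaks verification
```

The point of the verify step is that tampering with either the media bytes or the claimed source/history invalidates the manifest, which is what makes such metadata "durable" in the sense the post uses.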

Thanks to long-time Slashdot reader theodp for sharing the article


This discussion has been archived. No new comments can be posted.


Comments Filter:
  • Hmmmmm (Score:4, Insightful)

    by Calydor ( 739835 ) on Saturday February 17, 2024 @12:58PM (#64247594)

    Are we SURE that it was actually Brad Smith who said this?

  • They did a documentary on this in 1987: https://en.wikipedia.org/wiki/... [wikipedia.org]

    Ok, it wasn't exactly a documentary.
  • by n0w0rries ( 832057 ) on Saturday February 17, 2024 @01:04PM (#64247610)

    We are witnessing the end of the internet.
The beauty of the internet is that you can go on there and get the opinions of regular people. You don't need permission from a rich person who owns a media company to publish anything. The rich people who are infected with greed are anxious to become the gatekeepers again.
    The future of the internet is a place that you won't be able to tell what's real and what's fabricated bullshit. AI will poison all the wells, making the internet irrelevant unless you're one of those poor souls trapped in a small box and told to wear augmented reality headsets if you want to have any fun. All fun will be monitored, controlled, and it won't be free. The gate keepers will be back, telling you what's real and what's misinformation.
Society will split: there will be people who live wild and free in the country, and there will be people enslaved in large cities. They will be used to build pyramids, or whatever the equivalent is today.
    Enjoy the internet while you can, it's a sinking ship at this point. If we're lucky, a big solar flare will solve the problem for us.

    • The internet has brought out people's true nature - and while it is nice to engage with the entire planet from time to time, there are enough bad people to ruin the whole thing.

      The sad part is: we let them ruin it through tolerance of their early actions.
    • by Anonymous Coward

      No Luddites like Slashdot Luddites®.

      The internet isn't going anywhere, you idiot. And cities will continue to be nicer places to live than the middle of fucking nowhere.

      • And cities will continue to be nicer places to live than the middle of fucking nowhere.

Uh huh. Tell me again why a popular thing people did when they got a nice remote job was to find a way to leave a city.

Or why city workers' retirement plans seem to have one thing in common: leaving the city to retire. Something about actually enjoying retirement keeps being the persistent reason. Go figure.

Those that used to use cities for “fun” can’t afford it now. Small businesses now struggle to survive in cities for the same damn reason. And the response from Greed? More taxes.

        • by Calydor ( 739835 )

          And a lot of the fun they used the city for was stuff like nightclubs and parties and constantly DOING SOMETHING; once you're at retirement age the nightclub just doesn't hit the way it used to, I'm told, and you are much happier just sitting still enjoying the view and fresh air.

• I think you skipped a generation there. People generally stop going to nightclubs in their 30s and move to a different part of a metropolitan area where they can get a house to raise a family while still being within commuting distance of their job. They still enjoy urban amenities like restaurants, theaters, museums, parks, sporting events, etc. By retirement age the kids are out of the house, so it makes sense to downsize. They don't all flee to rural areas; many just get condos.
            • People stop going to nightclubs in their 30s generally and move to a different part of a metropolitan area they can get a house to raise a family

              That's a luxury for most people under 40. Most younger generations cannot afford a house, let alone with a family on top of that.

              • The point was retirees generally weren't going to nightclubs. They already moved out and bought a house and raised a family. The current generation being forced to rent a house to raise a family in is irrelevant to that.
        • What is your definition of popular? Most people didn't move to rural areas. The percentage of people in urban environments still went up. All small business now struggles in cities? Have you ever been to a city? Where are you getting your numbers from?
    • by Anonymous Cward ( 10374574 ) on Saturday February 17, 2024 @01:32PM (#64247642)
      The World Wide Web is about to die and take down all the garbage proprietary silos in the process, sure. But that is not a bad thing. The Internet in general, on the other hand, is perfectly safe. Real people can and will communicate peer-to-peer in environments which require some form of physical presence attestation to guarantee unwanted entities are shut out. With extremely high-bandwidth connections and a NAT-free future on the horizon, social media will soon be entirely based on self-publishing without unwanted advertising. Dedicated video game servers will see a resurgence (backed by simple directory servers to keep them organised) and trusted meatspaces of old, such as libraries, will serve as a means to bootstrap anything which can't be safely established without needing a physical root of trust.
    • Comment removed (Score:5, Insightful)

      by account_deleted ( 4530225 ) on Saturday February 17, 2024 @02:04PM (#64247694)
      Comment removed based on user account deletion
      • by Calydor ( 739835 ) on Saturday February 17, 2024 @02:18PM (#64247714)

        No, but the sheer amount of misinformation and lies has increased thousandfold, and it'll only get worse when it's not even limited by how fast a person can type and imagine stuff anymore.

        And how do you do research on a lot of these things? If you see news on the web saying that Putin has been spotted sniffing coke off the boobs of a 15 year old girl ... okay, how are you gonna research if that is true if you're sitting in, say, New York? And with the doom-scrolling culture we've adopted you're only a flick of your finger away from the NEXT piece you need to research and verify.

        In the old days when we read a paper we trusted to some degree that the journalists and editors of the paper we read every single day had done THEIR due diligence, with their expertise in uncovering the details of a story, before sharing it with the community. We did not go out and independently verify every story in the paper every day because without really wanting to sound like a meme; NO ONE has time for that.

        • Comment removed (Score:4, Insightful)

          by account_deleted ( 4530225 ) on Saturday February 17, 2024 @02:49PM (#64247782)
          Comment removed based on user account deletion
        • Has it increased? Seems more like the internet has exposed that misinformation was rampant but tightly controlled prior. Thanks to the internet false narratives are now being challenged.
        • by mjwx ( 966435 )

          No, but the sheer amount of misinformation and lies has increased thousandfold, and it'll only get worse when it's not even limited by how fast a person can type and imagine stuff anymore.

          And how do you do research on a lot of these things? If you see news on the web saying that Putin has been spotted sniffing coke off the boobs of a 15 year old girl ... okay, how are you gonna research if that is true if you're sitting in, say, New York? And with the doom-scrolling culture we've adopted you're only a flick of your finger away from the NEXT piece you need to research and verify.

          In the old days when we read a paper we trusted to some degree that the journalists and editors of the paper we read every single day had done THEIR due diligence, with their expertise in uncovering the details of a story, before sharing it with the community. We did not go out and independently verify every story in the paper every day because without really wanting to sound like a meme; NO ONE has time for that.

          The thing is, back in the old days if you trusted newspapers you were a complete idiot... There were a lot of complete idiots. It's an old proverb in several cultures, if you believe everything you read, you'd better not read.

Misinformation (lies) has become so pervasive that it's not just the internet: Fox News, Daily Mail, 2GB, et al. It's managed to reach every single form of media we can think of. The internet isn't a cause, it's a carrier, the same as print, TV and radio are... and it's not going to

• The WWW was a fork from USENET. So is the Darknet. Both are examples of how information sources and audiences split long ago; it has always been that way.

Gatekeepers are relevant only when the clickbait shit they’re selling is worth more than mindless entertainment. It’s not.

I do fully agree that the general destruction of trust is a major problem. And those profiting from that destruction stupidly assume they’re somehow immune. They’re not.

• As bullshit AI floods the internet and media in general, people will be forced to think critically about the material they consume. I know it's cool and hip to assume that everybody is dumber than you, and if you're in IT you have a habit of assuming everyone is stupid, because frankly you're generally dealing with the dumbest people out there; they're typically the ones who call technical support.

      But critical thinking and claims evaluation are things that can be taught and learned. California i
    • We are witnessing the end of the internet.
      The beauty of the internet is that you can go on there and get the opinions of regular people.

What's happened is that people are exposed to so many opinions of so many regular people that they've learned it's no longer necessary to produce an opinion, only to reproduce the opinions they see online.

      AI just takes this to the next level, by removing other people from the generation of the opinions that you are supposed to regurgitate in order to 'be a good person', and simulate the proper opinions that 'good people' are supposed to have.

      Why should an actual person be required to produce opinions?

• I think the larger issue is who should have the power to "fix" false/inaccurate/misleading information of what kinds, if any. Robert Kennedy Jr. just won a court case because government agencies pressured private companies into censoring information his organization put out about vaccines. There are all sorts of reasons for trying to limit information. It may be inaccurate. It may be misleading. It may lead people to act in a way that is not approved. It may offend some or most people. It may damage someon
• Well spoken. A broad spectrum of "watchers" is required to prevent abuse of any censor system. Of course, Joe Peanut already trusts nobody but the person he sleeps beside. Joe Peanut already knows "only the paranoid survive". What Joe Peanut lacks is the "King's sword", and I do not think that person is in view.
  • by PsychoSlashDot ( 207849 ) on Saturday February 17, 2024 @01:15PM (#64247624)
A strong safety architecture - our stuff prevents people from using it to abuse you in ways that have already been used to abuse you. Don't worry; your enemies will have their own.
    Durable media provenance and watermarking - our stuff will include tracking to prove it's not us abusing you. Don't worry; your enemies will have their own ways around it.
    Safeguarding our services from abusive content and conduct - our stuff will try to detect when your enemies use their own. Expect an arms race not unlike the spam arms race. Also, click here to enlarge your penis.
Robust collaboration across industry and with governments and civil society - we'll tell your politicians we cannot do all of this without some of your juicy tax dollars.
    Modernized legislation to protect people from the abuse of technology - we'll make sure the only people who can abuse you are the ones living in countries that really, really hate you.
    Public awareness and education - we will frequently remind you that it's your fault if you get abused.

    I get it that they can't do/say nothing. But I think there are only three outcomes from this evolution of tech:
    1} We will be inundated by waves of deepfakes and will never be able to trust anything, ever.
2} Every single media item will have a chain-of-custody that results in a further erosion of privacy.
    3} Both.

    I'm betting on #3.
    • As I discuss here for militarism but applies as well to commercialism:
      https://pdfernhout.net/recogni... [pdfernhout.net]

      That is also the idea in my sig: "The biggest challenge of the 21st century is the irony of technologies of abundance in the hands of those still thinking in terms of scarcity."

      To spell it out, AI is a technology of abundance. If people concerned about not having enough money to buy endless stuff use AI in a competitive spammy way -- disrupting healthy communications in the process by poisoning the web wit

  • by Dixie_Flatline ( 5077 ) <vincent.jan.gohNO@SPAMgmail.com> on Saturday February 17, 2024 @01:20PM (#64247632) Homepage

    This can really be an opportunity for traditional media if they seize it.

    They can form coalitions across political divisions that guarantee that a video or speech or whatever actually happened as you see it.

    So Fox, NPR and the NYT, say. They all send live reporters to the event, all of them have cameras and sound equipment, all of them are at the source and can confirm that the video is a true capture of the event, and if any manipulated versions come up, all three have source material to judge against. Nobody (reasonable) can accuse the coalition of bias, because they span the political spectrum. They can give it a catch-phrase and have a 'GUARANTEED' graphic across the screen, the suits will eat it up.

    It might give people a reason to watch/read them again, and they don't even have to change their political spin.

I mean, I suppose this relies on people actually wanting a ground truth to base their views on, but I can't solve the fact that people enjoy being lied to.

• Will that allow them to compete for advertising better? Do more people read history, or get their history from movies/plays? Will AI content be any worse, or just as likely to be "true"? I think much of the frenzy around AI is not about its accuracy. It's about control. But it is also about making what some people now consider reliable information obviously not reliable. Have you read the real history of the iconic photo of a Chinese man standing in front of the tank in Tiananmen Square? Any professional photo
      • Many court rooms don't allow cameras. Imagine someone taking the transcript of a trial and producing an AI version of the trial using voices and images reflecting the actual participants. Imagine two versions, one that portrays a witness as a liar and another that makes them appear totally reliable. That is the way the media works today, but AI will make conflicting versions of events a lot more entertaining. "Traditional media", whatever that is, won't be able to compete.
    • Fox, NPR and NYT have all been busted pushing misinformation. Their working alongside government and powerful special interest groups to control narratives through propaganda is a problem. The internet is how we broke free from their shackles and they are desperately trying to censor it for that reason. The only thing they guarantee is their content will be filtered to push their narratives.
• Not true. It's also guaranteed that every story is optimized to attract eyeballs for advertisers. Given a choice between most interesting and accurate-and-complete, they will choose most interesting every time. Far from checking accuracy, their filter for interesting stories is that they aren't provably false and don't alienate either their target audience or powerful sources.
• Denial ain't just a river in Egypt. It is absolutely true that all of those entities have worked with government and special interest groups (advertisers) to control narratives through propaganda. It's not up for debate; this is a long-established and documented truth. Project Mockingbird never truly went away. Are you going to try and tell me the government doesn't surveil social media too? The blatant propaganda during the pandemic opened many eyes. Thanks to the internet people now know they are bein
      • Certainly, this is absolutely true. But what I'm suggesting here is that at the very least, certain types of events--political speeches, rallies, riots, what have you--have three (or more) humans that agree that someone has said the things that they said. If you can deepfake a politician saying literally anything, we need some outside sources that are public and can vouch for something. It's probably even more important that they're all at odds with one another in some way.

    • Sounds interesting. (These ideas will have no effect on the willfully ignorant, of course.)

      One of the common complaints about MSM is that "they all say the same thing." That's because they often "...send live reporters to the event, all of them have cameras and sound equipment, all of them are at the source..."

      MSM mostly attempts to vet stories before publishing. That's part of what makes them MSM. So, generally, they "all say the same thing". There will be bias because humans are biased, but it tends to be

  • Advertisers / data harvesters have their horns firmly gripped in their left hands on this at a conceptual level, but they're rightly holding back, not from a moral stance, but from a late understanding that their targets (audience) will cease to be effective if their eyeballs are suddenly swamped with invented bullshit where they cannot be corralled into making a purchasing or voting decision that works in their favour. In the past, deception at this scale has been too hard for new entrants. Now the estab
    • PT Barnum proved long ago that damn near every form of advertising is still effective, because humans.

      All the best marketing and advertising clickbait in the world isn’t going to be effective unless your audience actually has the disposable money or credit line to BUY shit. What the marketing pimps truly fear right now is a Depression. When NO marketing will be profitable.

  • So adorable (Score:5, Interesting)

    by Anonymous Cward ( 10374574 ) on Saturday February 17, 2024 @01:44PM (#64247664)
    Uncensored Stable Diffusion and highly-competent 13b LLMs are readily available to people with mid-range hardware. None of Microsoft's plans will protect anyone from anything, and their PR campaigns are conducted in the name of lip service only; serving only to tell us how much of a waste of money their AI offerings are compared with what our own computers will soon be able to accomplish locally.
    • Uncensored Stable Diffusion and highly-competent 13b LLMs are readily available to people with mid-range hardware. None of Microsoft's plans will protect anyone from anything, and their PR campaigns are conducted in the name of lip service only; serving only to tell us how much of a waste of money their AI offerings are compared with what our own computers will soon be able to accomplish locally.

      Approved LLMs will be the only “valid” systems recognized by business, because lobbying. Ironically enough that is how they will “protect” individuals; by drowning out all other voices, or labeling everyone running some “competent” LLM off a recycled Xbox at home a conspiracy loon. That’s easy enough to sell to the ignorant masses too.

C'mon. You should know how this plays out, because Greed. And ignorance. Adorable theory indeed.

      • I believe it will eventually be more extreme/effective than that. LLMs and other human mimicry AIs will require an individual license to operate, just like handguns. Sure there will be people who run an unregistered personal AI in their basement, but if they are caught, it will be instant criminal prosecution. And make no mistake, AI training always leaves footprints, just like cannabis production does today.
    • You don't own a Microsoft computer.
  • by groobly ( 6155920 ) on Saturday February 17, 2024 @02:30PM (#64247734)

    I'm more concerned about not believing the video and audio they won't let me see.

    • I'm more concerned about not believing the video and audio they won't let me see.

      Such as the footage of the Apollo moon landings on the sound stage?

  • Always encouraging to see the president of one of the most abusive companies in U.S. history so excited about teaming up with the government to accuse people.

    Go to hell, Brad.

  • The term "safety" in AI discussions is insufficient or misleading for several reasons:

    AI systems are complex, and their impacts are wide-ranging. "Safety" oversimplifies or fails to capture the spectrum of ethical, social, and technical issues, such as bias, privacy, accountability, and the broader societal implications.

    The term "safety" does not convey the ethical and social dimensions of AI, which include fairness, transparency, and justice. These aspects go beyond preventing immediate physical harm
• Meaning we need the governments to create an online driver's license or passport, and to link those to biometrics that are read/used every time you use a device. Prove who is at the end of the wire (or who gave permission to the program) for any action.

    Would cut down on any online fraud. How much are we willing to pay to delete all (or most) online theft?

    Who else can create an identity system that everyone must support, and that many/most people will trust?

    Sure, create systems on top to allow anonymity whe

    • That's the last thing we need. That would obviously just be a tool used to censor and shun people speaking unpopular truths. Goodbye whistle blowers.
  • A brand you can trust. *sparklechime*
  • ... faking video and audio has been done for a long, long time now.
  • or he never said it,
  • ...because it's worked so well for the financial sector & is working so well for Boeing right now.

    F**k Microsoft & let's get some proper measures in place that M$ shareholders might not like but will probably be more effective.
  • Did any of you see that Houthi ship takeover video? [nbcnews.com] How come that looks like a video game to me?

    Where's the camera recording the helicopter landing, and why record that? The jerky video as they walk. The super-clean ship deck. The very quick departure of the helicopter. The soldiers taking cover (from what?). The soldiers all dressed similarly, when they dressed randomly in Houthi media I've seen in the past. The unnatural way the cameraman looks around/backward. The fact the crew somehow didn't
  • Nothing but lies since I was old enough to know what a lie was. Hence, it was all a lie the whole time. The very first lie I remember came from a book. It said some Spanish explorer discovered America with people on it. Think about that for a moment. I won't get into the propaganda of 'cowboys & indians' games we played as kids because TV always made out the Indian to be the bad guys.

  • ... you will very soon no longer be able to believe _any_ video of important global events.

    Then again, it's been that way for a very long time, just now it's going to get a whole lot easier to make believable fake content.

    Live streaming, for now, will be the only way to be 100% sure, just so long as you can see streams from multiple sources that aren't in collusion.

    Sure, you can still be reasonably sure an event actually took place if there's multiple videos from multiple angles released by multiple people!
