AI

Taylor Swift Deepfakes Originated From AI Challenge, Report Says 62

The pornographic deepfakes of Taylor Swift that proliferated on social media late last month originated from an online challenge to break safety mechanisms designed to block people from generating lewd images with artificial intelligence, according to social network analysis company Graphika. Bloomberg: For weeks, users of internet forum 4chan have taken part in daily competitions to find words and phrases that could help them bypass the filters on popular image-generation services, which include Microsoft Designer and OpenAI's DALL-E, the researchers found. The ultimate goal was to create sexual images of prominent female figures such as singers and politicians. "While viral pornographic pictures of Taylor Swift have brought mainstream attention to the issue of AI-generated non-consensual intimate images, she is far from the only victim," said Cristina Lopez G., a senior analyst at Graphika, in an email. "In the 4chan community where these images originated, she isn't even the most frequently targeted public figure. This shows that anyone can be targeted in this way, from global celebrities to school children."
AI

Mistral Confirms New Open Source AI Model Nearing GPT-4 Performance (venturebeat.com) 18

An anonymous reader quotes a report from VentureBeat: The past few days have been a wild ride for the growing open source AI community -- even by its fast-moving and freewheeling standards. Here's the quick chronology: on or about January 28, a user with the handle "Miqu Dev" posted a set of files on HuggingFace, the leading open source AI model and code sharing platform, that together comprised a seemingly new open source large language model (LLM) labeled "miqu-1-70b." The HuggingFace entry, which is still up at the time of this article's posting, noted that the new LLM's "Prompt format," how users interact with it, was the same as Mistral, the well-funded open source Parisian AI company behind Mixtral 8x7b, viewed by many as the top-performing open source LLM presently available, a fine-tuned and retrained version of Meta's Llama 2.

The same day, an anonymous user (possibly "Miqu Dev") posted a link to the miqu-1-70b files on 4chan, the longstanding and notorious haven of online memes and toxicity, where users began to notice it. Some took to X, Elon Musk's social network formerly known as Twitter, to share the discovery of the model and what appeared to be its exceptionally high performance at common LLM tasks (measured by tests known as benchmarks), approaching the previous leader, OpenAI's GPT-4, on the EQ-Bench. Machine learning (ML) researchers took notice on LinkedIn, as well. "Does 'miqu' stand for MIstral QUantized? We don't know for sure, but this quickly became one of, if not the best open-source LLM," wrote Maxime Labonne, an ML scientist at JPMorgan Chase, one of the world's largest banking and financial companies. "Thanks to @152334H, we also now have a good unquantized version of miqu here: https://lnkd.in/g8XzhGSM." Quantization in ML refers to a technique that makes it possible to run certain AI models on less powerful computers and chips by storing the model's numerical parameters at lower precision, trading a little accuracy for a much smaller memory footprint. Users speculated "Miqu" might be a new Mistral model being covertly "leaked" by the company itself into the world -- especially since Mistral is known for dropping new models and updates without fanfare through esoteric and technical means -- or perhaps an employee or customer gone rogue.
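To make the quantization idea concrete, here is a minimal Python sketch of symmetric int8 quantization -- an illustration of the general technique only, not Mistral's actual method; the random array stands in for one layer's weights:

    import numpy as np

    def quantize_int8(weights):
        # Symmetric int8 quantization: w ~= scale * q, roughly 4x smaller than float32.
        scale = np.abs(weights).max() / 127.0
        q = np.round(weights / scale).astype(np.int8)
        return q, scale

    def dequantize(q, scale):
        # Recover an approximation of the original float32 weights.
        return q.astype(np.float32) * scale

    weights = np.random.randn(4, 4).astype(np.float32)  # stand-in for a weight matrix
    q, scale = quantize_int8(weights)
    print("max reconstruction error:", np.abs(weights - dequantize(q, scale)).max())

The trade-off is less memory and cheaper inference on modest hardware at the cost of some numerical fidelity, which is why quantized leaks like miqu are usually followed by demand for the unquantized original.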

Well, today it appears we finally have confirmation of the latter of those possibilities: Mistral co-founder and CEO Arthur Mensch took to X to clarify: "An over-enthusiastic employee of one of our early access customers leaked a quantized (and watermarked) version of an old model we trained and distributed quite openly... To quickly start working with a few selected customers, we retrained this model from Llama 2 the minute we got access to our entire cluster -- the pretraining finished on the day of Mistral 7B release. We've made good progress since -- stay tuned!" Hilariously, Mensch also appears to have taken to the illicit HuggingFace post not to demand a takedown, but to leave a comment that the poster "might consider attribution." Still, with Mensch's note to "stay tuned!" it appears that not only is Mistral training a version of this so-called "Miqu" model that approaches GPT-4-level performance, but it may, in fact, match or exceed it, if his comments are to be interpreted generously.

Microsoft

Microsoft Closes Loophole That Created Taylor Swift Deepfakes (404media.co) 64

An anonymous reader shares a report: Microsoft has introduced more protections to Designer, an AI text-to-image generation tool that people were using to make nonconsensual sexual images of celebrities. Microsoft made the changes after 404 Media reported that the AI-generated nude images of Taylor Swift that went viral last week came from 4chan and a Telegram channel where people were using Designer to make AI-generated images of celebrities.

"We are investigating these reports and are taking appropriate action to address them," a Microsoft spokesperson told us in an email on Friday. "Our Code of Conduct prohibits the use of our tools for the creation of adult or non-consensual intimate content, and any repeated attempts to produce content that goes against our policies may result in loss of access to the service. We have large teams working on the development of guardrails and other safety systems in line with our responsible AI principles, including content filtering, operational monitoring and abuse detection to mitigate misuse of the system and help create a safer environment for users."

AI

4chan Uses Bing To Flood the Internet With Racist Images (404media.co) 132

samleecole writes: 4chan users are coordinating a posting campaign where they use Microsoft Bing's AI text-to-image generator to create racist images that they can then post across the internet. The news shows how users are able to manipulate free-to-access, easy-to-use AI tools to quickly flood the internet with racist garbage, even when those tools are allegedly strictly moderated. "We're making propaganda for fun. Join us, it's comfy," the 4chan thread instructs. "MAKE, EDIT, SHARE."

A visual guide hosted on Imgur that's linked in that post instructs users to generate images with AI tools, edit them to add captions that make them look like political campaign material, and post them to social media sites, specifically Telegram, Twitter, and Instagram. 404 Media has also seen these images shared on a TikTok account that has since been removed. People being racist is not a technological problem. But we should pay attention to the fact that technology is -- to borrow a programming concept -- 10x'ing racist posters, allowing them to create more sophisticated content more quickly in a way we have not seen online before. Perhaps more importantly, they are doing so with tools that are allegedly "safe" and moderated so strictly that they will not generate completely harmless images of Julius Caesar. This means we are currently getting the worst of both worlds from Bing, an AI tool that will refuse to generate a nipple but is supercharging 4chan racists.

Crime

Ignored by Police, Two Women Took Down Their Cyber-Harasser Themselves (msn.com) 104

Here's how the Washington Post tells the story of 34-year-old marketer (and former model) Madison Conradis, who discovered nude behind-the-scenes photos from 10 years earlier had leaked after a series of photographer web sites were breached: Now the photos along with her name and contact information were on 4chan, a lawless website that allows users to post anonymously about topics as varied as music and white supremacy... Facebook users registered under fake names such as "Joe Bummer" sent her direct messages demanding that she send new, explicit photos, or else they would further spread the already leaked photos. Some pictures landed in her father's Instagram messages, while marketing clients told her about the nude images that came their way. Madison was at a friend's party when she got a panicked call from the manager of a hotel restaurant where she had worked: The photos had made their way to his inbox. After two years, hoping a new Florida law against cyberharassment would finally end the torture, Madison walked into her local Melbourne police station and shared everything. But she was told that what she was experiencing was not criminal.

What Madison still did not know was that other women were in the clutches of the same man on the internet — and all faced similar reactions from their local authorities. Without help from the police, they would have to pursue justice on their own.

Some cybersleuthing revealed the four women all had one follower in common on Facebook: Christopher Buonocore. (They were his ex-girlfriend, his ex-fiancée, his relative, and a childhood friend.) Eventually Madison's sister Christine — who had recently passed the bar exam — "prepared a 59-page document mapping the entire case with evidence and relevant statutes in each of the victims' jurisdictions. She sent the document to all the women involved, and each showed up at her respective law enforcement offices, dropped the packet in front of investigators and demanded a criminal investigation." The sheriff in Florida's Manatee County, Christine's locality, passed the case up to federal investigators. And in July 2019, the FBI took over on behalf of all six women on the basis of the evidence of interstate cyberstalking that Christine had compiled...

The U.S. attorney for the Middle District of Florida took action at the end of December 2020, but without a federal law criminalizing the nonconsensual distribution of intimate images, she charged Buonocore with six counts of cyberstalking instead, which can apply to some cases involving interstate communication done with the intent to kill, injure, intimidate, harass or surveil someone. He pleaded guilty to all counts the following January...

U.S. District Judge Thomas Barber sentenced Buonocore to 15 years in federal prison — almost four years more than the prosecutor had requested.

AI

DHS Has Spent Millions On an AI Surveillance Tool That Scans For 'Sentiment and Emotion' (404media.co) 50

New submitter Slash_Account_Dot shares a report from 404 Media, a new independent media company founded by technology journalists Jason Koebler, Emanuel Maiberg, Samantha Cole, and Joseph Cox: Customs and Border Protection (CBP), part of the Department of Homeland Security, has bought millions of dollars' worth of software from a company that uses artificial intelligence to detect "sentiment and emotion" in online posts, according to a cache of documents obtained by 404 Media. CBP told 404 Media it is using technology to analyze open source information related to inbound and outbound travelers who the agency believes may threaten public safety, national security, or lawful trade and travel. The company in this case, Fivecast, also offers "AI-enabled" object recognition in images and video, and detection of "risk terms and phrases" across multiple languages, according to one of the documents.

Marketing materials promote the software's ability to provide targeted data collection from big social platforms like Facebook and Reddit, but also specifically name smaller communities like 4chan, 8kun, and Gab. To demonstrate its functionality, Fivecast promotional materials explain how the software was able to track social media posts and related Persons-of-Interest starting with just "basic bio details" from a New York Times Magazine article about members of the far-right paramilitary Boogaloo movement. 404 Media also obtained leaked audio of a Fivecast employee explaining how the tool could be used against trafficking networks or propaganda operations. The news signals CBP's continued use of artificial intelligence in its monitoring of travelers and targets, which can include U.S. citizens. This latest news shows that CBP has deployed multiple AI-powered systems, and provides insight into what exactly these tools claim to be capable of while raising questions about their accuracy and utility.
"CBP should not be secretly buying and deploying tools that rely on junk science to scrutinize people's social media posts, claim to analyze their emotions, and identify purported 'risks,'" said Patrick Toomey, deputy director of the ACLU's National Security Project. "The public knows far too little about CBP's Counter Network Division, but what we do know paints a disturbing picture of an agency with few rules and access to an ocean of sensitive personal data about Americans. The potential for abuse is immense."
Security

Discord Says Cooperating in Probe of Classified Material Breach (reuters.com) 24

Instant messaging platform Discord says it is cooperating with U.S. law enforcement's investigation into a leak of secret U.S. documents that has grabbed attention around the world. From a report: The statement comes as questions continue to swirl over who leaked the documents, whether they are genuine and whether the intelligence assessments in them are reliable. The documents, which carry markings suggesting that they are highly classified, have led to a string of stories about the war in Ukraine, protests in Israel and how the U.S. surveils friend and foe alike. The source of the documents is not publicly known, but reporting by the open-source investigative site Bellingcat has traced their earliest appearance to Discord, a communications platform popular with gamers. Discord's statement suggested it was already in touch with investigators. The White House also urged social media companies on Thursday to prevent the circulation of information that could hurt national security.
The Military

Leader of Online Group Where Secret Documents Leaked Is Air National Guardsman (nytimes.com) 182

An anonymous reader quotes a report from the New York Times: The leader of a small online gaming chat group where a trove of classified U.S. intelligence documents leaked over the last few months is a 21-year-old member of the intelligence wing of the Massachusetts Air National Guard, according to interviews and documents reviewed by The New York Times. The National Guardsman, whose name is Jack Teixeira, oversaw a private online group called Thug Shaker Central, where about 20 to 30 people, mostly young men and teenagers, came together over a shared love of guns, racist online memes and video games. On Thursday afternoon, about a half-dozen F.B.I. agents pushed into a residence in North Dighton, Mass. Attorney General Merrick B. Garland later said in a short statement that Airman Teixeira had been arrested "without incident." Federal investigators had been searching for days for the person who leaked the top secret documents online.

Starting months ago, one of the users uploaded hundreds of pages of intelligence briefings into the small chat group, lecturing its members, who had bonded during the isolation of the pandemic, on the importance of staying abreast of world events. [...] The Times spoke with four members of Thug Shaker Central, one of whom said he had known the person who leaked for at least three years, had met him in person and referred to him as the O.G. The friends described him as older than most of the group members, who were in their teens, and the undisputed leader. One of the friends said the O.G. had access to intelligence documents through his job. While the gaming friends would not identify the group's leader by name, a trail of digital evidence compiled by The Times leads to Airman Teixeira. The Times has been able to link Airman Teixeira to other members of Thug Shaker Central through his online gaming profile and other records. Details of the interior of Airman Teixeira's childhood home -- posted on social media in family photographs -- also match details on the margins of some of the photographs of the leaked secret documents.

Members of Thug Shaker Central who spoke to The Times said that the documents they discussed online were meant to be purely informative. While many pertained to the war in Ukraine, the members said they took no side in the conflict. The documents, they said, started to get wider attention only when one of the teenage members of the group took a few dozen of them and posted them to a public online forum. From there they were picked up by Russian-language Telegram channels and then The Times, which first reported on them. The person who leaked, they said, was no whistle-blower, and the secret documents were never meant to leave their small corner of the internet. "This guy was a Christian, antiwar, just wanted to inform some of his friends about what's going on," said one of the person's friends from the community, a 17-year-old recent high school graduate. "We have some people in our group who are in Ukraine. We like fighting games; we like war games."

Role Playing (Games)

Leaked Classified Documents Also Include Roleplaying Game Character Stats (vice.com) 59

An anonymous reader quotes a report from Motherboard: Over the past month, classified Pentagon documents have circulated on 4chan, Telegram, and various Discord servers. The documents contain daily intelligence briefings, sensitive information about Ukrainian military positions, and a handwritten character sheet for a table-top roleplaying game. No one knows who leaked the Pentagon documents or how. They appeared online as photographs of printed pages, implying someone printed them out and removed them from a secure location, similar to how NSA translator Reality Winner leaked documents. The earliest documents Motherboard has seen are dated February 23, though the New York Times and Bellingcat reported that some are dated as early as January. According to Bellingcat, the earliest known instances of the leaks appearing online can be traced back to a Discord server.

At some point, a Discord user uploaded a zip file of 32 images from the leak onto a Minecraft Discord server. Included in this pack alongside highly sensitive, Top Secret and other classified documents about the Pentagon's strategy and assessment of the war in Ukraine, was a handwritten piece of paper that appeared to be a character sheet for a roleplaying game. It's written on a standard piece of notebook paper, three holes punched out on the side, blue lines crisscrossing the page. The character's name is Doctor "Izmer Trotzky," his character class is "Professor Scientist." They've got a strength of 5, a charisma of 4, and 19 rubles to their name. Doctor Trotzky has 10 points in first aid and occult skills, and 24 in spot hidden. He's carrying a magnifying glass, a fountain pen, a sword cane, and a derringer. [...]

But what game is it from? Motherboard reached out to game designer Jacqueline Bryk to find out. Bryk is an award-winning designer of roleplaying games who has worked on Kult: Divinity Lost, Changeling: the Lost, Fading Suns: Pax Alexius, and Vampire: the Masquerade. "I strongly suspect this is Call Of Cthulhu," Bryk said when first looking at the sheet. Call of Cthulhu (COC) is an RPG based on the work of H.P. Lovecraft where players attempt to stave off madness while investigating eldritch horrors. "This is a pretty classic Professor build. The sword cane really clinches it for me. I notice he's currently carrying a derringer and a dagger but took no points in firearms or fighting. I'm not sure which edition this is but it seems like the most he could do with his weapons is throw them."
"After some research, Bryk concluded that the game is a homebrewed combination of COC and the Fallout tabletop game based on the popular video game franchise," adds Motherboard. "My best guest here is Fallout: Cthulhu the Homebrew," Bryk said, giving the home designed game a name.
United States

Classified US Documents Leaked on 4chan, Telegram, Discord, and Twitter (msn.com) 133

America's Department of Justice just launched an investigation into the leaking of classified documents from the U.S. Department of Defense, reports the Washington Post.

"On Wednesday, images showing some of the documents began circulating on the anonymous online message board 4chan and made their way to at least two mainstream social media platforms, Telegram and Twitter." Earlier Friday, The Washington Post obtained dozens of what appeared to be photographs showing classified documents, dating to late February and early March, that range from worldwide intelligence briefings to tactical-level battlefield updates and assessments of Ukraine's defense capabilities. They outline information about the Ukrainian and Russian militaries, and include highly sensitive U.S. analyses about China and other nations. The materials also reference highly classified sources and methods that the United States uses to collect such information, alarming U.S. national security officials who have seen them.... The material that appeared online includes photographs of documents labeled "Secret" or "Top Secret," and began appearing on Discord, a chat platform popular with gamers, according to a Post review.

In some cases, it appears that the slides were manipulated. For instance, one image features combat casualty data suggesting the number of Russian soldiers killed in the war is far below what the Pentagon publicly has assessed. Another version of the image showed higher Russian casualty figures. Besides the information on casualties that appeared to be manipulated to benefit the Russian government, U.S. officials who spoke to The Post said many of the leaked documents did not appear to be forged and looked consistent in format with CIA World Intelligence Review reports distributed at high levels within the White House, Pentagon and the State Department....

The documents appear to have been drawn from multiple reports and agencies, and concern matters other than Ukraine. Two pages, for example, are purportedly a "CIA Operations Center Intelligence Update," and include information about events concerning Russia, Hungary and Iran.... Rachel E. VanLandingham, a former Air Force attorney and expert on military law, said that whoever is responsible for the leak "is in a world of hurt." Such breaches, she said, constitute "one of the most serious crimes that exist regarding U.S. national security...."

Skepticism abounded Friday among both Russian and Ukrainian officials aware of reports about the leaks, with each side accusing the other of being involved in a deliberate act of disinformation.

The Post notes one defense official told them "hundreds — if not thousands" of people had access to the documents, so their source "could be anyone."

But the photographs received by the Post were apparently taken from printed documents, and "classified documents may only be printed from computers in a secure facility, and each transaction is electronically logged," said Glenn Gerstell, a former general counsel with the National Security Agency, who emphasized that he was speaking only about general procedures. "The fact that the documents were printed out should significantly narrow the universe of the initial inquiry."
Facebook

Facebook's Powerful Large Language Model Leaks Online (vice.com) 11

Facebook's large language model, which is usually only available to approved researchers, government officials, or members of civil society, has now leaked online for anyone to download. From a report: The leaked language model was shared on 4chan, where a member uploaded a torrent file for Facebook's tool, known as LLaMa (Large Language Model Meta AI), last week. This marks the first time a major tech firm's proprietary AI model has leaked to the public. To date, firms like Google, Microsoft, and OpenAI have kept their newest models private, only accessible via consumer interfaces or an API, ostensibly to control instances of misuse. 4chan members claim to be running LLaMa on their own machines, but the exact implications of this leak are not yet clear.

In a statement to Motherboard, Meta did not deny the LLaMa leak, and stood by its approach of sharing the models among researchers. "It's Meta's goal to share state-of-the-art AI models with members of the research community to help us evaluate and improve those models. LLaMA was shared for research purposes, consistent with how we have shared previous large language models. While the model is not accessible to all, and some have tried to circumvent the approval process, we believe the current release strategy allows us to balance responsibility and openness," a Meta spokesperson wrote in an email.

AI

AI-Generated Voice Firm Clamps Down After 4chan Makes Celebrity Voices For Abuse (vice.com) 107

An anonymous reader quotes a report from Motherboard: It was only a matter of time before the wave of artificial intelligence-generated voice startups became a plaything of internet trolls. On Monday, ElevenLabs, founded by ex-Google and Palantir staffers, said it had found an "increasing number of voice cloning misuse cases" during its recently launched beta. ElevenLabs didn't point to any particular instances of abuse, but Motherboard found 4chan members appear to have used the product to generate voices that sound like Joe Rogan, Ben Shapiro, and Emma Watson to spew racist and other abusive material. ElevenLabs said it is exploring more safeguards around its technology.

The clips uploaded to 4chan on Sunday are focused on celebrities. But given the high quality of the generated voices, and the apparent ease with which people created them, they highlight the looming risk of deepfake audio clips. In much the same way deepfake video started as a method for people to create non-consensual pornography of specific people before branching out into other use cases, the trajectory of deepfake audio is only just beginning. [...] The clips run the gamut from harmless, to violent, to transphobic, to homophobic, to racist. One 4chan post that included a wide spread of the clips also contained a link to the beta from ElevenLabs, suggesting ElevenLabs' software may have been used to create the voices.

On its website ElevenLabs offers both "speech synthesis" and "voice cloning." For the latter, ElevenLabs says it can generate a clone of someone's voice from a clean sample recording, over one minute in length. Users can quickly sign up to the service and start generating voices. ElevenLabs also offers "professional cloning," which it says can reproduce any accent. Target use cases include voicing newsletters, books, and videos, the company's website adds. [...] On Monday, shortly after the clips circulated on 4chan, ElevenLabs wrote on Twitter that "Crazy weekend -- thank you to everyone for trying out our Beta platform. While we see our tech being overwhelmingly applied to positive use, we also see an increasing number of voice cloning misuse cases." ElevenLabs added that while it can trace back any generated audio to a specific user, it was exploring more safeguards. These include requiring payment information or "full ID identification" in order to perform voice cloning, or manually verifying every voice cloning request.

Social Networks

Documents Show 15 Social Media Companies Failed to Adequately Address Calls for Violence in 2021 (msn.com) 80

The Washington Post has obtained "stunning new details on how social media companies failed to address the online extremism and calls for violence that preceded the Capitol riot."

Their source? The bipartisan committee investigating attacks on America's Capitol on January 6, 2021 "spent more than a year sifting through tens of thousands of documents from multiple companies, interviewing social media company executives and former staffers, and analyzing thousands of posts. They sent a flurry of subpoenas and requests for information to social media companies ranging from Facebook to fringe social networks including Gab and the chat platform Discord."

Yet in the end it was written up in a 122-page memo that was circulated among the committee but largely left out of its final report. And this was partly because the committee was "concerned about the risks of a public battle with powerful tech companies," according to three people familiar with the matter who spoke on the condition of anonymity to discuss the panel's sensitive deliberations. The [committee staffer's] memo detailed how the actions of roughly 15 social networks played a significant role in the attack. It described how major platforms like Facebook and Twitter, prominent video streaming sites like YouTube and Twitch and smaller fringe networks like Parler, Gab and 4chan served as megaphones for those seeking to stoke division or organize the insurrection. It detailed how some platforms bent their rules to avoid penalizing conservatives out of fear of reprisals, while others were reluctant to curb the "Stop the Steal" movement after the attack....

The investigators also wrote that much of the content that was shared on Twitter, Facebook and other sites came from Google-owned YouTube, which did not ban election fraud claims until Dec. 9 and did not apply its policy retroactively. The investigators found that its lax policies and enforcement made it "a repository for false claims of election fraud." Even when these videos weren't recommended by YouTube's own algorithms, they were shared across other parts of the internet. "YouTube's policies relevant to election integrity were inadequate to the moment," the staffers wrote.

The draft report also says that smaller platforms were not reactive enough to the threat posed by Trump. The report singled out Reddit for being slow to take down a pro-Trump forum called "r/The_Donald." The moderators of that forum used it to "freely advertise" TheDonald.win, which hosted violent content in the lead-up to Jan. 6.... The committee also spoke to Facebook whistleblower Frances Haugen, whose leaked documents in 2021 showed that the country's largest social media platform largely had disbanded its election integrity efforts ahead of the Jan. 6 riot. But little of her account made it into the final document.

"The transcripts show the companies used relatively primitive technologies and amateurish techniques to watch for dangers and enforce their platforms' rules. They also show company officials quibbling among themselves over how to apply the rules to possible incitements to violence, even as the riot turned violent."
AI

Meet 'Unstable Diffusion', the Group Trying To Monetize AI Porn Generators (techcrunch.com) 89

An anonymous reader quotes a report from TechCrunch: When Stable Diffusion, the text-to-image AI developed by startup Stability AI, was open sourced earlier this year, it didn't take long for the internet to wield it for porn-creating purposes. Communities across Reddit and 4chan tapped the AI system to generate realistic and anime-style images of nude characters, mostly women, as well as non-consensual fake nude imagery of celebrities. But while Reddit quickly shut down many of the subreddits dedicated to AI porn, and communities like Newgrounds, which allows some forms of adult art, banned AI-generated artwork altogether, new forums emerged to fill the gap. By far the largest is Unstable Diffusion, whose operators are building a business around AI systems tailored to generate high-quality porn. The server's Patreon -- started to keep the server running as well as fund general development -- is currently raking in over $2,500 a month from several hundred donors.

"In just two months, our team expanded to over 13 people as well as many consultants and volunteer community moderators," Arman Chaudhry, one of the members of the Unstable Diffusion admin team, told TechCrunch in a conversation via Discord. "We see the opportunity to make innovations in usability, user experience and expressive power to create tools that professional artists and businesses can benefit from." Unsurprisingly, some AI ethicists are as worried as Chaudhry is optimistic. While the use of AI to create porn isn't new [...] Unstable Diffusion's models are capable of generating higher-fidelity examples than most. The generated porn could have negative consequences particularly for marginalized groups, the ethicists say, including the artists and adult actors who make a living creating porn to fulfill customers' fantasies.

Unstable Diffusion got its start in August -- around the same time that the Stable Diffusion model was released. Initially a subreddit, it eventually migrated to Discord, where it now has roughly 50,000 members. [...] Today, the Unstable Diffusion server hosts AI-generated porn in a range of different art styles, sexual preferences and kinks. [...] Users in these channels can invoke the bot to generate art that fits the theme, which they can then submit to a "starboard" if they're especially pleased with the results. Unstable Diffusion claims to have generated over 4,375,000 images to date. On a semiregular basis, the group hosts competitions that challenge members to recreate images using the bot, the results of which are used in turn to improve Unstable Diffusion's models. As it grows, Unstable Diffusion aspires to be an "ethical" community for AI-generated porn -- i.e. one that prohibits content like child pornography, deepfakes and excessive gore. Users of the Discord server must abide by the terms of service and submit to moderation of the images that they generate; Chaudhry claims the server employs a filter to block images containing people in its "named persons" database and has a full-time moderation team.
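As a purely illustrative sketch (hypothetical, not Unstable Diffusion's actual code), a "named persons" filter can, at its simplest, be a blocklist check on incoming prompts before any image is generated; the list entries below are placeholders:

    # Hypothetical prompt filter: reject generation requests naming anyone
    # on a "named persons" blocklist. Real systems also scan output images.
    NAMED_PERSONS = {"example celebrity", "example politician"}  # placeholders

    def is_blocked(prompt: str) -> bool:
        p = prompt.lower()
        return any(name in p for name in NAMED_PERSONS)

    assert is_blocked("portrait of Example Celebrity on a beach")
    assert not is_blocked("portrait of a lighthouse at dusk")

Simple substring matching like this is easy to evade with misspellings, which may be one reason the server also keeps a full-time moderation team.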
"Chaudhry sees Unstable Diffusion evolving into an organization to support broader AI-powered content generation, sponsoring dev groups and providing tools and resources to help teams build their own systems," reports TechCrunch. "He claims that Equilibrium AI secured a spot in a startup accelerator program from an unnamed 'large cloud compute provider' that comes with a 'five-figure' grant in cloud hardware and compute, which Unstable Diffusion will use to expand its model training infrastructure."

In addition to the grant, Unstable Diffusion will launch a Kickstarter campaign and seek venture funding, Chaudhry says.

"We plan to create our own models and fine-tune and combine them for specialized use cases which we shall spin off into new brands and products," Chaudhry added.
Intel

Intel Confirms Alder Lake BIOS Source Code Leaked (tomshardware.com) 61

Tom's Hardware reports: We recently broke the news that Intel's Alder Lake BIOS source code had been leaked to 4chan and Github, with the 6GB file containing tools and code for building and optimizing BIOS/UEFI images. We reported the leak within hours of the initial occurrence, so we didn't yet have confirmation from Intel that the leak was genuine. Intel has now issued a statement to Tom's Hardware confirming the incident:

"Our proprietary UEFI code appears to have been leaked by a third party. We do not believe this exposes any new security vulnerabilities as we do not rely on obfuscation of information as a security measure. This code is covered under our bug bounty program within the Project Circuit Breaker campaign, and we encourage any researchers who may identify potential vulnerabilities to bring them our attention through this program...."


The BIOS/UEFI of a computer initializes the hardware before the operating system has loaded, so among its many responsibilities is establishing connections to certain security mechanisms, like the TPM (Trusted Platform Module). Now that the BIOS/UEFI code is in the wild and Intel has confirmed it as legitimate, nefarious actors and security researchers alike will undoubtedly probe it to search for potential backdoors and security vulnerabilities....

Intel hasn't confirmed who leaked the code or where and how it was exfiltrated. However, we do know that the GitHub repository, now taken down but already replicated widely, was created by an apparent LC Future Center employee, a China-based ODM that manufactures laptops for several OEMs, including Lenovo.

Thanks to Slashdot reader Hmmmmmm for sharing the news.
AI

YouTuber Trains AI On 4Chan's Most Hateful Board (engadget.com) 94

An anonymous reader quotes a report from Engadget: As Motherboard and The Verge note, YouTuber Yannic Kilcher trained an AI language model using three years of content from 4chan's Politically Incorrect (/pol/) board, a place infamous for its racism and other forms of bigotry. After implementing the model in ten bots, Kilcher set the AI loose on the board -- and it unsurprisingly created a wave of hate. In the space of 24 hours, the bots wrote 15,000 posts that frequently included or interacted with racist content. They represented more than 10 percent of posts on /pol/ that day, Kilcher claimed.

Nicknamed GPT-4chan (after OpenAI's GPT-3), the model learned to pick up not only the words used in /pol/ posts, but also an overall tone that Kilcher said blended "offensiveness, nihilism, trolling and deep distrust." The video creator took care to dodge 4chan's defenses against proxies and VPNs, and even used a VPN to make it look like the bot posts originated from the Seychelles. The AI made a few mistakes, such as blank posts, but was convincing enough that it took roughly two days for many users to realize something was amiss. Many forum members only noticed one of the bots, according to Kilcher, and the model created enough wariness that people accused each other of being bots days after Kilcher deactivated them.
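For readers wondering what "training a language model on three years of posts" involves mechanically, here is a hedged Python sketch of fine-tuning a causal language model on a text dump using the Hugging Face transformers library; the base model, file name, and hyperparameters are placeholders, not Kilcher's actual setup:

    from datasets import load_dataset
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling, Trainer,
                              TrainingArguments)

    model_name = "gpt2"  # stand-in; any causal LM checkpoint works the same way
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    tokenizer.pad_token = tokenizer.eos_token
    model = AutoModelForCausalLM.from_pretrained(model_name)

    # "posts.txt" is a hypothetical dump of scraped forum posts, one per line.
    dataset = load_dataset("text", data_files={"train": "posts.txt"})

    def tokenize(batch):
        return tokenizer(batch["text"], truncation=True, max_length=512)

    tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="finetuned", num_train_epochs=1,
                               per_device_train_batch_size=2),
        train_dataset=tokenized,
        # mlm=False selects standard next-token (causal) language modeling.
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    )
    trainer.train()

Nothing in this pipeline knows or cares what the text says; the model simply absorbs whatever style and content the corpus contains, which is exactly the "only as good as its source material" point the report draws.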
"It's a reminder that trained AI is only as good as its source material," concludes the report.
AI

Eric Schmidt Thinks AI Is As Powerful As Nukes 84

An anonymous reader quotes a report from Motherboard: Former Google CEO Eric Schmidt compared AI to nuclear weapons and called for a deterrence regime similar to the mutually-assured destruction that keeps the world's most powerful countries from destroying each other. Schmidt talked about the dangers of AI at the Aspen Security Forum at a panel on national security and artificial intelligence on July 22. While fielding a question about the value of morality in tech, Schmidt explained that he, himself, had been naive about the power of information in the early days of Google. He then called for tech to be better in line with the ethics and morals of the people it serves and made a bizarre comparison between AI and nuclear weapons.

Schmidt imagined a near future where China and the U.S. needed to cement a treaty around AI. "In the 50s and 60s, we eventually worked out a world where there was a 'no surprise' rule about nuclear tests and eventually they were banned," Schmidt said. "It's an example of a balance of trust, or lack of trust, it's a 'no surprises' rule. I'm very concerned that the U.S. view of China as corrupt or Communist or whatever, and the Chinese view of America as failing will allow people to say 'Oh my god, they're up to something,' and then begin some kind of conundrum. Begin some kind of thing where, because you're arming or getting ready, you then trigger the other side. We don't have anyone working on that and yet AI is that powerful."

Schmidt imagined a near future where both China and the U.S. would have security concerns that force a kind of deterrence treaty between them around AI. He speaks of the 1950s and '60s when diplomacy crafted a series of controls around the most deadly weapons on the planet. But for the world to get to a place where it instituted the Nuclear Test Ban Treaty, SALT II, and other landmark pieces of legislation, it took decades of nuclear explosions and, critically, the destruction of Hiroshima and Nagasaki. America's destruction of those two Japanese cities at the end of World War II killed tens of thousands of people and proved to the world the everlasting horror of nuclear weapons. The governments of Russia and China then rushed to acquire the weapons. The way we live with the possibility these weapons will be used is through something called mutual assured destruction (MAD), a theory of deterrence that ensures if one country launches a nuke, it's possible that every other country will too. We don't use the most destructive weapon on the planet because of the possibility that doing so will destroy, at the very least, civilization around the globe.
"The problem with AI is not that it has the potentially world destroying force of a nuclear weapon," writes Motherboard's Matthew Gault. "It's that AI is only as good as the people who designed it and that they reflect the values of their creators. AI suffers from the classic 'garbage in, garbage out' problem: Racist algorithms make racist robots, all AI carries the biases of its creators, and a chatbot trained on 4chan becomes vile..."

"AI is a reflection of its creator. It can't level a city in a 1.2 megaton blast. Not unless a human teaches it to do so."
The Internet

Connecticut Will Pay a Security Analyst 150K To Monitor Election Memes (popsci.com) 140

An anonymous reader quotes a report from Popular Science: Ahead of the upcoming midterm elections, Connecticut is hiring a "security analyst" tasked with monitoring and addressing online misinformation. The New York Times first reported this new position, saying the job description will include spending time on "fringe sites like 4chan, far-right social networks like Gettr and Rumble and mainstream social media sites." The goal is to identify election-related rumors and attempt to mitigate the damage they might cause by flagging them to platforms that have misinformation policies and promoting educational content that can counter those false narratives.

Connecticut Governor Ned Lamont's midterm budget (PDF), approved in early May, set aside more than $6 million to make improvements to the state's election system. That includes $4 million to upgrade the infrastructure used for voter registration and election management and $2 million for a "public information campaign" that will provide information on how to vote. The full-time security analyst role is recommended to receive $150,000. "Over the last few election cycles, malicious foreign actors have demonstrated the motivation and capability to significantly disrupt election activities, thus undermining public confidence in the fairness and accuracy of election results," the budget stated, as an explanation for the funding.

While the role is a first for Connecticut, the NYT noted that it's part of a growing nationwide trend. Colorado, for example, has a Rapid Response Election Security Cyber Unit tasked with monitoring online misinformation, as well as identifying "cyber-attacks, foreign interference, and disinformation campaigns." Originally created in anticipation of the 2020 presidential election, which proved to be fruitful ground for misinformation, the NYT says the unit is being "redeployed" this year. Other states, including Arizona, California, Idaho, and Oregon, are similarly funding election information initiatives in an attempt to counter misinformation, provide educational information, or do both.

Social Networks

Can Tech Firms Prevent Violent Videos Circulating on the Internet? (theguardian.com) 116

This week New York's attorney general announced they're officially "launching investigations into the social media companies that the Buffalo shooter used to plan, promote, and stream his terror attack." Slashdot reader echo123 points out that Discord confirmed that roughly 30 minutes before the attack a "small group" was invited to join the shooter's server. "None of the people he invited to review his writings appeared to have alerted law enforcement," reports the New York Times, "and the massacre played out much as envisioned."

But meanwhile, another Times article tells a tangentially-related story from 2019 about what ultimately happened to "a partial recording of a livestream by a gunman while he murdered 51 people that day at two mosques in Christchurch, New Zealand." For more than three years, the video has remained undisturbed on Facebook, cropped to a square and slowed down in parts. About three-quarters of the way through the video, text pops up urging the audience to "Share THIS...." Online writings apparently connected to the 18-year-old man accused of killing 10 people at a Buffalo, New York, grocery store Saturday said that he drew inspiration for a livestreamed attack from the Christchurch shooting. The clip on Facebook — one of dozens that are online, even after years of work to remove them — may have been part of the reason that the Christchurch gunman's tactics were so easy to emulate.

In a search spanning 24 hours this week, The New York Times identified more than 50 clips and online links with the Christchurch gunman's 2019 footage. They were on at least nine platforms and websites, including Reddit, Twitter, Telegram, 4chan and the video site Rumble, according to the Times' review. Three of the videos had been uploaded to Facebook as far back as the day of the killings, according to the Tech Transparency Project, an industry watchdog group, while others were posted as recently as this week. The clips and links were not difficult to find, even though Facebook, Twitter and other platforms pledged in 2019 to eradicate the footage, pushed partly by public outrage over the incident and by world governments. In the aftermath, tech companies and governments banded together, forming coalitions to crack down on terrorist and violent extremist content online. Yet even as Facebook expunged 4.5 million pieces of content related to the Christchurch attack within six months of the killings, what the Times found this week shows that a mass killer's video has an enduring — and potentially everlasting — afterlife on the internet.

"It is clear some progress has been made since Christchurch, but we also live in a kind of world where these videos will never be scrubbed completely from the internet," said Brian Fishman, a former director of counterterrorism at Facebook who helped lead the effort to identify and remove the Christchurch videos from the site in 2019....

Facebook, which is owned by Meta, said that for every 10,000 views of content on the platform, only an estimated five were of terrorism-related material. Rumble and Reddit said the Christchurch videos violated their rules and they were continuing to remove them. Twitter, 4chan and Telegram did not respond to requests for comment.

For what it's worth, this week CNN also republished an email they'd received in 2016 from 4chan's current owner, Hiroyuki Nishimura. The gist of the email? "If I liked censorship, I would have already done that."

But Slashdot reader Bruce66423 also shares an interesting observation from The Guardian's senior tech reporter about the major tech platforms. "According to Hany Farid, a professor of computer science at UC Berkeley, there is a tech solution to this uniquely tech problem. Tech companies just aren't financially motivated to invest resources into developing it." Farid's work includes research into robust hashing, a tool that creates a fingerprint for videos that allows platforms to find them and their copies as soon as they are uploaded...

Farid: It's not as hard a problem as the technology sector will have you believe... The core technology to stop redistribution is called "hashing" or "robust hashing" or "perceptual hashing". The basic idea is quite simple: you have a piece of content that is not allowed on your service either because it violated terms of service, it's illegal or for whatever reason, you reach into that content, and extract a digital signature, or a hash as it's called.... That's actually pretty easy to do. We've been able to do this for a long time. The second part is that the signature should be stable even if the content is being modified, when somebody changes say the size or the color or adds text. The last thing is you should be able to extract and compare signatures very quickly.

So if we had a technology that satisfied all of those criteria, Twitch would say, we've identified a terror attack that's being live-streamed. We're going to grab that video. We're going to extract the hash and we are going to share it with the industry. And then every time a video is uploaded with the hash, the signature is compared against this database, which is being updated almost instantaneously. And then you stop the redistribution.
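As a toy illustration of the pipeline Farid describes (real perceptual-hashing systems such as PhotoDNA or PDQ are far more robust), the following Python sketch computes an "average hash" of a frame and compares fingerprints by Hamming distance; the file names are hypothetical:

    import numpy as np
    from PIL import Image

    def average_hash(path, size=8):
        # Shrink to size x size grayscale, threshold against the mean:
        # a 64-bit fingerprint that survives resizing and recoloring.
        img = Image.open(path).convert("L").resize((size, size))
        pixels = np.asarray(img, dtype=np.float32)
        return (pixels > pixels.mean()).flatten()

    def hamming(a, b):
        # Count of differing bits; a small distance means "probably the same image."
        return int((a != b).sum())

    # A platform would compare each uploaded frame against a shared database
    # of known-bad fingerprints and block anything within a threshold.
    known_bad = [average_hash("banned_frame.png")]  # hypothetical file
    upload = average_hash("upload_frame.png")       # hypothetical file
    if any(hamming(upload, h) <= 10 for h in known_bad):
        print("match: block redistribution")

The hard parts in production are exactly the criteria Farid lists: keeping the signature stable under adversarial edits and comparing against a constantly updated database fast enough to run on every upload.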

It's a problem of collaboration across the industry and it's a problem of the underlying technology. And if this was the first time it happened, I'd understand. But this is not, this is not the 10th time. It's not the 20th time. I want to emphasize: no technology's going to be perfect. It's battling an inherently adversarial system. But this is not a few things slipping through the cracks.... This is a complete catastrophic failure to contain this material. And in my opinion, as it was with New Zealand and as it was the one before then, it is inexcusable from a technological standpoint.

"These are now trillion-dollar companies we are talking about collectively," Farid points out later. "How is it that their hashing technology is so bad?
Crime

Gunman Livestreams Killing of 10 On Twitch - After Radicalization On 4chan (nbcnews.com) 481

Slashdot reader DevNull127 writes: 10 people were killed in a grocery store in Buffalo, New York this afternoon — and three more were injured — by a gunman who livestreamed the massacre on Twitch. "A Twitch spokesperson said the platform has investigated and confirmed that the stream was removed 'less than two minutes after the violence started,'" reports NBC News.

The Raw Story reports that the 18-year-old suspected gunman had also apparently posted a 106-page manifesto online prior to the attack. A researcher at George Washington University program on extremism studied the manifesto, and points out that the suspected shooter "states that he was radicalized online on 4chan and was inspired by Brenton Tarrant's manifesto and livestreamed mass shooting in New Zealand."

The suspect reportedly used an assault rifle.

Less than two weeks ago, Slashdot posted the following:

28-year-old Brenton Tarrant killed 51 people in New Zealand in 2019. The Associated Press reports that at that point he'd been reading 4chan for 14 years, according to his mother — since the age of 14.

The year before, 25-year-old Alek Minassian, who killed 11 people in Toronto in 2018, namechecked 4chan in a pre-attack Facebook post.

But the Guardian now adds another story from nine days ago — when a 23-year-old shooter with 1,000 rounds of ammunition opened fire from his apartment in Washington D.C. "Just two minutes after the shooting began, someone under the username "Raymond Spencer" logged onto the normally anonymous 4chan and started a new thread titled 'shool [sic] shooting'. The newly published message contained a link — to a 30-second video of images captured from the digital scope of Spencer's rifle...."

NBC News reported that while Saturday's suspected shooter was livestreaming, "Some users of the website 4chan discussed the attack, and at least one archived the video in real-time, releasing photos of dead civilians inside the supermarket over the course of Saturday afternoon."
