The Military

Leader of Online Group Where Secret Documents Leaked Is Air National Guardsman (nytimes.com)

An anonymous reader quotes a report from the New York Times: The leader of a small online gaming chat group where a trove of classified U.S. intelligence documents leaked over the last few months is a 21-year-old member of the intelligence wing of the Massachusetts Air National Guard, according to interviews and documents reviewed by The New York Times. The National Guardsman, whose name is Jack Teixeira, oversaw a private online group called Thug Shaker Central, where about 20 to 30 people, mostly young men and teenagers, came together over a shared love of guns, racist online memes and video games. On Thursday afternoon, about a half-dozen F.B.I. agents pushed into a residence in North Dighton, Mass. Attorney General Merrick B. Garland later said in a short statement that Airman Teixeira had been arrested "without incident." Federal investigators had been searching for days for the person who leaked the top secret documents online.

Starting months ago, one of the users uploaded hundreds of pages of intelligence briefings into the small chat group, lecturing its members, who had bonded during the isolation of the pandemic, on the importance of staying abreast of world events. [...] The Times spoke with four members of Thug Shaker Central, one of whom said he had known the person who leaked for at least three years, had met him in person and referred to him as the O.G. The friends described him as older than most of the group members, who were in their teens, and the undisputed leader. One of the friends said the O.G. had access to intelligence documents through his job. While the gaming friends would not identify the group's leader by name, a trail of digital evidence compiled by The Times leads to Airman Teixeira. The Times has been able to link Airman Teixeira to other members of Thug Shaker Central through his online gaming profile and other records. Details of the interior of Airman Teixeira's childhood home -- posted on social media in family photographs -- also match details on the margins of some of the photographs of the leaked secret documents.

Members of Thug Shaker Central who spoke to The Times said that the documents they discussed online were meant to be purely informative. While many pertained to the war in Ukraine, the members said they took no side in the conflict. The documents, they said, started to get wider attention only when one of the teenage members of the group took a few dozen of them and posted them to a public online forum. From there they were picked up by Russian-language Telegram channels and then The Times, which first reported on them. The person who leaked, they said, was no whistle-blower, and the secret documents were never meant to leave their small corner of the internet. "This guy was a Christian, antiwar, just wanted to inform some of his friends about what's going on," said one of the person's friends from the community, a 17-year-old recent high school graduate. "We have some people in our group who are in Ukraine. We like fighting games; we like war games."

Role Playing (Games)

Leaked Classified Documents Also Include Roleplaying Game Character Stats (vice.com)

An anonymous reader quotes a report from Motherboard: Over the past month, classified Pentagon documents have circulated on 4chan, Telegram, and various Discord servers. The documents contain daily intelligence briefings, sensitive information about Ukrainian military positions, and a handwritten character sheet for a table-top roleplaying game. No one knows who leaked the Pentagon documents or how. They appeared online as photographs of printed pages, implying someone printed them out and removed them from a secure location, similar to how NSA translator Reality Winner leaked documents. The earliest documents Motherboard has seen are dated February 23, though the New York Times and Bellingcat reported that some are dated as early as January. According to Bellingcat, the earliest known instances of the leaks appearing online can be traced back to a Discord server.

At some point, a Discord user uploaded a zip file of 32 images from the leak onto a Minecraft Discord server. Included in this pack alongside highly sensitive, Top Secret and other classified documents about the Pentagon's strategy and assessment of the war in Ukraine, was a handwritten piece of paper that appeared to be a character sheet for a roleplaying game. It's written on a standard piece of notebook paper, three holes punched out on the side, blue lines crisscrossing the page. The character's name is Doctor "Izmer Trotzky," his character class is "Professor Scientist." They've got a strength of 5, a charisma of 4, and 19 rubles to their name. Doctor Trotzky has 10 points in first aid and occult skills, and 24 in spot hidden. He's carrying a magnifying glass, a fountain pen, a sword cane, and a deringer. [...]

But what game is it from? Motherboard reached out to game designer Jacqueline Bryk to find out. Bryk is an award-winning designer of roleplaying games who has worked on Kult: Divinity Lost, Changeling: the Lost, Fading Suns: Pax Alexius, and Vampire: the Masquerade. "I strongly suspect this is Call Of Cthulhu," Bryk said when first looking at the sheet. Call of Cthulhu (COC) is an RPG based on the work of H.P. Lovecraft where players attempt to stave off madness while investigating eldritch horrors. "This is a pretty classic Professor build. The sword cane really clinches it for me. I notice he's currently carrying a derringer and a dagger but took no points in firearms or fighting. I'm not sure which edition this is but it seems like the most he could do with his weapons is throw them."

"After some research, Bryk concluded that the game is a homebrewed combination of COC and the Fallout tabletop game based on the popular video game franchise," adds Motherboard. "My best guess here is Fallout: Cthulhu the Homebrew," Bryk said, giving the home-designed game a name.

United States

Classified US Documents Leaked on 4chan, Telegram, Discord, and Twitter (msn.com)

America's Department of Justice just launched an investigation into the leaking of classified documents from the U.S. Department of Defense, reports the Washington Post.

"On Wednesday, images showing some of the documents began circulating on the anonymous online message board 4chan and made their way to at least two mainstream social media platforms, Telegram and Twitter." Earlier Friday, The Washington Post obtained dozens of what appeared to be photographs showing classified documents, dating to late February and early March, that range from worldwide intelligence briefings to tactical-level battlefield updates and assessments of Ukraine's defense capabilities. They outline information about the Ukrainian and Russian militaries, and include highly sensitive U.S. analyses about China and other nations. The materials also reference highly classified sources and methods that the United States uses to collect such information, alarming U.S. national security officials who have seen them.... The material that appeared online includes photographs of documents labeled "Secret" or "Top Secret," and began appearing on Discord, a chat platform popular with gamers, according to a Post review.

In some cases, it appears that the slides were manipulated. For instance, one image features combat casualty data suggesting the number of Russian soldiers killed in the war is far below what the Pentagon publicly has assessed. Another version of the image showed higher Russian casualty figures. Besides the information on casualties that appeared to be manipulated to benefit the Russian government, U.S. officials who spoke to The Post said many of the leaked documents did not appear to be forged and looked consistent in format with CIA World Intelligence Review reports distributed at high levels within the White House, Pentagon and the State Department....

The documents appear to have been drawn from multiple reports and agencies, and concern matters other than Ukraine. Two pages, for example, are purportedly a "CIA Operations Center Intelligence Update," and include information about events concerning Russia, Hungary and Iran.... Rachel E. VanLandingham, a former Air Force attorney and expert on military law, said that whoever is responsible for the leak "is in a world of hurt." Such breaches, she said, constitute "one of the most serious crimes that exist regarding U.S. national security...."

Skepticism abounded Friday among both Russian and Ukrainian officials aware of reports about the leaks, with each side accusing the other of being involved in a deliberate act of disinformation.

The Post notes one defense official told them "hundreds — if not thousands" of people had access to the documents, so their source "could be anyone."

But the photographs received by the Post were apparently taken from printed documents, and "classified documents may only be printed from computers in a secure facility, and each transaction is electronically logged," said Glenn Gerstell, a former general counsel with the National Security Agency who emphasized that he was speaking only about general procedures. "The fact that the documents were printed out should significantly narrow the universe of the initial inquiry."

Facebook

Facebook's Powerful Large Language Model Leaks Online (vice.com)

Facebook's large language model, which is usually only available to approved researchers, government officials, or members of civil society, has now leaked online for anyone to download. From a report: The leaked language model was shared on 4chan, where a member uploaded a torrent file for Facebook's tool, known as LLaMa (Large Language Model Meta AI), last week. This marks the first time a major tech firm's proprietary AI model has leaked to the public. To date, firms like Google, Microsoft, and OpenAI have kept their newest models private, only accessible via consumer interfaces or an API, ostensibly to control instances of misuse. 4chan members claim to be running LLaMa on their own machines, but the exact implications of this leak are not yet clear.

In a statement to Motherboard, Meta did not deny the LLaMa leak, and stood by its approach of sharing the models among researchers. "It's Meta's goal to share state-of-the-art AI models with members of the research community to help us evaluate and improve those models. LLaMA was shared for research purposes, consistent with how we have shared previous large language models. While the model is not accessible to all, and some have tried to circumvent the approval process, we believe the current release strategy allows us to balance responsibility and openness," a Meta spokesperson wrote in an email.

AI

AI-Generated Voice Firm Clamps Down After 4chan Makes Celebrity Voices For Abuse (vice.com)

An anonymous reader quotes a report from Motherboard: It was only a matter of time before the wave of artificial intelligence-generated voice startups became a plaything of internet trolls. On Monday, ElevenLabs, founded by ex-Google and Palantir staffers, said it had found an "increasing number of voice cloning misuse cases" during its recently launched beta. ElevenLabs didn't point to any particular instances of abuse, but Motherboard found 4chan members appear to have used the product to generate voices that sound like Joe Rogan, Ben Shapiro, and Emma Watson to spew racist and other sorts of material. ElevenLabs said it is exploring more safeguards around its technology.

The clips uploaded to 4chan on Sunday are focused on celebrities. But given the high quality of the generated voices, and the apparent ease with which people created them, they highlight the looming risk of deepfake audio clips. In much the same way deepfake video started as a method for people to create non-consensual pornography of specific people before branching out into other use cases, the trajectory of deepfake audio is only just beginning. [...] The clips run the gamut from harmless, to violent, to transphobic, to homophobic, to racist. One 4chan post that included a wide spread of the clips also contained a link to the beta from ElevenLabs, suggesting ElevenLabs' software may have been used to create the voices.

On its website ElevenLabs offers both "speech synthesis" and "voice cloning." For the latter, ElevenLabs says it can generate a clone of someone's voice from a clean sample recording, over one minute in length. Users can quickly sign up to the service and start generating voices. ElevenLabs also offers "professional cloning," which it says can reproduce any accent. Target use cases include voicing newsletters, books, and videos, the company's website adds. [...] On Monday, shortly after the clips circulated on 4chan, ElevenLabs wrote on Twitter that "Crazy weekend -- thank you to everyone for trying out our Beta platform. While we see our tech being overwhelmingly applied to positive use, we also see an increasing number of voice cloning misuse cases." ElevenLabs added that while it can trace back any generated audio to a specific user, it was exploring more safeguards. These include requiring payment information or "full ID identification" in order to perform voice cloning, or manually verifying every voice cloning request.

Social Networks

Documents Show 15 Social Media Companies Failed to Adequately Address Calls for Violence in 2021 (msn.com)

The Washington Post has obtained "stunning new details on how social media companies failed to address the online extremism and calls for violence that preceded the Capitol riot."

Their source? The bipartisan committee investigating attacks on America's Capitol on January 6, 2021 "spent more than a year sifting through tens of thousands of documents from multiple companies, interviewing social media company executives and former staffers, and analyzing thousands of posts. They sent a flurry of subpoenas and requests for information to social media companies ranging from Facebook to fringe social networks including Gab and the chat platform Discord."

Yet in the end it was written up in a 122-page memo that circulated among the committee but was largely left out of its final report. And this was partly because the committee was "concerned about the risks of a public battle with powerful tech companies," according to three people familiar with the matter who spoke on the condition of anonymity to discuss the panel's sensitive deliberations. The [committee staffer's] memo detailed how the actions of roughly 15 social networks played a significant role in the attack. It described how major platforms like Facebook and Twitter, prominent video streaming sites like YouTube and Twitch and smaller fringe networks like Parler, Gab and 4chan served as megaphones for those seeking to stoke division or organize the insurrection. It detailed how some platforms bent their rules to avoid penalizing conservatives out of fear of reprisals, while others were reluctant to curb the "Stop the Steal" movement after the attack....

The investigators also wrote that much of the content that was shared on Twitter, Facebook and other sites came from Google-owned YouTube, which did not ban election fraud claims until Dec. 9 and did not apply its policy retroactively. The investigators found that its lax policies and enforcement made it "a repository for false claims of election fraud." Even when these videos weren't recommended by YouTube's own algorithms, they were shared across other parts of the internet. "YouTube's policies relevant to election integrity were inadequate to the moment," the staffers wrote.

The draft report also says that smaller platforms were not reactive enough to the threat posed by Trump. The report singled out Reddit for being slow to take down a pro-Trump forum called "r/The_Donald." The moderators of that forum used it to "freely advertise" TheDonald.win, which hosted violent content in the lead-up to Jan. 6.... The committee also spoke to Facebook whistleblower Frances Haugen, whose leaked documents in 2021 showed that the country's largest social media platform largely had disbanded its election integrity efforts ahead of the Jan. 6 riot. But little of her account made it into the final document.

"The transcripts show the companies used relatively primitive technologies and amateurish techniques to watch for dangers and enforce their platforms' rules. They also show company officials quibbling among themselves over how to apply the rules to possible incitements to violence, even as the riot turned violent."

AI

Meet 'Unstable Diffusion', the Group Trying To Monetize AI Porn Generators (techcrunch.com)

An anonymous reader quotes a report from TechCrunch: When Stable Diffusion, the text-to-image AI developed by startup Stability AI, was open sourced earlier this year, it didn't take long for the internet to wield it for porn-creating purposes. Communities across Reddit and 4chan tapped the AI system to generate realistic and anime-style images of nude characters, mostly women, as well as non-consensual fake nude imagery of celebrities. But while Reddit quickly shut down many of the subreddits dedicated to AI porn, and communities like Newgrounds, which allows some forms of adult art, banned AI-generated artwork altogether, new forums emerged to fill the gap. By far the largest is Unstable Diffusion, whose operators are building a business around AI systems tailored to generate high-quality porn. The server's Patreon -- started to keep the server running as well as fund general development -- is currently raking in over $2,500 a month from several hundred donors.

"In just two months, our team expanded to over 13 people as well as many consultants and volunteer community moderators," Arman Chaudhry, one of the members of the Unstable Diffusion admin team, told TechCrunch in a conversation via Discord. "We see the opportunity to make innovations in usability, user experience and expressive power to create tools that professional artists and businesses can benefit from." Unsurprisingly, some AI ethicists are as worried as Chaudhry is optimistic. While the use of AI to create porn isn't new [...] Unstable Diffusion's models are capable of generating higher-fidelity examples than most. The generated porn could have negative consequences particularly for marginalized groups, the ethicists say, including the artists and adult actors who make a living creating porn to fulfill customers' fantasies.

Unstable Diffusion got its start in August -- around the same time that the Stable Diffusion model was released. Initially a subreddit, it eventually migrated to Discord, where it now has roughly 50,000 members. [...] Today, the Unstable Diffusion server hosts AI-generated porn in a range of different art styles, sexual preferences and kinks. [...] Users in these channels can invoke the bot to generate art that fits the theme, which they can then submit to a "starboard" if they're especially pleased with the results. Unstable Diffusion claims to have generated over 4,375,000 images to date. On a semiregular basis, the group hosts competitions that challenge members to recreate images using the bot, the results of which are used in turn to improve Unstable Diffusion's models. As it grows, Unstable Diffusion aspires to be an "ethical" community for AI-generated porn -- i.e. one that prohibits content like child pornography, deepfakes and excessive gore. Users of the Discord server must abide by the terms of service and submit to moderation of the images that they generate; Chaudhry claims the server employs a filter to block images containing people in its "named persons" database and has a full-time moderation team.

"Chaudhry sees Unstable Diffusion evolving into an organization to support broader AI-powered content generation, sponsoring dev groups and providing tools and resources to help teams build their own systems," reports TechCrunch. "He claims that Equilibrium AI secured a spot in a startup accelerator program from an unnamed 'large cloud compute provider' that comes with a 'five-figure' grant in cloud hardware and compute, which Unstable Diffusion will use to expand its model training infrastructure."

In addition to the grant, Unstable Diffusion will launch a Kickstarter campaign and seek venture funding, Chaudhry says.

"We plan to create our own models and fine-tune and combine them for specialized use cases which we shall spin off into new brands and products," Chaudhry added.

Intel

Intel Confirms Alder Lake BIOS Source Code Leaked (tomshardware.com)

Tom's Hardware reports: We recently broke the news that Intel's Alder Lake BIOS source code had been leaked to 4chan and Github, with the 6GB file containing tools and code for building and optimizing BIOS/UEFI images. We reported the leak within hours of the initial occurrence, so we didn't yet have confirmation from Intel that the leak was genuine. Intel has now issued a statement to Tom's Hardware confirming the incident:

"Our proprietary UEFI code appears to have been leaked by a third party. We do not believe this exposes any new security vulnerabilities as we do not rely on obfuscation of information as a security measure. This code is covered under our bug bounty program within the Project Circuit Breaker campaign, and we encourage any researchers who may identify potential vulnerabilities to bring them to our attention through this program...."


The BIOS/UEFI of a computer initializes the hardware before the operating system has loaded, so among its many responsibilities is establishing connections to certain security mechanisms, like the TPM (Trusted Platform Module). Now that the BIOS/UEFI code is in the wild and Intel has confirmed it as legitimate, nefarious actors and security researchers alike will undoubtedly probe it to search for potential backdoors and security vulnerabilities....

Intel hasn't confirmed who leaked the code or where and how it was exfiltrated. However, we do know that the GitHub repository, now taken down but already replicated widely, was created by an apparent employee of LC Future Center, a China-based ODM that manufactures laptops for several OEMs, including Lenovo.

Thanks to Slashdot reader Hmmmmmm for sharing the news.

AI

YouTuber Trains AI On 4Chan's Most Hateful Board (engadget.com)

An anonymous reader quotes a report from Engadget: As Motherboard and The Verge note, YouTuber Yannic Kilcher trained an AI language model using three years of content from 4chan's Politically Incorrect (/pol/) board, a place infamous for its racism and other forms of bigotry. After implementing the model in ten bots, Kilcher set the AI loose on the board -- and it unsurprisingly created a wave of hate. In the space of 24 hours, the bots wrote 15,000 posts that frequently included or interacted with racist content. They represented more than 10 percent of posts on /pol/ that day, Kilcher claimed.

Nicknamed GPT-4chan (after OpenAI's GPT-3), the model learned to not only pick up the words used in /pol/ posts, but an overall tone that Kilcher said blended "offensiveness, nihilism, trolling and deep distrust." The video creator took care to dodge 4chan's defenses against proxies and VPNs, and even used a VPN to make it look like the bot posts originated from the Seychelles. The AI made a few mistakes, such as blank posts, but was convincing enough that it took roughly two days for many users to realize something was amiss. Many forum members only noticed one of the bots, according to Kilcher, and the model created enough wariness that people accused each other of being bots days after Kilcher deactivated them.

"It's a reminder that trained AI is only as good as its source material," concludes the report.

AI

Eric Schmidt Thinks AI Is As Powerful As Nukes

An anonymous reader quotes a report from Motherboard: Former Google CEO Eric Schmidt compared AI to nuclear weapons and called for a deterrence regime similar to the mutually-assured destruction that keeps the world's most powerful countries from destroying each other. Schmidt talked about the dangers of AI at the Aspen Security Forum at a panel on national security and artificial intelligence on July 22. While fielding a question about the value of morality in tech, Schmidt explained that he, himself, had been naive about the power of information in the early days of Google. He then called for tech to be better in line with the ethics and morals of the people it serves and made a bizarre comparison between AI and nuclear weapons.

Schmidt imagined a near future where China and the U.S. needed to cement a treaty around AI. "In the 50s and 60s, we eventually worked out a world where there was a 'no surprise' rule about nuclear tests and eventually they were banned," Schmidt said. "It's an example of a balance of trust, or lack of trust, it's a 'no surprises' rule. I'm very concerned that the U.S. view of China as corrupt or Communist or whatever, and the Chinese view of America as failing will allow people to say 'Oh my god, they're up to something,' and then begin some kind of conundrum. Begin some kind of thing where, because you're arming or getting ready, you then trigger the other side. We don't have anyone working on that and yet AI is that powerful."

Schmidt imagined a near future where both China and the U.S. would have security concerns that force a kind of deterrence treaty between them around AI. He speaks of the 1950s and '60s when diplomacy crafted a series of controls around the most deadly weapons on the planet. But for the world to get to a place where it instituted the Nuclear Test Ban Treaty, SALT II, and other landmark agreements, it took decades of nuclear explosions and, critically, the destruction of Hiroshima and Nagasaki. The bombings that destroyed the two Japanese cities at the end of World War II killed tens of thousands of people and proved to the world the everlasting horror of nuclear weapons. The governments of Russia and China then rushed to acquire the weapons. The way we live with the possibility these weapons will be used is through something called mutual assured destruction (MAD), a theory of deterrence that ensures if one country launches a nuke, it's possible that every other country will too. We don't use the most destructive weapon on the planet because of the possibility that doing so will destroy, at the very least, civilization around the globe.

"The problem with AI is not that it has the potentially world destroying force of a nuclear weapon," writes Motherboard's Matthew Gault. "It's that AI is only as good as the people who designed it and that they reflect the values of their creators. AI suffers from the classic 'garbage in, garbage out' problem: Racist algorithms make racist robots, all AI carries the biases of its creators, and a chatbot trained on 4chan becomes vile..."

"AI is a reflection of its creator. It can't level a city in a 1.2 megaton blast. Not unless a human teaches it to do so."

The Internet

Connecticut Will Pay a Security Analyst 150K To Monitor Election Memes (popsci.com)

An anonymous reader quotes a report from Popular Science: Ahead of the upcoming midterm elections, Connecticut is hiring a "security analyst" tasked with monitoring and addressing online misinformation. The New York Times first reported this new position, saying the job description will include spending time on "fringe sites like 4chan, far-right social networks like Gettr and Rumble and mainstream social media sites." The goal is to identify election-related rumors and attempt to mitigate the damage they might cause by flagging them to platforms that have misinformation policies and promoting educational content that can counter those false narratives.

Connecticut Governor Ned Lamont's midterm budget (PDF), approved in early May, set aside more than $6 million to make improvements to the state's election system. That includes $4 million to upgrade the infrastructure used for voter registration and election management and $2 million for a "public information campaign" that will provide information on how to vote. The full-time security analyst role is recommended to receive $150,000. "Over the last few election cycles, malicious foreign actors have demonstrated the motivation and capability to significantly disrupt election activities, thus undermining public confidence in the fairness and accuracy of election results," the budget stated, as an explanation for the funding.

While the role is a first for Connecticut, the NYT noted that it's part of a growing nationwide trend. Colorado, for example, has a Rapid Response Election Security Cyber Unit tasked with monitoring online misinformation, as well as identifying "cyber-attacks, foreign interference, and disinformation campaigns." Originally created in anticipation of the 2020 presidential election, which proved to be fruitful ground for misinformation, the NYT says the unit is being "redeployed" this year. Other states, including Arizona, California, Idaho, and Oregon, are similarly funding election information initiatives in an attempt to counter misinformation, provide educational information, or do both.

Social Networks

Can Tech Firms Prevent Violent Videos Circulating on the Internet? (theguardian.com)

This week New York's attorney general announced they're officially "launching investigations into the social media companies that the Buffalo shooter used to plan, promote, and stream his terror attack." Slashdot reader echo123 points out that Discord confirmed that roughly 30 minutes before the attack a "small group" was invited to join the shooter's server. "None of the people he invited to review his writings appeared to have alerted law enforcement," reports the New York Times, "and the massacre played out much as envisioned."

But meanwhile, another Times article tells a tangentially-related story from 2019 about what ultimately happened to "a partial recording of a livestream by a gunman while he murdered 51 people that day at two mosques in Christchurch, New Zealand." For more than three years, the video has remained undisturbed on Facebook, cropped to a square and slowed down in parts. About three-quarters of the way through the video, text pops up urging the audience to "Share THIS...." Online writings apparently connected to the 18-year-old man accused of killing 10 people at a Buffalo, New York, grocery store Saturday said that he drew inspiration for a livestreamed attack from the Christchurch shooting. The clip on Facebook — one of dozens that are online, even after years of work to remove them — may have been part of the reason that the Christchurch gunman's tactics were so easy to emulate.

In a search spanning 24 hours this week, The New York Times identified more than 50 clips and online links with the Christchurch gunman's 2019 footage. They were on at least nine platforms and websites, including Reddit, Twitter, Telegram, 4chan and the video site Rumble, according to the Times' review. Three of the videos had been uploaded to Facebook as far back as the day of the killings, according to the Tech Transparency Project, an industry watchdog group, while others were posted as recently as this week. The clips and links were not difficult to find, even though Facebook, Twitter and other platforms pledged in 2019 to eradicate the footage, pushed partly by public outrage over the incident and by world governments. In the aftermath, tech companies and governments banded together, forming coalitions to crack down on terrorist and violent extremist content online. Yet even as Facebook expunged 4.5 million pieces of content related to the Christchurch attack within six months of the killings, what the Times found this week shows that a mass killer's video has an enduring — and potentially everlasting — afterlife on the internet.

"It is clear some progress has been made since Christchurch, but we also live in a kind of world where these videos will never be scrubbed completely from the internet," said Brian Fishman, a former director of counterterrorism at Facebook who helped lead the effort to identify and remove the Christchurch videos from the site in 2019....

Facebook, which is owned by Meta, said that for every 10,000 views of content on the platform, only an estimated five were of terrorism-related material. Rumble and Reddit said the Christchurch videos violated their rules and that they were continuing to remove them. Twitter, 4chan and Telegram did not respond to requests for comment.

For what it's worth, this week CNN also republished an email they'd received in 2016 from 4chan's current owner, Hiroyuki Nishimura. The gist of the email? "If I liked censorship, I would have already done that."

But Slashdot reader Bruce66423 also shares an interesting observation from The Guardian's senior tech reporter about the major tech platforms. "According to Hany Farid, a professor of computer science at UC Berkeley, there is a tech solution to this uniquely tech problem. Tech companies just aren't financially motivated to invest resources into developing it." Farid's work includes research into robust hashing, a tool that creates a fingerprint for videos that allows platforms to find them and their copies as soon as they are uploaded...

Farid: It's not as hard a problem as the technology sector will have you believe... The core technology to stop redistribution is called "hashing" or "robust hashing" or "perceptual hashing". The basic idea is quite simple: you have a piece of content that is not allowed on your service either because it violated terms of service, it's illegal or for whatever reason, you reach into that content, and extract a digital signature, or a hash as it's called.... That's actually pretty easy to do. We've been able to do this for a long time. The second part is that the signature should be stable even if the content is being modified, when somebody changes say the size or the color or adds text. The last thing is you should be able to extract and compare signatures very quickly.

So if we had a technology that satisfied all of those criteria, Twitch would say, we've identified a terror attack that's being live-streamed. We're going to grab that video. We're going to extract the hash and we are going to share it with the industry. And then every time a video is uploaded with the hash, the signature is compared against this database, which is being updated almost instantaneously. And then you stop the redistribution.
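The pipeline Farid describes can be sketched in a few lines. This is a toy "average hash," not the production systems platforms actually use (e.g. PhotoDNA or PDQ, which are far more robust), but it illustrates the three properties he lists: a signature is easy to extract, it stays stable under small modifications like a brightness shift, and two signatures can be compared very quickly. The frame data and thresholds here are invented for illustration.

```python
# Toy perceptual ("average") hashing sketch: each bit of the 64-bit hash
# records whether a pixel in an 8x8 grayscale frame is brighter than the
# frame's average. Small edits shift all pixels together, so the bits
# (and therefore the hash) tend not to change.

def average_hash(pixels):
    """Compute a 64-bit perceptual hash from an 8x8 grid of 0-255 values."""
    flat = [p for row in pixels for p in row]
    avg = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > avg else 0)
    return bits

def hamming_distance(h1, h2):
    """Count differing bits: a small distance means likely the same content."""
    return bin(h1 ^ h2).count("1")

# A toy "frame": bright left half, dark right half.
frame = [[200] * 4 + [30] * 4 for _ in range(8)]
# The same frame after a mild re-encode (every pixel brightened slightly).
altered = [[min(255, p + 10) for p in row] for row in frame]

h1, h2 = average_hash(frame), average_hash(altered)
# The hashes still match, so a re-upload of the edited copy would be flagged.
print(hamming_distance(h1, h2))  # 0
```

In the scheme Farid outlines, a platform that identifies an attack video would share its hash with an industry-wide database, and every subsequent upload would have its hash compared against that database before going live.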

It's a problem of collaboration across the industry and it's a problem of the underlying technology. And if this was the first time it happened, I'd understand. But this is not, this is not the 10th time. It's not the 20th time. I want to emphasize: no technology's going to be perfect. It's battling an inherently adversarial system. But this is not a few things slipping through the cracks.... This is a complete catastrophic failure to contain this material. And in my opinion, as it was with New Zealand and as it was the one before then, it is inexcusable from a technological standpoint.

"These are now trillion-dollar companies we are talking about collectively," Farid points out later. "How is it that their hashing technology is so bad?
Crime

Gunman Livestreams Killing of 10 On Twitch - After Radicalization On 4chan (nbcnews.com) 481

Slashdot reader DevNull127 writes: 10 people were killed in a grocery store in Buffalo, New York this afternoon — and three more were injured — by a gunman who livestreamed the massacre on Twitch. "A Twitch spokesperson said the platform has investigated and confirmed that the stream was removed 'less than two minutes after the violence started,'" reports NBC News.

The Raw Story reports that the 18-year-old suspected gunman had also apparently posted a 106-page manifesto online prior to the attack. A researcher at George Washington University program on extremism studied the manifesto, and points out that the suspected shooter "states that he was radicalized online on 4chan and was inspired by Brenton Tarrant's manifesto and livestreamed mass shooting in New Zealand."

The suspect reportedly used an assault rifle.

Less than two weeks ago, Slashdot posted the following:

28-year-old Brenton Tarrant killed 51 people in New Zealand in 2019. The Associated Press reports that at that point he'd been reading 4chan for 14 years, according to his mother — since the age of 14.

The year before, 25-year-old Alek Minassian, who killed 11 people in Toronto in 2018, namechecked 4chan in a pre-attack Facebook post.

But the Guardian now adds another story from nine days ago — when a 23-year-old shooter with 1,000 rounds of ammunition opened fire from his apartment in Washington D.C. "Just two minutes after the shooting began, someone under the username "Raymond Spencer" logged onto the normally-anonymous 4chan and started a new thread titled 'shool [sic] shooting'. The newly published message contained a link — to a 30-second video of images captured from the digital scope of Spencer's rifle...."

NBC News reported that while Saturday's suspected shooter was livestreaming, "Some users of the website 4chan discussed the attack, and at least one archived the video in real-time, releasing photos of dead civilians inside the supermarket over the course of Saturday afternoon."
Crime

D.C. Shooter Shared Video of His Attack on 4chan, Then Edited Wikipedia Page (theguardian.com) 198

28-year-old Brenton Tarrant killed 51 people in New Zealand in 2019. The Associated Press reports that at that point he'd been reading 4chan for 14 years, according to his mother — since the age of 14.

The year before, 25-year-old Alek Minassian, who killed 11 people in Toronto in 2018, namechecked 4chan in a pre-attack Facebook post.

But the Guardian now adds another story from nine days ago — when a 23-year-old shooter with 1,000 rounds of ammunition opened fire from his apartment in Washington D.C. Just two minutes after the shooting began, someone under the username "Raymond Spencer" logged onto the normally-anonymous 4chan and started a new thread titled "shool [sic] shooting". The newly published message contained a link — to a 30-second video of images captured from the digital scope of Spencer's rifle....

Even as police stormed the apartment building where Spencer hid, with officers maneuvering past a surveillance camera that he had set up in the hallway and was monitoring, Spencer continued to post to the message board. "They're in the wrong part of the building right now searching," he posted at one point. A few minutes later: "Waiting for police to catch up with me."

As he waited, Spencer logged on to Wikipedia to edit the entry for Edmund Burke School, which he had just opened fire on....

Police believe Spencer shot himself to death as officers breached his apartment.

Emulation (Games)

Leaked Game Boy Emulators For Switch Were Made By Nintendo, Experts Suggest (arstechnica.com) 9

An anonymous reader quotes a report from Ars Technica: In most cases, the release of yet another classic console emulator for the Switch wouldn't be all that noteworthy. But experts tell Ars that a pair of Game Boy and Game Boy Advance emulators for the Switch that leaked online Monday show signs of being official products of Nintendo's European Research & Development division (NERD). That has some industry watchers hopeful that Nintendo may be planning official support for some emulated classic portable games through the Nintendo Switch Online subscription service in the future. The two leaked emulators -- codenamed Hiroko for Game Boy and Sloop for Game Boy Advance -- first hit the Internet as fully compiled NSP files and encrypted NCA files linked from a 4chan thread posted to the Pokemon board Monday afternoon. Later in that thread, the original poster suggested that these emulators "are official in-house development versions of Game Boy Color/Advance emulators for Nintendo Switch Online, which have not been announced or released."

In short order, dataminers examining the package found a .git folder in the ROM. That folder includes commit logs that reference supposed development work circa August 2020 from a NERD employee and, strangely enough, a developer at Panasonic Vietnam. NERD's history includes work on the software for the NES Classic and SNES Classic, as well as the GameCube emulation technology in last year's Super Mario 3D All-Stars, so the division's supposed involvement wouldn't be out of the ordinary. Footage from the leaked Game Boy Advance emulator also includes a "(c) Nintendo" and "(c) 2019 -- 2020 Nintendo" at various points. While suggestive, none of this is exactly hard evidence of Nintendo's involvement in making these emulators. Some skepticism might be warranted, too, because there is some historical precedent for an emulator developer trying to get more attention by pretending their homebrew product is a "leaked" official Nintendo release.

Some observers also pointed to other reasons to doubt that these leaks were an "official" Nintendo work product. ModernVintageGamer and others noted that the leaked GBA emulator includes an "export state to Flashcart" option designed "to confirm original behavior" on "original hardware," according to the GUI. That option is illustrated with a picture of an EZFlash third-party flash cartridge in the emulator interface, an odd choice given Nintendo's previous litigious attacks on such flashcart makers. A "savedata memory" option in the emulator also references the ability to "inter-operate with flashcarts, other emulators, [and] fan websites..." That's a list that would serve as a decent Johnny Carson "Carnac the Magnificent" setup for "things Nintendo wouldn't want to reference in an official product."
A prominent video game historian whom Ars consulted said they were "99.9% sure [the emulators are] real" and that "personally I'm absolutely convinced of its legitimacy."
Graphics

Vice Mocks GIFs as 'For Boomers Now, Sorry'. (And For Low-Effort Millennials) (vice.com) 227

"GIF folders were used by ancient civilisations as a way to store and catalogue animated pictures that were once employed to convey emotion," Vice writes: Okay, you probably know what a GIF folder is — but the concept of a special folder needed to store and save GIFs is increasingly alien in an era where every messaging app has its own in-built GIF library you can access with a single tap. And to many youngsters, GIFs themselves are increasingly alien too — or at least, okay, increasingly uncool. "Who uses gifs in 2020 grandma," one Twitter user speedily responded to Taylor Swift in August that year when the singer-songwriter opted for an image of Dwayne "The Rock" Johnson mouthing the words "oh my god" to convey her excitement at reaching yet another career milestone.

You don't have to look far to find other tweets or TikToks mocking GIFs as the preserve of old people — which, yes, now means millennials. How exactly did GIFs become so embarrassing? Will they soon disappear forever, like Homer Simpson backing up into a hedge...?

Gen Z might think GIFs are beloved by millennials, but at the same time, many millennials are starting to see GIFs as a boomer plaything. And this is the first and easiest explanation as to why GIFs are losing their cultural cachet. Whitney Phillips, an assistant professor of communication at Syracuse University and author of multiple books on internet culture, says that early adopters have always grumbled when new (read: old) people start to encroach on their digital space. Memes, for example, were once subcultural and niche. When Facebook came along and made them more widespread, Redditors and 4Chan users were genuinely annoyed that people capitalised on the fruits of their posting without putting in the cultural work. "That democratisation creates a sense of disgust with people who consider themselves insiders," Phillips explains. "That's been central to the process of cultural production online for decades at this point...."

In 2016, Twitter launched its GIF search function, as did WhatsApp and iMessage. A year later, Facebook introduced its own GIF button in the comment section on the site. GIFs became not only centralised but highly commercialised, culminating in Facebook buying GIPHY for $400 million in 2020. "The more GIFs there are, maybe the less they're regarded as being special treasures or gifts that you're giving people," Phillips says. "Rather than looking far and wide to find a GIF to send you, it's clicking the search button and typing a word. The gift economy around GIFs has shifted...."

Linda Kaye, a cyberpsychology professor at Edge Hill University, hasn't done direct research in this area but theorises that the ever-growing popularity of video-sharing on TikTok means younger generations are more used to "personalised content creation", and GIFs can seem comparatively lazy.

The GIF was invented in 1987 "and it's important to note the format has already fallen out of favour and had a comeback multiple times before," the article points out. It cites Jason Eppink, an independent artist and curator who curated an exhibition on GIFs for the Museum of the Moving Image in New York in 2014, who highlighted how GIFs were popular with GeoCities users in the 90s, "so when Facebook launched, they didn't support GIFs.... They were like, 'We don't want this ugly symbol of amateur web to clutter our neat and uniform cool new website." But then GIFs had a resurgence on Tumblr.

Vice concludes that while even Eppink no longer uses GIFs, "Perhaps the waxing and waning popularity of the GIF is an ironic mirror of the format itself — destined to repeat endlessly, looping over and over again."
Privacy

Twitch Source Code and Business Data Leaked (therecord.media) 66

An unknown individual has leaked the source code and business data of video streaming platform Twitch via a torrent file posted on the 4chan discussion board earlier today. From a report: The leaker said they shared the data as a response to the recent "hate raids" -- coordinated bot attacks posting hateful and abusive content in Twitch chats -- that have plagued the platform's top streamers over the summer. "Their community is [...] a disgusting toxic cesspool, so to foster more disruption and competition in the online video streaming space, we have completely pwned them, and in part one, are releasing the source code from almost 6,000 internal Git repositories," the leaker said earlier today. The leaker claims that the leak contains the "entirety of twitch.tv, with commit history going back to its early beginnings, mobile, desktop and video game console Twitch clients, various proprietary SDKs and internal AWS services used by Twitch, every other property that Twitch owns including IGDB and CurseForge, an unreleased Steam competitor from Amazon Game Studios, and Twitch SOC internal red teaming tools."

Twitch has confirmed the breach. In a tweet it said, "We can confirm a breach has taken place. Our teams are working with urgency to understand the extent of this. We will update the community as soon as additional information is available."
The Military

Secret Military Aircraft Possibly Exposed On TikTok (warisboring.com) 86

An anonymous reader quotes a report from War Is Boring: An OPSEC violation has once again made a case for why using TikTok should be a punishable offense in the military, this time after someone revealed some US stealth technology testing going on and posted it to the Chinese government-affiliated platform. The stealthy object (possibly a component of a new drone or plane) was filmed on a tractor-trailer platform at the Helendale Radar Cross Section Facility. After making its debut on a social media platform tied to America's top adversary, images of the object quickly made their way around the internet, gracing everything from 4chan to Reddit. It is unknown what project the object is tied to, though speculation has ranged from a new Boeing product to even the famed "Tic Tac" UFO sighted by Naval Aviators in recent years. Steve Trimble of Aviation Week wrote in a tweet: "I showed this to Gen Mark Kelly, Air Combat Command chief. His immediate reply was that he had no idea what it was. And then he took my laptop and stared at it for about 20 seconds. His expression was (WARNING: my impression) somewhere between confused and impressed."
The Internet

The 'Dead Internet' Theory Posits Forums are Now Almost Entirely Overrun By AI (theatlantic.com) 147

Ideas from 4chan (including its paranormal section) have percolated into the "dead internet" theory, writes the Atlantic, with a seminal post on another forum by "IlluminatiPirate" now arguing that the internet is almost entirely overrun by artificial intelligence: Like lots of other online conspiracy theories, the audience for this one is growing because of discussion led by a mix of true believers, sarcastic trolls, and idly curious lovers of chitchat... Peppered with casually offensive language, the post suggests that the internet died in 2016 or early 2017, and that now it is "empty and devoid of people," as well as "entirely sterile." Much of the "supposedly human-produced content" you see online was actually created using AI, IlluminatiPirate claims, and was propagated by bots, possibly aided by a group of "influencers" on the payroll of various corporations that are in cahoots with the government. The conspiring group's intention is, of course, to control our thoughts and get us to purchase stuff... He argues that all modern entertainment is generated and recommended by an algorithm; gestures at the existence of deepfakes, which suggest that anything at all may be an illusion; and links to a New York story from 2018 titled "How Much of the Internet Is Fake? Turns Out, a Lot of It, Actually."

"I think it's entirely obvious what I'm subtly suggesting here given this setup," the post continues. "The U.S. government is engaging in an artificial intelligence powered gaslighting of the entire world population." So far, the original post has been viewed more than 73,000 times...

The theory has become fodder for dramatic YouTube explainers, including one that summarizes the original post in Spanish and has been viewed nearly 260,000 times. Speculation about the theory's validity has started appearing in the widely read Hacker News forum and among fans of the massively popular YouTube channel Linus Tech Tips. In a Reddit forum about the paranormal, the theory is discussed as a possible explanation for why threads about UFOs seem to be "hijacked" by bots so often. The theory's spread hasn't been entirely organic. IlluminatiPirate has posted a link to his manifesto in several Reddit forums that discuss conspiracy theories... Anyway ... dead-internet theory is pretty far out-there. But unlike the internet's many other conspiracy theorists, who are boring or really gullible or motivated by odd politics, the dead-internet people kind of have a point... [Y]ou could even say that the point of the theory is so obvious, it's cliché — people talk about longing for the days of weird web design and personal sites and listservs all the time. Even Facebook employees say they miss the "old" internet. The big platforms do encourage their users to make the same conversations and arcs of feeling and cycles of outrage happen over and over, so much so that people may find themselves acting like bots, responding on impulse in predictable ways to things that were created, in all likelihood, to elicit that very response.

That 2018 article in New York magazine had argued that (at that time) a majority of web traffic was probably coming from bots — including especially high bot traffic on YouTube — while even the engagement metrics for major sites like Facebook had been gamed or inflated.

But whether or not that's changed, the Atlantic shares a compelling argument from a forum poster arguing that their very presence in this discussion proves they must be a bot. "If I was real I'm pretty sure I'd be out there living each day to the fullest and experiencing everything I possibly could with every given moment of the relatively infinitesimal amount of time I'll exist for instead of posting on the internet about nonsense."
Google

4chan Founder Chris 'Moot' Poole Has Left Google (cnbc.com) 91

Chris Poole, who founded controversial online community 4chan before joining Google in 2016, has left the search giant after jumping among several groups within the company, CNBC has learned. From the report: Poole's last official day at Google was April 13th, according to an internal repository viewed by CNBC, which described his last role as a product manager. Oftentimes, employee shares attached to hiring vest at the five-year mark, though it's unclear if that's a reason for Poole's departure now. Poole, who goes by the moniker "Moot," founded 4chan in 2003 at age 15. It grew into one of the most influential and controversial online communities to date. Rolling Stone famously called him a boy-genius and the "Mark Zuckerberg of the online underground." [...]

Poole revealed in 2016 that he'd joined Google as a continuation of his work, and in a now-removed post, stated he'd use his "experience from a dozen years of building online communities" and "grow in ways one simply cannot on their own." He joined as product manager in the photos and streams unit, which oversaw social networking efforts under VP Bradley Horowitz at the time. That sparked speculation that the company hired him to help it revamp its social media ambitions, some of which aimed to compete with Facebook. Poole jumped between several different roles during his five years. At one point, he reportedly became a partner at Google's in-house start-up incubator, Area 120, which was just getting off the ground in 2016. He then became a product manager in Google's Maps division, according to Crunchbase.
