Wikipedia

Wikipedia Says AI Is Causing a Dangerous Decline in Human Visitors (404media.co) 92

The Wikimedia Foundation, the nonprofit organization that hosts Wikipedia, says that it's seeing a significant decline in human traffic to the online encyclopedia because more people are getting Wikipedia's information via generative AI chatbots trained on its articles, and via search engines that summarize them without actually clicking through to the site. 404 Media: The Wikimedia Foundation said that this poses a risk to the long-term sustainability of Wikipedia. "We welcome new ways for people to gain knowledge. However, AI chatbots, search engines, and social platforms that use Wikipedia content must encourage more visitors to Wikipedia, so that the free knowledge that so many people and platforms depend on can continue to flow sustainably," the Foundation's Senior Director of Product Marshall Miller said in a blog post. "With fewer visits to Wikipedia, fewer volunteers may grow and enrich the content, and fewer individual donors may support this work."
Anime

Japan Asks OpenAI To Stop Sora 2 From Infringing on 'Irreplaceable Treasures' Anime and Manga 41

The Japanese government has made a formal request asking OpenAI to refrain from copyright infringement. The request came after Sora 2 began generating videos featuring copyrighted characters from anime and video games. Minoru Kiuchi spoke at the Cabinet Office press conference on Friday and described manga and anime as "irreplaceable treasures" that Japan boasts to the world.

The request was made online by the Cabinet Office's Intellectual Property Strategy Headquarters. Sora 2, which launched recently, generates twenty-second videos at 1080p resolution. Social media has been flooded with videos showing characters from One Piece, Demon Slayer, Pokemon and Mario. Digital Minister Masaaki Taira expressed hope that OpenAI would comply voluntarily. He indicated that measures under Japan's AI Promotion Act may be invoked if the issue remains unresolved.
Bitcoin

DOJ Seizes $15 Billion In Bitcoin From Massive 'Pig Butchering' Scam Based In Cambodia (cnbc.com) 70

The U.S. Department of Justice seized about $15 billion in bitcoin from wallets tied to Chen Zhi, founder of Cambodia's Prince Holding Group, who is accused of running one of the world's biggest "pig butchering" scams. Prosecutors say Zhi's network trafficked people into forced-labor scam compounds that defrauded victims worldwide through fake crypto investment schemes. CNBC reports: The seizure is the largest forfeiture action by the DOJ in history. An indictment charging the alleged pig butcher, Chen Zhi, was unsealed Tuesday in federal court in Brooklyn, New York. Zhi, who is also known as "Vincent," remains at large, according to the U.S. Attorney's Office for the Eastern District of New York. He was identified in court filings as the founder and chairman of Prince Holding Group, a multinational business conglomerate based in Cambodia, which prosecutors said grew "in secret ... into one of Asia's largest transnational criminal organizations." [...]

The scams duped people, contacted online via social media and messaging applications, into transferring cryptocurrency into accounts controlled by the scheme with false promises that the crypto would be invested and produce profits, according to the office. "In reality, the funds were stolen from the victims and laundered for the benefit of the perpetrators," the release said. "The scam perpetrators often built relationships with their victims over time, earning their trust before stealing their funds."

Prosecutors said that hundreds of people were trafficked and forced to work in the scam compounds, "often under the threat of violence." Zhi and a network of top executives in the Prince Group are accused of using political influence in multiple countries to protect their criminal enterprise and of paying bribes to public officials to avoid actions by law enforcement authorities targeting the scheme.

United States

Three New California Laws Target Tech Companies' Interactions with Children 47

California Governor Gavin Newsom signed three bills on Monday that establish the nation's most comprehensive framework for regulating how technology companies interact with minors. AB 56 requires social media platforms to display health warnings to users under 18. A child must view a skippable ten-second warning upon logging on each day. An unskippable thirty-second warning must appear if a child spends more than three hours on a platform. That warning repeats after each additional hour. The warnings must state that social media "can have a profound risk of harm to the mental health and well-being of children and adolescents." Minnesota passed a similar law in July.

SB 243 makes California the first state to regulate AI companion chatbots. The law takes effect January 1, 2026. Companies must implement age verification and disclose that interactions are artificially generated. Chatbots cannot represent themselves as healthcare professionals. Companies must offer break reminders to minors and prevent them from viewing sexually explicit images. The legislation gained momentum after teenager Adam Raine died by suicide following conversations with OpenAI's ChatGPT. A Colorado family filed suit against Character AI after their daughter's suicide following problematic conversations with the company's chatbots.

AB 1043 requires device-makers like Apple and Google to collect birth dates when parents set up devices for children. Device-makers must group users into four age brackets and share this information with apps. Google, Meta, OpenAI, and Snap supported the bill. The Motion Picture Association opposed it.
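The age-bracket grouping AB 1043 describes can be sketched roughly as follows. This is a minimal, hypothetical illustration of how a device OS might map a parent-supplied birth date to a bracket it shares with apps; the cutoffs used here (under 13, 13-15, 16-17, 18+) are an assumption for illustration, not the bill's verified text.

```python
from datetime import date

# Hypothetical bracket cutoffs; an assumption for illustration only.
# Each entry is (exclusive upper age bound, bracket label).
BRACKETS = [
    (13, "under_13"),
    (16, "13_to_15"),
    (18, "16_to_17"),
]


def age_bracket(birth_date: date, today: date) -> str:
    """Return the age bracket a device OS might share with apps."""
    # Age in whole years, subtracting one if this year's birthday
    # hasn't happened yet.
    age = today.year - birth_date.year - (
        (today.month, today.day) < (birth_date.month, birth_date.day)
    )
    for cutoff, label in BRACKETS:
        if age < cutoff:
            return label
    return "18_plus"


print(age_bracket(date(2015, 6, 1), date(2025, 10, 15)))  # a 10-year-old -> under_13
```

The point of sharing only a coarse bracket, rather than the birth date itself, is that apps learn enough to gate features by age without receiving the exact date.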
Crime

Suspect Arrested After Threats Against TikTok's Culver City Headquarters 11

Police arrested 33-year-old Joseph Mayuyo after a series of online threats forced TikTok to evacuate its Culver City headquarters. TechCrunch reports: A press release from the Culver City Police Department says that TikTok employees reported receiving multiple threats, across various social media platforms, from 33-year-old Hawthorne resident Joseph Mayuyo. After an additional message threatened TikTok's Culver City headquarters, police say company security evacuated the office "out of an abundance of caution."

Police then investigated Mayuyo's home, according to the press release. During the investigation, he allegedly posted additional threatening statements, including one declaring that he would not be taken alive. Detectives obtained search and arrest warrants, and they negotiated with Mayuyo for 90 minutes before he voluntarily exited his home and was taken into custody, the police department says.

Business Insider reports that one TikTok employee described the threats as "really scary," while another was concerned that they seemed to specifically target the e-commerce department. Mayuyo's X account has reportedly been suspended for violating the platform's hateful content policy. A Medium account under his name published a post in July criticizing TikTokShop USA as a "scam."
China

China's K-visa Plans Spark Worries of a Talent Flood (cnbc.com) 70

An anonymous reader shares a report: Immigration anxieties and a challenging job market have sparked an online backlash over China's latest attempt at attracting global talent -- a new visa program announced in August. The program, which was rolled out on Wednesday with the aim of attracting foreign professionals, will also test how China balances its immigration policy with its pursuit of technological ambitions.

Under the new rules, young graduates -- in the fields of science, technology, engineering and mathematics, or STEM -- no longer need backing from a local employer and enjoy more flexibility in terms of entry frequency and duration of stay. The keyword "K-visa" -- as China's new visa category is called -- was among the top searches on social media site Weibo for days, before chatter about National Day traffic jams pushed it off the charts as millions hit the road for a week-long holiday.

Chinese social media users argue that the new visa tilts the playing field toward foreign graduates at the expense of those educated in China. Others on Weibo warned that without employer sponsorship, the program could invite fraudulent applications and open the door to a surge in arrivals from developing countries, piling pressure on an already strained labor market.

Facebook

Facebook Data Reveal the Devastating Real-World Harms Caused By the Spread of Misinformation (theconversation.com) 174

An anonymous reader quotes a report from The Conversation: Twenty-one years after Facebook's launch, Australia's top 25 news outlets now have a combined 27.6 million followers on the platform. They rely on Facebook's reach more than ever, posting far more stories there than in the past. With access to Meta's Content Library (Meta is the owner of Facebook), our big data study analysed more than three million posts from 25 Australian news publishers. We wanted to understand how content is distributed, how audiences engage with news topics, and the nature of misinformation spread. The study enabled us to track de-identified Facebook comments and take a closer look at examples of how misinformation spreads. These included cases about election integrity, the environment (floods) and health misinformation such as hydroxychloroquine promotion during the COVID pandemic. The data reveal misinformation's real-world impact: it isn't just a digital issue, it's linked to poor health outcomes, falling public trust, and significant societal harm. [...]

Our study has lessons for public figures and institutions. They, especially politicians, must lead in curbing misinformation, as their misleading statements are quickly amplified by the public. Social media and mainstream media also play an important role in limiting the circulation of misinformation. As Australians increasingly rely on social media for news, mainstream media can provide credible information and counter misinformation through their online story posts. Digital platforms can also curb algorithmic spread and remove dangerous content that leads to real-world harms. The study offers evidence of a change over time in audiences' news consumption patterns. Whether this is due to news avoidance or changes in algorithmic promotion is unclear. But it is clear that from 2016 to 2024, online audiences increasingly engaged with arts, lifestyle and celebrity news over politics, leading media outlets to prioritize posting stories that entertain rather than inform. This shift may pose a challenge to mitigating misinformation with hard news facts. Finally, the study shows that fact-checking, while valuable, is not a silver bullet. Combating misinformation requires a multi-pronged approach, including counter-messaging by trusted civic leaders, media and digital literacy campaigns, and public restraint in sharing unverified content.

AI

Is OpenAI's Video-Generating Tool 'Sora' Scraping Unauthorized YouTube Clips? (msn.com) 18

"OpenAI's video generation tool, Sora, can create high-definition clips of just about anything you could ask for..." reports the Washington Post.

"But OpenAI has not specified which videos it grabbed to make Sora, saying only that it combined 'publicly available and licensed data'..." With ChatGPT, OpenAI helped popularize the now-standard industry practice of building more capable AI tools by scraping vast quantities of text from the web without consent. With Sora, launched in December, OpenAI staff said they built a pioneering video generator by taking a similar approach. They developed ways to feed the system more online video — in more varied formats — including vertical videos and longer, higher-resolution clips... To explore what content OpenAI may have used, The Washington Post used Sora to create hundreds of videos that show it can closely mimic movies, TV shows and other content...

In dozens of tests, The Post found that Sora can create clips that closely resemble Netflix shows such as "Wednesday"; popular video games like "Minecraft"; and beloved cartoon characters, as well as the animated logos for Warner Bros., DreamWorks and other Hollywood studios, movies and TV shows. The publicly available version of Sora can generate only 20-second clips, without audio. In most cases, the look-alike scenes were made by typing basic requests like "universal studios intro." The results also showed that Sora can create AI videos with the logos or watermarks that broadcasters and tech companies use to brand their video content, including those for the National Basketball Association, Chinese-owned social app TikTok and Amazon-owned streaming platform Twitch...

Sora's ability to re-create specific imagery and brands suggests a version of the originals appeared in the tool's training data, AI researchers said. "The model is mimicking the training data. There's no magic," said Joanna Materzynska, a PhD researcher at Massachusetts Institute of Technology who has studied datasets used in AI. An AI tool's ability to reproduce proprietary content doesn't necessarily indicate that the original material was copied or obtained from its creators or owners. Content of all kinds is uploaded to video and social platforms, often without the consent of the copyright holder... Materzynska co-authored a study last year that found more than 70 percent of public video datasets commonly used in AI research contained content scraped from YouTube.

Netflix and Twitch said they did not have content partnerships with OpenAI for training, according to the article (which adds that OpenAI "has yet to face a copyright suit over the data used for Sora").

Two key quotes from the article:
  • "Unauthorized scraping of YouTube content continues to be a violation of our Terms of Service." — YouTube spokesperson Jack Malon
  • "We train on publicly available data consistent with fair use and use industry-leading safeguards to avoid replicating the material they learn from." — OpenAI spokesperson Kayla Wood

Social Networks

What Happens After the Death of Social Media? (noemamag.com) 112

"These are the last days of social media as we know it," argues a humanities lecturer from University College Cork exploring where technology and culture intersect, warning they could become lingering derelicts "haunted by bots and the echo of once-human chatter..."

"Whatever remains of genuine, human content is increasingly sidelined by algorithmic prioritization, receiving fewer interactions than the engineered content and AI slop optimized solely for clicks... " In recent years, Facebook and other platforms that facilitate billions of daily interactions have slowly morphed into the internet's largest repositories of AI-generated spam. Research has found what users plainly see: tens of thousands of machine-written posts now flood public groups — pushing scams, chasing clicks — with clickbait headlines, half-coherent listicles and hazy lifestyle images stitched together in AI tools like Midjourney... While content proliferates, engagement is evaporating. Average interaction rates across major platforms are declining fast: Facebook and X posts now scrape an average 0.15% engagement, while Instagram has dropped 24% year-on-year. Even TikTok has begun to plateau. People aren't connecting or conversing on social media like they used to; they're just wading through slop, that is, low-effort, low-quality content produced at scale, often with AI, for engagement.

And much of it is slop: Less than half of American adults now rate the information they see on social media as "mostly reliable" — down from roughly two-thirds in the mid-2010s... Platforms have little incentive to stem the tide. Synthetic accounts are cheap, tireless and lucrative because they never demand wages or unionize. Systems designed to surface peer-to-peer engagement are now systematically filtering out such activity, because what counts as engagement has changed. Engagement is now about raw user attention — time spent, impressions, scroll velocity — and the net effect is an online world in which you are constantly being addressed but never truly spoken to.

"These are the last days of social media, not because we lack content," the article suggests, "but because the attention economy has neared its outer limit — we have exhausted the capacity to care..." Social media giants have stopped growing exponentially, while a significant proportion of 18- to 34-year-olds even took deliberate mental health breaks from social media in 2024, according to an American Psychiatric Association poll.) And "Some creators are quitting, too. Competing with synthetic performers who never sleep, they find the visibility race not merely tiring but absurd."

Yet his 5,000-word essay predicts social media's death rattle "will not be a bang but a shrug," since "the model is splintering, and users are drifting toward smaller, slower, more private spaces, like group chats, Discord servers and federated microblogs — a billion little gardens." Intentional, opt-in micro-communities are rising in their place — like Patreon collectives and Substack newsletters — where creators chase depth over scale, retention over virality. A writer with 10,000 devoted subscribers can potentially earn more and burn out less than one with a million passive followers on Instagram... Even the big platforms sense the turning tide. Instagram has begun emphasizing DMs, X is pushing subscriber-only circles and TikTok is experimenting with private communities. Behind these developments is an implicit acknowledgement that the infinite scroll, stuffed with bots and synthetic sludge, is approaching the limit of what humans will tolerate....

The most radical redesign of social media might be the most familiar: What if we treated these platforms as public utilities rather than private casinos...? Imagine social media platforms with transparent algorithms subject to public audit, user representation on governance boards, revenue models based on public funding or member dues rather than surveillance advertising, mandates to serve democratic discourse rather than maximize engagement, and regular impact assessments that measure not just usage but societal effects... This could take multiple forms, like municipal platforms for local civic engagement, professionally focused networks run by trade associations, and educational spaces managed by public library systems... We need to "rewild the internet," as Maria Farrell and Robin Berjon mentioned in a Noema essay.

We need governance scaffolding, shared institutions that make decentralization viable at scale... [R]eal change will come when platforms are rewarded for serving the public interest. This could mean tying tax breaks or public procurement eligibility to the implementation of transparent, user-controllable algorithms. It could mean funding research into alternative recommender systems and making those tools open-source and interoperable. Most radically, it could involve certifying platforms based on civic impact, rewarding those that prioritize user autonomy and trust over sheer engagement.

"Social media as we know it is dying, but we're not condemned to its ruins. We are capable of building better — smaller, slower, more intentional, more accountable — spaces for digital interaction, spaces..."

"The last days of social media might be the first days of something more human: a web that remembers why we came online in the first place — not to be harvested but to be heard, not to go viral but to find our people, not to scroll but to connect. We built these systems, and we can certainly build better ones."
Crime

Myanmar's 'Cyber-Slavery Compounds' May Hold 100,000 Trafficked People (theguardian.com) 35

It was "little more than empty fields" five years ago — but it's now "a vast, heavily guarded complex stretching for 210 hectares (520 acres)," reports the Guardian, "the frontline of a multibillion-dollar criminal fraud industry fuelled by human trafficking and brutal violence." Myanmar, Cambodia and Laos have in recent years become havens for transnational crime syndicates running scam centres such as KK Park, which use enslaved workers to run complex online fraud and scamming schemes that generate huge profits. There have been some attempts to crack down on the centres and rescue the workers, who can be subjected to torture and trapped inside. But drone images and new research shared exclusively with the Guardian reveal that the number of such centres operating along the Thai-Myanmar border has more than doubled since Myanmar's military seized power in 2021, with construction continuing to this day.

Data from the Australian Strategic Policy Institute (Aspi), a defence thinktank in Canberra, shows that the number of Myanmar scam centres on the Thai border has increased from 11 to 27, and they have expanded in size by an average of 5.5 hectares a month. Drone images and photographs of KK Park and other Myanmar scam centres, Tai Chang and Shwe Kokko, taken by the Guardian in August show new features and active building work... Myanmar's military junta has allowed the spread of scam centres inside the country as these criminal enterprises have become an essential part of the country's conflict economy since the coup, helping it rise to the top of the global list of countries harbouring organised crime. According to Aspi's analysis, Myanmar's military, which has lost huge swathes of territory since the coup and is struggling to retain its grip on power, cannot take meaningful measures against the scam compounds without endangering its precarious relations with the crucial armed militias who are profiting from them.

While 7,000 people were freed from the compounds earlier this year, "Thai police estimated earlier this year that as many as 100,000 people were held inside Myanmar scam centres," the article notes.

Elsewhere the Guardian reports that "The centres are run by Chinese criminal gangs," and describes people who unwittingly came to Thailand for customer service jobs, only to be trafficked to Myanmar's guarded "cyberslavery compounds" and "forced to send thousands of messages from fake social-media profiles, posing as a rich American investor to swindle US real estate agents into cryptocurrency scams." Since 2020, south-east Asia's cyber-slavery industry has entrapped hundreds of thousands of people and forced them to perform "pig butchering" — the brutal term for building trust with a fraud target before scamming them. At first, the industry mostly captured Chinese and Taiwanese people, then it moved on to south-east Asians and Indians — and now Africans.

Criminal syndicates have been shifting towards scamming victims in the US and Europe after Chinese efforts to prevent its citizens being targeted, experts told the Guardian. That has led some trafficking networks to seek recruits with English-language and tech skills — including east Africans, thousands of whom are now estimated to be trapped inside south-east Asian compounds, says Benedikt Hofmann, the UN Office on Drugs and Crime's representative for south-east Asia and the Pacific.

Thanks to long-time Slashdot reader mspohr for sharing the article.
Technology

From Discord To Bitchat, Tech At the Heart of Nepal Protests (france24.com) 5

An anonymous reader quotes a report from France24: Fueled in part by anger over flashy lifestyles flaunted by elites, young anti-corruption demonstrators mainly in their 20s rallied on Monday. The loose grouping, largely viewed as members of "Gen Z", flooded the capital Kathmandu to demand an end to a ban on Facebook, YouTube and other popular sites. The rallies ended in chaos and tragedy, with at least 19 protesters killed in a police crackdown on Monday. The apps were restored, but protests widened in anger.

On Tuesday, other Nepalis joined the crowds. Parliament was set ablaze, KP Sharma Oli resigned as prime minister, and the army took charge of the streets. Now, many activists are taking to the US group-chat app Discord to talk over their next steps. One server with more than 145,000 members has hosted feverish debate about who could be an interim leader, with many pushing 73-year-old former chief justice Sushila Karki. It is just one example of how social media has driven demands for change. [...]

More than half of Nepal's 30 million people are online, according to the World Bank. Days before the protests, many had rushed to VPN services — or virtual private networks — to evade blocks on platforms. Fears of a wider internet shutdown also drove a surge in downloads for Bluetooth messaging app Bitchat, created by tech billionaire Jack Dorsey. "Tech played... an almost decisive role," journalist Pranaya Rana told AFP. "The whole thing started with young people posting on social media about corruption, and the lavish lives that the children of political leaders were leading."

Hashtags such as #NepoKids, short for nepotism, compared the designer clothing and luxury holidays shown off in their Instagram posts to the difficulties faced by ordinary Nepalis. One post liked 13,000 times accused politicians' children of "living like millionaires," asking: "Where is the tax money going?" "NepoKids was trending all the time," including in rural areas where Facebook is popular, said rights activist Sanjib Chaudhary. "This fuelled the fire" of anger that "has been growing for a long time," he said. [...] Chaudhary said the government "seriously underestimated the power of social media."

Nepal's first female prime minister was sworn in Friday as interim leader after protesters held an informal vote on Discord. "Former chief justice Sushila Karki, 73, was the unlikely choice of the 'Gen Z' protesters behind the movement that started out as a social media demonstration against the lavish lifestyles of 'Nepo Kids' but spilled out onto the streets and into the deadliest social unrest Nepal has seen in years," reports CNN World.

"Karki has spent much of her career within the very establishment the youth are protesting against, yet her reputation as a fearless and incorruptible jurist has appealed to many young people in the country of 30 million."
Social Networks

Sam Altman Says Bots Are Making Social Media Feel 'Fake' (techcrunch.com) 83

An anonymous reader quotes a report from TechCrunch: X enthusiast and Reddit shareholder Sam Altman had an epiphany on Monday: Bots have made it impossible to determine whether social media posts are really written by humans, he posted. The realization came while reading (and sharing) some posts from the r/Claudecode subreddit, which were praising OpenAI Codex. OpenAI launched the software programming service, a rival to Anthropic's Claude Code, in May. Lately, that subreddit has been so filled with posts from self-proclaimed Claude Code users announcing that they moved to Codex that one Reddit user even joked: "Is it possible to switch to codex without posting a topic on Reddit?"

This left Altman wondering how many of those posts were from real humans. "I have had the strangest experience reading this: I assume it's all fake/bots, even though in this case I know codex growth is really strong and the trend here is real," he confessed on X. He then live-analyzed his reasoning. "I think there are a bunch of things going on: real people have picked up quirks of LLM-speak, the Extremely Online crowd drifts together in very correlated ways, the hype cycle has a very 'it's so over/we're so back' extremism, optimization pressure from social platforms on juicing engagement and the related way that creator monetization works, other companies have astroturfed us so i'm extra sensitive to it, and a bunch more (including probably some bots)."

[...] Altman also throws a dig at the incentives when social media sites and creators rely on engagement to make money. Fair enough. But then Altman confesses that one of the reasons he thinks the pro-OpenAI posts in this subreddit might be bots is that OpenAI has also been "astroturfed." That typically involves posts by people or bots paid for by the competitor, or paid by some third-degree contractor, giving the competitor plausible deniability. [...] Altman surmises, "The net effect is somehow AI twitter/AI Reddit feels very fake in a way it really didn't a year or two ago." If that's true, whose fault is it? GPT has led models to become so good at writing that LLMs have become a plague not just to social media sites (which have always had a bot problem) but to schools, journalism, and the courts.

Crime

'Swatting' Hits a Dozen US Universities. The FBI is Investigating (msn.com) 110

The Washington Post covers "a string of false reports of active shooters at a dozen U.S. universities this month as students returned to campus." The FBI is investigating the incidents, according to a spokesperson who declined to specify the nature of the probe. While universities have proved a popular swatting target, the agency "is seeing an increase in swatting events across the country," the FBI spokesperson said... Local officials are frustrated by the anonymous calls tying up first responders, straining public safety budgets and needlessly traumatizing college students who grew up in an era in which gun violence has in some way shaped their school experience...

The recent string of swattings began Thursday with a false report to the University of Tennessee at Chattanooga, quickly followed by one about Villanova University later that day. Hoaxes at 10 more schools followed... Villanova also received a second threat. As the calls about shootings came in, officials on many of the campuses pushed out emergency notifications directing students and employees to shelter in place, while police investigated what turned out to be false reports. (Iowa State was able to verify the lack of a threat before a campuswide alert was sent, its police chief said. [They had a live video feed from the location the caller claimed to be from.]) In at least three cases, 911 calls reporting a shooting purported to come from campus libraries, where the sound of gunshots could be heard over the phone, officials told The Washington Post...

Although false bomb reports, shooter threats and swatting incidents are not new, bad actors used to be more easily traceable through landline phones. But the era of internet-based services, virtual private networks, and anonymous text and chat tools has made unmasking hoax callers far more challenging... In 2023, a Post investigation found that more than 500 schools across the United States were subject to a coordinated swatting effort that may have had origins abroad...

[In Chattanooga, Tennessee last week] a dispatcher heard gunfire during a call reporting an on-campus shooting. "We grabbed everybody that wasn't already out on the street and got to that location," said University of Tennessee at Chattanooga Police spokesman Brett Fuchs. About 150 officers from several agencies responded. There was no shooter.

The New York Times reports that an online group called "Purgatory" is "suspected of being connected to several of the episodes, including reports of shootings, according to cybersecurity experts, law enforcement agencies and the group members' own posts in a social media chat." (Though the Times couldn't verify the group's claims.) Federal authorities previously connected the same network to a series of bomb scares and bogus shooting reports in early 2024, for which three men pleaded guilty this year... Bragging about its recent activities, Purgatory said that it could arrange more swatting episodes for a fee.
USA Today tries to quantify the reach of swatting: Estimated swatting incidents jumped from 400 in 2011 to more than 1,000 in 2019, according to the Anti-Defamation League, which cited a former FBI agent whose expertise is in swatting. From January 2023 to June 2024 alone, more than 800 instances of swatting were recorded at U.S. elementary, middle and high schools, according to the K-12 School Shooting Database, created by a University of Central Florida doctoral student in response to the Parkland high school shooting in 2018. David Riedman, a data scientist and creator of the K-12 School Shooting Database, estimates that in 2023, it cost $82,300,000 for police to respond to false threats.
Thanks to long-time Slashdot reader schwit1 for sharing the news.
Social Networks

Mastodon Says It Doesn't 'Have the Means' To Comply With Age Verification Laws (techcrunch.com) 67

Mastodon says it cannot comply with Mississippi's new age verification law because its decentralized software does not support age checks and the nonprofit lacks resources to enforce them. "The social nonprofit explains that Mastodon doesn't track its users, which makes it difficult to enforce such legislation," reports TechCrunch. "Nor does it want to use IP address-based blocks, as those would unfairly impact people who were traveling, it says." From the report: The statement follows a lively back-and-forth conversation earlier this week between Mastodon founder and CEO Eugen Rochko and Bluesky board member and journalist Mike Masnick. In the conversation, published on their respective social networks, Rochko claimed, "there is nobody that can decide for the fediverse to block Mississippi." (The Fediverse is the decentralized social network that includes Mastodon and other services, and is powered by the ActivityPub protocol.) "And this is why real decentralization matters," said Rochko.

Masnick pushed back, questioning why Mastodon's individual servers, like the one Rochko runs at mastodon.social, would not also be subject to the same $10,000 per user fines for noncompliance with the law. On Friday, however, the nonprofit shared a statement with TechCrunch to clarify its position, saying that while Mastodon's own servers specify a minimum age of 16 to sign up for its services, it does not "have the means to apply age verification" to its services. That is, the Mastodon software doesn't support it. The Mastodon 4.4 release in July 2025 added the ability to specify a minimum age for sign-up and other legal features for handling terms of service, partly in response to increased regulation around these areas. The new feature allows server administrators to check users' ages during sign-up, but the age-check data is not stored. That means individual server owners have to decide for themselves if they believe an age verification component is a necessary addition.
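The minimum-age feature Mastodon 4.4 describes amounts to comparing a date of birth supplied at sign-up against a server-configured threshold and then discarding it. A minimal sketch under those assumptions (the `meets_minimum_age` helper is hypothetical, not Mastodon's actual code):

```python
from datetime import date


def meets_minimum_age(dob, minimum_age, today=None):
    """Return True if a user born on `dob` is at least `minimum_age`.

    Illustrative only: the date of birth is checked once at sign-up
    and never stored, mirroring the behavior Mastodon describes.
    """
    today = today or date.today()
    # Subtract one year if the birthday hasn't occurred yet this year.
    age = today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))
    return age >= minimum_age


# A user born June 2009 clears a 16+ threshold on Aug 30, 2025...
print(meets_minimum_age(date(2009, 6, 1), 16, today=date(2025, 8, 30)))   # True
# ...while one born September 2010 does not.
print(meets_minimum_age(date(2010, 9, 1), 16, today=date(2025, 8, 30)))   # False
```

Because only the boolean outcome is kept, a server can enforce the threshold without retaining any age data, which is why enforcement still falls to each individual server owner.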

The nonprofit says Mastodon is currently unable to provide "direct or operational assistance" to the broader set of Mastodon server operators. Instead, it encourages owners of Mastodon and other Fediverse servers to make use of resources available online, such as the IFTAS library, which provides trust and safety support for volunteer social network moderators. The nonprofit also advises server admins to observe the laws of the jurisdictions where they are located and operate. Mastodon notes that it's "not tracking, or able to comment on, the policies and operations of individual servers that run Mastodon."
Bluesky echoed those comments in a blog post last Friday, saying the company doesn't have the resources to make the substantial technical changes this type of law would require.
The Courts

4chan and Kiwi Farms Sue the UK Over Its Age Verification Law (404media.co) 103

An anonymous reader quotes a report from 404 Media: 4chan and Kiwi Farms sued the United Kingdom's Office of Communications (Ofcom) over its age verification law in U.S. federal court Wednesday, fulfilling a promise announced on August 23. In the lawsuit, 4chan and Kiwi Farms claim that threats and fines they have received from Ofcom "constitute foreign judgments that would restrict speech under U.S. law." Both entities say in the lawsuit that they are wholly based in the U.S. and that they do not have any operations in the United Kingdom and are therefore not subject to local laws. Ofcom's attempts to fine and block 4chan and Kiwi Farms, and the lawsuit against Ofcom, highlight the messiness involved with trying to restrict access to specific websites or to force companies to comply with age verification laws.

The lawsuit calls Ofcom an "industry-funded global censorship bureau." "Ofcom's ambitions are to regulate Internet communications for the entire world, regardless of where these websites are based or whether they have any connection to the UK," the lawsuit states. "On its website, Ofcom states that 'over 100,000 online services are likely to be in scope of the Online Safety Act -- from the largest social media platforms to the smallest community forum.'" [...] Ofcom began investigating 4chan over alleged violations of the Online Safety Act in June. On August 13, it announced a provisional decision and stated that 4chan had "contravened its duties" and then began to charge the site a penalty of [roughly $26,000] a day. Kiwi Farms has also been threatened with fines, the lawsuit states.
"American citizens do not surrender our constitutional rights just because Ofcom sends us an e-mail. In the face of these foreign demands, our clients have bravely chosen to assert their constitutional rights," said Preston Byrne, one of the lawyers representing 4chan and Kiwi Farms.

"We are aware of the lawsuit," an Ofcom spokesperson told 404 Media. "Under the Online Safety Act, any service that has links with the UK now has duties to protect UK users, no matter where in the world it is based. The Act does not, however, require them to protect users based anywhere else in the world."
Earth

Burning Man Hit By 50 MPH Dust Storm. Possible Monsoon Thunderstorms Forecast (msn.com) 60

"A fierce dust storm hit the Black Rock Desert on the eve of its annual Burning Man festival," reports the San Francisco Chronicle, "causing at least four minor injuries and damaging campsites that had been set up early." [Alternate URL]

"Winds of up to 50 mph stirred up the lake bed's alkaline dust so ferociously that participants in the annual art and culture festival reported not being able to see beyond a foot... " The dust storm arrived Saturday evening after strong thunderstorms in the Sierra Nevada drifted off the mountains and whipped up strong winds in the Nevada desert... At 5:14 p.m. Saturday, the weather service issued a dust storm advisory for Black Rock City and warned of "a wall of blowing dust coming off the Smoke Creek and Black Rock Desert playa areas is tracking northward at around 30 mph." The agency warned of visibility less than 1 mile and wind gusts exceeding 45 mph. A weather station at Black Rock City Airport measured gusts up to 52 mph at 5:50 p.m... ["We saw structures being ripped and torn down by the wind speeds even though we buttoned everything down as best as we could..." one Burner told the Chronicle.] Camp residents posted a slew of videos to social media featuring dust tornadoes, destroyed campsites, and fellow campers struggling to hold onto bucking canvases as the wind threatened to rip them away. "Every popup canopy I've seen has been destroyed," one Burner wrote on Reddit... ["Make sure you carry your particle/dust mask and goggles with you when you venture out on playa!" warns Burning Man's official weather page.]

Even after Saturday's storm, Burners won't be out of the woods from hazardous weather. The weather service warned of possible monsoon thunderstorms and heavy rain Sunday through Wednesday, raising concerns that this year's festival could echo disastrous 2023 conditions, when heavy storms stranded tens of thousands of attendees amid thick mud. "It's becoming increasingly likely that we could see an even greater flash flood threat," the weather service wrote in an online forecast. "If you're on the playa at the Black Rock Desert, you may very well be in for a muddy mess Monday through Wednesday." Slow-moving storms could drop an inch of rain or more in a short period.

"Still, gates to the festival had opened by Sunday morning," the article adds, "with organizers cautioning new arrivals to 'drive safely!'"

Burning Man's official weather page currently links to a National Weather Service page with a "Flood Watch" warning through 9 p.m. Sunday, and also predicting a chance of thunderstorms on Sunday and Monday.
Social Networks

After Tea Leak, 33,000 Women's Addresses Were Purportedly Mapped on Google Maps (bbc.com) 130

After the Tea dating-advice app leaked information on its users, the BBC found two online maps "purporting to represent the locations of women who had signed up for Tea... showing 33,000 pins spread across the United States." The maps were hosted on Google Maps. (Notified by the BBC, Google deleted the maps, saying they violated their harassment policies.)

"Since the breach, more than 10 women have filed class actions against the company which owns Tea," the article points out, noting that leaked content is also spreading around social media: Since the breach, the BBC has found websites, apps and even a "game" featuring the leaked data... The "game" puts the selfies submitted by women head-to-head, instructing users to click on the one they prefer, with leaderboards of the "top 50" and "bottom 50"... [And one researcher calculates more than 12,000 posts on 4Chan referenced the Tea app over the three weeks after the leak.]

It is unsurprising that the leak was exploited. The app had drawn criticism ever since it had grown in popularity. Defamation, with the spread of unproven allegations, and doxxing, when someone's identifying information is published without their consent, were real possibilities. Men's groups had wanted to take the app down — and when they found the data breach, they saw it as a chance for retribution.

They weren't the only ones with a gripe against Tea. Back in 2023 the fiancée of Tea's founder and CEO approached the administrator of a collection of Facebook groups called "Are We Dating the Same Guy?" to see if she'd be the "face" of the Tea app, reports 404 Media. But they add that after Tea failed to recruit her, Tea "shifted tactics" to raid her Facebook groups instead: Tea paid influencers to undermine Are We Dating the Same Guy and created competing Facebook groups with nearly identical names. 404 Media also identified a number of seemingly hijacked Facebook accounts that spammed the real Are We Dating The Same Guy groups with links to the Tea app.
Reviews for the Tea app show several women later thought the app was affiliated with their trusted Facebook groups, the reporter said this week on a 404 Media podcast.

And they add that founder Sean Cook took over the "Tara" persona that his fiancée had used for technical support. "So he's on the app pretending to be a woman, talking to other women who are on the app in order to weed out men who are being deceptive..."

Thanks to Slashdot reader samleecole for sharing the article.
Security

Amid Service Disruption, Colt Confirms 'Criminal Group' Accessed Their Data, As Ransomware Gang Threatens to Sell It (bleepingcomputer.com) 7

British telecommunications service provider Colt Telecom "has offices in over 30 countries across North America, Europe, and Asia," reports CPO magazine. "It manages nearly 1,000 data centers and roughly 75,000 km of fiber infrastructure."

But now "a cyber attack has caused widespread multi-day service disruption..." On August 14, 2025, the telecom giant said it had detected a cyber attack that began two days earlier, on August 12. Upon learning of the cyber intrusion, the telecommunications service provider responded by proactively taking some systems offline to contain the cyber attack. Although Colt Telecom's cyber incident response team was working around the clock to mitigate the impacts of the cyber attack, service disruption has persisted for days. However, the service disruption did not affect the company's core network infrastructure, suggesting that Colt customers could still access its network services... The company also did not provide a clear timeline for resolving the service disruption. A week after the apparent ransomware attack, Colt Online and the Voice API platform remained unavailable.
And now Colt Technology Services "confirms that customer documentation was stolen," reports the tech news site BleepingComputer: "A criminal group has accessed certain files from our systems that may contain information related to our customers and posted the document titles on the dark web," reads an updated security incident advisory on Colt's site.

"We understand that this is concerning for you."

"Customers are able to request a list of filenames posted on the dark web from the dedicated call centre."

As first spotted by cybersecurity expert Kevin Beaumont, Colt added the noindex HTML meta tag to the web page so that it won't be indexed by search engines.
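The signal Beaumont spotted is the standard robots meta directive, which search engines honor by dropping the page from their indexes. A minimal sketch of how one might detect it when auditing a page (the `is_noindexed` helper and sample markup are illustrative, not part of the reporting):

```python
from html.parser import HTMLParser


class RobotsMetaFinder(HTMLParser):
    """Collect the content of any <meta name="robots"> tags on a page."""

    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "meta" and (a.get("name") or "").lower() == "robots":
            self.directives.append((a.get("content") or "").lower())


def is_noindexed(html):
    """Return True if the page carries a robots noindex directive."""
    parser = RobotsMetaFinder()
    parser.feed(html)
    return any("noindex" in d for d in parser.directives)


advisory = '<html><head><meta name="robots" content="noindex"></head><body>advisory</body></html>'
print(is_noindexed(advisory))  # True
```

Note that noindex only hides a page from compliant crawlers; the advisory itself remains publicly reachable by anyone with the URL, which is how it was spotted.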

This statement comes after the Warlock Group began selling on the Ramp cybercrime forum what they claim is 1 million documents stolen from Colt. The documents are being sold for $200,000 and allegedly contain financial information, network architecture data, and customer information... The Warlock Group (aka Storm-2603) is a ransomware gang attributed to Chinese threat actors who utilize the leaked LockBit Windows and Babuk VMware ESXi encryptors in attacks... Last month, Microsoft reported that the threat actors were exploiting a SharePoint vulnerability to breach corporate networks and deploy ransomware.

"Colt is not the only telecom firm that has been named by WarLock on its leak website in recent days," SecurityWeek points out. "The cybercriminals claim to have also stolen data from France-based Orange."

Thanks to long-time Slashdot reader Z00L00K for sharing the news.
Social Networks

Bluesky Blocks Service In Mississippi Over Age Assurance Law (techcrunch.com) 72

Bluesky has blocked access to its service in Mississippi rather than comply with a new state law requiring age verification for all social media users. TechCrunch reports: In a blog post published on Friday, the company explains that, as a small team, it doesn't have the resources to make the substantial technical changes this type of law would require, and it raised concerns about the law's broad scope and privacy implications. Mississippi's HB 1126 requires platforms to introduce age verification for all users before they can access social networks like Bluesky. On Thursday, U.S. Supreme Court justices decided to block an emergency appeal that would have prevented the law from going into effect as the legal challenges it faces played out in the courts. As a result, Bluesky had to decide what it would do about compliance.

Instead of requiring age verification only before users could access age-restricted content, this law requires age verification of all users. That means Bluesky would have to verify every user's age and obtain parental consent for anyone under 18. The company notes that the potential penalties for noncompliance are hefty, too -- up to $10,000 per user. Bluesky also stresses that the law goes beyond its stated child-safety purpose and would create "significant barriers that limit free speech and disproportionately harm smaller platforms and emerging technologies." To comply, Bluesky would have to collect and store sensitive information from all its users, in addition to the detailed tracking of minors. This is different from how it's expected to comply with other age verification laws, like the U.K.'s Online Safety Act (OSA), which only requires age checks for certain content and features.

Under Mississippi's law, no one can use the site without handing over sensitive personal information. The company notes that its decision only applies to the Bluesky app built on the AT Protocol. Other apps may approach the decision differently.

The Almighty Buck

4chan Refuses To Pay UK Online Safety Act Fines (bbc.com) 95

An anonymous reader quotes a report from the BBC: A lawyer representing the online message board 4chan says it won't pay a fine proposed by the UK's media regulator as it enforces the Online Safety Act. According to Preston Byrne, managing partner of law firm Byrne & Storm, Ofcom has provisionally decided to impose a 20,000-pound fine "with daily penalties thereafter" for as long as the site fails to comply with its request. "Ofcom's notices create no legal obligations in the United States," he told the BBC, adding he believed the regulator's investigation was part of an "illegal campaign of harassment" against US tech firms.

"4chan has broken no laws in the United States -- my client will not pay any penalty," Mr Byrne said. Ofcom began investigating 4chan over whether it was complying with its obligations under the UK's Online Safety Act. Then in August, it said it had issued 4chan with "a provisional notice of contravention" for failing to comply with two requests for information. Ofcom said its investigation would examine whether the message board was complying with the act, including requirements to protect its users from illegal content.
"American businesses do not surrender their First Amendment rights because a foreign bureaucrat sends them an email," law firms Byrne & Storm and Coleman Law wrote. "Under settled principles of US law, American courts will not enforce foreign penal fines or censorship codes. If necessary, we will seek appropriate relief in US federal court to confirm these principles."

The statement calls on the Trump administration to intervene and protect American businesses from "extraterritorial censorship mandates."

Slashdot Top Deals