The Media

CNN Criticizes Microsoft's 'Making a Mess of the News' By Replacing MSN's Staff With AI (cnn.com)

CNN decries "false and bizarre" news stories being published by Microsoft on MSN.com, "one of the world's most trafficked websites and a place where millions of Americans get their news every day." Microsoft's decision to increasingly rely on the use of automation and artificial intelligence over human editors to curate its homepage appears to be behind the site's recent amplification of false and bizarre stories, people familiar with how the site works told CNN.

The site, which comes pre-loaded as the default start page on devices running Microsoft software, including on Microsoft's latest "Edge" browser... employed more than 800 editors in 2018 to help select and curate news stories shown to millions of readers around the world. But in recent years Microsoft has laid off editors, some of whom were told they were being replaced by "automation," which they understood to mean AI.

CNN points out that while Microsoft's president "has publicly lectured on the responsible use" of AI, "the apparent role of AI in Microsoft's recent amplification of bogus stories raises questions about the company's public adoption of the nascent technology and for the journalism industry as a whole." CNN notes that an AI-generated poll urging readers to guess the cause of a swimmer's death "was not the first public blunder caused by Microsoft's embrace of AI." In September Microsoft republished a story about Brandon Hunter, a former NBA player who died unexpectedly at the age of 42, under the headline, "Brandon Hunter useless at 42." Then, in October, Microsoft republished an article that claimed that San Francisco Supervisor Dean Preston had resigned from his position after criticism from Elon Musk. The story was entirely false.

Some of the articles featured by Microsoft were initially published by obscure websites that might have gone unnoticed amid the daily deluge of online misinformation. But Microsoft's decision to republish articles from fringe outlets has elevated those stories to potentially millions of additional readers, breathing life into their claims. Editors who formerly worked for Microsoft told CNN that these kinds of false stories, or virtually any other articles from low-quality websites, would not be prominently featured by Microsoft were it not for its use of AI. Ryn Pfeuffer, who worked intermittently as a contractor for Microsoft for eight years, said she received a call in May 2020 with the news that her entire team was being laid off. 2020 was the year, a Microsoft spokesperson told CNN in a statement on Wednesday, that the company began transitioning to a "personalized feed" that is "tailored by an algorithm to the interests of our audiences."

MSN "has also published other junk content, including bogus stories about fishermen catching mermaids and Bigfoot spottings," reports the tech news site Futurism, "in the wake of ditching its human editors in favor of automation.

"Noticing a pattern yet? The company pumps out trash-tier AI content, then waits until it's called out publicly to quietly delete it and move onto the next trainwreck." We've known that Microsoft's MSN news portal has been pumping out a garbled, AI-generated firehose for well over a year now. The company has been using the website to distribute misleading and oftentimes incomprehensible garbage to hundreds of millions of readers per month... And if MSN presents a vision of how the tech industry's obsession with AI is going to play out in the information ecosystem, we're in for a rough ride.
CNN got this reaction from a user whose default browser changed from Chrome to Microsoft Edge after a software update — and discovered their home page had switched to MSN.com. "It felt like I was standing in line at the grocery store reading a National Enquirer front page."

A company spokesperson assured CNN that Microsoft was "committed to addressing the recent issue of low quality articles."
Social Networks

Will The Future See Interconnected Social Media Platforms? (theverge.com)

"For the last two decades, our social networking and social media platforms have been universes unto themselves," writes the Verge's editor-at-large: Each has its own social graph, charting who you follow and who follows you. Each has its own feed, its own algorithms, its own apps, and its own user interfaces (though they've all pretty much landed on the same aesthetics over time). Each also has its own publishing tools, its own character limits, its own image filters. Being online means constantly flitting between these places and their ever-shifting sets of rules and norms. Now, though, we may be at the beginning of a new era. Instead of a half-dozen platforms competing to own your entire life, apps like Mastodon, Bluesky, Pixelfed, Lemmy, and others are building a more interconnected social ecosystem.

If this ActivityPub-fueled change takes off, it will break every social network into a thousand pieces. All posts, of all types, will be separated from their platforms. We'll get new tools for creating those posts, new tools for reading them, new tools for organizing them, and new tools for moderating them and sharing them and remixing them and everything else besides.
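ActivityPub, the W3C protocol underlying Mastodon, Pixelfed, and Lemmy, is what makes this separation of posts from platforms possible: every post is a plain JSON object that any compliant server can fetch and render. A minimal sketch of a "Create" activity wrapping a "Note" post, with all actor and object URLs invented purely for illustration:

```python
import json

# Hypothetical actor and post URLs -- any ActivityPub-speaking server
# could consume this same object, regardless of which app produced it.
note = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Note",
    "id": "https://example.social/users/alice/notes/1",
    "attributedTo": "https://example.social/users/alice",
    "content": "Hello from the open social web!",
    "to": ["https://www.w3.org/ns/activitystreams#Public"],
}

# Posting is modeled as a "Create" activity performed by an actor.
activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Create",
    "id": "https://example.social/users/alice/activities/1",
    "actor": "https://example.social/users/alice",
    "object": note,
}

print(json.dumps(activity, indent=2))
```

Because the object is self-describing JSON rather than a row in one platform's private database, separate reading, moderation, and remixing tools can all operate on the same post.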

He's talking about a decades-old concept called POSSE: Publish (on your) Own Site, Syndicate Everywhere. ("Sometimes the P is also 'Post,' and the E can be 'Elsewhere.' The idea is the same either way.") The idea is that you, the poster, should post on a website that you own. Not an app that can go away and take all your posts with it, not a platform with ever-shifting rules and algorithms. Your website. But people who want to read or watch or listen to or look at your posts can do that almost anywhere because your content is syndicated to all those platforms... [Y]our blog becomes the hub for everything, your main home on the internet.
The article argues that for now, "the best we have are tools like Micro.blog, a six-year-old platform for cross-posters." But the article ultimately envisions a future with not just new posting tools, but also new reading tools "with different ideas about how to display and organize posts."
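In practice, the "your site as hub" model usually comes down to publishing a machine-readable feed that syndication targets and feed readers can poll. A minimal sketch using only the Python standard library to emit an RSS 2.0 feed for locally-authored posts; the site name, URLs, and posts are invented for illustration:

```python
import xml.etree.ElementTree as ET

# Hypothetical posts authored on your own site -- the canonical copies.
posts = [
    {"title": "Why I own my words", "link": "https://example.blog/own-your-words"},
    {"title": "Syndicating everywhere", "link": "https://example.blog/syndicate"},
]

def build_rss(site_title: str, site_link: str, items: list) -> str:
    """Build a minimal RSS 2.0 feed that syndication targets can poll."""
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = site_title
    ET.SubElement(channel, "link").text = site_link
    for post in items:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = post["title"]
        ET.SubElement(item, "link").text = post["link"]
    return ET.tostring(rss, encoding="unicode")

feed = build_rss("My Blog", "https://example.blog", posts)
```

A real setup would add dates, GUIDs, and full content, and a cross-poster like the Micro.blog service mentioned above would then push each feed item out to the individual platforms.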
Privacy

Face Search Engine PimEyes Blocks Searches of Children's Faces (nytimes.com)

PimEyes, a search engine that relies on facial recognition to help people scan billions of images to find photos of themselves on the internet, announced that it has banned searches of minors as part of the company's "no harm policy." The New York Times reports: PimEyes, a subscription-based service that uses facial recognition technology to find online photos of a person, has a database of nearly three billion faces and enables about 118,000 searches per day, according to [PimEyes CEO Giorgi Gobronidze]. The service is advertised as a way for people to search for their own face to find any unknown photos on the internet, but there are no technical measures in place to ensure that users are searching only for themselves. Parents have used PimEyes to find photos of their children on the internet that they had not known about. But the service could also be used nefariously by a stranger. It had previously banned more than 200 accounts for inappropriate searches of children's faces, Mr. Gobronidze said.

"Images of children might be used by the individuals with twisted moral compass and values, such as pedophiles, child predators," Mr. Gobronidze said. PimEyes will still allow searches of minors' faces by human rights organizations that work on children's rights issues, he added. Mr. Gobronidze said that blocking searches of children's faces had been on "the road map" since he acquired the site in 2021, but the protection was fully deployed only this month after the publication of a New York Times article on A.I.-based threats to children. Still, the block isn't airtight. PimEyes is using age detection A.I. to identify photos of minors. Mr. Gobronidze said that it worked well for children under the age of 14 but that it had "accuracy issues" with teenagers.

It also may be unable to identify children as such if they're not photographed from a certain angle. To test the blocking system, The Times uploaded a photo of Mary-Kate and Ashley Olsen from their days as child stars to PimEyes. It blocked the search for the twin who was looking straight at the camera, but the search went through for the other, who was photographed in profile. The search turned up dozens of other photos of the twin as a child, with links to where they appeared online. Mr. Gobronidze said PimEyes was still perfecting its detection system.

AI

Newspapers Want Payment for Articles Used to Power ChatGPT (msn.com)

An anonymous reader shared this report from the Washington Post: For years, tech companies like OpenAI have freely used news stories to build data sets that teach their machines how to recognize and respond fluently to human queries about the world. But as the quest to develop cutting-edge AI models has grown increasingly frenzied, newspaper publishers and other data owners are demanding a share of the potentially massive market for generative AI, which is projected to reach $1.3 trillion by 2032, according to Bloomberg Intelligence.

Since August, at least 535 news organizations — including the New York Times, Reuters and The Washington Post — have installed a blocker that prevents their content from being collected and used to train ChatGPT. Now, discussions are focused on paying publishers so the chatbot can surface links to individual news stories in its responses, a development that would benefit the newspapers in two ways: by providing direct payment and by potentially increasing traffic to their websites. In July, OpenAI cut a deal to license content from the Associated Press as training data for its AI models. The current talks also have addressed that idea, according to two people familiar with the talks who spoke on the condition of anonymity to discuss sensitive matters, but have concentrated more on showing stories in ChatGPT responses.
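The "blocker" in question is typically a robots.txt rule targeting OpenAI's crawler, which identifies itself with the user-agent "GPTBot". A minimal sketch of checking whether such a rule blocks the crawler, using only the Python standard library; the sample rules and URLs are illustrative, not any particular publisher's actual file:

```python
from urllib import robotparser

# Illustrative robots.txt of the kind publishers deploy: deny OpenAI's
# GPTBot crawler everywhere while leaving other crawlers unaffected.
SAMPLE_ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

def blocks_gptbot(robots_txt: str, url: str) -> bool:
    """Return True if the given robots.txt denies GPTBot access to `url`."""
    parser = robotparser.RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return not parser.can_fetch("GPTBot", url)

if __name__ == "__main__":
    print(blocks_gptbot(SAMPLE_ROBOTS_TXT, "https://example.com/news/story"))
```

Note that robots.txt is honor-system only: it blocks compliant crawlers like GPTBot but is not a technical access control.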

Other sources of useful data are also looking for leverage. Reddit, the popular social message board, has met with top generative AI companies about being paid for its data, according to a person familiar with the matter, speaking on the condition of anonymity to discuss private negotiations. If a deal can't be reached, Reddit is considering blocking search crawlers from Google and Bing, which would prevent the forum from being discovered in searches and reduce the number of visitors to the site. But the company believes the trade-off would be worth it, the person said, adding: "Reddit can survive without search."

"The moves mark a growing sense of urgency and uncertainty about who profits from online information," the article argues. "With generative AI poised to transform how users interact with the internet, many publishers and other companies see fair payment for their data as an existential issue."

They also cite James Grimmelmann, a professor of digital and information law at Cornell University, who suggests OpenAI's decision to negotiate "may reflect a desire to strike deals before courts have a chance to weigh in on whether tech companies have a clear legal obligation to license — and pay for — content."
Cloud

Amazon and Microsoft's Cloud Dominance Referred for UK Competition Probe (cnbc.com)

Britain's anti-competition regulators have been tasked with investigating Microsoft and Amazon's dominance of the cloud computing market. From a report: Media watchdog Ofcom on Thursday referred its inquiry for further investigation to the Competition and Markets Authority, kickstarting the process. Ofcom said that it had identified features which make it more difficult for U.K. businesses to switch cloud providers, or use multiple cloud services, and that it is "particularly concerned" about the position of market leaders Amazon and Microsoft. "Some UK businesses have told us they're concerned about it being too difficult to switch or mix and match cloud provider, and it's not clear that competition is working well," Fergal Farragher, Ofcom's director responsible for the market study, said in a statement Thursday.

"So, we're referring the market to the CMA for further scrutiny, to make sure business customers continue to benefit from cloud services." Ofcom is concerned that so-called "hyperscalers" like Amazon Web Services and Microsoft Azure are limiting competition in the cloud computing market. These are companies that allow businesses of all stripes to carry out critical computing tasks -- like storage and management of data, delivery of content, analytics and intelligence -- over the internet, rather than through servers stored on site, or "on premise."

Security

GPUs From All Major Suppliers Are Vulnerable To New Pixel-Stealing Attack (arstechnica.com)

An anonymous reader quotes a report from Ars Technica: GPUs from all six of the major suppliers are vulnerable to a newly discovered attack that allows malicious websites to read the usernames, passwords, and other sensitive visual data displayed by other websites, researchers have demonstrated in a paper (PDF) published Tuesday. The cross-origin attack allows a malicious website from one domain -- say, example.com -- to effectively read the pixels displayed by a website from example.org, or another different domain. Attackers can then reconstruct them in a way that allows them to view the words or images displayed by the latter site. This leakage violates a critical security principle that forms one of the most fundamental security boundaries safeguarding the Internet. Known as the same origin policy, it mandates that content hosted on one website domain be isolated from all other website domains. [...]

GPU.zip works only when the malicious attacker website is loaded into Chrome or Edge. The reason: For the attack to work, the browser must:

1. allow cross-origin iframes to be loaded with cookies
2. allow rendering SVG filters on iframes and
3. delegate rendering tasks to the GPU

For now, GPU.zip is more of a curiosity than a real threat, but that assumes that Web developers properly restrict sensitive pages from being embedded by cross-origin websites. End users who want to check if a page has such restrictions in place should look for the X-Frame-Options or Content-Security-Policy headers in the source.
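The embedding restrictions the article refers to are standard HTTP response headers: `X-Frame-Options` (with values like `DENY` or `SAMEORIGIN`) and a `Content-Security-Policy` containing a `frame-ancestors` directive. A simplified sketch of inspecting a page's headers for them, assuming the headers have already been fetched into a dict; a real check would also handle multiple CSP headers and origin lists:

```python
def framing_restricted(headers: dict) -> bool:
    """Return True if response headers restrict embedding the page in an iframe."""
    # HTTP header names are case-insensitive, so normalize them first.
    lower = {k.lower(): v for k, v in headers.items()}

    # X-Frame-Options: DENY or SAMEORIGIN both block cross-origin embedding.
    xfo = lower.get("x-frame-options", "").strip().upper()
    if xfo in ("DENY", "SAMEORIGIN"):
        return True

    # CSP frame-ancestors controls who may embed the page; anything other
    # than a bare "*" restricts the set of permitted embedders.
    csp = lower.get("content-security-policy", "")
    for directive in csp.split(";"):
        name, _, value = directive.strip().partition(" ")
        if name.lower() == "frame-ancestors" and value.strip() != "*":
            return True
    return False
```

For example, `framing_restricted({"X-Frame-Options": "DENY"})` returns True, while a page served with neither header would be embeddable and thus, in principle, exposed to this class of attack.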
"This is impactful research on how hardware works," a Google representative said in a statement. "Widely adopted headers can prevent sites from being embedded, which prevents this attack, and sites using the default SameSite=Lax cookie behavior receive significant mitigation against personalized data being leaked. These protections, along with the difficulty and time required to exploit this behavior, significantly mitigate the threat to everyday users. We are in communication and are actively engaging with the reporting researchers. We are always looking to further improve protections for Chrome users."

An Intel representative, meanwhile, said that the chipmaker has "assessed the researcher findings that were provided and determined the root cause is not in our GPUs but in third-party software." A Qualcomm representative said "the issue isn't in our threat model as it more directly affects the browser and can be resolved by the browser application if warranted, so no changes are currently planned." Apple, Nvidia, AMD, and ARM didn't comment on the findings.

An informational write-up of the findings can be found here.
The Media

Can Philanthropy Save Local Newspapers? (washingtonpost.com)

70 million Americans live in a county without a newspaper, according to a 2022 report cited in this editorial by the Washington Post's editorial board:

Who's to blame? The internet, mostly. Whereas deep-pocketed advertisers formerly relied on newspapers to reach their customers, they have since taken to the audience-targeting capabilities of Facebook or Google. Web-based marketplaces also siphoned newspapers' once-robust revenue from classified ads.
But the Post emphasizes one positive new development: "a large pile of cash." In an initiative announced this month, 22 donor organizations, including the Knight Foundation and the John D. and Catherine T. MacArthur Foundation, are teaming up to provide more than $500 million to boost local news over five years — an undertaking called Press Forward... The injection of more than a half-billion dollars is sure to help the quest for a durable and replicable business model.

The even bigger imperative, however, is to elevate local news on the philanthropic food chain so that national and hometown funders prioritize this pivotal American institution. Failure on this front places more pressure on public policy solutions, and government activism mixes poorly with independent journalism...

One of the goals for Press Forward, accordingly, is building out the infrastructure — "from legal support to membership programs" — relied upon by local news providers to deliver their product. Jim Brady, vice president of journalism at the Knight Foundation, says it's easier than ever for news entrepreneurs to launch a local site because they can plug into existing technologies hammered out by their predecessors — and there's more development work still to fund on this front.

So where to go from here? Local philanthropic interests across the country could take a cue from the Press Forward partners and invest in the news organizations down the street.

China

Researchers Including Microsoft Spot Chinese Disinformation Campaign Using AI-Generated Photos (businesstimes.com.sg)

"Until now, China's influence campaigns have been focused on amplifying propaganda defending its policies on Taiwan and other subjects," reports the New York Times.

But a new piece co-authored by the newspaper's national security correspondent and its misinformation investigative reporter notes a new effort identified by researchers from Microsoft, the RAND Corporation, the University of Maryland, the intelligence company Recorded Future, and news-rating service NewsGuard. And that newly-discovered effort "suggests that Beijing is making more direct attempts to sow discord in the United States."

It began when, sensing an opportunity, "China's increasingly resourceful information warriors pounced" after high winds downed three power lines that sparked wildfires in Hawaii on August 8th... The disaster was not natural, they said in a flurry of false posts that spread across the internet, but was the result of a secret "weather weapon" being tested by the United States. To bolster the plausibility, the posts carried photographs that appeared to have been generated by artificial intelligence programs, making them among the first to use these new tools to bolster the aura of authenticity of a disinformation campaign... Recorded Future first reported that the Chinese government mounted a covert campaign to blame a "weather weapon" for the fires, identifying numerous posts in mid-August falsely claiming that MI6, the British foreign intelligence service, had revealed "the amazing truth behind the wildfire." Posts with the exact same language appeared on social media sites across the internet, including Pinterest, Tumblr, Medium and Pixiv, a Japanese site used by artists. Other inauthentic accounts spread similar content, often accompanied by mislabeled videos, including one from a popular TikTok account, The Paranormal Chic, that showed a transformer explosion in Chile...

The Chinese campaign operated across many of the major social media platforms — and in many languages, suggesting it was aimed at reaching a global audience. Microsoft's Threat Analysis Center identified inauthentic posts in 31 languages, including French, German and Italian, but also in less prominent ones like Igbo, Odia and Guarani. The artificially generated images of the Hawaii wildfires identified by Microsoft's researchers appeared on multiple platforms, including a Reddit post in Dutch. "These specific A.I.-generated images appear to be exclusively used" by Chinese accounts used in this campaign, Microsoft said in a report. "They do not appear to be present elsewhere online."

The researchers "suggested that China was building a network of accounts that could be put to use in future information operations, including the next U.S. presidential election," according to the article. It adds that President Biden "has cut off China's access to the most advanced chips and the equipment made to produce them."

The article adds that the impact of China's misinformation campaign "is difficult to measure, though early indications suggest that few social media users engaged with the most outlandish of the conspiracy theories."
Movies

PR Firm Has Been Paying Rotten Tomatoes Critics For Positive Reviews

A new report says that a PR firm has been paying Rotten Tomatoes critics for positive reviews for over five years. From a report: Moviegoers, critics, and the average internet user have all used the aggregation site Rotten Tomatoes at one point or another. The website categorizes films and shows from "fresh" to "rotten," with rotten being those with lower ratings. Now it looks like the site's scores have been manipulated for more than five years. As noted by Vulture, it looks like a PR firm has manipulated movie scores on Rotten Tomatoes by paying the critics directly. This has been happening for years.

The PR firm, named Bunker 15, is said to pay as much as $50.00 for a single Rotten Tomatoes review. The payments, which aren't typically disclosed, are usually given to obscure critics who happen to be part of a pool tracked by Rotten Tomatoes. It's worth noting, though, that the aggregation site's rules prohibit "Reviewing based on a financial incentive." Director Paul Schrader, also a critic, spoke out against Rotten Tomatoes, which he says is part of a "broken" system. "The system is broken. Audiences are dumber. Normal people don't go through reviews like they used to. Rotten Tomatoes is something the studios can game. So they do." The site responded by delisting a variety of Bunker 15 films from its website. Furthermore, it issued a warning to any critics who reviewed them. The warning emphasizes that it does not tolerate manipulation on its platform.
AI

Gizmodo Fires Spanish Staff Amid Switch To AI Translator (arstechnica.com)

Last week, Gizmodo's parent company G/O Media fired the staff of its Spanish-language site Gizmodo en Espanol and began replacing them with AI translations of English-language articles. "G/O Media's decision to eschew human writers for AI is part of a recent trend of media companies experimenting with AI tools as a way to maximize content output while minimizing human labor costs," reports Ars Technica. "However, the practice remains controversial within the broader journalism community." The Verge first reported the news. From the report: Previously, Gizmodo en Espanol had a small but dedicated team who wrote original content tailored specifically for Spanish-speaking readers, as well as producing translations of Gizmodo's English articles. The site represented Gizmodo's first foray into international markets when it launched in 2012 after being acquired from Guanabee. Newly published articles on the site now contain a link to the English version of the article and a disclaimer stating (via our translation from Google Translate), "This content has been automatically translated from the source material. Due to the nuances of machine translation, there may be slight differences. For the original version, click here."

So far, Gizmodo's pivot to AI translation hasn't gone smoothly. On social media site X, journalist and Gizmodo reader Victor Millan noted that some of the site's new articles abruptly switch from Spanish to English midway through, possibly due to glitches in the AI translation system. [...] For Spanish-speaking audiences seeking news about science, technology, and Internet culture, the loss of original reporting from Gizmodo en Espanol is potentially a major blow. And while AI translation technology has improved significantly over the past decade, experts say it still can't fully replace human translators. Subtle errors, mistranslations, and lack of cultural knowledge can impair the quality of automatically translated content.

Google

Are We Seeing the End of the Googleverse? (theverge.com)

The Verge argues we're seeing "the end of the Googleverse. For two decades, Google Search was the invisible force that determined the ebb and flow of online content.

"Now, for the first time, its cultural relevance is in question... all around us are signs that the era of 'peak Google' is ending or, possibly, already over." There is a growing chorus of complaints that Google is not as accurate, as competent, as dedicated to search as it once was. The rise of massive closed algorithmic social networks like Meta's Facebook and Instagram began eating the web in the 2010s. More recently, there's been a shift to entertainment-based video feeds like TikTok — which is now being used as a primary search engine by a new generation of internet users...

Google Reader shut down in 2013, taking with it the last vestiges of the blogosphere. Search inside of Google Groups has repeatedly broken over the years. Blogger still works, but without Google Reader as a hub for aggregating it, most publishers started making native content on platforms like Facebook and Instagram and, more recently, TikTok. Discoverability of the open web has suffered. Pinterest has been accused of eating Google Image Search results. And the recent protests over third-party API access at Reddit revealed how popular Google has become as a search engine not for Google's results but for Reddit content. Google's place in the hierarchy of Big Tech is slipping enough that some are even admitting that Apple Maps is worth giving another chance, something unthinkable even a few years ago. On top of it all, OpenAI's massively successful ChatGPT has dragged Google into a race against Microsoft to build a completely different kind of search, one that uses a chatbot interface supported by generative AI.

Their article quotes the founder of the long-ago Google-watching blog, "Google Blogoscoped," who remembers that when Google first came along, "they were ad-free with actually relevant results in a minimalistic kind of design. If we fast-forward to now, it's kind of inverted now. The results are kind of spammy and keyword-built and SEO stuff. And so it might be hard to understand for people looking at Google now how useful it was back then."

The question, of course, is when did it all go wrong? How did a site that captured the imagination of the internet and fundamentally changed the way we communicate turn into a burned-out Walmart at the edge of town? Well, if you ask Anil Dash, it was all the way back in 2003 — when the company turned on its AdSense program. "Prior to 2003-2004, you could have an open comment box on the internet. And nobody would pretty much type in it unless they wanted to leave a comment. No authentication. Nothing. And the reason why was because who the fuck cares what you comment on there. And then instantly, overnight, what happened?" Dash said. "Every single comment thread on the internet was instantly spammed. And it happened overnight...."

As he sees it, Google's advertising tools gave links a monetary value, killing anything organic on the platform. From that moment forward, Google cared more about the health of its own network than the health of the wider internet. "At that point it was really clear where the next 20 years were going to go," he said.

Sci-Fi

Pentagon's New UFO Website Lets You Explore Declassified Sightings Info (cnet.com)

The U.S. Department of Defense has launched a website collecting publicly available, declassified information on unidentified anomalous phenomena (UAPs). "For now, the general public will be able to read through the posted information," reports CNET. "Soon, US government employees, contractors, and service members with knowledge of US programs can report their own sightings, and later, others will be able to submit reports." From the report: "This website will provide information, including photos and videos, on resolved UAP cases as they are declassified and approved for public release," the department said in a release posted on Thursday. "The website's other content includes reporting trends and a frequently asked questions section as well as links to official reports, transcripts, press releases, and other resources that the public may find useful, such as applicable statutes and aircraft, balloon and satellite tracking sites."

For now, one of the most interesting parts of the site is its trends section. Apparently, most reported UAPs are round, either white, silver or translucent, spotted at around 10,000 to 30,000 feet, 1-4 meters in size, and do not emit thermal exhaust. Hotspots for sightings include both the US East and West coasts. There's also a small section of videos with names such as "DVIDS Video - Unresolved Case: Navy 2021 Flyby," and "UAP Video: Middle East Object." Readers are able to leave comments on the videos. Of the "Middle East Object" video, one person writes, "Noticed I never saw it cast a shadow. But other objects have shadows."

Social Networks

Judge Blocks Arkansas Law Requiring Parental OK For Minors To Create Social Media Accounts (apnews.com)

An anonymous reader quotes a report from the Associated Press: A federal judge on Thursday temporarily blocked Arkansas from enforcing a new law that would have required parental consent for minors to create new social media accounts, preventing the state from becoming the first to impose such a restriction. U.S. District Judge Timothy L. Brooks granted a preliminary injunction that NetChoice -- a tech industry trade group whose members include TikTok, Facebook parent Meta, and X, formerly known as Twitter -- had requested against the law. The measure, which Republican Gov. Sarah Huckabee Sanders signed into law in April, was set to take effect Friday.

In a 50-page ruling, Brooks said NetChoice was likely to succeed in its challenge to the Arkansas law's constitutionality and questioned the effectiveness of the restrictions. "Age-gating social media platforms for adults and minors does not appear to be an effective approach when, in reality, it is the content on particular platforms that is driving the state's true concerns," wrote Brooks, who was appointed to the bench by former President Barack Obama. NetChoice argued the requirement violated the constitutional rights of users and arbitrarily singled out types of speech that would be restricted.

Arkansas' restrictions would have only applied to social media platforms that generate more than $100 million in annual revenue. It also wouldn't have applied to certain platforms, including LinkedIn, Google and YouTube. Brooks' ruling said the exemptions nullified the state's intent for imposing the restrictions, and said the law also didn't adequately define which platforms it would apply to. As an example, he cited confusion over whether the social media platform Snapchat would be subject to the age-verification requirement. Social media companies that knowingly violate the age verification requirement would have faced a $2,500 fine for each violation under the now-blocked law. The law also prohibited social media companies and third-party vendors from retaining users' identifying information after they've been granted access to the social media site.
In a statement on X, Sanders wrote: "Big Tech companies put our kids' lives at risk. They push an addictive product that is shown to increase depression, loneliness, and anxiety and puts our kids in human traffickers' crosshairs. Today's court decision delaying this needed protection is disappointing but I'm confident the Attorney General will vigorously defend the law and protect our children."
Piracy

Sports Leagues Ask US For 'Instantaneous' DMCA Takedowns and Website Blocking (arstechnica.com) 63

An anonymous reader quotes a report from Ars Technica: Sports leagues are urging the US to require "instantaneous" takedowns of pirated livestreams and new requirements for Internet service providers to block pirate websites. The Digital Millennium Copyright Act of 1998 requires websites to "expeditiously" remove infringing material upon being notified of its existence. But pirated livestreams of sports events often aren't taken down while the events are ongoing, said comments submitted last week by Ultimate Fighting Championship, the National Basketball Association, and National Football League.

The "DMCA does not define 'expeditiously,' and OSPs [online service providers] have exploited this ambiguity in the statutory language to delay removing content in response to takedown requests," the leagues told the US Patent and Trademark Office in response to a request for comments on addressing counterfeiting and piracy. The leagues urged the US "to establish that, in the case of live content, the requirement to 'expeditiously' remove infringing content means that content must be removed 'instantaneously or near-instantaneously' in response to a takedown request." The leagues claimed the change "would be a relatively modest and non-controversial update to the DMCA that could be included in the broader reforms being considered by Congress or could be addressed separately." They also want stricter "verification measures before a user is permitted to livestream."

The UFC separately submitted comments on its own, urging the US to require that ISPs block pirate sites. The UFC said that a "significant and growing" number of websites, typically operated from outside the US, don't respond to takedown requests and thus should be blocked by broadband network operators. The UFC wrote: "Unlike many other jurisdictions around the world, the US lacks a 'site-blocking' regime whereby copyright owners may obtain no-fault injunctions requiring domestic Internet service providers to block websites that are primarily geared at infringing activity. A 'site-blocking' regime, with appropriate safeguards to prevent abuse, would substantially facilitate all copyright owners' ability to address piracy, including UFC's." Website-blocking is bound to be a controversial topic, although the Federal Communications Commission's now-repealed net neutrality rules only prohibited blocking of "lawful Internet traffic." While the UFC said it just wants "websites that are primarily geared at infringing activity" to be blocked, a site-blocking regime could be used more expansively if there aren't strict limits.

Piracy

File-Hosting Icon AnonFiles Throws In the Towel, Domain For Sale 28

An anonymous reader quotes a report from TorrentFreak: Founded in 2011, AnonFiles.com became known as a popular hosting service that allowed users to share files up to 20GB without download restrictions. As the name suggests, registering an account wasn't required either; both uploading and downloading files were completely anonymous. The same applied to BayFiles.com, an affiliated file-hosting service launched by The Pirate Bay. Both sites launched around the same time and shared a similar design and identical features. Both sites had millions of visitors, but AnonFiles stood out with over 18 million visitors a month. This popularity didn't go unnoticed by rightsholders, who repeatedly flagged AnonFiles as a "notorious" pirate site.

Rightsholders and law enforcement authorities were not the only ones unhappy with the illegal content posted to the site. For AnonFiles' operators, it caused major problems too. The current owners purchased the site two years ago but didn't expect the abuse to be so massive that the only option would be to shut it down. According to a goodbye message posted on the site, they simply can't continue. "After trying endlessly for two years to run a file sharing site with user anonymity, we have been tired of handling the extreme volumes of people abusing it and the headaches it has created for us."

The operators tried to contain the abuse by setting up all sorts of automated filters and filename restrictions, taking thousands of false positives for granted, but that didn't help much. With tens of millions of uploads and petabytes of data, no anti-abuse measure was sufficient. And when the site's proxy service pulled the plug a few days ago, AnonFiles decided to call it quits. "We have auto banned contents of hundreds of thousands files. Banned file names and also banned specific usage patterns connected to abusive material," the AnonFiles team writes. "Even after all this the high volume of abuse will not stop. This is not the kind of work we imagine when acquiring it and recently our proxy provider shut us down. This can not continue."
The current owners have invited others to buy the domain name and give it a shot themselves.
Advertising

YouTube Ads May Have Led To Online Tracking of Children, Research Says 8

An anonymous reader quotes a report from the New York Times: This year, BMO, a Canadian bank, was looking for Canadian adults to apply for a credit card. So the bank's advertising agency ran a YouTube campaign using an ad-targeting system from Google that employs artificial intelligence to pinpoint ideal customers. But Google, which owns YouTube, also showed the ad to a viewer in the United States on a Barbie-themed children's video on the "Kids Diana Show," a YouTube channel for preschoolers whose videos have been watched more than 94 billion times. When that viewer clicked on the ad, it led to BMO's website, which tagged the user's browser with tracking software from Google, Meta, Microsoft and other companies, according to new research from Adalytics, which analyzes ad campaigns for brands. As a result, leading tech companies could have tracked children across the internet, raising concerns about whether they were undercutting a federal privacy law, the report said. The Children's Online Privacy Protection Act, or COPPA, requires children's online services to obtain parental consent before collecting personal data from users under age 13 for purposes like ad targeting.

Adalytics identified more than 300 brands' ads for adult products, like cars, on nearly 100 YouTube videos designated as "made for kids" that were shown to a user who was not signed in, and that linked to advertisers' websites. It also found several YouTube ads with violent content, including explosions, sniper rifles and car accidents, on children's channels. An analysis by The Times this month found that when a viewer who was not signed into YouTube clicked the ads on some of the children's channels on the site, they were taken to brand websites that placed trackers from Amazon, Meta's Facebook, Google, Microsoft and others -- bits of code used for purposes like security, ad tracking or user profiling -- on users' browsers. As with children's television, it is legal, and commonplace, to run ads, including for adult consumer products like cars or credit cards, on children's videos. There is no evidence that Google and YouTube violated their 2019 agreement with the F.T.C.

The report's findings raise new concerns about YouTube's advertising on children's content. In 2019, YouTube and Google agreed to pay a record $170 million fine to settle accusations from the Federal Trade Commission and the State of New York that the company had illegally collected personal information from children watching kids' channels. Regulators said the company had profited from using children's data to target them with ads. YouTube then said it would limit the collection of viewers' data and stop serving personalized ads on children's videos. On Thursday, two United States senators sent a letter to the F.T.C., urging it to investigate whether Google and YouTube had violated COPPA, citing Adalytics and reporting by The New York Times. Senator Edward J. Markey, Democrat of Massachusetts, and Senator Marsha Blackburn, Republican of Tennessee, said they were concerned that the company may have tracked children and served them targeted ads without parental consent, facilitating "the vast collection and distribution" of children's data. "This behavior by YouTube and Google is estimated to have impacted hundreds of thousands, to potentially millions, of children across the United States," the senators wrote.
Google spokesman Michael Aciman called the report's findings "deeply flawed and misleading."

Google has stated that running ads for adults on children's videos is useful because parents watching could become customers. However, they acknowledge that violent ads on children's videos violate their policies and have taken steps to prevent such ads from running in the future. Google claims they do not use personalized ads on children's videos, ensuring compliance with COPPA.

Google notes that it does not inform advertisers if a viewer has watched a children's video, only that they clicked on the ad. Google also says it cannot control data collection on a brand's website after a YouTube viewer clicks an ad -- a process that could occur on any website.
Crime

'Bulletproof' Web Site Hosting Ransomware Finally Seized, Founder Indicted (cnbc.com) 16

An anonymous reader shared this report from CNBC: The mastermind behind a ransomware hosting service that allegedly helped criminals collect more than 5,000 bitcoin in ransom from hundreds of victims was indicted in federal court this week, prosecutors announced Thursday. Artur Grabowski's LolekHosted service operated for about a decade and advertised itself as a haven for "everything but child porn," according to Florida prosecutors. Clients allegedly used the hosting service to deploy ransomware viruses that infected around 400 networks around the world... [That's 400 just for the Netwalker ransomware, which the announcement calls "one of the ransomware variants facilitated by LolekHosted."]

Grabowski was charged with computer fraud, wire fraud, and conspiracy to commit international money laundering. Grabowski himself is also the subject of a $21.5 million seizure order... Grabowski, a Polish national, faces a maximum sentence of 45 years, if he is ever detained and convicted.

Grabowski also "remains a fugitive," according to an announcement from the U.S. Department of Justice. It notes that the 36-year-old's site — registered in 2014 — also "facilitated" brute-force attacks, and phishing.

"Grabowski allegedly facilitated the criminal activities of LolekHosted clients by allowing clients to register accounts using false information, not maintaining Internet Protocol (IP) address logs of client servers, frequently changing the IP addresses of client servers, ignoring abuse complaints made by third parties against clients, and notifying clients of legal inquiries received from law enforcement."
Google

CNET Deletes Thousands of Old Articles To Game Google Search (gizmodo.com) 48

According to Gizmodo, CNET has deleted thousands of old articles over the past few months in a bid to improve its performance in Google Search results. From the report: Archived copies of CNET's author pages show the company deleted small batches of articles prior to the second half of July, but then the pace increased. Thousands of articles disappeared in recent weeks. A CNET representative confirmed that the company was culling stories but declined to share exactly how many it has taken down. The move adds to recent controversies over CNET's editorial strategy, which has included layoffs and experiments with error-riddled articles written by AI chatbots.

"Removing content from our site is not a decision we take lightly. Our teams analyze many data points to determine whether there are pages on CNET that are not currently serving a meaningful audience. This is an industry-wide best practice for large sites like ours that are primarily driven by SEO traffic," said Taylor Canada, CNET's senior director of marketing and communications. "In an ideal world, we would leave all of our content on our site in perpetuity. Unfortunately, we are penalized by the modern internet for leaving all previously published content live on our site."

CNET shared an internal memo about the practice. Removing, redirecting, or refreshing irrelevant or unhelpful URLs "sends a signal to Google that says CNET is fresh, relevant and worthy of being placed higher than our competitors in search results," the document reads. According to the memo about the "content pruning," the company considers a number of factors before it "deprecates" an article, including SEO, the age and length of the story, traffic to the article, and how frequently Google crawls the page. The company says it weighs historical significance and other editorial factors before an article is taken down. When an article is slated for deletion, CNET says it maintains its own copy, and sends the story to the Internet Archive's Wayback Machine. The company also says current staffers whose articles are deprecated will be alerted at least 10 days ahead of time.
What does Google have to say about this? According to the company's Public Liaison for Google Search, Danny Sullivan, Google recommends against the practice. "Are you deleting content from your site because you somehow believe Google doesn't like 'old' content? That's not a thing! Our guidance doesn't encourage this," Sullivan said in a series of tweets.

If a website has an individual page with outdated content, that page "isn't likely to rank well. Removing it might mean, if you have a massive site, that we're better able to crawl other content on the site. But it doesn't mean we go, 'Oh, now the whole site is so much better' because of what happens with an individual page," Sullivan wrote. "Just don't assume that deleting something only because it's old will improve your site's SEO magically."
AI

Now You Can Block OpenAI's Web Crawler (theverge.com) 65

OpenAI now lets you block its web crawler from scraping your site to help train GPT models. From a report: OpenAI said website operators can specifically disallow its GPTBot crawler in their site's robots.txt file or block its IP address. "Web pages crawled with the GPTBot user agent may potentially be used to improve future models and are filtered to remove sources that require paywall access, are known to gather personally identifiable information (PII), or have text that violates our policies," OpenAI said in the blog post. For sources that don't fit the excluded criteria, "allowing GPTBot to access your site can help AI models become more accurate and improve their general capabilities and safety."
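The opt-out described above amounts to a two-line robots.txt rule targeting the GPTBot user agent. A minimal sketch of how that rule behaves, using Python's standard `urllib.robotparser` to simulate how a well-behaved crawler would read it (the example URL is illustrative):

```python
from urllib.robotparser import RobotFileParser

# The rules a site operator would publish at /robots.txt to disallow
# OpenAI's crawler while leaving all other crawlers unaffected.
ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /
"""

# Parse the rules the same way a compliant crawler would.
parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# GPTBot is blocked from every path on the site...
print(parser.can_fetch("GPTBot", "https://example.com/article"))   # False
# ...while user agents with no matching rule default to allowed.
print(parser.can_fetch("Googlebot", "https://example.com/article"))  # True
```

Because robots.txt is advisory, this only stops crawlers that choose to honor it, which is why OpenAI also publishes the option of blocking GPTBot's IP range at the network level.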

Blocking the GPTBot may be the first step in OpenAI allowing internet users to opt out of having their data used for training its large language models. It follows some early attempts at creating a flag that would exclude content from training, like a "NoAI" tag conceived by DeviantArt last year. It does not retroactively remove content previously scraped from a site from ChatGPT's training data.

Social Networks

Are the Reddit Protests Over? (gizmodo.com) 97

"Three of Reddit's biggest communities are no longer focused entirely on John Oliver in a form of protest against Reddit," reports the Verge.

Gizmodo argues that this means "the Reddit protest is finally over. Reddit won." Despite the infinite blackout threats, most moderators relented as the weeks rolled by. Three major holdouts were r/aww, r/pics, and r/videos, some of Reddit's largest communities that account for more than 91 million subscribers. The three subreddits reopened weeks ago but adopted rules by popular vote that prohibited content that did not feature HBO's John Oliver, rendering the forums useless for their previous purposes.

For a while, the subreddits stood strong, but r/videos was the first to backpedal, dropping the John Oliver rule but requiring all posts to feature profanity. Soon that rule was abandoned as well. Last week, the moderators of r/aww announced the John Oliver rule was over, and over the weekend r/pics quietly gave up the protest as well, as reported by the Verge. "More than a month has passed, and as things on the internet go, the passion for the protest has waned and people's attention has shifted to other things," an r/aww moderator wrote in a post about the rule change.

According to Reddark, a site that tracks the subreddit protest, 1,843 of the original 8,829 protesting communities are still dark. But most of these are small communities, and today the only protesting subreddit with over 10 million subscribers is r/fitness.

The Verge: Two other big communities have switched back, too. r/pics (with more than 30 million subscribers) had perhaps been the most visibly tied to John Oliver: Oliver himself posted a series of silly photos specifically for the community to use, and at one point, the moderators of r/pics invited Oliver to join the mod team. But sometime recently, r/pics removed any obvious trace of its connections to John Oliver; the Wayback Machine shows that r/pics was all about John Oliver as of Friday but no longer on Saturday...

r/videos (with more than 26 million subscribers) actually dropped its John Oliver rule back in June; it was replaced by a new rule that all posts needed to contain profanity in the title after a community vote. Earlier this month, the r/videos moderators reverted the rules to what they were before the protests started...

In June, more than 8,000 communities went dark to protest the API pricing, but in the weeks since, many subreddits have opened back up (some after feeling pressure from Reddit) and are operating as they did before. Many users are still disgruntled, though, and made their feelings known in July's r/Place canvas.

More than 1,800 subreddits are still private in protest, according to the Reddark tracker.

Some key passages from the moderator's announcement at r/aww: What about the protest, though; did we win? The short answer is no. The long answer is also no, as Reddit's minimal attempts at positive outreach remain overshadowed by the plethora of depressing developments...

At the end of the day, Reddit's API changes have gone into effect. They did not extend the transition period or reduce the exorbitant prices. They granted exemptions to a few apps and moderation tools, but that's about it. The best thing I can say is that they did honor their commitment to ensuring the continued functionality of some mod tools... Despite some reassurances and promises from Reddit, their conduct and these changes have driven away many developers, leading to the shutdown of some tools and an uncertain future for others.

The announcement ends with a link labeled "and more importantly," which leads to a picture with a message for Reddit CEO Steve Huffman (who uses the name "Spez" when posting on Reddit).
