The Courts

A Famous Climate Scientist Is In Court With Big Stakes For Attacks On Science (npr.org)

Julia Simon reports via NPR: In a D.C. courtroom, a trial is wrapping up this week with big stakes for climate science. One of the world's most prominent climate scientists is suing a right-wing author and a policy analyst for defamation. The case comes at a time when attacks on scientists are proliferating, says Peter Hotez, professor of Pediatrics and Molecular Virology at Baylor College of Medicine. Even as misinformation about scientists and their work keeps growing, Hotez says scientists haven't yet found a good way to respond. "The reason we're sort of fumbling at this is it's unprecedented. And there is no roadmap," he says. The climate scientist at the center of this trial is Michael Mann. The professor of earth and environmental science at the University of Pennsylvania gained prominence for helping make one of the most accessible, consequential graphs in the history of climate science. First published in the late 1990s, the graph shows thousands of years of relatively stable global temperatures. Then, when humans start burning lots of coal and oil, it shows a spike upward. Mann's graph looks like a hockey stick lying on its side, with the blade sticking straight up. The so-called "hockey stick graph" was successful in helping the public understand the urgency of global warming, and that made it a target, says Kert Davies, director of special investigations at the Center for Climate Integrity, a climate accountability nonprofit. "Because it became such a powerful image, it was under attack from the beginning," he says.

The attacks came from groups that reject climate science, some funded by the fossil fuel industry. In the midst of these types of attacks -- including the hacking of Mann's and other scientists' emails by unknown hackers -- Penn State, where Mann was then working, opened an investigation into his research. Penn State, as well as the National Science Foundation, found no evidence of scientific misconduct. But a policy analyst and an author wrote that they were not convinced. The trial in D.C. Superior Court involves posts from right-wing author Mark Steyn and policy analyst Rand Simberg. In an online post, Simberg compared Mann to former Penn State football coach Jerry Sandusky, a convicted child sex abuser. Simberg wrote that Mann was the "Sandusky of climate science," writing that Mann "molested and tortured data." Steyn called Mann's research fraudulent. Mann sued the two men for defamation. Mann also sued the publishers of the posts, National Review and the Competitive Enterprise Institute, but in 2021, the court ruled they couldn't be held liable.

In court, Mann has argued that he lost funding and research opportunities. Steyn argued in court that if Penn State's president, Graham Spanier, had covered up child sexual assault, he would hardly have balked at covering up for Mann's science. The science in question used ice cores and tree rings to estimate Earth's past temperatures. "If Graham Spanier is prepared to cover up child rape, week in, week out, year in, year out, why would he be the least bit squeamish about covering up a bit of hanky panky with the tree rings and the ice cores?" Steyn asked the court. Mann and Steyn declined to speak to NPR during the ongoing trial. One of Simberg's lawyers, Victoria Weatherford, said "inflammatory does not equal defamatory" and that her client is allowed to express his opinion, even if it were wrong. "No matter how offensive or distasteful or heated it is," Weatherford tells NPR, "that speech is absolutely protected under the First Amendment when it's said against a public figure, if the person saying it believed that what they said was true."

Mars

NASA Regains Contact With Its 'Ingenuity' Mars Helicopter (npr.org)

"Good news..." NASA posted Saturday night on X. "We've reestablished contact with the Mars Helicopter..."

After a two-day communications blackout, NASA had instructed its Perseverance Mars rover "to perform long-duration listening sessions for Ingenuity's signal" — and apparently they did the trick. "The team is reviewing the new data to better understand the unexpected comms dropout" during the helicopter's record-breaking 72nd flight.

Slashdot reader Thelasko shared this report from NPR: Communications broke down on Thursday, when the little autonomous rotorcraft was sent on a "quick pop-up vertical flight," to test its systems after an unplanned early landing during its previous flight, the agency said in a status update on Friday night. The Perseverance rover, which relays data between the helicopter and Earth during the flights, showed that Ingenuity climbed to its assigned maximum altitude of 40 feet, NASA said.

During its planned descent, the helicopter and rover stopped communicating with each other...

Even before it came back online, RockDoctor (Slashdot reader #15,477) pointed out that the Mars copter has done this before. "Batteries dying, resulting in a communications reset, if I remember correctly."

Space.com also noted other possibilities: "Perseverance is currently out of line-of-sight with Ingenuity, but the team could consider driving closer for a visual inspection," NASA's Jet Propulsion Laboratory in Southern California, which manages both robots' missions, said via X on Friday.

Ingenuity has stayed aloft for more than 128 minutes and covered a total of 11 miles (17.7 kilometers) during its 72 Mars flights, according to the mission's flight log.

The Internet

Is the Internet About to Get Weird Again? (rollingstone.com)

Long-time tech entrepreneur Anil Dash predicts a big shift in the digital landscape in 2024. And "regular internet users — not just the world's tech tycoons — may be the ones who decide how it goes." The first thing to understand about this new era of the internet is that power is, undoubtedly, shifting. For example, regulators are now part of the story — an ironic shift for anyone who was around in the dot com days. In the E.U., tech giants like Apple are being forced to hold their noses and embrace mandated changes like opening up their devices to allow alternate app stores to provide apps to consumers. This could be good news, increasing consumer choice and possibly enabling different business models — how about mobile games that aren't constantly pestering gamers for in-app purchases? Back in the U.S., a shocking judgment in Epic Games' (that's the Fortnite folks') lawsuit against Google leaves us with the promise that Android phones might open up in a similar way.

That's not just good news for the billions of people who own smartphones. It's part of a sea change for the coders and designers who build the apps, sites, and games we all use. For an entire generation, the imagination of people making the web has been hemmed in by a handful of giant companies that have had enormous control over things like search results, or app stores, or ad platforms, or payment systems. Going back to the more free-for-all nature of the Nineties internet could mean we see a proliferation of unexpected, strange new products and services. Back then, a lot of technology was created by local communities or people with a shared interest, and it was as likely that cool things would be invented by universities and non-profits and eccentric lone creators as by giant corporations....

In that era, people could even make their own little social networks, so the conversations and content you found on an online forum or discussion were as likely to have been hosted by the efforts of one lone creator as to have come from some giant corporate conglomerate. It was a more democratized internet, and while the world can't return to that level of simplicity, we're seeing signs of a modern revisiting of some of those ideas.

Dash's article (published in Rolling Stone) ends with examples of "people who had been quietly keeping the spirit of the human, personal, creative internet alive...seeing a resurgence now that the web is up for grabs again."
  • The School for Poetic Computation (which Dash describes as "an eccentric, deeply charming, self-organized school for people who want to combine art and technology and a social conscience.")
  • Mask On Zone, "a collaboration with the artist and coder Ritu Ghiya, which gives demonstrators and protesters in-context guidance on how to avoid surveillance."

Dash concludes that "We're seeing the biggest return to that human-run, personal-scale web that we've witnessed since the turn of the millennium, with enough momentum that it's likely that 2024 is the first year since then that many people have the experience of making a new connection or seeing something go viral on a platform that's being run by a regular person instead of a commercial entity.

"It's going to make a lot of new things possible..."

A big thank-you to long-time Slashdot reader DrunkenTerror for submitting the article.


AI

Researchers Have a Magic Tool To Understand AI: Harry Potter (bloomberg.com)

More than two decades after J.K. Rowling introduced the world to a universe of magical creatures, forbidden forests and a teenage wizard, Harry Potter is finding renewed relevance in a very different body of literature: AI research. From a report: A growing number of researchers are using the best-selling Harry Potter books to experiment with generative artificial intelligence technology, citing the series' enduring influence in popular culture and the wide range of language data and complex wordplay within its pages. Reviewing a list of studies and academic papers referencing Harry Potter offers a snapshot into cutting-edge AI research -- and some of the thorniest questions facing the technology.

In perhaps the most notable recent example, Harry, Hermione and Ron star in a paper titled "Who's Harry Potter?" that sheds light on a new technique helping large language models to selectively forget information. It's a high-stakes task for the industry: Large language models, which power AI chatbots, are built on vast amounts of online data, including copyrighted material and other problematic content. That has led to lawsuits and public scrutiny for some AI companies. The paper's authors, Microsoft researchers Mark Russinovich and Ronen Eldan, said they've demonstrated that AI models can be altered or edited to remove any knowledge of the existence of the Harry Potter books, including characters and plots, without sacrificing the AI system's overall decision-making and analytical abilities.

The duo said they chose the books because of their universal familiarity. "We believed that it would be easier for people in the research community to evaluate the model resulting from our technique and confirm for themselves that the content has indeed been 'unlearned,'" said Russinovich, chief technology officer of Microsoft Azure. "Almost anyone can come up with prompts for the model that would probe whether or not it 'knows' the books. Even people who haven't read the books would be aware of plot elements and characters."
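
The evaluation Russinovich describes (prompting the edited model to see whether it still "knows" the books) can be sketched as a simple probe harness. Everything below is a hypothetical illustration, not the paper's actual code: `generate` stands in for any model-completion callable, and the prompts and forbidden terms are invented examples.

```python
# Toy probe for checking whether a model has "unlearned" a body of text.
# `generate` is any callable mapping a prompt string to a completion string;
# the probe prompts and "forbidden" terms below are illustrative only.

def leak_rate(generate, probes):
    """Fraction of probes whose completion still mentions a forbidden term."""
    leaks = 0
    for prompt, forbidden_terms in probes:
        completion = generate(prompt).lower()
        if any(term.lower() in completion for term in forbidden_terms):
            leaks += 1
    return leaks / len(probes)

probes = [
    ("Harry's two best friends at school were", ["Hermione", "Ron"]),
    ("The boy wizard with the lightning scar is named", ["Harry", "Potter"]),
]

# A stub standing in for an "edited" model that has forgotten the books:
def forgetful_model(prompt):
    return "Sam and Alex, who he met at a school in the countryside."

print(leak_rate(forgetful_model, probes))  # → 0.0
```

A leak rate near zero on probes like these is the kind of evidence the researchers say "almost anyone" can gather, since the plot elements are common knowledge even to non-readers.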

AI

Amazon Announces Q, an AI Chatbot for Businesses (cnbc.com)

Amazon on Tuesday announced a new chatbot called Q for people to use at work. From a report: The product, announced at Amazon Web Services' Reinvent conference in Las Vegas, represents Amazon's latest effort to challenge Microsoft and Google in productivity software. It comes one year after Microsoft-backed startup OpenAI launched its ChatGPT chatbot, which has popularized generative artificial intelligence for crafting human-like text in response to a few lines of human input.

A tier for business users will cost $20 per person per month. A version with additional features for developers and IT workers will cost $25 per person per month. By comparison, Copilot for Microsoft 365 and Duet AI for Google Workspace both cost business users $30 per person per month. Initially, Q can help people understand the capabilities of AWS and troubleshoot issues. People will be able to talk with it in communication apps such as Salesforce's Slack and software developers' text-editing applications, Adam Selipsky, CEO of AWS, said onstage at Reinvent. It will also appear in AWS' online Management Console. Q can provide citations of documents to back up its chat responses. The tool can automatically make changes to source code so developers have less work to do, Selipsky said. The service will be able to connect to more than 40 enterprise systems, he said.

Databases

Online Atrocity Database Exposed Thousands of Vulnerable People In Congo (theintercept.com)

An anonymous reader quotes a report from The Intercept: A joint project of Human Rights Watch and New York University to document human rights abuses in the Democratic Republic of the Congo has been taken offline after exposing the identities of thousands of vulnerable people, including survivors of mass killings and sexual assaults. The Kivu Security Tracker is a "data-centric crisis map" of atrocities in eastern Congo that has been used by policymakers, academics, journalists, and activists to "better understand trends, causes of insecurity and serious violations of international human rights and humanitarian law," according to the deactivated site. This includes massacres, murders, rapes, and violence against activists and medical personnel by state security forces and armed groups, the site said. But the KST's lax security protocols appear to have accidentally doxxed up to 8,000 people, including activists, sexual assault survivors, United Nations staff, Congolese government officials, local journalists, and victims of attacks, an Intercept analysis found. Hundreds of documents -- including 165 spreadsheets -- that were on a public server contained the names, locations, phone numbers, and organizational affiliations of those sources, as well as sensitive information about some 17,000 "security incidents," such as mass killings, torture, and attacks on peaceful protesters.

The data was available via KST's main website, and anyone with an internet connection could access it. The information appears to have been publicly available on the internet for more than four years. [...] The spreadsheets, along with the main KST website, were taken offline on October 28, after investigative journalist Robert Flummerfelt, one of the authors of this story, discovered the leak and informed Human Rights Watch and New York University's Center on International Cooperation. HRW subsequently assembled what one source close to the project described as a "crisis team." Last week, HRW and NYU's Congo Research Group, the entity within the Center on International Cooperation that maintains the KST website, issued a statement that announced the takedown and referred in vague terms to "a security vulnerability in its database," adding, "Our organizations are reviewing the security and privacy of our data and website, including how we gather and store information and our research methodology." The statement made no mention of publicly exposing the identities of sources who provided information on a confidential basis. [...] The Intercept has not found any instances of individuals being harmed by the security failures, but with thousands of people exposed, the full consequences are not yet known.
"We deeply regret the security vulnerability in the KST database and share concerns about the wider security implications," Human Rights Watch's chief communications officer, Mei Fong, told The Intercept. Fong said in an email that the organization is "treating the data vulnerability in the KST database, and concerns around research methodology on the KST project, with the utmost seriousness." Fong added, "Human Rights Watch did not set up or manage the KST website. We are working with our partners to support an investigation to establish how many people -- other than the limited number we are so far aware of -- may have accessed the KST data, what risks this may pose to others, and next steps. The security and confidentiality of those affected is our primary concern."
The Media

CNN Criticizes Microsoft's 'Making a Mess of the News' By Replacing MSN's Staff With AI (cnn.com)

CNN decries "false and bizarre" news stories being published by Microsoft on MSN.com, "one of the world's most trafficked websites and a place where millions of Americans get their news every day." Microsoft's decision to increasingly rely on the use of automation and artificial intelligence over human editors to curate its homepage appears to be behind the site's recent amplification of false and bizarre stories, people familiar with how the site works told CNN.

The site, which comes pre-loaded as the default start page on devices running Microsoft software, including on Microsoft's latest "Edge" browser... employed more than 800 editors in 2018 to help select and curate news stories shown to millions of readers around the world. But in recent years Microsoft has laid off editors, some of whom were told they were being replaced by "automation," what they understand to be AI.

CNN points out that while Microsoft's president "has publicly lectured on the responsible use" of AI, "the apparent role of AI in Microsoft's recent amplification of bogus stories raises questions about the company's public adoption of the nascent technology and its implications for the journalism industry as a whole." CNN notes that an AI-generated poll urging readers to guess the cause of a swimmer's death "was not the first public blunder caused by Microsoft's embrace of AI." In September Microsoft republished a story about Brandon Hunter, a former NBA player who died unexpectedly at the age of 42, under the headline, "Brandon Hunter useless at 42." Then, in October, Microsoft republished an article that claimed that San Francisco Supervisor Dean Preston had resigned from his position after criticism from Elon Musk. The story was entirely false.

Some of the articles featured by Microsoft were initially published by obscure websites that might have gone unnoticed amid the deluge of online misinformation that circulates every day. But Microsoft's decision to republish articles from fringe outlets has elevated those stories to potentially millions of additional readers, breathing life into their claims. Editors who formerly worked for Microsoft told CNN that these kinds of false stories, or virtually any other articles from low-quality websites, would not be prominently featured by Microsoft were it not for its use of AI. Ryn Pfeuffer, who worked intermittently as a contractor for Microsoft for eight years, said she received a call in May 2020 with the news that her entire team was being laid off. 2020 was the year, a Microsoft spokesperson told CNN in a statement on Wednesday, that the company began transitioning to a "personalized feed" that is "tailored by an algorithm to the interests of our audiences."

MSN "has also published other junk content, including bogus stories about fishermen catching mermaids and Bigfoot spottings," reports the tech news site Futurism, "in the wake of ditching its human editors in favor of automation.

"Noticing a pattern yet? The company pumps out trash-tier AI content, then waits until it's called out publicly to quietly delete it and move on to the next trainwreck." We've known that Microsoft's MSN news portal has been pumping out a garbled, AI-generated firehose for well over a year now. The company has been using the website to distribute misleading and oftentimes incomprehensible garbage to hundreds of millions of readers per month... And if MSN presents a vision of how the tech industry's obsession with AI is going to play out in the information ecosystem, we're in for a rough ride.
CNN got this reaction from a user whose default browser changed from Chrome to Microsoft Edge after a software update — and discovered their home page had switched to MSN.com. "It felt like I was standing in line at the grocery store reading a National Enquirer front page."

A company spokesperson assured CNN that Microsoft was "committed to addressing the recent issue of low quality articles."
Power

Does Nuclear Get In the Way of Renewable? France and Germany Disagree. (energypost.eu)

"France and Germany lead the camps in disagreeing on the future of nuclear in Europe," write two climate policy journalists. On the Energy Post blog they explore why — citing energy experts and politicians.

Germany "ultimately completed its nuclear exit in April 2023," while France "has the highest share of nuclear in the energy mix of any country in the world." [A] major concern is that more nuclear means less renewables, at a time when wind and solar need all the scale they can get... In a joint attempt to provide greater technical clarity on the nuclear power debate, French think tank IDDRI and German Agora Energiewende set out in 2018 to understand how nuclear energy will influence the transformation of energy systems in both countries. They found that if a high share of coal- or nuclear-based conventional power capacity stays online in both countries, this will likely delay the time when market prices allow renewable power operators to cover their production costs and run their operations at a profit. They also found that exporting surplus electricity from conventional plants bites into renewable power investments abroad. At the same time, the growing share of renewables would eventually render most conventional plants unprofitable. "In order to avoid stranded assets, it is essential to gradually reduce conventional capacities," the bi-national report concluded...
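
The report's core economic claim, that keeping must-run conventional capacity online depresses wholesale prices and delays the point at which renewables cover their costs, can be illustrated with a toy merit-order model. All capacities, costs, and the demand figure below are invented for illustration and are not drawn from the report:

```python
# Toy merit-order dispatch: plants run cheapest-first until demand is met,
# and the market-clearing price is the marginal cost of the last plant
# dispatched. All numbers are illustrative, not from the IDDRI/Agora study.

def clearing_price(plants, demand_gw):
    """plants: list of (capacity_gw, marginal_cost_eur_per_mwh) tuples."""
    served = 0.0
    for capacity, cost in sorted(plants, key=lambda p: p[1]):
        served += capacity
        if served >= demand_gw:
            return cost
    raise ValueError("demand exceeds total capacity")

demand = 60  # GW, a single illustrative hour

# A windy hour with a large nuclear fleet still online:
with_nuclear = [(30, 5), (40, 10), (20, 60)]  # (GW, EUR/MWh): wind, nuclear, gas
# The same hour after 25 GW of conventional capacity has been retired:
less_nuclear = [(30, 5), (15, 10), (20, 60)]

print(clearing_price(with_nuclear, demand))  # → 10 (nuclear sets the price)
print(clearing_price(less_nuclear, demand))  # → 60 (gas sets the price)
```

In these toy numbers, shrinking the must-run fleet raises the price the wind operator earns from 10 to 60 EUR/MWh, which is the mechanism by which a smaller conventional fleet lets renewable operators cover their production costs sooner.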

Xavier Moreno, president of French think tank Ecological Realities and Energy Mix Study Circle (Cereme) and former vice president of French utility company Suez, said the all-renewables approach was complicated by a lack of viable electricity storage technologies. "Technically speaking, it would be necessary to store up to 20 percent to be able to smoothen renewable power supply." Those who believe that this will be possible through a combination of different storage options are chasing "a dream," Moreno argued.

The issue comes up when trading power in Europe's integrated energy market: should gate closure times be based on a decentralised, flexible renewables-based system, or a centralised grid based on nuclear baseloads? Rainer Hinrichs-Rahlwes, European policy expert for the German Renewable Energy Federation lobby group, says "Nuclear power plants and their inflexible output can cause grid congestion, the opposite of what is needed to accommodate large shares of wind and solar in a modern and flexible grid system."

The article notes that France plans to eliminate coal use by 2038, and already has one of the lowest emissions per head of any rich country. But "In mid-2023, 800 French scientists warned against the risks of the country's new nuclear programme, pointing to questions of radioactive waste management that remain largely unresolved in most of the EU, including in France. The scientists also warned against risks of accidental contamination or meltdown."

Thanks to Slashdot reader AleRunner for submitting the article.
Google

Are We Seeing the End of the Googleverse? (theverge.com)

The Verge argues we're seeing "the end of the Googleverse. For two decades, Google Search was the invisible force that determined the ebb and flow of online content.

"Now, for the first time, its cultural relevance is in question... all around us are signs that the era of 'peak Google' is ending or, possibly, already over." There is a growing chorus of complaints that Google is not as accurate, as competent, as dedicated to search as it once was. The rise of massive closed algorithmic social networks like Meta's Facebook and Instagram began eating the web in the 2010s. More recently, there's been a shift to entertainment-based video feeds like TikTok — which is now being used as a primary search engine by a new generation of internet users...

Google Reader shut down in 2013, taking with it the last vestiges of the blogosphere. Search inside of Google Groups has repeatedly broken over the years. Blogger still works, but without Google Reader as a hub for aggregating it, most publishers started making native content on platforms like Facebook and Instagram and, more recently, TikTok. Discoverability of the open web has suffered. Pinterest has been accused of eating Google Image Search results. And the recent protests over third-party API access at Reddit revealed how popular Google has become as a search engine not for Google's results but for Reddit content. Google's place in the hierarchy of Big Tech is slipping enough that some are even admitting that Apple Maps is worth giving another chance, something unthinkable even a few years ago. On top of it all, OpenAI's massively successful ChatGPT has dragged Google into a race against Microsoft to build a completely different kind of search, one that uses a chatbot interface supported by generative AI.

Their article quotes the founder of the long-ago Google-watching blog, "Google Blogoscoped," who remembers that when Google first came along, "they were ad-free with actually relevant results in a minimalistic kind of design. If we fast-forward to now, it's kind of inverted now. The results are kind of spammy and keyword-built and SEO stuff. And so it might be hard to understand for people looking at Google now how useful it was back then."

The question, of course, is when did it all go wrong? How did a site that captured the imagination of the internet and fundamentally changed the way we communicate turn into a burned-out Walmart at the edge of town? Well, if you ask Anil Dash, it was all the way back in 2003 — when the company turned on its AdSense program. "Prior to 2003-2004, you could have an open comment box on the internet. And nobody would pretty much type in it unless they wanted to leave a comment. No authentication. Nothing. And the reason why was because who the fuck cares what you comment on there. And then instantly, overnight, what happened?" Dash said. "Every single comment thread on the internet was instantly spammed. And it happened overnight...."

As he sees it, Google's advertising tools gave links a monetary value, killing anything organic on the platform. From that moment forward, Google cared more about the health of its own network than the health of the wider internet. "At that point it was really clear where the next 20 years were going to go," he said.

Education

Caltech To Accept Khan Academy Success As Option For Admission (latimes.com)

"Given that too many schools don't teach calculus, chemistry and physics, Caltech is allowing potential undergraduates to demonstrate their ability in these fields by using Khan Academy," writes Slashdot reader Bruce66423. The Los Angeles Times reports: One of Caltech's alternative paths is taking Khan Academy's free, online classes and scoring 90% or higher on a certification test. Sal Khan, academy founder, said Caltech's action is a "huge deal" for equitable access to college. While Caltech is small -- only 2,400 students, about 40% of them undergraduates -- Khan said he hoped its prestigious reputation would encourage other institutions to examine their admission barriers and find creative solutions to ease them. The Pasadena-based institute, with a 3% admission rate last year, boasts 46 Nobel laureates and cutting-edge research in such fields as earthquake engineering, behavioral genetics, geochemistry, quantum information and aerospace. "You have one of the most academically rigorous schools on the planet that has arguably one of the highest bars for admission, saying that an alternative pathway that is free and accessible to anyone is now a means to meeting their requirements," said Khan, whose nonprofit offers free courses, test prep and tutoring to more than 152 million users. [...]

The impetus for the policy change began in February, when Pallie, the admissions director, and two Caltech colleagues attended a workshop on equity hosted by the National Assn. for College Admission Counseling. They were particularly struck by one speaker, Melodie Baker of Just Equations, a nonprofit that seeks to widen math opportunities. As Baker pointed out the lack of access to calculus for many students, Pallie and her team began to question Caltech's admission requirement for the course, along with physics and chemistry. Pallie and Jared Leadbetter, a professor of environmental microbiology who heads the faculty admissions committee, began to look into potential course alternatives. Pallie connected with Khan's team, which started a second nonprofit, Schoolhouse.world, during the pandemic in 2020 to offer free tutoring. Peer tutors on the platform certify they are qualified for their jobs by scoring at least 90% on the course exam and videotaping themselves explaining how they solved each problem on it. The video helps ensure that the students actually took the exam themselves and understand the material. That video feature gave Caltech assurances about the integrity of the alternative path.

Under the new process, students would take a calculus, physics or chemistry class offered by Khan Academy and use the Schoolhouse platform to certify their mastery of the content as tutors do with a 90% score or better on the exam and a videotaped explanation of their reasoning. Proof of certification is required within one week of the application deadline, which is in November for early action and January for regular decisions. Pallie and Leadbetter also wanted to test whether the Khan Academy courses are sufficiently rigorous. Several Caltech undergraduates took the courses to assess whether all concepts were covered in enough breadth and depth to pass the campus placement exams in those subjects. Miranda, a rising Caltech junior studying mechanical engineering, took the calculus course and gave it a thumbs-up, although she added that students would probably want to use additional textbooks and other study materials to deepen their preparation for Caltech.

AI

Microsoft AI Suggests Food Bank As a 'Cannot Miss' Tourist Spot In Canada

An anonymous reader quotes a report from Ars Technica: Late last week, MSN.com's Microsoft Travel section posted an AI-generated article about the "cannot miss" attractions of Ottawa that includes the Ottawa Food Bank, a real charitable organization that feeds struggling families. In its recommendation text, Microsoft's AI model wrote, "Consider going into it on an empty stomach." Titled, "Headed to Ottawa? Here's what you shouldn't miss!," (archive here) the article extols the virtues of the Canadian city and recommends attending the Winterlude festival (which only takes place in February), visiting an Ottawa Senators game, and skating in "The World's Largest Naturallyfrozen Ice Rink" (sic).

As the No. 3 destination on the list, Microsoft Travel suggests visiting the Ottawa Food Bank, likely drawn from a summary found online but capped with an unfortunate turn of phrase: "The organization has been collecting, purchasing, producing, and delivering food to needy people and families in the Ottawa area since 1984. We observe how hunger impacts men, women, and children on a daily basis, and how it may be a barrier to achievement. People who come to us have jobs and families to support, as well as expenses to pay. Life is already difficult enough. Consider going into it on an empty stomach."

That last line is an example of the kind of empty platitude (or embarrassing mistaken summary) one can easily find in AI-generated writing, inserted thoughtlessly because the AI model behind the article cannot understand the context of what it is doing. The article is credited to "Microsoft Travel," and it is likely the product of a large language model (LLM), a type of AI model trained on a vast scrape of text found on the Internet.

The Almighty Buck

Disney, Netflix, and More Are Fighting FTC's 'Click To Cancel' Proposal (businessinsider.com) 195

Disney, Netflix, and other media and entertainment giants are pushing back against the FTC's "click to cancel" proposal (Warning: source paywalled; alternative source) that would make it easier for people to cancel streaming, gaming, and other services. Insider reports: Companies of all stripes have angered consumers by making services all too easy to sign up for but often confoundingly difficult to cancel, with gyms and news outlets considered among the worst offenders. The FTC has gone after individual companies; it recently sued Amazon, alleging the etailer "tricked" people into signing up for Amazon Prime. That followed the FTC's proposal in March for a regulation that's intended "to make it as easy for consumers to cancel their enrollment as it was to sign up." The policy would cover providers of both digital and physical subscriptions, from streamers and gym memberships to phone companies and cable TV distributors. The new rule would require companies to offer a simple mechanism for users to cancel subscriptions the same way they signed up. For example, you wouldn't have to cancel a service in person or over the phone if you signed up for it online. "I can't tell you how much time I've spent trying to cancel subscriptions I never wanted, let alone the cost!" one person wrote in a comment to the FTC.

The Internet & Television Association, which counts Disney, Paramount, and Warner Bros. Discovery as members, said in its public comment that the proposed reg is so vague, it would lead marketers to be excessive in their disclosures, leaving consumers "inundated" and "confused." The reg would even infringe on its members' freedom of speech, the association argued. "The proposal would also severely curtail or, in some cases, even prohibit companies from communicating with their customers, in violation of the First Amendment," the association wrote. Sirius XM wrote in its comments that one proposed requirement -- that companies maintain records of phone calls with customers -- would cost the company "several million" dollars a year to comply with. The Entertainment Software Association, the video gaming trade organization, noted that the FTC's proposed disclosure requirements "would interfere with game play and customer enjoyment." The ESA wrote that "most consumers understand autorenewal offers and are knowing and willing participants in the marketplace" and that letting customers cancel immediately would prevent member companies from offering them alternative plans or discounts. The ESA was joined in its comments by the Digital Media Association and Motion Picture Association, whose members include Netflix, Sony Pictures Entertainment, and Universal Pictures. The FTC will examine the feedback it's received through public comment before considering a final rule.

The Courts

Google Hit With Lawsuit Alleging It Stole Data From Millions of Users To Train Its AI Tools (cnn.com) 46

"CNN reports on a wide-ranging class action lawsuit claiming Google scraped and misused data to train its AI systems," writes long-time Slashdot reader david.emery. "This goes to the heart of what can be done with information that is available over the internet." From the report: The complaint alleges that Google "has been secretly stealing everything ever created and shared on the internet by hundreds of millions of Americans" and using this data to train its AI products, such as its chatbot Bard. The complaint also claims Google has taken "virtually the entirety of our digital footprint," including "creative and copywritten works" to build its AI products. The complaint points to a recent update to Google's privacy policy that explicitly states the company may use publicly accessible information to train its AI models and tools such as Bard.

In response to an earlier Verge report on the update, the company said its policy "has long been transparent that Google uses publicly available information from the open web to train language models for services like Google Translate. This latest update simply clarifies that newer services like Bard are also included." [...] The suit is seeking injunctive relief in the form of a temporary freeze on commercial access to and commercial development of Google's generative AI tools like Bard. It is also seeking unspecified damages and payments as financial compensation to people whose data was allegedly misappropriated by Google. The firm says it has lined up eight plaintiffs, including a minor.
"Google needs to understand that 'publicly available' has never meant free to use for any purpose," Tim Giordano, one of the attorneys at Clarkson bringing the suit against Google, told CNN in an interview. "Our personal information and our data is our property, and it's valuable, and nobody has the right to just take it and use it for any purpose."

The plaintiffs, the Clarkson Law Firm, previously filed a similar lawsuit against OpenAI last month.

Encryption

Security Researchers Latest To Blast UK's Online Safety Bill As Encryption Risk (techcrunch.com) 5

An anonymous reader quotes a report from TechCrunch: Nearly 70 IT security and privacy academics have added to the clamor of alarm over the damage the U.K.'s Online Safety Bill could wreak to, er, online safety unless it's amended to ensure it does not undermine strong encryption. Writing in an open letter (PDF), 68 U.K.-affiliated security and privacy researchers have warned the draft legislation poses a stark risk to essential security technologies that are routinely used to keep digital communications safe.

"As independent information security and cryptography researchers, we build technologies that keep people safe online. It is in this capacity that we see the need to stress that the safety provided by these essential technologies is now under threat in the Online Safety Bill," the academics warn, echoing concerns already expressed by end-to-end encrypted comms services such as WhatsApp, Signal and Element -- which have said they would opt to withdraw services from the market or be blocked by U.K. authorities rather than compromise the level of security provided to their users. [...] "We understand that this is a critical time for the Online Safety Bill, as it is being discussed in the House of Lords before being returned to the Commons this summer," they write. "In brief, our concern is that surveillance technologies are deployed in the spirit of providing online safety. This act undermines privacy guarantees and, indeed, safety online."

The academics, who hold professorships and other positions at universities around the country -- including a number of Russell Group research-intensive institutions such as King's College and Imperial College in London, Oxford and Cambridge, Edinburgh, Sheffield and Manchester to name a few -- say their aim with the letter is to highlight "alarming misunderstandings and misconceptions around the Online Safety Bill and its interaction with the privacy and security technologies that our daily online interactions and communication rely on."
"There is no technological solution to the contradiction inherent in both keeping information confidential from third parties and sharing that same information with third parties," the experts warn, adding: "The history of 'no one but us' cryptographic backdoors is a history of failures, from the Clipper chip to DualEC. All technological solutions being put forward share that they give a third party access to private speech, messages and images under some criteria defined by that third party."

Last week, Apple publicly voiced its opposition to the bill. The company said in a statement: "End-to-end encryption is a critical capability that protects the privacy of journalists, human rights activists, and diplomats. It also helps everyday citizens defend themselves from surveillance, identity theft, fraud, and data breaches. The Online Safety Bill poses a serious threat to this protection, and could put UK citizens at greater risk. Apple urges the government to amend the bill to protect strong end-to-end encryption for the benefit of all."

United Kingdom

UK Tightens Online Safety Bill Again as It Nears Final Approval (bloomberg.com) 31

The UK made last-minute amendments toughening up its sweeping, long-awaited Online Safety Bill following scrutiny in Parliament's upper chamber, the House of Lords. From a report: Internet companies carrying pornographic content will be explicitly required to use age verification or estimation measures, and ensure these methods are effective, the Department for Science, Innovation and Technology said in an emailed statement Friday. Executives will be held personally responsible for child safety on their platforms, the statement said.

DSIT didn't respond to follow-up questions about the detail of this policy. Regulator Ofcom will be empowered to retrieve data on the online activity of deceased children to understand if and how their online activity may have played any role in their death, if requested by a coroner, the government said. It also announced Ofcom will research the role that app stores play in children's access to harmful content. The watchdog will also publish guidance on how platforms can reduce risks to women and have to improve public literacy of disinformation.

Security

SMS Phishers Harvested Phone Numbers, Shipment Data From UPS Tracking Tool (krebsonsecurity.com) 12

An anonymous reader quotes a report from KrebsOnSecurity: The United Parcel Service (UPS) says fraudsters have been harvesting phone numbers and other information from its online shipment tracking tool in Canada to send highly targeted SMS phishing (a.k.a. "smishing") messages that spoofed UPS and other top brands. The missives addressed recipients by name, included details about recent orders, and warned that those orders wouldn't be shipped unless the customer paid an added delivery fee. In a snail mail letter sent this month to Canadian customers, UPS Canada Ltd. said it is aware that some package recipients have received fraudulent text messages demanding payment before a package can be delivered, and that it has been working with partners in its delivery chain to try to understand how the fraud was occurring.

"During that review, UPS discovered a method by which a person who searched for a particular package or misused a package look-up tool could obtain more information about the delivery, potentially including a recipient's phone number," the letter reads. "Because this information could be misused by third parties, including potentially in a smishing scheme, UPS has taken steps to limit access to that information." The written notice goes on to say UPS believes the data exposure "affected packages for a small group of shippers and some of their customers from February 1, 2022 to April 24, 2023." [...]

In a statement provided to KrebsOnSecurity, Sandy Springs, Ga.-based UPS [NYSE:UPS] said the company has been working with partners in the delivery chain to understand how that fraud was being perpetrated, as well as with law enforcement and third-party experts to identify the cause of this scheme and to put a stop to it. "Law enforcement has indicated that there has been an increase in smishing impacting a number of shippers and many different industries," reads an email from Brian Hughes, director of financial and strategy communications at UPS. "Out of an abundance of caution, UPS is sending privacy incident notification letters to individuals in Canada whose information may have been impacted," Hughes said. "We encourage our customers and general consumers to learn about the ways they can stay protected against attempts like this by visiting the UPS Fight Fraud website."

Youtube

Why YouTube Could Give Google an Edge in AI (theinformation.com) 30

Google last month upgraded its Bard chatbot with a new machine-learning model that can better understand conversational language and compete with OpenAI's ChatGPT. As Google develops a sequel to that model, it may hold a trump card: YouTube. From a report: The video site, which Google owns, is the single biggest and richest source of imagery, audio and text transcripts on the internet. And Google's researchers have been using YouTube to develop its next large language model, Gemini, according to a person with knowledge of the situation. The value of YouTube hasn't been lost on OpenAI, either: The startup has secretly used data from the site to train some of its artificial intelligence models, said one person with direct knowledge of the effort. AI practitioners who compete with Google say the company may gain an edge from owning YouTube, which gives it more complete access to the video data than rivals that scrape the videos. That's especially important as AI developers face new obstacles to finding high-quality data on which to train and improve their models. Major website publishers from Reddit to Stack Exchange to DeviantArt are increasingly blocking developers from downloading data for that purpose. Before those walls came up, AI startups used data from such sites to develop AI models, according to the publishers and disclosures from the startups.

The advantage that Google gains in AI from owning YouTube may reinforce concerns among antitrust regulators about Google's power. On Wednesday, the European Commission kicked off a complaint about Google's power in the ad tech world, contending that Google favors its "own online display advertising technology services to the detriment of competing providers." The U.S. Department of Justice in January sued Google over similar issues. Google could use audio transcriptions or descriptions of YouTube videos as another source of text for training Gemini, leading to more-sophisticated language understanding and the ability to generate more-realistic conversational responses. It could also integrate video and audio into the model itself, giving it the multimodal capabilities many researchers believe are the next frontier in AI, according to interviews with nearly a dozen people who work on these types of machine-learning models. Google CEO Sundar Pichai told investors earlier this month that Gemini, which is still in development, is exhibiting multimodal capabilities not seen in any other model, though he didn't elaborate.

Google

Google Is Weaving Generative AI Into Online Shopping Features (bloomberg.com) 10

Google is bringing generative AI technology to shopping, aiming to get a jump on e-commerce sites like Amazon. From a report: The Alphabet-owned company announced features Wednesday aimed at helping people understand how apparel will fit on them, no matter their body size, and added capabilities for finding products using its search and image-recognition technology. Additionally, Google introduced new ways to research travel destinations and map routes using generative AI -- technology that can craft text, images or even video from simple prompts.

"We want to make Google the place for consumers to come shop, as well as the place for merchants to connect with consumers," Maria Renz, Google's vice president of commerce, said in an interview ahead of the announcement. "We've always been committed to an open ecosystem and a healthy web, and this is one way where we're bringing this technology to bear across merchants." Google is the world's dominant search engine, but 46% of respondents in a survey of US shoppers conducted last year said they still started their product searches and research on Amazon, according to the research firm CivicScience. TikTok, too, is making inroads, CivicScience's research found -- 18% of Gen Z online shoppers turn to the platform first. Google is taking note, with some of its new, AI-powered shopping exploration features aimed at capturing younger audiences.

A new virtual "try-on" feature, launching on Wednesday, will let people see how clothes fit across a range of body types, from XXS to 4XL sizes. Apparel will be overlaid on top of images of diverse models that the company photographed while developing the capability. Google said it was able to launch such a service because of a new image-based AI model that it developed internally, and the company is releasing a new research paper detailing its work alongside the announcement.

Facebook

More Than 2,000 Families Suing Social Media Companies Over Kids' Mental Health (cbsnews.com) 92

schwit1 shares a report from CBS News: When whistleblower Frances Haugen pulled back the curtain on Facebook in the fall of 2021, thousands of pages of internal documents showed troubling signs that the social media giant knew its platforms could be negatively impacting youth and was doing little to effectively change that. With around 21 million American adolescents on social media, parents took note. Now, families are suing social media. Since we first reported this story last December, the number of families pursuing lawsuits has grown to over 2,000. More than 350 lawsuits are expected to move forward this year against TikTok, Snapchat, YouTube, Roblox and Meta -- the parent company to Instagram and Facebook.

Kathleen Spence: They're holding our children hostage and they're seeking and preying on them. Sharyn Alfonsi: Preying on them? Kathleen Spence: Yes. The Spence family is suing social media giant Meta. Kathleen and Jeff Spence say Instagram led their daughter Alexis into depression and to an eating disorder at the age of 12. [...] Attorney Matt Bergman represents the Spence family. He started the Social Media Victims Law Center after reading the Facebook papers and is now working with more than 1,800 families who are pursuing lawsuits against social media companies like Meta. Matt Bergman: Time and time again, when they have an opportunity to choose between safety of our kids and profits, they always choose profits.

This summer, Bergman and his team plan on starting the discovery process for the federal case against Meta and other social media companies, a multi-million dollar suit that he says is more about changing policy than financial compensation. Matt Bergman: They have intentionally designed a product that is addictive. They understand that if children stay online, they make more money. It doesn't matter how harmful the material is.

Supercomputing

UK To Invest 900 Million Pounds In Supercomputer In Bid To Build Own 'BritGPT' (theguardian.com) 35

An anonymous reader quotes a report from The Guardian: The UK government is to invest 900 million pounds in a cutting-edge supercomputer as part of an artificial intelligence strategy that includes ensuring the country can build its own "BritGPT". The Treasury outlined plans to spend around 900 million pounds on building an exascale computer, which would be several times more powerful than the UK's biggest computers, and establishing a new AI research body. An exascale computer can be used for training complex AI models, but also has other uses across science, industry and defense, including modeling weather forecasts and climate projections. The Treasury said the investment will "allow researchers to better understand climate change, power the discovery of new drugs and maximize our potential in AI."

An exascale computer is one that can carry out more than one billion billion simple calculations a second, a metric known as an "exaflops". Only one such machine is known to exist, Frontier, which is housed at America's Oak Ridge National Laboratory and used for scientific research -- although supercomputers have such important military applications that it may be the case that others already exist but are not acknowledged by their owners. Frontier, which cost about 500 million pounds to produce and came online in 2022, is more than twice as powerful as the next fastest machine.
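The "exaflops" figure above is easier to grasp with a quick back-of-the-envelope calculation. A minimal sketch, in which Frontier's roughly 1.1-exaflops rate and the laptop throughput figure are order-of-magnitude assumptions for illustration, not measured benchmarks:

```python
# "Exascale": 1 exaflops is 10**18 floating-point operations per second.
EXAFLOPS = 10**18  # operations per second

frontier_flops = 1.1 * EXAFLOPS  # Frontier's reported rate (rough assumption)
laptop_flops = 10**11            # order of magnitude for a consumer laptop

# One second of Frontier time is worth roughly this many laptop-seconds:
laptop_seconds = frontier_flops / laptop_flops
print(f"~{laptop_seconds:,.0f} laptop-seconds per Frontier-second")
```

On these assumptions, a single second of Frontier's work would take a typical laptop on the order of ten million seconds, i.e. several months.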

The Treasury said it would award a 1 million-pound prize every year for the next 10 years to the most groundbreaking AI research. The award will be called the Manchester Prize, in memory of the so-called Manchester Baby, a forerunner of the modern computer built at the University of Manchester in 1948. The government will also invest 2.5 billion pounds over the next decade in quantum technologies. Quantum computing is based on quantum physics -- which looks at how the subatomic particles that make up the universe work -- and quantum computers are capable of computing their way through vast numbers of different outcomes.
