The Internet

Political Polarization Toned Down Through Anonymous Online Chats (arstechnica.com) 293

An anonymous reader quotes a report from Ars Technica: Political polarization in the US has become a major issue, as Republicans and Democrats increasingly inhabit separate realities on topics as diverse as election results and infectious diseases. [...] Now, a team of researchers has tested whether social media can potentially help the situation by getting people with opposite political leanings talking to each other about controversial topics. While this significantly reduced polarization, it appeared to be more effective for Republican participants. The researchers zeroed in on two concepts to design their approach. The first is the idea that simply getting people to communicate across the political divide might show them that at least some of their opponents aren't as extreme as they're often made out to be. The second is that anonymity would allow people to focus on the content of their discussion, rather than worrying about whether what they were saying could be traced back to them.

The researchers realized that they couldn't have any sort of control over conversations on existing social networks. So, they built their own application and hired professionals to do the graphics, support, and moderation. [...] People were randomly assigned to a few conditions. Some didn't use the app at all and were simply asked to write an essay on one of the topics under consideration (immigration or gun control). The rest were asked to converse on the platform about one of these topics. Every participant in these conversations was paired with a member of the opposing political party. Their partners were either unlabeled, labeled as belonging to the opposing party, or labeled as belonging to the same party (although that last label was false). Both before and after use of the app, participants answered questions about their view of politicized issues, members of their own party, and political opponents. These were analyzed in terms of issues and social influences, as well as rolled into a single polarization index for the analysis.

The conversations appeared to have an effect, with polarization lowered by about a quarter of a standard deviation among those who engaged with political opponents who were labeled as such. Somewhat surprisingly, conversation partners who were mislabeled had a nearly identical effect, presumably because they suggested that a person's own party contained a diversity of perspectives on the topic. In cases where no party affiliation was given, the depolarization was smaller (0.15 standard deviations). The striking thing is that most of the change came from Republican participants, whose polarization was reduced by 0.4 standard deviations. In contrast, Democratic participants only saw a drop of 0.1 standard deviations -- a change that wasn't statistically significant. The error bars of the two groups overlapped, however, so while the difference looks large, it's not clear what it might tell us. The researchers went back, ran the conversations through sentiment analysis, and focused on the people whose polarization had dropped the most. They found that those people's conversation partners had used less heated language from the start of the conversation. So it appears that displaying respect for your political opponents can still make a difference, at least in one-on-one conversations. And while the conversations had a larger impact on people's views of individual issues, they also influenced opinions of political opponents more generally, and the difference between the two effects wasn't statistically significant.
The findings have been published in the journal Nature Human Behaviour.
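The effect sizes above are quoted in standard-deviation units, i.e. standardized mean differences. As a rough illustration of how such a figure is computed (with made-up numbers, not the study's data), one common convention divides the mean pre-to-post change by the standard deviation of the baseline scores:

```python
import statistics

# Hypothetical pre- and post-conversation polarization index scores
# for a small group of participants (illustrative values only).
pre  = [7.1, 6.8, 7.5, 6.9, 7.3, 7.0]
post = [6.6, 6.5, 7.1, 6.4, 6.9, 6.7]

# Standardized effect: mean change divided by the standard deviation
# of the baseline scores (one convention among several).
mean_change = statistics.mean(p0 - p1 for p0, p1 in zip(pre, post))
effect_size = mean_change / statistics.stdev(pre)
print(round(effect_size, 2))  # 1.53 for these illustrative numbers
```

A "quarter of a standard deviation," as reported for labeled opponents, would correspond to an effect size of about 0.25 on this scale.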
United States

Illinois Just Made It Possible To Sue People For Doxxing Attacks (arstechnica.com) 9

An anonymous reader quotes a report from Ars Technica: Last Friday, Illinois became one of the few states to pass an anti-doxxing law, making it possible for victims to sue attackers who "intentionally" publish their personally identifiable information with intent to harm or harass them. (Doxxing is sometimes spelled "doxing.") The Civil Liability for Doxing Act, which takes effect on January 1, 2024, passed after a unanimous vote. It allows victims to recover damages and to request "a temporary restraining order, emergency order of protection, or preliminary or permanent injunction to restrain and prevent the disclosure or continued disclosure of a person's personally identifiable information or sensitive personal information."

It's the first law of its kind in the Midwest, the Daily Herald reported, and is part of a push by the Anti-Defamation League (ADL) to pass similar laws at the state and federal levels. ADL's Midwest regional director, David Goldenberg, told the Daily Herald that "over the past few years," ADL has seen doxxing become an effective way of "weaponizing" the Internet. ADL has helped similar laws pass in Maryland, Nevada, Oregon, and Washington. [...] The law does not involve criminal charges but imposes civil liability on individuals who dox any Illinois residents. Actions can also be brought against individuals when "any element" of a doxxing offense occurs in the state. [...]

Goldenberg told Ars that the Illinois law was written to focus not on how information was found and gathered by people seeking to dox others, but on what they did with the information and how much harm they caused. Because it doesn't focus on the methods used to mine personally identifiable information, the law may need less updating as the Internet evolves. "The reality is that those who are using the Internet to spread hate, to spread misinformation, to do bad are pretty nimble and technology changes on a near daily basis," Goldenberg told Ars. "The law was crafted in a way that ensures that if technology changes, and people use new technologies to share someone's personally identifiable information with the intent to do harm and that harm actually happens, this law remains relevant."

The Courts

Texas' TikTok Ban Hit With First Amendment Lawsuit (cnn.com) 37

Texas's ban on TikTok at state institutions violates the First Amendment, claims a lawsuit filed Thursday by a group of academics and civil society researchers. CNN reports: The Knight First Amendment Institute at Columbia University filed the lawsuit on behalf of the Coalition for Independent Technology Research, which works to study the impact of technology on society. The lawsuit specifically challenges Texas' TikTok ban in relation to public universities, saying it compromises academic freedom and impedes vital research. "The ban is not just ineffective but counterproductive. It's impeding researchers and scholars from studying the very things that Texas says it's concerned about -- like data-collection and disinformation," Jameel Jaffer, executive director of the Institute, told CNN.

The lawsuit cites the example of a University of North Texas researcher who studies young people's use of social media, who has been forced to abandon research projects that rely on university computers and to remove material about TikTok from her courses. The Knight Institute lawsuit notes that Texas has not imposed a ban on other online platforms that collect similar user data, such as Meta and Google. It further argues that a ban doesn't "meaningfully" constrain China's ability to collect sensitive data about Americans, because this data is widely available from other data brokers.

"It's entirely legitimate for government officials to be concerned about social media platforms' data-collection practices, but imposing broad bans on Americans' access to the platforms isn't a reasonable, effective, or constitutional response to those concerns," Jaffer told CNN. "Like it or not, TikTok is an immensely popular communications platform, and its policies and practices are influencing culture and politics around the world," said Dave Karpf, a Coalition for Independent Technology Research board member and associate professor in the George Washington University School of Media and Public Affairs. "It's important that scholars and researchers be able to study the platform and illuminate the risks associated with it. Ironically, Texas's misguided ban is impeding our members from studying the very risks that Texas says it wants to address."

United Kingdom

UK Tightens Online Safety Bill Again as It Nears Final Approval (bloomberg.com) 31

The UK made last-minute amendments toughening up its sweeping, long-awaited Online Safety Bill following scrutiny in Parliament's upper chamber, the House of Lords. From a report: Internet companies carrying pornographic content will be explicitly required to use age verification or estimation measures, and ensure these methods are effective, the Department for Science, Innovation and Technology said in an emailed statement Friday. Executives will be held personally responsible for child safety on their platforms, the statement said.

DSIT didn't respond to follow-up questions about the detail of this policy. Regulator Ofcom will be empowered to retrieve data on the online activity of deceased children, if requested by a coroner, to understand whether and how their online activity may have played a role in their deaths, the government said. It also announced that Ofcom will research the role app stores play in children's access to harmful content. The watchdog will also publish guidance on how platforms can reduce risks to women, and will be required to improve the public's media literacy around disinformation.

Social Networks

Social Media Apps Will Have To Shield Children From Dangerous Stunts (theguardian.com) 62

An anonymous reader quotes a report from The Guardian: Social media firms will be ordered to protect children from encountering dangerous stunts and challenges on their platforms under changes to the online safety bill. The legislation will explicitly refer to content that "encourages, promotes or provides instructions for a challenge or stunt highly likely to result in serious injury" as the type of material that under-18s should be protected from. The bill will also require social media companies to proactively prevent children from seeing the highest risk forms of content, such as material encouraging suicide and self-harm. Tech firms could be required to use age-checking measures to prevent under-18s from seeing such material.

In another change to the legislation, which is expected to become law this year, social media platforms will have to introduce tougher age-checking measures to prevent children from accessing pornography -- bringing them in line with the bill's measures for mainstream sites such as Pornhub. Services that publish or allow pornography on their sites will be required to introduce "highly effective" age-checking measures such as age estimation tools that estimate someone's age from a selfie. Other amendments include requiring the communications watchdog Ofcom to produce guidance for tech firms on protecting women and girls online. Ofcom, which will oversee implementation of the act once it comes into force, will be required to consult with the domestic abuse commissioner and victims commissioner when producing the guidance, in order to ensure it reflects the voices of victims.

The updated bill will also criminalize the sharing of deepfake intimate images in England and Wales. In a further change, it will require platforms to ask adult users if they wish to avoid content that promotes self-harm or eating disorders, as well as racist content. Once the law comes into force, breaches will be punishable by fines of up to £18m or 10% of global turnover, whichever is greater. In the most extreme cases, Ofcom will be able to block platforms.

United States

Pornhub Attacks States for Passing 'Unsafe' Age-Verification Laws (arstechnica.com) 98

Pornhub visitors in Virginia, Mississippi, and Arkansas will see a "very important message" on the adult website's homepage starting today. From a report: Pornhub's public service announcement prompts visitors to contact representatives and oppose recently passed age-verification laws in these states that Pornhub claims put children's and all users' privacy at risk. If users don't support Pornhub before the laws go into effect, the company says, Pornhub could potentially restrict access in these states -- a threat it already followed through on in Utah.

In the PSA, adult entertainer Cherie Deville tells Pornhub users that instead of states requiring ID to access adult content, "the best and most effective solution for protecting children and adults alike is to verify users' age at a device level and allow or block access to age-restricted materials and websites accordingly." According to CNN, this PSA is part of a larger effort by Pornhub and its private equity owners, Ethical Capital Partners (ECP), to work with big tech companies to create new device-based age verification solutions. So far, ECP partner Solomon Friedman told CNN that ECP has lobbied Apple, Google, and Microsoft to "develop a technological standard that might turn a user's electronic device into the proof of age necessary to access restricted online content."

AI

First Empirical Study of the Real-World Economic Effects of New AI Systems (npr.org) 39

An anonymous reader quotes a report from NPR: Back in 2017, Brynjolfsson published a paper (PDF) in one of the top academic journals, Science, which outlined the kind of work that he believed AI was capable of doing. It was called "What Can Machine Learning Do? Workforce Implications." Now, Brynjolfsson says, "I have to update that paper dramatically given what's happened in the past year or two." Sure, the current pace of change can feel dizzying and kinda scary. But Brynjolfsson is not catastrophizing. In fact, quite the opposite. He's earned a reputation as a "techno-optimist." And, recently at least, he has a real reason to be optimistic about what AI could mean for the economy. Last week, Brynjolfsson, together with MIT economists Danielle Li and Lindsey R. Raymond, released what is, to the best of our knowledge, the first empirical study of the real-world economic effects of new AI systems. They looked at what happened to a company and its workers after it incorporated a version of ChatGPT, a popular interactive AI chatbot, into workflows.

What the economists found offers potentially great news for the economy, at least in one dimension that is crucial to improving our living standards: AI caused a group of workers to become much more productive. Backed by AI, these workers were able to accomplish much more in less time, with greater customer satisfaction to boot. At the same time, however, the study also shines a spotlight on just how powerful AI is, how disruptive it might be, and suggests that this new, astonishing technology could have economic effects that change the shape of income inequality going forward.
Brynjolfsson and his colleagues described how an undisclosed Fortune 500 company implemented an earlier version of OpenAI's ChatGPT to assist its customer support agents in troubleshooting technical issues through online chat windows. The AI chatbot, trained on previous conversations between agents and customers, improved the performance of less experienced agents, making them as effective as those with more experience. The use of AI led to a 14% increase in productivity on average, as well as higher customer satisfaction ratings and reduced turnover rates. However, the study also revealed that more experienced agents did not see significant benefits from using AI.

The findings suggest that AI has the potential to improve productivity and reduce inequality by benefiting workers who were previously left behind in the technological era. Nonetheless, it raises questions about how the benefits of AI should be distributed and whether it may devalue specialized skills in certain occupations. While the impact of AI is still being studied, its ability to handle non-routine tasks and learn on the fly indicates that it could have different effects on the job market compared to previous technologies.
Social Networks

Anti-Porn Lobbyists Pressure Reddit To Shut Down Its NSFW Communities (vice.com) 187

An anonymous reader quotes a report from Motherboard: An anti-pornography group that claims all adult content is unhealthy is taking aim at Reddit, one of the biggest online platforms for sharing porn and sex worker resources. The National Center on Sexual Exploitation (NCOSE), formerly Morality in Media, has celebrated policy changes that resulted in adult performers losing their incomes, taken credit for pressuring Instagram to ban Pornhub from the platform, and encouraged its followers to help shut down sites that host legal adult content, causing real-world harm to sex workers and pushing them toward the very exploitation it claims to aim to prevent. The letter, signed by 320 "anti-sexual exploitation and violence experts," according to NCOSE, accuses Reddit of not doing enough to prevent image-based sexual abuse. The letter's co-signatories don't just push for better protection against non-consensual imagery; they demand that all adult content be banned from the site. This would result in a massive purge of hundreds of subreddits, many of them run by sex workers and used to post consensual, legal content.

"Adopt strong policies against hardcore pornography and sexually explicit content, due to the inability for Reddit to ever sufficiently verify the age or consent of people depicted in such content," the letter urges Reddit. It also demands that the platform "ban users who upload sexually explicit material, especially if the material depicts child sexual abuse material or non-consensually shared intimate images, and prevent them from creating another account." "While these are steps forward, Reddit's failure to enact meaningful age and consent verirication[sic] practices and ineffective moderation strategy continues to allow such content to flourish on its platform," the letter states.
"If they cause enough fuss in the media, over and over, eventually Reddit will decide it's not financially worthwhile to stand up for sanity, and they'll just nuke porn out of convenience," a moderator for r/cumsluts, a 3-million subscriber community for adult content, told Motherboard. "Eventually groups like NCOSE will get porn outlawed from the web in general. It's just a matter of time, and reintroducing the laws several times under different acronyms until people get tired of fighting. I'm very pessimistic about this. Unfortunately, mindlessly shrieking 'Won't somebody please think of the children?' over and over is a dangerously over-effective tactic."

A moderator for r/18_19 told Motherboard that they don't expect Reddit to ban adult content anytime soon, but if it did, that it could push people to decentralized platforms, or platforms that are more difficult to moderate or search. "I don't think Reddit should ban porn or adult communities. In the short term, banning adult content would suck," they said. "A huge number of people come here for that. But it wouldn't be a big deal in the long run. Porn will be available, it would just take a while for it to consolidate around new locations."
Government

Amazon's Vow to Stop Squeezing Its Sellers Was Fake, Says California's Lawsuit (yahoo.com) 50

An anonymous reader shared this recent report from Bloomberg: Amazon continued blocking sellers from offering lower prices on rival sites, despite assuring antitrust enforcers it ended its policy that artificially inflated prices for consumers, according to newly unsealed filings in California's antitrust lawsuit against the e-commerce giant.

The Seattle-based company planned to expand penalties on sellers who presented lower prices outside Amazon, even after it claimed in 2019 that it stopped punishing third-party merchants who posted better deals on Walmart, Target, eBay, and, in some instances, their own websites, according to previously redacted portions of the suit that were made public.

The newly unsealed filings include an internal document in which Amazon states point-blank that despite "the recent removal of the price parity clause in our Business Solutions Agreement... our expectations and policies have not changed."

"Many of the complaint's allegations are inaccurate," an Amazon spokesperson told Bloomberg. "We look forward to presenting the facts to the court." California Attorney General Rob Bonta is seeking a court order blocking Amazon from continuing to engage in what he alleged is anticompetitive behavior, as well as compensation for consumers in the most populous U.S. state. A similar suit filed by Washington, D.C., was dismissed in 2021...

The 2022 suit came three years after Bloomberg reported that the company's policies were forcing sellers to charge more on competing sites like Walmart because Amazon would bury their products in search results if they offered lower prices elsewhere...

California's probe into Amazon's practices also highlighted concerns that ads on the platform are unhelpful for customers.

Amazon advertising revenue grew 19% in the fourth quarter, to $11.6 billion. The fast-growing revenue source helps prop up Amazon's otherwise low-margin online retail business that carries the high expense of operating warehouses around the country and delivering orders to shoppers' homes.

California's attorney general issued an official statement arguing that Amazon "has orchestrated the substantial market power it now enjoys through agreements at the retail and wholesale level that prevent effective price competition in the online retail marketplace." And it includes this fierce denunciation attributed directly to attorney general Bonta:

"As California families struggle to make ends meet, we're in court to stop Amazon from engaging in anticompetitive practices that keep prices artificially high and stifle competition. There is no shortage of evidence showing that the 'Everything store' is costing consumers more for just about everything. Amazon coerces merchants into agreements that keep prices artificially high, knowing full well that they can't afford to say no. With other e-commerce platforms unable to compete on price, consumers turn to Amazon as a one-stop shop for all their purchases. This perpetuates Amazon's market dominance, allowing the company to make increasingly untenable demands on its merchants and costing consumers more at checkout across California. We won't stand by while Amazon uses coercive contracting practices to dominate the market at the expense of California consumers, small business owners, and the economy."
Businesses

Groupon, Which Has Lost 99.4% of Its Value Since Its IPO, Names a New CEO (techcrunch.com) 25

An anonymous reader shares a report: A dozen years ago, Groupon shot to fame popularizing the online group buying format, confidently rejecting a $6 billion acquisition offer from Google and instead going public with a $17.8 billion market cap. The company today says it has 14 million active users, but for most of the last decade its financial position has been in slow decline -- with stagnation in its core business model, little success in efforts to diversify, declining revenues, and ongoing losses. And today comes the latest chapter in that story. The Chicago-based company, which today has a market cap of just $103 million (a drop of 99.4% from its public market debut), has appointed Dusan Senkypl, a current board member, as interim CEO. Senkypl will run the company... out of the Czech Republic. His appointment is effective immediately, the company said in a statement today. He replaces Kedar Deshpande, who had been Groupon's CEO for just 15 months.
Privacy

Hackers Claim They Breached T-Mobile More Than 100 Times In 2022 (krebsonsecurity.com) 14

An anonymous reader quotes a report from KrebsOnSecurity: Three different cybercriminal groups claimed access to internal networks at communications giant T-Mobile in more than 100 separate incidents throughout 2022, new data suggests. In each case, the goal of the attackers was the same: Phish T-Mobile employees for access to internal company tools, and then convert that access into a cybercrime service that could be hired to divert any T-Mobile user's text messages and phone calls to another device. The conclusions above are based on an extensive analysis of Telegram chat logs from three distinct cybercrime groups or actors that have been identified by security researchers as particularly active in and effective at "SIM-swapping," which involves temporarily seizing control over a target's mobile phone number.

Countless websites and online services use SMS text messages for both password resets and multi-factor authentication. This means that stealing someone's phone number often can let cybercriminals hijack the target's entire digital life in short order -- including access to any financial, email and social media accounts tied to that phone number. All three SIM-swapping entities that were tracked for this story remain active in 2023, and they all conduct business in open channels on the instant messaging platform Telegram. KrebsOnSecurity is not naming those channels or groups here because they will simply migrate to more private servers if exposed publicly, and for now those servers remain a useful source of intelligence about their activities.

Each advertises their claimed access to T-Mobile systems in a similar way. At a minimum, every SIM-swapping opportunity is announced with a brief "Tmobile up!" or "Tmo up!" message to channel participants. Other information in the announcements includes the price for a single SIM-swap request, and the handle of the person who takes the payment and information about the targeted subscriber. The information required from the customer of the SIM-swapping service includes the target's phone number, and the serial number tied to the new SIM card that will be used to receive text messages and phone calls from the hijacked phone number. Initially, the goal of this project was to count how many times each entity claimed access to T-Mobile throughout 2022, by cataloging the various "Tmo up!" posts from each day and working backwards from Dec. 31, 2022. But by the time we got to claims made in the middle of May 2022, completing the rest of the year's timeline seemed unnecessary. The tally shows that in the last seven-and-a-half months of 2022, these groups collectively made SIM-swapping claims against T-Mobile on 104 separate days -- often with multiple groups claiming access on the same days.
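The tally described above amounts to counting the distinct calendar days on which at least one group posted a claim. A minimal sketch of that kind of count, using hypothetical sample records rather than the actual Telegram logs (the timestamps, group names, and message texts below are all illustrative):

```python
from datetime import datetime

# Hypothetical scraped claim messages: (ISO timestamp, group, text).
claims = [
    ("2022-05-14T09:12:00", "group_a", "Tmo up!"),
    ("2022-05-14T17:45:00", "group_b", "Tmobile up!"),
    ("2022-05-15T11:03:00", "group_a", "Tmo up!"),
    ("2022-06-02T08:30:00", "group_c", "Tmo up!"),
]

# Count distinct calendar days with at least one claim, regardless of
# how many groups claimed access on the same day.
days_with_claims = {datetime.fromisoformat(ts).date() for ts, _, _ in claims}
print(len(days_with_claims))  # 3 distinct days in this sample
```

Note that because the set holds dates rather than messages, multiple groups claiming access on the same day (as KrebsOnSecurity observed) still count as a single day, which is how 104 days can represent well over 104 individual claims.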
In a written statement to KrebsOnSecurity, T-Mobile said this type of activity affects the entire wireless industry.

"And we are constantly working to fight against it," the statement reads. "We have continued to drive enhancements that further protect against unauthorized access, including enhancing multi-factor authentication controls, hardening environments, limiting access to data, apps or services, and more. We are also focused on gathering threat intelligence data, like what you have shared, to help further strengthen these ongoing efforts."
The Courts

GitHub and EFF Back YouTube Ripper In Legal Battle With the RIAA (torrentfreak.com) 20

GitHub and digital rights group EFF have filed briefs supporting stream-ripping site Yout.com in its legal battle with the RIAA. GitHub warns that the lower court's decision threatens to criminalize the work of many other developers. The EFF, meanwhile, stresses that an incorrect interpretation of the DMCA harms people who use stream-rippers lawfully. TorrentFreak reports: In 2020, YouTube ripper Yout.com sued the RIAA, asking a Connecticut district court to declare that the site does not violate the DMCA's anti-circumvention provision. The music group had previously used DMCA takedown notices to remove many of Yout's appearances in Google's search results. This had a significant impact on revenues, the site argued, adding that it always believed it wasn't breaking any laws and hoped the court would agree. Last October, the Connecticut district court concluded that Yout had failed to show that it doesn't circumvent YouTube's technological protection measures. As such, it could be breaking the law. Yout operator Johnathan Nader opted to appeal the decision. Nader's attorneys filed their opening brief (PDF) last week at the Court of Appeals for the Second Circuit, asking it to reverse the lower court's decision. The YouTube ripper is not the only party calling for a reversal. Yesterday, Microsoft-owned developer platform GitHub submitted an amicus brief that argues for the same. And in a separate filing, the EFF also agrees that the lower court's decision should be overturned.

GitHub's brief starts by pointing out that the company takes no position on the ultimate resolution of this appeal, nor does it side with all of Yout's arguments. However, it does believe that the lower court's interpretation of the DMCA is dangerous. The district court held that stream rippers can violate the DMCA's anti-circumvention provision, noting that these tools allow people to download video and audio from YouTube despite the streaming platform's lack of a download button. According to GitHub, this conclusion is premature, dangerous, and places other types of software at risk. GitHub reiterates that stream-ripping tools should not be outlawed: the fact that YouTube doesn't have a download button doesn't mean that tools that enable people to download videos circumvent technological access restrictions. "YouTube's decision not to provide its own 'download' button, however, is not a restriction on access to works. It merely affects how users experience them," GitHub writes. If the court order is allowed to stand, GitHub warns, a broad group of developers could be exposed to criminal liability, effectively chilling technological innovation. YouTube download tools are not the only software at risk: many other widely accepted tools also affect 'how users experience' online websites, and under the district court's expansive interpretation of the DMCA, they too could put their creators at risk.

The Electronic Frontier Foundation (EFF) also submitted an amicus curiae brief (PDF) yesterday. The digital rights group takes an interest in copyright cases, particularly when they get in the way of people's ability to freely use technology. In this instance, EFF points out that stream-rippers such as Yout.com provide a neutral technology with plenty of legal uses. They can be used for infringing purposes, but that's also true of earlier technologies -- the printing press, for example. "Like every reproduction technology -- from the printing press to the smartphone -- these programs, colloquially called 'streamrippers,' have important lawful uses as well as infringing ones," the brief reads. "Video creators, educators, journalists, and human rights organizations all depend on the ability to make copies of user-uploaded videos," EFF adds. In common with GitHub, EFF notes that the absence of a download button on YouTube doesn't imply that download tools automatically violate the DMCA, especially when there are no effective download restrictions on the platform. [...] According to EFF, Yout and similar tools provide the same functions as video cassette recorders once did: they allow people to make copies of videos that are posted publicly by their creators. In addition, these tools are vital for some reporters and useful to creatives who draw on them for future work.

Spam

Google To Stop Exempting Campaign Email From Automated Spam Detection (washingtonpost.com) 94

Google plans to discontinue a pilot program that allows political campaigns to evade its email spam filters, the latest round in the technology giant's tussle with the GOP over online fundraising. The Washington Post reports: The company will let the program sunset at the end of January instead of prolonging it, Google's lawyers said in a filing on Monday. The filing, in U.S. District Court for the Eastern District of California, asked the court to dismiss a complaint lodged by the Republican National Committee accusing Google of "throttling its email messages because of the RNC's political affiliation and views." "The RNC is wrong," Google argued in its motion. "Gmail's spam filtering policies apply equally to emails from all senders, whether they are politically affiliated or not." [...]

While rejecting the GOP's attacks, Google nonetheless bowed to them. The company asked the Federal Election Commission to greenlight the pilot program, available to all campaigns and political committees registered with the federal regulator. The company anticipated at the time that a trial run would last through January 2023. Thousands of public comments implored the FEC to advise against the program, which consumer advocates and other individuals said would overwhelm Gmail users with spam. Anne P. Mitchell, a lawyer and founder of an email certification service called Get to the Inbox, wrote that Google was "opening up the floodgates to their users' inboxes ... to assuage partisan disgruntlement."

The FEC gave its approval in August, with one Democrat joining the commission's three Republicans to clear the way for the initiative. Ultimately, more than 100 committees of both parties signed up for the program, said Google spokesman Jose Castaneda. The RNC was not one of them, as Google emphasized in its motion to dismiss in the federal case in California. "Ironically, the RNC could have participated in a pilot program leading up to the 2022 midterm elections that would have allowed its emails to avoid otherwise-applicable forms of spam detection," the filing stated. "Many other politically-affiliated entities chose to participate in that program, which was approved by the FEC. The RNC chose not to do so. Instead, it now seeks to blame Google based on a theory of political bias that is both illogical and contrary to the facts alleged in its own Complaint." [...] "Indeed, effective spam filtering is a key feature of Gmail, and one of the main reasons why Gmail is so popular," the filing stated.

United States

Senator Wyden Urges FTC Probe of Neustar Over Possible Selling of User Data to Government (msn.com) 25

Until 2020 Neustar was the domain name registry "for a number of top-level domains," according to its page on Wikipedia, "including .biz, .us (on behalf of United States Department of Commerce), .co, .nyc (on behalf of the city of New York), and .in."

But now U.S. Senator Ron Wyden has asked America's Federal Trade Commission to investigate whether Neustar violated the privacy rights of millions, reports the Washington Post, "when it sold records of where they went online to the federal government."

America's Department of Defense funded a research team at Georgia Tech that purchased Neustar's data starting in 2016, notes a letter from Senator Wyden. Wyden has obtained emails between those researchers and "both the FBI and the Department of Justice, indicating that government officials asked the researchers to run specific queries and that the researchers wrote affidavits and reports for the government describing their findings."

But in addition, Wyden now cites a Department of Justice statement (entered in an unrelated court case) which he says makes a concerning assertion: that Neustar executive Rodney Joffe, "who led the company's efforts to sell data to Georgia Tech, was also involved in the sale of DNS data directly to the U.S. government." The court documents say: Rodney Joffe and certain companies with which he was affiliated, including officers and employees of those companies, have provided assistance to and received payment from multiple agencies of the United States government. This has included assistance to the United States intelligence community and law enforcement agencies on cyber security matters. Certain of those companies have maintained contracts with the United States government resulting in payment by the United States of tens of millions of dollars for the provision of, among other things, Domain Name System ('DNS') data. These contracts included classified contracts that required company personnel to maintain security clearances.
From The Washington Post: The stipulation naming entrepreneur Rodney Joffe was the clearest confirmation to date of web histories being sold directly to federal law enforcement and intelligence agencies, instead of through information brokers exempt from restrictions on what telephone companies and websites can share with the government.
Wyden adds: The data that Neustar sold to Georgia Tech may have also included data collected from consumers who were explicitly promised that their data would not be sold to third parties. Between 2018 and 2020, Neustar acquired a competing recursive DNS service, which had previously been operated by Verisign. That service had been advertised to the public by Verisign with unqualified promises that "your public DNS data will not be sold to third parties."

When the product changed hands, users of Verisign's service were seamlessly transitioned to DNS servers that Neustar controlled. This meant that Neustar now received information about the websites accessed by these former Verisign-users, even though neither Verisign nor Neustar provided those users with meaningful, effective notice that the change of ownership had taken place, or that Neustar did not intend to honor the privacy promises that Verisign had previously made to those users. It is unclear if the data Neustar sold to Georgia Tech included data from users who had been promised by Verisign that their data would not be sold.

This is because both Neustar and Verisign have refused to answer questions from my office necessary to determine this important detail.
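The privacy concern above rests on a simple technical fact: a recursive DNS resolver sees the domain of every site its users visit. A minimal sketch of why that is (the log format and field names here are invented for illustration, not Neustar's actual logs):

```python
# Hypothetical resolver query log: each site visit triggers a DNS
# lookup, so the resolver operator records which domains each client
# IP asked about, even without seeing any page contents.
log_lines = [
    "2020-03-01T12:00:01 client=203.0.113.7 qtype=A name=news.example.com",
    "2020-03-01T12:00:09 client=203.0.113.7 qtype=A name=clinic.example.org",
]

def domains_seen(lines):
    """Extract (client IP, queried domain) pairs from each log entry."""
    out = []
    for line in lines:
        # Skip the timestamp, then parse the key=value fields.
        fields = dict(f.split("=", 1) for f in line.split()[1:])
        out.append((fields["client"], fields["name"]))
    return out

# The resulting pairs amount to a per-client browsing history -- the
# kind of data Wyden says was sold to Georgia Tech.
history = domains_seen(log_lines)
```

This is why transitioning Verisign's users onto Neustar-controlled servers automatically gave Neustar visibility into those users' web activity.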

Bitcoin

Harvard Paper To Central Banks: Buy Bitcoin (politico.com) 110

A new working paper by Matthew Ferranti -- a fifth-year PhD candidate in Harvard's economics department and advisee of Ken Rogoff, a former economist at the IMF and the Federal Reserve Board of Governors who is now a Harvard professor -- has caused a minor splash. From a report: In it, Ferranti argues that it makes sense for many central banks to hold a small amount of Bitcoin under normal circumstances, and much more Bitcoin if they face sanctions risks, though his analysis finds gold is a more useful sanctions hedge. DFD caught up with Ferranti at Harvard's Cabot Science Library to discuss the working paper, which was first published online late last month and has not yet been peer-reviewed.

What are the implications of your findings?
You can read op-eds, for example in the Wall Street Journal, where people say, "We overused sanctions. It's going to come back to bite us because people are not going to want to use dollars." But the contribution of my paper is to put a number on that and say, "Okay, how big of a deal is this really? How much should we be concerned about it?" The numbers that come out of it are that yeah, it is a concern. It's not just you change your Treasury bonds by 1 percent or something. It's a lot bigger than that.

Rather than hedging sanctions risk with Bitcoin, shouldn't governments just avoid doing bad things?
There's not just one thing that gets you added to the U.S. sanctions list. If the only thing that could get you sanctioned, for example, was to invade another country, then most countries, as long as they don't plan to invade their neighbors, probably don't need to care about this at all, and so my research becomes less relevant. But it's kind of a nebulous thing. That might make countries pause and think about, "How reliable is the U.S.?" The paper doesn't say anything about whether applying sanctions is a good or bad thing. There's a huge literature on how effective sanctions are. And I think the number that comes out of that is like a third of the time they work. Of course, they can have unintended consequences, like hurting the population of the country that you're sanctioning.

So why would a central bank bother with Bitcoin?
They're not correlated. They both sort of jump around, so there's diversification benefit to having both. And if you can't get enough gold to hedge your sanctions risk adequately -- think about a country that has very poor infrastructure, doesn't have the capability to store large amounts of gold, or countries whose reserves are so large that they simply cannot buy enough gold. Places like Singapore and China. You can't just turn around and buy $100 billion of gold.
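The "diversification benefit" Ferranti mentions is standard portfolio math: when two assets are uncorrelated, the cross term in the portfolio-variance formula vanishes, so the combined portfolio is less volatile than it would be if the assets moved together. A sketch with hypothetical volatility numbers (not figures from the paper):

```python
import math

sigma_gold = 0.15   # hypothetical annualized volatility of gold
sigma_btc = 0.60    # hypothetical annualized volatility of Bitcoin
w = 0.9             # hypothetical 90% gold / 10% Bitcoin allocation

def portfolio_vol(w, s1, s2, rho):
    """Two-asset portfolio volatility for weights (w, 1-w) and correlation rho."""
    return math.sqrt((w * s1) ** 2 + ((1 - w) * s2) ** 2
                     + 2 * w * (1 - w) * s1 * s2 * rho)

uncorrelated = portfolio_vol(w, sigma_gold, sigma_btc, rho=0.0)
correlated = portfolio_vol(w, sigma_gold, sigma_btc, rho=1.0)
# With rho=0 the cross term vanishes, so the uncorrelated mix is
# strictly less volatile than the perfectly correlated one.
```

With these illustrative numbers the uncorrelated mix comes out noticeably calmer than the correlated one, which is the sense in which holding both "jumpy" assets can reduce overall reserve risk.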

Music

Last.fm Turns 20 (theverge.com) 6

Last.fm turned 20 years old over the weekend and users are still tracking their music playback hundreds of thousands of times a day. The Verge's Jacob Kastrenakes writes: Last.fm felt just a little bit revolutionary when it was first introduced in the early 2000s. The site's plug-ins -- which were originally created for a different service called Audioscrobbler -- tapped into your music player, took note of everything you listened to, and then displayed all kinds of statistics about your listening habits. Plus, it could recommend tracks and artists to you based on what other people with similar listening habits were interested in. "If this catches on, a system like this would be a really effective way to discover new artists and find people with similar tastes," the blogger Andy Baio wrote in February 2003 after first trying it out.

This was very much a precursor to the algorithmic recommendation systems that are built into every music streaming service today. Spotify, Apple Music, Tidal -- whatever it is you're listening to, they're all tracking your habits and using that to recommend new tracks to you. But on those services, your data is kept hidden behind the scenes. Using Last.fm was like having access to your year-end Spotify Wrapped but available every single day and always updating.

Streaming services' automated recommendations have largely obviated the need for a platform like Last.fm (I certainly haven't scrobbled anything in more than a decade). But I poked around, and it turns out there are still corners of the internet building vibrant communities around its features. One of the big uses is on Discord, where third-party developers have built a service called .fmbot that integrates scrobbling data into the popular chat room app. Thom, a backend developer based in the Netherlands, says the bot has more than 400,000 total users, with 40,000 people engaging with the service each day. It's particularly popular in Discords based around specific musical artists or genres -- where people "want to compare their statistics to each other" -- and among servers for small friend groups, so they can "dive deeper into what everyone is listening to," he says. The bot pulls in fun stats that people can brag about: the date they first listened to a given song, just how many days' worth of music they consumed each year, or a list of their top albums.
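Those stats all fall out of the same underlying data: a timestamped list of scrobbles. A minimal sketch (not .fmbot's or Last.fm's actual code; the record layout and the per-track duration are assumptions for illustration):

```python
from collections import Counter
from datetime import datetime

# Each scrobble: (ISO timestamp, artist, album, track)
scrobbles = [
    ("2023-01-05T09:00", "Radiohead", "OK Computer", "Airbag"),
    ("2023-01-05T09:05", "Radiohead", "OK Computer", "Paranoid Android"),
    ("2023-02-11T21:30", "Bjork", "Homogenic", "Joga"),
]

def first_listen(track):
    """Date of the earliest scrobble of a given track, or None."""
    times = [datetime.fromisoformat(t) for t, _, _, tr in scrobbles if tr == track]
    return min(times).date() if times else None

def days_of_music(year, minutes_per_track=3.5):
    """Days' worth of listening in a year, assuming ~3.5 min per scrobble."""
    n = sum(1 for t, *_ in scrobbles if t.startswith(str(year)))
    return n * minutes_per_track / (60 * 24)

# Top albums by play count.
top_albums = Counter(album for _, _, album, _ in scrobbles).most_common()
```

The point is that a bare play log is enough to reconstruct every statistic the article describes, which is why a single plug-in "taking note of everything you listened to" was so powerful.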
In 2008, we ran a story from Slashdot reader Rob Spengler about Last.fm's "mountain of data." Not only did he note how Last.fm was the "largest online radio outlet" at the time, surpassing Pandora and others, but he (hilariously, in hindsight) posed the question: "Does sitting on a mountain of data make Last.FM powerful enough to start making a stand against the record industry?"
The Almighty Buck

Will FTX's Collapse Strand Scientists? (science.org) 82

"Last week's collapse of the cryptocurrency exchange FTX is sending aftershocks through the scientific community," writes Science magazine: An undergraduate physics major at the Massachusetts Institute of Technology (MIT) who founded FTX and quickly became a billionaire, 30-year-old Sam Bankman-Fried began to back philanthropic organizations that supported a wide variety of science-related causes, most designed to improve human well-being. Now, with FTX in bankruptcy and under investigation for misuse of investors' money, his formerly flush foundations are suddenly strapped for cash and much of that work is at risk. One foundation, the Future Fund, was just launched in February. But by the end of June, its officials reported awarding 262 grants and "investments" totaling $132 million.

It's unclear how much of that money has been distributed. But on 10 November, five senior Future Fund officials resigned and announced in a statement, "We are devastated to say that it looks likely that there are many committed grants that the Future Fund will be unable to honor...."

Just what will happen to awards the Future Fund and the similar FTX Foundation have already made remains unclear. FTX owes billions of dollars to creditors and is now being investigated by the U.S. Securities and Exchange Commission and the Department of Justice, according to The Wall Street Journal. Writing in an online forum hosted by the Center for Effective Altruism, to which the Future Fund pledged nearly $14 million, Molly Kovite, legal operations manager for the Open Philanthropy foundation, noted that FTX's creditors could try to "claw back" their investments during bankruptcy proceedings. If grantees received awards after 11 August, which is 90 days prior to the bankruptcy filing, "the bankruptcy process will probably ask you, at some point, to pay all or part of that money back," she predicts.

That has grantees wondering how they will pay the bills. "Everyone is obviously really worried," Morrison says.

Thanks to Slashdot reader sciencehabit for submitting the article.
Biotech

Police Use DNA Phenotyping To Limit Pool of Suspects To 15,000 (vice.com) 50

An anonymous reader quotes a report from Motherboard: The Queensland, Australia police have used DNA phenotyping for the first time ever in hopes of leading to a breakthrough for a 1982 murder. The department partnered with a U.S.-based company called Parabon NanoLabs to create a profile image of the murder suspect, a Caucasian man with long blonde hair. Police claim that this image was generated using blood samples found at the scene of the man's murder 40 years ago; according to the Australian Broadcasting Corporation, this is the first time "investigative genetic genealogy" has been used in Queensland.

This image does not factor in any environmental characteristics, such as tattoos, facial hair, and scars, and cannot determine the age or body mass of the suspect. However, Queensland investigators have published the image online and are offering a $500,000 reward and indemnity from prosecution to anyone who might have information about the suspect. The image is a vague rendering of a man that does not provide any more information than the sketch that the department already has of the suspect. This further perpetuates the hyper-surveillance of any man who resembles the image. Parabon NanoLabs has already been criticized by criminal justice and privacy experts for disseminating images that implicate too broad a pool of suspects.

The Queensland police department said that the DNA sample from the case generated a genealogy tree of "15,000 'linked' individuals" and they have not been able to find a close match yet. Instead of facing the possibility that DNA phenotyping may not be an effective tool for narrowing down a suspect, the police department's strategy is to ask the public for their DNA samples. Criminologist Xanthe Mallett said in a press release that to help police find a match, people can "opt-in" to share their own DNA samples with investigators through DNA services such as Family Tree and GEDMatch.
"Many members of the public that see this generated image will be unaware that it's a digital approximation, that age, weight, hairstyle, and face shape may be very different, and that accuracy of skin/hair/eye color is approximate," said Callie Schroeder, the Global Privacy Counsel at the Electronic Privacy Information Center.
YouTube

'The Disturbing Rise of Amateur Predator-Hunting Stings' (newyorker.com) 228

In 2004 NBC's news show "Dateline" began airing "To Catch a Predator" segments, in which a vigilante group posed online as minors to lure sex predators into in-person meetings — where they were then arrested by police.

The New Yorker looks at its cultural impact: Although there were only twenty episodes of the series, in three years, it's "this touchstone that I grew up with and that millions of people grew up with," Paul Renfro, a professor of history at Florida State University and the author of "Stranger Danger: Family Values, Childhood, and the American Carceral State," said. "It shaped how people think about sexual violence in ways that we haven't fully grappled with." The show focussed on the threat from strangers on the Internet, even though most victims of child sexual abuse are harmed by someone known to them. "On the show, it's not the family, it's not priests or rabbis or other authority figures who pose a threat to children, it's this devious stranger," Renfro said. The show's influence helped spur the passage of the Adam Walsh Act, in 2006, which created publicly searchable databases of people convicted of certain sex crimes. (There's little evidence that sex-offender registries have been effective at reducing sexual offenses.)
But today, "amateur predator hunting has come back into style," the article notes, citing the proliferation of online groups. "Recently, the Washington Post found more than a hundred and sixty, which have been responsible for nearly a thousand stings this year."

And then the New Yorker interviewed a woman named Cam, who with her husband and her brother-in-law decided to form "the Permian Basin Predator Patrol" — broadcasting their sting operations and humiliations of potential perpetrators on YouTube: [S]oon after the channel started drawing attention, they were called to a meeting at the Odessa Police Department. According to Cam, officers made it clear that they disapproved of their activities. "We were told we can't be involved with them, and that we can't send them anything directly," she said. "One, we're endangering ourselves, and, two, we're giving them more work — that's what it seemed like they were saying."

"We are very mindful of not trying to entrap a suspect," Lieutenant Brad Cline, who works in the Odessa Police Department's Crimes Against Persons Unit, said. "Taking a predator into custody can be very dangerous as well."

The article points out that "To Catch a Predator" was cancelled when Texas man Bill Conradt decided not to follow up on his online messages -- but "When a SWAT team burst into his house, trailed by a camera crew, Conradt shot himself."

So what did Cam's group do when the Odessa Police Department declined their help? The Permian Basin Predator Patrol continued to make videos. If she couldn't contribute to an arrest, Cam thought, at least she could get the word out to the public. She became an expert at figuring out the identities of the men she was chatting with, even when they used fake names.... Sometimes she'd find a man's family on Facebook and send his mother screenshots of the obscene messages he'd sent, or call his employer. "I believe three of them have been let go from their jobs," she said.

A sting by the Predator Catchers Indianapolis led to a man's conviction for child solicitation.... Although YouTube's predator hunters tend to portray themselves as the unequivocal good guys (Cam is an exception — most are men), their track record is more mixed.... The Ohio-based group Dads Against Predators has reportedly been banned from local grocery stores for causing disturbances. In 2018, a twenty-year-old in Connecticut hanged himself after a confrontation with a predator-hunter group. One video by the Permian Basin Predator Patrol ends with a man weeping, then running into traffic. (Cam said that she asked police to perform a welfare check on him, but she's not sure if it occurred.)

Twitter

How Twitter's Child Porn Problem Ruined Its Plans For an OnlyFans Competitor (theverge.com) 100

An anonymous reader quotes a report from The Verge: In the spring of 2022, Twitter considered making a radical change to the platform. After years of quietly allowing adult content on the service, the company would monetize it. The proposal: give adult content creators the ability to begin selling OnlyFans-style paid subscriptions, with Twitter keeping a share of the revenue. Had the project been approved, Twitter would have risked a massive backlash from advertisers, who generate the vast majority of the company's revenues. But the service could have generated more than enough to compensate for losses. OnlyFans, the most popular by far of the adult creator sites, is projecting $2.5 billion in revenue this year -- about half of Twitter's 2021 revenue -- and is already a profitable company.

Some executives thought Twitter could easily begin capturing a share of that money since the service is already the primary marketing channel for most OnlyFans creators. And so resources were pushed to a new project called ACM: Adult Content Monetization. Before the final go-ahead to launch, though, Twitter convened 84 employees to form what it called a "Red Team." The goal was "to pressure-test the decision to allow adult creators to monetize on the platform, by specifically focusing on what it would look like for Twitter to do this safely and responsibly," according to documents obtained by The Verge and interviews with current and former Twitter employees. What the Red Team discovered derailed the project: Twitter could not safely allow adult creators to sell subscriptions because the company was not -- and still is not -- effectively policing harmful sexual content on the platform.

"Twitter cannot accurately detect child sexual exploitation and non-consensual nudity at scale," the Red Team concluded in April 2022. The company also lacked tools to verify that creators and consumers of adult content were of legal age, the team found. As a result, in May -- weeks after Elon Musk agreed to purchase the company for $44 billion -- the company delayed the project indefinitely. If Twitter couldn't consistently remove child sexual exploitative content on the platform today, how would it even begin to monetize porn? Launching ACM would worsen the problem, the team found. Allowing creators to begin putting their content behind a paywall would mean that even more illegal material would make its way to Twitter -- and more of it would slip out of view. Twitter had few effective tools available to find it. Taking the Red Team report seriously, leadership decided it would not launch Adult Content Monetization until Twitter put more health and safety measures in place.
"Twitter still has a problem with content that sexually exploits children," reports The Verge, citing interviews with current and former staffers, as well as 58 pages of internal documents. "Executives are apparently well-informed about the issue, and the company is doing little to fix it."

"While the amount of [child sexual exploitation (CSE)] online has grown exponentially, Twitter's investment in technologies to detect and manage the growth has not," begins a February 2021 report from the company's Health team. "Teams are managing the workload using legacy tools with known broken windows. In short, [content moderators] are keeping the ship afloat with limited-to-no-support from Health."

Part of the problem is scale, while the other part is mismanagement, says the report. "Meanwhile, the system that Twitter heavily relied on to discover CSE had begun to break..."
