Medicine

'Russia Might Have Caused Havana Syndrome' (washingtonpost.com) 188

An anonymous reader quotes an opinion piece from the Washington Post, published by the Editorial Board: A just-published investigation by Russian, American and German journalists has unearthed startling new information about the so-called Havana syndrome, or "Anomalous Health Incidents," as the government calls the unexplained bouts of painful disorientation that U.S. diplomats and intelligence officers have suffered in recent years. The new information suggests but does not prove that Russia's military intelligence agency is responsible. Earlier, agencies in the U.S. intelligence community had concluded that "it is very unlikely a foreign adversary is responsible." They need to look again. [...]

[T]he new investigation by the Insider, a Russian investigative news outlet, in collaboration with CBS's "60 Minutes" and Germany's Der Spiegel, paints a different picture. It identifies the possible culprit as Unit 29155, a "notorious assassination and sabotage squad" of the GRU, Moscow's military intelligence service. Senior members of the unit received "awards and political promotions for work related to the development of 'non-lethal acoustic weapons'" -- a term used in the Russian military-scientific literature to describe both sound- and radiofrequency-based directed energy devices. The investigation found documentary evidence that Unit 29155 "has been experimenting with exactly the kind of weaponized technology" experts suggest is a plausible cause. Moreover, the Insider reported, geolocation data shows that operators attached to Unit 29155, traveling undercover, were present in places where Havana syndrome struck, just before the incidents took place.

Even more concerning, the investigation found that a commonality among the Americans targeted was their work history on Russia issues. This included CIA officers who were helping Ukraine build up its intelligence capabilities in the years before Russia's full-scale invasion in 2022. One veteran of the CIA Kyiv station was named the new chief of station in Vietnam and was hit there. A second veteran of the CIA in Ukraine was hit in his apartment in Tashkent, Uzbekistan. Both these intelligence officers had to be medevaced and were treated at Walter Reed National Military Medical Center. The wife of a third CIA officer who had served in Kyiv was hit in London. "Of all the cases" examined by the news organizations, they said, "the most well-documented involve U.S. intelligence and diplomatic personnel with subject matter expertise in Russia or operational experience in countries such as Georgia and Ukraine," both of which were the scene of popular pro-Western uprisings in the past two decades. The news organizations point out that Russian President Vladimir Putin has often blamed these "color revolutions" on the CIA and the State Department. They conclude, "Putin would have every interest in neutralizing scores of U.S. intelligence officers he deemed responsible for his loss of the former satellites."
The Editorial Board is advocating for a thorough and aggressive investigation by the U.S. intelligence community that "takes into account all aspects of the incidents."

"If the incidents are a deliberate attack, the perpetrator must be identified and held to account. Along with sending a message to those who might harm American personnel, the United States needs to show all those who might join the diplomatic and intelligence services that the government will protect them abroad and at home from foreign adversaries, no matter what."
AI

BBC Will Stop Using AI For 'Doctor Who' Promotion After Receiving Complaints 79

The BBC says it has stopped using AI to promote Doctor Who after receiving complaints from viewers. Deadline reports: The BBC's marketing teams used the tech "as part of a small trial" to help draft some text for two promotional emails and mobile notifications intended to highlight Doctor Who programming on the BBC, according to its complaints website. But the corporation received complaints over reports that it was using generative AI, it added. "We followed all BBC editorial compliance processes and the final text was verified and signed-off by a member of the marketing team before it was sent," the BBC said. "We have no plans to do this again to promote Doctor Who."

The decision to stop promoting via generative AI represents a U-turn for the BBC, which said at the time of the announcement that "generative AI offers a great opportunity to speed up making the extra assets to get more experiments live for more content that we are trying to promote." At the time, the BBC didn't mention that this would be the only time it uses the technology for Doctor Who promotion. Doctor Who will launch in May on the BBC and, for the first time, Disney+. A new trailer was unveiled last week.
AI

AI-Generated Science 32

Published scientific papers include language that appears to have been generated by AI tools like ChatGPT, showing how pervasive the technology has become, and highlighting longstanding issues with some peer-reviewed journals. From a report: Searching for the phrase "As of my last knowledge update" on Google Scholar, a free search tool that indexes articles published in academic journals, returns 115 results. The phrase is often used by OpenAI's ChatGPT to indicate the cutoff date of the data behind the answer it is giving users, and the specific months and years found in these academic papers correspond to previous ChatGPT "knowledge updates."

"As of my last knowledge update in September 2021, there is no widely accepted scientific correlation between quantum entanglement and longitudinal scalar waves," reads a paper titled "Quantum Entanglement: Examining its Nature and Implications" published in the "Journal of Material Sciences & Manfacturing [sic] Research," a publication that claims it's peer-reviewed. Over the weekend, a tweet showing the same AI-generated phrase appearing in several scientific papers went viral.

Most of the scientific papers I looked at that included this phrase are small, not well known, and appear to be "paper mills," journals with low editorial standards that will publish almost anything quickly. One publication where I found the AI-generated phrase, the Open Access Research Journal of Engineering and Technology, advertises "low publication charges" and an "e-certificate" of publication, and currently has an open call for papers promising acceptance within 48 hours and publication within four days.
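The search described above is easy to reproduce programmatically. Below is a minimal, hypothetical Python sketch that scans a passage for telltale ChatGPT boilerplate phrases; the phrase list and function name are illustrative assumptions, not part of any published detection tool:

```python
# Telltale phrases that ChatGPT commonly emits verbatim.
# This list is illustrative, not exhaustive.
AI_TELL_PHRASES = [
    "as of my last knowledge update",
    "as an ai language model",
    "i don't have access to real-time",
]

def find_ai_phrases(text: str) -> list[str]:
    """Return the telltale phrases found in `text` (case-insensitive)."""
    lowered = text.lower()
    return [p for p in AI_TELL_PHRASES if p in lowered]

sample = ("As of my last knowledge update in September 2021, there is no "
          "widely accepted scientific correlation between quantum entanglement "
          "and longitudinal scalar waves.")
print(find_ai_phrases(sample))  # ['as of my last knowledge update']
```

Note that substring matching only catches copy-pasted boilerplate; AI text that has been lightly paraphrased would evade this kind of check entirely, which is one reason the Google Scholar results likely undercount the problem.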
Businesses

Outdoor Voices To Close All Stores This Week (nytimes.com) 54

Outdoor Voices, an athletic apparel company, is closing all its stores on Sunday, The New York Times reported this week, citing four employees at four different stores. From the report: In an internal Slack message reviewed by The New York Times, some employees were notified on Wednesday that "Outdoor Voices is embarking on a new chapter as we transition to an exclusively online business." Products in stores are going to be discounted 50 percent, according to the Slack message. The news came as a surprise, two of the employees said, adding that they were not offered severance.

Outdoor Voices, which lists 16 retail locations on its website, did not immediately respond to a request for comment. Founded in 2014 by Ty Haney, the brand became popular for its muted tones and highly Instagrammable aesthetics. Think matching crop tops and leggings in pale shades of earthy tones. Its hashtag and company mantra, #DoingThings, became popular on social media, where brand loyalists would regularly share images of themselves participating in athletic activities like running or hiking or spinning. The company often hosted events, like group exercise classes, and even built an editorial platform called The Recreationalist. Many Outdoor Voices customers weren't just shoppers; they were devotees. The company was a chic athleisure brand perfectly positioned to attract millennials, but it was also selling a lifestyle. A lifestyle that helped the brand raise millions in funding.

AI

AI-Generated Articles Prompt Wikipedia To Downgrade CNET's Reliability Rating (arstechnica.com) 54

Wikipedia has downgraded tech website CNET's reliability rating following extensive discussions among its editors regarding the impact of AI-generated content on the site's trustworthiness. "The decision reflects concerns over the reliability of articles found on the tech news outlet after it began publishing AI-generated stories in 2022," adds Ars Technica. Futurism first reported the news. From the report: Wikipedia maintains a page called "Reliable sources/Perennial sources" that includes a chart featuring news publications and their reliability ratings as viewed from Wikipedia's perspective. Shortly after the CNET news broke in January 2023, Wikipedia editors began a discussion thread on the Reliable Sources project page about the publication. "CNET, usually regarded as an ordinary tech RS [reliable source], has started experimentally running AI-generated articles, which are riddled with errors," wrote a Wikipedia editor named David Gerard. "So far the experiment is not going down well, as it shouldn't. I haven't found any yet, but any of these articles that make it into a Wikipedia article need to be removed." After other editors agreed in the discussion, they began the process of downgrading CNET's reliability rating.

As of this writing, Wikipedia's Perennial Sources list currently features three entries for CNET broken into three time periods: (1) before October 2020, when Wikipedia considered CNET a "generally reliable" source; (2) between October 2020 and present, when Wikipedia notes that the site was acquired by Red Ventures in October 2020, "leading to a deterioration in editorial standards" and saying there is no consensus about reliability; and (3) between November 2022 and January 2023, when Wikipedia considers CNET "generally unreliable" because the site began using an AI tool "to rapidly generate articles riddled with factual inaccuracies and affiliate links."

Futurism reports that the issue with CNET's AI-generated content also sparked a broader debate within the Wikipedia community about the reliability of sources owned by Red Ventures, such as Bankrate and CreditCards.com. Those sites published AI-generated content around the same period of time as CNET. The editors also criticized Red Ventures for not being forthcoming about where and how AI was being implemented, further eroding trust in the company's publications. This lack of transparency was a key factor in the decision to downgrade CNET's reliability rating.
A CNET spokesperson said in a statement: "CNET is the world's largest provider of unbiased tech-focused news and advice. We have been trusted for nearly 30 years because of our rigorous editorial and product review standards. It is important to clarify that CNET is not actively using AI to create new content. While we have no specific plans to restart, any future initiatives would follow our public AI policy."
Social Networks

Supreme Court Hears Landmark Cases That Could Upend What We See on Social Media (cnn.com) 282

The US Supreme Court is hearing oral arguments Monday in two cases that could dramatically reshape social media, weighing whether states such as Texas and Florida should have the power to control what posts platforms can remove from their services. From a report: The high-stakes battle gives the nation's highest court an enormous say in how millions of Americans get their news and information, as well as whether sites such as Facebook, Instagram, YouTube and TikTok should be able to make their own decisions about how to moderate spam, hate speech and election misinformation. At issue are laws passed by the two states that prohibit online platforms from removing or demoting user content that expresses viewpoints -- legislation both states say is necessary to prevent censorship of conservative users.

More than a dozen Republican attorneys general have argued to the court that social media should be treated like traditional utilities such as the landline telephone network. The tech industry, meanwhile, argues that social media companies have First Amendment rights to make editorial decisions about what to show. That makes them more akin to newspapers or cable companies, opponents of the states say. The case could lead to a significant rethinking of First Amendment principles, according to legal experts. A ruling in favor of the states could weaken or reverse decades of precedent against "compelled speech," which protects private individuals from government speech mandates, and have far-reaching consequences beyond social media. A defeat for social media companies seems unlikely, but it would instantly transform their business models, according to Blair Levin, an industry analyst at the market research firm New Street Research.

AI

Scientific Journal Publishes AI-Generated Rat With Gigantic Penis (vice.com) 72

Jordan Pearson reports via Motherboard: A peer-reviewed science journal published a paper this week filled with nonsensical AI-generated images, which featured garbled text and a wildly incorrect diagram of a rat penis. The episode is the latest example of how generative AI is making its way into academia with concerning effects. The paper, titled "Cellular functions of spermatogonial stem cells in relation to JAK/STAT signaling pathway" was published on Wednesday in the open access Frontiers in Cell and Developmental Biology journal by researchers from Hong Hui Hospital and Jiaotong University in China. The paper itself is unlikely to be interesting to most people without a specific interest in the stem cells of small mammals, but the figures published with the article are another story entirely. [...]

It's unclear how this all got through the editing, peer review, and publishing process. Motherboard contacted the paper's U.S.-based reviewer, Jingbo Dai of Northwestern University, who said that it was not his responsibility to vet the obviously incorrect images. (The second reviewer is based in India.) "As a biomedical researcher, I only review the paper based on its scientific aspects. For the AI-generated figures, since the author cited Midjourney, it's the publisher's responsibility to make the decision," Dai said. "You should contact Frontiers about their policy of AI-generated figures." Frontiers' policies for authors state that generative AI is allowed, but that it must be disclosed -- which the paper's authors did -- and the outputs must be checked for factual accuracy. "Specifically, the author is responsible for checking the factual accuracy of any content created by the generative AI technology," Frontiers' policy states. "This includes, but is not limited to, any quotes, citations or references. Figures produced by or edited using a generative AI technology must be checked to ensure they accurately reflect the data presented in the manuscript."

On Thursday afternoon, after the article and its AI-generated figures circulated on social media, Frontiers appended a notice to the paper saying that it had corrected the article and that a new version would appear later. It did not specify what exactly was corrected.
UPDATE: Frontiers retracted the article and issued the following statement: "Following publication, concerns were raised regarding the nature of its AI-generated figures. The article does not meet the standards of editorial and scientific rigor for Frontiers in Cell and Developmental Biology; therefore, the article has been retracted. This retraction was approved by the Chief Executive Editor of Frontiers. Frontiers would like to thank the concerned readers who contacted us regarding the published article."
Science

Firms Churning Out Fake Papers Are Now Bribing Journal Editors (science.org) 32

Nicholas Wise is a fluid dynamics researcher who moonlights as a scientific fraud buster, reports Science magazine. And last June he "was digging around on shady Facebook groups when he came across something he had never seen before." Wise was all too familiar with offers to sell or buy author slots and reviews on scientific papers — the signs of a busy paper mill. Exploiting the growing pressure on scientists worldwide to amass publications even if they lack resources to undertake quality research, these furtive intermediaries by some accounts pump out tens or even hundreds of thousands of articles every year. Many contain made-up data; others are plagiarized or of low quality. Regardless, authors pay to have their names on them, and the mills can make tidy profits.

But what Wise was seeing this time was new. Rather than targeting potential authors and reviewers, someone who called himself Jack Ben, of a firm whose Chinese name translates to Olive Academic, was going for journal editors — offering large sums of cash to these gatekeepers in return for accepting papers for publication. "Sure you will make money from us," Ben promised prospective collaborators in a document linked from the Facebook posts, along with screenshots showing transfers of up to $20,000 or more. In several cases, the recipient's name could be made out through sloppy blurring, as could the titles of two papers. More than 50 journal editors had already signed on, he wrote. There was even an online form for interested editors to fill out...

Publishers and journals, recognizing the threat, have beefed up their research integrity teams and retracted papers, sometimes by the hundreds. They are investing in ways to better spot third-party involvement, such as screening tools meant to flag bogus papers. So cash-rich paper mills have evidently adopted a new tactic: bribing editors and planting their own agents on editorial boards to ensure publication of their manuscripts. An investigation by Science and Retraction Watch, in partnership with Wise and other industry experts, identified several paper mills and more than 30 editors of reputable journals who appear to be involved in this type of activity. Many were guest editors of special issues, which have been flagged in the past as particularly vulnerable to abuse because they are edited separately from the regular journal. But several were regular editors or members of journal editorial boards. And this is likely just the tip of the iceberg.

The spokesperson for one journal publisher tells Science that its editors are receiving bribe offers every week.

Thanks to long-time Slashdot reader schwit1 for sharing the article.
Music

Spotify's Editorial Playlists Are Losing Influence Amid AI Expansion (bloomberg.com) 14

Once a dominant force in music discovery, Spotify's famed playlists like RapCaviar, which significantly influenced mainstream music and artist visibility, are losing ground. As the music industry shifts towards algorithmic suggestions and TikTok emerges as a major music promoter, Spotify's strategy evolves with more automated music discovery and less emphasis on human-curated playlists, signaling a potential end to the era where a few key playlists could make a star overnight. Bloomberg reports: Enter TikTok. In the late 2010s, as the algorithmically controlled short-form video app emerged as a growing force in music promotion, Spotify took notice. On an earnings call in 2020, Spotify Chief Executive Officer Daniel Ek noted that users were increasingly opting for algorithmic suggestions and that Spotify would be leaning into the trend. "As we're getting better and better at personalization, we're serving better and better content and more and more of our users are choosing that," he said. From there, Spotify began implementing a number of changes that over time significantly altered the fundamental dynamics of how playlists get composed. Among other things, the company had already introduced a standardized pitching form that all artists and managers must use to submit tracks for playlist consideration. One former employee says the tool was created to foster a more merit-based system with a greater emphasis on data -- and less focus on the taste of individual curators. The goal, in part, was to give independent and smaller artists without the resources to personally court key playlist editors a better chance at placements. It was also a way to better protect the public-facing editors who in the early days were sometimes subjected to harassment from people disgruntled over their musical choices.

As the automated submission system took hold, the editors gradually grew more anonymous and less associated with particular playlists. In a handbook for the editorial team, Spotify instructed curators not to claim ownership of any one playlist. At the same time, Spotify began introducing multiple splashy features meant to encourage algorithm-driven listening, including an AI DJ and Daylist, two features that constantly change to fit listeners' habits and interests. (Spotify says "human expertise" guides the AI DJ.) Last year, Spotify laid off members of the teams involved in making playlists as part of its various cuts. And over time, the shift in emphasis has had consequences outside the company as well. These days, the same music industry sources who in the late 2010s learned to obsess over what was included and excluded from key Spotify playlists have started noticing something else -- it no longer seems to matter as much. Employees at different major labels say they've seen streams coming from RapCaviar drop anywhere from 30% to 50%.

The trend towards automated music discovery at Spotify shows no sign of slowing down. One internal presentation titled "Recapturing the Zeitgeist" encourages editorial curators to better utilize data. According to the people who have seen the plan, in addition to putting together a playlist, editorial curators would tag songs to help the algorithm accurately place them on relevant playlists that are automatically personalized for individual subscribers. The company has also shifted some human-curated playlists to personalized versions, including selections with seven-figure followings, like Housewerk and Indie Pop. These days, Spotify is also promoting something called Discovery Mode, wherein labels and artist teams can submit songs for additional algorithm pushes in exchange for a lower royalty rate. These tracks can only surface on personalized listening sessions, a former employee said, meaning Spotify would have a financial incentive to push people to them over editorially curated playlists. (For now, Discovery Mode songs only surface in radio or autoplay listening sessions.)
The shift toward algorithmic distribution isn't necessarily a bad thing, says Dan Smith, US general manager at Armada, an independent dance label. "The way fans discovered new music was radio back in the day, then Spotify editorial playlists, then there were a few years where people only discovered new music through TikTok," Smith said. "All those things still work ... we're all just trying different ways to make sure songs get to the right people."
Games

Way Too Many Games Were Released On Steam In 2023 (kotaku.com) 93

John Walker, reporting for Kotaku: Steam is by far the most peculiar of online storefronts. Built on top of itself for the last twenty years, Valve's behemothic PC game distributor is a clusterfuck of overlapping design choices, where algorithms rule over coherence, with 2023 seeing over 14,500 games released into the mayhem. Which is too many games. That breaks down to just under 40 a day, although given how people release games, it more accurately breaks down to about 50 every weekday. 50 games a day. On a storefront that goes to some lengths to bury new releases, and even buries pages where you can deliberately list new releases.

Compared to 2022, that's an increase of nearly 2,000 games, up almost 5,000 from five years ago. There's no reason to expect that growth to diminish any time soon. It's a volume of games that no individual could ever hope to keep up with, and neither could any gaming site. Not even the biggest sites in the industry could afford an editorial team capable of playing 50 games a day to find and write about those worth highlighting. Realistically, not even a tenth of the games. And that's not least because of those 50 games per day, about 48 of them will be absolute dross. On one level, in this way Steam represents a wonderful democracy for gaming, where any developer willing to stump up the $100 entry fee can release their game on the platform, with barely any restrictions. On another level, however, it's a disaster for about 99 percent of releases, which stand absolutely no chance of garnering any attention, no matter their quality. The solution: human storefront curation, which Valve has never shown any intention of doing.
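The per-day figures in the piece follow from simple division; a quick sketch (assuming roughly 260 weekdays in a year, an assumption not stated in the article) checks the arithmetic:

```python
total_2023_releases = 14_500  # Steam releases reported for 2023

# Averaged over every calendar day: matches the "just under 40 a day" claim.
per_calendar_day = total_2023_releases / 365   # ≈ 39.7

# Averaged over weekdays only: a bit above the article's rounded
# "about 50 every weekday" figure.
per_weekday = total_2023_releases / 260        # ≈ 55.8

print(round(per_calendar_day, 1), round(per_weekday, 1))  # 39.7 55.8
```

The weekday average comes out somewhat higher than the article's rounded 50, so the piece is, if anything, understating the daily flood of weekday releases.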

AI

AI Models May Enable a New Era of Mass Spying, Says Bruce Schneier (arstechnica.com) 37

An anonymous reader quotes a report from Ars Technica: In an editorial for Slate published Monday, renowned security researcher Bruce Schneier warned that AI models may enable a new era of mass spying, allowing companies and governments to automate the process of analyzing and summarizing large volumes of conversation data, fundamentally lowering barriers to spying activities that currently require human labor. In the piece, Schneier notes that the existing landscape of electronic surveillance has already transformed the modern era, becoming the business model of the Internet, where our digital footprints are constantly tracked and analyzed for commercial reasons.

Spying, by contrast, can take that kind of economically inspired monitoring to a completely new level: "Spying and surveillance are different but related things," Schneier writes. "If I hired a private detective to spy on you, that detective could hide a bug in your home or car, tap your phone, and listen to what you said. At the end, I would get a report of all the conversations you had and the contents of those conversations. If I hired that same private detective to put you under surveillance, I would get a different report: where you went, whom you talked to, what you purchased, what you did." Schneier says that current spying methods, like phone tapping or physical surveillance, are labor-intensive, but the advent of AI significantly reduces this constraint. Generative AI systems are increasingly adept at summarizing lengthy conversations and sifting through massive datasets to organize and extract relevant information. This capability, he argues, will not only make spying more accessible but also more comprehensive. "This spying is not limited to conversations on our phones or computers," Schneier writes. "Just as cameras everywhere fueled mass surveillance, microphones everywhere will fuel mass spying. Siri and Alexa and 'Hey, Google' are already always listening; the conversations just aren't being saved yet." [...]

In his editorial, Schneier raises concerns about the chilling effect that mass spying could have on society, cautioning that the knowledge of being under constant surveillance may lead individuals to alter their behavior, engage in self-censorship, and conform to perceived norms, ultimately stifling free expression and personal privacy. So what can people do about it? Anyone seeking protection from this type of mass spying will likely need to look toward government regulation to keep it in check since commercial pressures often trump technological safety and ethics. [...] Schneier isn't optimistic on that front, however, closing with the line, "We could prohibit mass spying. We could pass strong data-privacy rules. But we haven't done anything to limit mass surveillance. Why would spying be any different?" It's a thought-provoking piece, and you can read the entire thing on Slate.

China

'Global Science is Splintering Into Two - And This is Becoming a Problem' 168

The United States and China are pursuing parallel scientific tracks. To solve crises on multiple fronts, the two roads need to become one, Nature's editorial board wrote Wednesday. From the post: It's no secret that research collaborations between China and the United States -- among other Western countries -- are on a downward trajectory. Early indicators of a possible downturn have been confirmed by more sources. A report from Japan's Ministry of Education, Culture, Sports, Science and Technology, published in August, for instance, stated that the number of research articles co-authored by scientists in the two countries had fallen in 2021, the first annual drop since 1993. Meanwhile, data from Nature Index show that China-based scientists' propensity to collaborate internationally has been waning, when looking at the authorship of papers in the Index's natural-science journals.

Nature reported last month that China's decoupling from the countries loosely described as the West mirrors its strengthening of science links with low- and middle-income countries (LMICs), as part of its Belt and Road Initiative. There are many good reasons for China to be boosting science in LMICs, which could sorely do with greater research funding and capacity building. But this is also creating parallel scientific systems -- one centred on North America and Europe, and the other on China. The biggest challenges faced by humanity, from combating climate change to ending poverty, are embodied in a globally agreed set of targets, the United Nations Sustainable Development Goals (SDGs).

Approaching them without shared knowledge can only slow down progress by creating competing systems for advancing and implementing solutions. It's a scenario that the research community must be more aware of and work to avoid. Nature Index offers some reasons as to why collaboration between China and the West is declining. Travel restrictions during the COVID-19 pandemic took their toll, limiting collaborations and barring new ones from being forged. Geopolitical tensions have led many Western governments to restrict their research partnerships with China, on national-security grounds, and vice versa.
AI

'Hallucinate' Chosen As Cambridge Dictionary's Word of the Year (theguardian.com) 23

Cambridge Dictionary's word of the year for 2023 is "hallucinate," a verb that took on a new meaning with the rise in popularity of artificial intelligence chatbots. The Guardian reports: The original definition of the chosen word is to "seem to see, hear, feel, or smell" something that does not exist, usually because of "a health condition or because you have taken a drug." It now has an additional meaning, relating to when artificial intelligence systems such as ChatGPT, which generates text that mimics human writing, "hallucinates" and produces false information. The word was chosen because the new meaning "gets to the heart of why people are talking about AI," according to a post on the dictionary site.

Generative AI is a "powerful" but "far from perfect" tool, "one we're all still learning how to interact with safely and effectively -- this means being aware of both its potential strengths and its current weaknesses." The dictionary added a number of AI-related entries this year, including large language model (or LLM), generative AI (or GenAI), and GPT (an abbreviation of Generative Pre-trained Transformer). "AI hallucinations remind us that humans still need to bring their critical thinking skills to the use of these tools," continued the post. "Large language models are only as reliable as the information their algorithms learn from. Human expertise is arguably more important than ever, to create the authoritative and up-to-date information that LLMs can be trained on."

Government

America's Net Neutrality Question: Should the FCC Define the Internet as a 'Common Carrier'? (fcc.gov) 132

The Washington Post's editorial board looks at America's "net neutrality" debate.

But first they note that America's communications-regulating FCC has "limited authority to regulate unless broadband is considered a 'common carrier' under the Telecommunications Act of 1996." The FCC under President Barack Obama moved to reclassify broadband so it could regulate broadband companies; the FCC under President Donald Trump reversed the change. Dismayed advocates warned the world that, without the protections in place, the internet would break. You'll never guess what happened next: nothing. Or, at least, almost nothing. The internet did not break, and internet service providers for the most part did not block and they did not throttle.

All the same, today's FCC, under Chairwoman Jessica Rosenworcel, has just moved to re-reclassify broadband. The interesting part is that her strongest argument doesn't have much to do with net neutrality, but with some of the other benefits the country could see from having a federal watchdog keeping an eye on the broadband business... Broadband is an essential service... Yet there isn't a single government agency with sufficient authority to oversee this vital tool. Asserting federal authority over broadband would empower regulation of any blocking, throttling or anti-competitive paid traffic prioritization that broadband providers might engage in. But it could also help ensure the safety and security of U.S. networks.

The FCC has, on national security grounds, removed authorization for companies affiliated with adversary states, such as China's Huawei, from participating in U.S. telecommunications markets. The agency can do this for phone carriers. But it can't do it for broadband, because it isn't allowed to. Or consider public safety during a crisis. The FCC doesn't have the ability to access the data it needs to know when and where there are broadband outages — much less the ability to do anything about those outages if they are identified. Similarly, it can't impose requirements for network resiliency to help prevent those outages from occurring in the first place — during, say, a natural disaster or a cyberattack.

The agency has ample power to police the types of services that are becoming less relevant in American life, such as landline telephones, and little power to police those that are becoming more important every day.

The FCC acknowledges this power would also allow it to prohibit "throttling" of content. But the Post's editorial also argues that, here in 2023, such a rule is "unlikely to have any major effect on the broadband industry in either direction... Substantial consequences have only become less likely as high-speed bandwidth has become less limited."
Television

Jon Stewart's Apple TV Plus Show Ends, Reportedly Over Coverage of AI and China (theverge.com) 115

Shakrai writes: Multiple outlets are reporting that Apple TV Plus has cancelled Jon Stewart's popular show The Problem with Jon Stewart, reportedly over editorial disagreements with regards to planned stories on the People's Republic of China and AI. Fans and haters of Apple will both recall that Apple recently made changes to AirDrop, one of the few effective means Chinese dissidents and protesters had for exchanging information off-grid at scale, and will ask why Apple is apparently not only willing, but eager, to carry water for the PRC, overriding both human rights and practical business concerns in the process. "Apple approached Stewart directly and expressed its need for the host and his team to be 'aligned' with the company's views on topics discussed," reports The Verge, citing The Hollywood Reporter. "Rather than falling in line when Apple threatened to cancel the show, Stewart reportedly decided to walk."
Businesses

Bandcamp Slashes Nearly Half Its Staff After Epic Sale (sfchronicle.com) 61

Aidin Vaziri reports via the San Francisco Chronicle: Epic Games has initiated layoffs at Bandcamp, the Oakland-based online music distribution platform it recently sold to Songtradr. Among those affected were members of Bandcamp Daily, the platform's editorial arm, as confirmed by former staff members on social media channels. "About half the company was laid off today," senior editor JJ Skolnik announced on X (formerly Twitter) on Monday morning. This move comes weeks after Songtradr's acquisition of Bandcamp was announced on Sept. 28. The company did not disclose how many employees were impacted by the cuts.

Songtradr, a Santa Monica-based licensing company, had previously stated that not all Bandcamp employees would be absorbed after the platform's sale from Epic, citing the service's financial situation as the basis for workforce adjustments. [...] The sale comes as Epic cuts around 16% of its own workforce, about 830 employees, in the face of lower profits that were outpaced by growing expenses.

Businesses

'I'm a Luddite - and Why You Should Be One Too' (stltoday.com) 211

Los Angeles Times technology columnist Brian Merchant has written a book about the 1811 Luddite rebellion against industrial technology, decrying "entrepreneurs and industrialists pushing for new, dubiously legal, highly automated and labor-saving modes of production."

In a new piece he applauds the spirit of the Luddites. "The kind of visionaries we need now are those who see precisely how certain technologies are causing harm and who resist them when necessary." The parallels to the modern day are everywhere. In the 1800s, entrepreneurs used technology to justify imposing a new mode of work: the factory system. In the 2000s, CEOs used technology to justify imposing a new mode of work: algorithmically organized gig labor, in which pay is lower and protections scarce. In the 1800s, hosiers and factory owners used automation less to overtly replace workers than to deskill them and drive down their wages. Digital media bosses, call center operators and studio executives are using AI in much the same way. Then, as now, the titans used technology both as a new mode of production and as an idea that allowed them to ignore long-standing laws and regulations. In the 1800s, this might have been a factory boss arguing that his mill exempted him from a statute governing apprentice labor. Today, it's a ride-hailing app that claims to be a software company so it doesn't have to play by the rules of a cab firm.

Then, as now, leaders dazzled by unregulated technologies ignored their potential downsides. Then, it might have been state-of-the-art water frames that could produce an incredible volume of yarn — but needed hundreds of vulnerable child laborers to operate. Today, it's a cellphone or a same-day delivery, made possible by thousands of human laborers toiling in often punishing conditions.

Then, as now, workers and critics sounded the alarm...

Resistance is gathering again, too. Amazon workers are joining union drives despite intense opposition. Actors and screenwriters are striking and artists and illustrators have called for a ban of generative AI in editorial outlets. Organizing, illegal in the Luddites' time, has historically proved the best bulwark against automation. But governments must also step up. They must offer robust protections and social services for those in precarious positions. They must enforce antitrust laws. Crucially, they must develop regulations to rein in the antidemocratic model of technological development wherein a handful of billionaires and venture capital firms determine the shape of the future — and who wins and loses in it.

The clothworkers of the 1800s had the right idea: They believed everyone should share in the bounty of the amazing technologies their work makes possible.

That's why I'm a Luddite — and why you should be one, too.

So whatever happened to the Luddites? The article reminds readers that the factory system "took root," and "brought prosperity for some, but it created an immiserated working class.

"The 200 years since have seen breathtaking technological innovation — but much less social innovation in how the benefits are shared."
Transportation

Privately-Owned High-Speed Rail Opens New Line in Florida, Kills Pedestrian (thepointsguy.com) 220

At 11 a.m. Friday in Orlando, Florida, a train completed its 240-mile journey from Miami, inaugurating a new line from Brightline that reaches speeds of up to 125 miles per hour and reduces the journey to just under three hours. "This is going to revolutionize transportation not just in the country and the state of Florida but right here in Central Florida and really just make our backyard bigger," Brightline's director of public affairs Katie Mitzner told a local news station.

Ironically, within hours a different Brightline train had struck and killed a pedestrian. "Brightline trains have the highest death rate in the U.S.," reports one local news station, "fatally striking 98 people since Miami-West Palm operations began — about one death for every 32,000 miles its trains travel, according to an ongoing Associated Press analysis." A police spokesperson said the death appeared to be a suicide.

"None of the accidents have been determined to be Brightline's fault," writes The Points Guy, "and the company has spent millions of dollars on safety improvements at grade crossings. It also launched a public-relations push to encourage all residents along its corridor to commit to staying safe. However, it is a very real and ongoing element of this service in Florida. We hope these efforts will continue to further reduce these incidents in communities that see frequent Brightline trains coming through."

The Points Guy also shared photos in their blog post describing what it was like to take a ride on America's only privately owned and operated inter-city passenger railroad: When the train ultimately pulled out of the station, a surreal feeling washed over me. Those of us on the inaugural service were the first passengers to ride the rails along this stretch of Florida's east coast in more than 55 years. Florida East Coast Railway, which still owns the tracks and operates frequent freight trains along them, ceased passenger service on July 31, 1968... Each seat has multiple power outlets, and the Wi-Fi truly was high-speed based on my experience and the test I ran. I was even able to successfully join (and participate in) our morning editorial team call on Zoom...

The scenery along the route was simply spectacular... With no grade crossings and fencing on both sides, we reached 125 mph for the final stretch of the journey. The cars along the highway stood no chance of keeping up as we traversed the 30-plus miles in only 18 minutes as the tower at Orlando International Airport came into view... With plans to expand to Tampa and construction underway on its planned Los Angeles-to-Las Vegas route, we likely haven't heard the last from Brightline as it seeks to transform train service in the United States.

"I think what Brightline has done here has laid the blueprint for how high-speed rail can be built in America with private dollars versus government funding," investor Ryn Rosberg told a local news site. "It's much more efficient and it gets done a lot quicker."

"There have been colorful station openings, lawsuits, threats of lawsuits, threats of legislation and yes, fatal accidents," writes the Palm Beach Post, "but Brightline train passengers can now take the train from any of its five South Florida stations to visit the Disney World, Universal Studios or Sea World tourist attractions."
The Media

Can Philanthropy Save Local Newspapers? (washingtonpost.com) 122

70 million Americans live in a county without a newspaper, according to a 2022 report cited in this editorial by the Washington Post's editorial board:

Who's to blame? The internet, mostly. Whereas deep-pocketed advertisers formerly relied on newspapers to reach their customers, they have since turned to the audience-targeting capabilities of Facebook and Google. Web-based marketplaces also siphoned newspapers' once-robust revenue from classified ads.

But the Post emphasizes one positive new development: "a large pile of cash." In an initiative announced this month, 22 donor organizations, including the Knight Foundation and the John D. and Catherine T. MacArthur Foundation, are teaming up to provide more than $500 million to boost local news over five years — an undertaking called Press Forward... The injection of more than a half-billion dollars is sure to help the quest for a durable and replicable business model.

The even bigger imperative, however, is to elevate local news on the philanthropic food chain so that national and hometown funders prioritize this pivotal American institution. Failure on this front places more pressure on public policy solutions, and government activism mixes poorly with independent journalism...

One of the goals for Press Forward, accordingly, is building out the infrastructure — "from legal support to membership programs" — relied upon by local news providers to deliver their product. Jim Brady, vice president of journalism at the Knight Foundation, says it's easier than ever for news entrepreneurs to launch a local site because they can plug into existing technologies hammered out by their predecessors — and there's more development work still to fund on this front.

So where to go from here? Local philanthropic interests across the country could take a cue from the Press Forward partners and invest in the news organizations down the street.

Movies

Is Rotten Tomatoes 'Erratic, Reductive, and Easily Hacked'? (vulture.com) 43

Rotten Tomatoes celebrated its 25th year of assigning scores to movies based on their aggregated reviews. Now Vulture writes that Rotten Tomatoes "can make or break" movies, "with implications for how films are perceived, released, marketed, and possibly even green-lit". But unfortunately, the site "is also erratic, reductive, and easily hacked."

Vulture tells the story of a movie-publicity company contacting "obscure, often self-published critics" to say a film's team "feel like it would benefit from more input from different critics" — while making undisclosed payments of $50 or more. A critic who asked whether it was okay to pan the movie was told that "super nice" critics move their bad reviews onto sites not included in Rotten Tomatoes scores.

Vulture says that after it brought this to the site's attention, Rotten Tomatoes "delisted a number of the company's movies from its website and sent a warning to writers who reviewed them." But is there a larger problem? Filmmaker Paul Schrader even opines that "Audiences are dumber. Normal people don't go through reviews like they used to. Rotten Tomatoes is something the studios can game. So they do...." A third of U.S. adults say they check Rotten Tomatoes before going to the multiplex, and while movie ads used to tout the blurbage of Jeffrey Lyons and Peter Travers, now they're more likely to boast that a film has been "Certified Fresh...."

Another problem — and where the trickery often begins — is that Rotten Tomatoes scores are posted after a movie receives only a handful of reviews, sometimes as few as five, even if those reviews may be an unrepresentative sample. This is sort of like a cable-news network declaring an Election Night winner after a single county reports its results. But studios see it as a feature, since, with a little elbow grease, they can sometimes fool people into believing a movie is better than it is.

Here's how. When a studio is prepping the release of a new title, it will screen the film for critics in advance. It's a film publicist's job to organize these screenings and invite the writers they think will respond most positively. Then that publicist will set the movie's review embargo in part so that its initial Tomatometer score is as high as possible at the moment when it can have maximal benefits for word of mouth and early ticket sales... [I]n February, the Tomatometer score for Ant-Man and the Wasp: Quantumania debuted at 79 percent based on its first batch of reviews. Days later, after more critics had weighed in, its rating sank into the 40s. But the gambit may have worked. Quantumania had the best opening weekend of any movie in the Ant-Man series, at $106 million. In its second weekend, with its rottenness more firmly established, the film's grosses slid 69 percent, the steepest drop-off in Marvel history.

In studios' defense, Rotten Tomatoes' hastiness in computing its scores has made it practically necessary to cork one's bat. In a strategic blunder in May, Disney held the first screening of Indiana Jones and the Dial of Destiny at Cannes, the world's snootiest film festival, from which the first 12 reviews begot an initial score of 33 percent. "What they should've done," says Publicist No. 1, "was have simultaneous screenings in the States for critics who might've been more friendly." A month and a half later, Dial of Destiny bombed at the box office even though friendly critics eventually lifted its rating to 69 percent. "They had a low Rotten Tomatoes score just sitting out there for six weeks before release, and that was deadly," says a third publicist.
