Canada

Ubisoft Closes Game Studio Where Workers Voted to Unionize Two Weeks Ago (aftermath.site) 151

Ubisoft announced Wednesday it will close its studio in Halifax, Nova Scotia — two weeks after 74% of its staff voted to unionize.

This means laying off the 71 people at the studio, reports the gaming news site Aftermath: [Communications Workers of America's Canadian affiliate, CWA Canada] said in a statement to Aftermath the union will "pursue every legal recourse to ensure that the rights of these workers are respected and not infringed in any way." The union said in a news release that it's illegal in Canada for companies to close businesses because of unionization. That's not necessarily what happened here, according to the news release, but the union is "demanding information from Ubisoft about the reason for the sudden decision to close."

"We will be looking for Ubisoft to show us that this had nothing to do with the employees joining a union," former Ubisoft Halifax programmer and bargaining committee member Jon Huffman said in a statement. "The workers, their families, the people of Nova Scotia, and all of us who love video games made in Canada, deserve nothing less...."

Before joining Ubisoft, the studio was best known for its work on the Rocksmith franchise; under Ubisoft, it focused squarely on mobile games.

Ubisoft Halifax was quickly removed from the Ubisoft website on Wednesday...

AI

Nvidia CEO Jensen Huang Says AI Doomerism Has 'Done a Lot of Damage' (businessinsider.com) 105

Nvidia CEO Jensen Huang "said one of his biggest takeaways from 2025 was 'the battle of narratives' over the future of AI development between those who see doom on the horizon and the optimists," reports Business Insider.

Huang did acknowledge that "it's too simplistic" to entirely dismiss either side (on a recent episode of the "No Priors" podcast). But "I think we've done a lot of damage with very well-respected people who have painted a doomer narrative, end of the world narrative, science fiction narrative." "It's not helpful to people. It's not helpful to the industry. It's not helpful to society. It's not helpful to the governments..." [H]e cited concerns about "regulatory capture," arguing that no company should approach governments to request more regulation. "Their intentions are clearly deeply conflicted, and their intentions are clearly not completely in the best interest of society," he said. "I mean, they're obviously CEOs, they're obviously companies, and obviously they're advocating for themselves..."

"When 90% of the messaging is all around the end of the world and the pessimism, and I think we're scaring people from making the investments in AI that makes it safer, more functional, more productive, and more useful to society," he said.

Elsewhere in the podcast, Huang argues that the AI bubble is a myth. Business Insider adds that "a spokesperson for Nvidia declined to elaborate on Huang's remarks."

Thanks to Slashdot reader joshuark for sharing the article.
The Military

Airlines Cancel Hundreds of Flights After U.S. Attack on Venezuela (cnbc.com) 180

CNBC reports that U.S. airlines have "canceled hundreds of flights to airports in Puerto Rico and Aruba, according to flight tallies from FlightAware and carriers' sites."

JetBlue, Southwest, and American Airlines were among the multiple airlines showing canceled flights, which "included close to 300 flights to and from San Juan, Puerto Rico's Luis Muñoz Marín International Airport, more than 40% of the day's schedule, according to FlightAware." Airlines canceled flights throughout the Caribbean on Saturday following U.S. strikes on Venezuela after the Federal Aviation Administration ordered commercial aircraft to avoid airspace in parts of the region.... It wasn't immediately clear how long the disruptions would last, though such broad restrictions are often temporary. Airlines said they would waive change fees and fare differences for customers affected by the airspace closures who could fly later in the month.
CNN cites a U.S. official who says more than 150 U.S. aircraft (including helicopters) launched from 20 different bases "on land and sea" during Friday's attack.

The U.S. has said the lights were out in Caracas during the attack, presumably because of a targeted strike on their power grid. "Videos filmed by Caracas residents showed parts of the city in the dark," reports the Miami Herald.

United Nations secretary-general António Guterres issued a statement via his spokesman saying he was "deeply concerned that the rules of international law have not been respected," (according to a Reuters report cited by the Guardian). The Guardian adds that "a number of nations have called for an emergency meeting of the UN Security Council, in New York, today, as a result of the U.S.'s unilateral action."
United Kingdom

UK Actors Vote To Refuse To Be Digitally Scanned In Pushback Against AI 44

An anonymous reader quotes a report from the Guardian: Actors have voted to refuse digital scanning to prevent their likeness being used by artificial intelligence in a pushback against AI in the arts. Members of the performing arts union Equity were asked if they would refuse to be scanned while on set, a common practice in which actors' likeness is captured for future use -- with 99% voting in favor of the move. The vote was an indicative ballot designed to demonstrate the strength of feeling on the issue, with more than 7,000 members polled on a 75% turnout. However, actors would not be legally protected if they refused to be scanned.

The union said it would write to Pact, the trade body representing the majority of producers and production companies in the UK, to negotiate new minimum standards for pay, as well as terms and conditions for actors working in film and TV. Equity said it may hold a formal ballot depending on the outcome of the negotiations, which, if backed, would give actors legal protection if they were being pressed to accept digital scanning on set.
The general secretary, Paul Fleming, said: "Artificial intelligence is a generation-defining challenge. And for the first time in a generation, Equity's film and TV members have shown that they are willing to take industrial action. Ninety per cent of TV and film is made on these agreements. Over three-quarters of artists working on them are union members. This shows that the workforce is willing to significantly disrupt production unless they are respected, and [if] decades of erosion in terms and conditions begins to be reversed."
United States

Repeal Section 230 and Its Platform Protections, Urges New Bipartisan US Bill (eff.org) 168

U.S. Senator Sheldon Whitehouse said Friday he was moving to file a bipartisan bill to repeal Section 230 of America's Communications Decency Act.

"The law prevents most civil suits against users or services that are based on what others say," explains an EFF blog post. "Experts argue that a repeal of Section 230 could kill free speech on the internet," writes LiveMint — though America's last two presidents both supported a repeal: During his first presidency, U.S. President Donald Trump called to repeal the law and signed an executive order attempting to curb some of its protections, though it was challenged in court. Subsequently, former President Joe Biden also voiced his opinion against the law.
An EFF blog post explains the case for Section 230: Congress passed this bipartisan legislation because it recognized that promoting more user speech online outweighed potential harms. When harmful speech takes place, it's the speaker that should be held responsible, not the service that hosts the speech... Without Section 230, the Internet is different. In Canada and Australia, courts have allowed operators of online discussion groups to be punished for things their users have said. That has reduced the amount of user speech online, particularly on controversial subjects. In non-democratic countries, governments can directly censor the internet, controlling the speech of platforms and users. If the law makes us liable for the speech of others, the biggest platforms would likely become locked-down and heavily censored. The next great websites and apps won't even get started, because they'll face overwhelming legal risk to host users' speech.
But "I strongly believe that Section 230 has long outlived its use," Senator Whitehouse said this week, calling Section 230 "a real vessel for evil that needs to come to an end." "The laws that Section 230 protect these big platforms from are very often laws that go back to the common law of England, that we inherited when this country was initially founded. I mean, these are long-lasting, well-tested, important legal constraints that have — they've met the test of time, not by the year or by the decade, but by the century.

"And yet because of this crazy Section 230, these ancient and highly respected doctrines just don't reach these people. And it really makes no sense, that if you're an internet platform you get treated one way; you do the exact same thing and you're a publisher, you get treated a completely different way.

"And so I think that the time has come.... It really makes no sense... [Testimony before the committee] shows how alone and stranded people are when they don't have the chance to even get justice. It's bad enough to have to live through the tragedy... But to be told by a law of Congress, you can't get justice because of the platform — not because the law is wrong, not because the rule is wrong, not because this is anything new — simply because the wrong type of entity created this harm."

Security

Sloppy AI Defenses Take Cybersecurity Back To the 1990s, Researchers Say 20

spatwei shares a report from SC Media: Just as it had at BSides Las Vegas earlier in the week, the risks of artificial intelligence dominated the Black Hat USA 2025 security conference on Aug. 6 and 7. We couldn't see all the AI-related talks, but we did catch three of the most promising ones, plus an off-site panel discussion about AI presented by 1Password. The upshot: Large language models and AI agents are far too easy to successfully attack, and many of the security lessons of the past 25 years have been forgotten in the current rush to develop, use and profit from AI.

We -- not just the cybersecurity industry, but any organization bringing AI into its processes -- need to understand the risks of AI and develop ways to mitigate them before we fall victim to the same sorts of vulnerabilities we faced when Bill Clinton was president. "AI agents are like a toddler. You have to follow them around and make sure they don't do dumb things," said Wendy Nather, senior research initiatives director at 1Password and a well-respected cybersecurity veteran. "We're also getting a whole new crop of people coming in and making the same dumb mistakes we made years ago." Her fellow panelist Joseph Carson, chief security evangelist and advisory CISO at Segura, had an appropriately retro analogy for the benefits of using AI. "It's like getting the mushroom in Super Mario Kart," he said. "It makes you go faster, but it doesn't make you a better driver."
Many of the AI security flaws resemble early web-era SQL injection risks. "Why are all these old vulnerabilities surfacing again? Because the GenAI space is full of security bad practices," said Nathan Hamiel, senior director of research and lead prototyping engineer at Kudelski Security. "When you deploy these tools, you increase your attack surface. You're creating vulnerabilities where there weren't any."

"Generative AI is over-scoped. The same AI that answers questions about Shakespeare is helping you develop code. This over-generalization leads you to an increased attack surface." He added: "Don't treat AI agents as highly sophisticated, super-intelligent systems. Treat them like drunk robots."
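The parallel the researchers draw can be made concrete. The classic 1990s flaw was concatenating untrusted input into a database query; prompt injection has the same shape, with hostile instructions concatenated into an LLM's context. A minimal illustrative sketch (our own, not from the talks) of the database version:

```python
import sqlite3

# In-memory database standing in for any backend behind a web form.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def lookup_unsafe(name):
    # The classic 1990s mistake: untrusted input concatenated into the query.
    return conn.execute(
        f"SELECT secret FROM users WHERE name = '{name}'"
    ).fetchall()

def lookup_safe(name):
    # The lesson the industry learned: parameterized queries keep data as data.
    return conn.execute(
        "SELECT secret FROM users WHERE name = ?", (name,)
    ).fetchall()

# Attacker-controlled input rewrites the query's logic, much as a hostile
# instruction embedded in retrieved text rewrites an LLM agent's prompt.
payload = "' OR '1'='1"
print(lookup_unsafe(payload))  # every row leaks: [('s3cret',)]
print(lookup_safe(payload))    # no match: []
```

The fix for the SQL case was structural (separate code from data); the point made at Black Hat is that generative AI currently has no equivalent separation.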
AI

Linux Kernel Could Soon Expose Every Line AI Helps Write 41

BrianFagioli shares a report from NERDS.xyz: Sasha Levin, a respected developer and engineer at Nvidia, has proposed a patch series aimed at formally integrating AI coding assistants into the Linux kernel workflow. The proposal includes two major changes. First, it introduces configuration stubs for popular AI development tools like Claude, GitHub Copilot, Cursor, Codeium, Continue, Windsurf, and Aider. These are symlinked to a centralized documentation file to ensure consistency. Second, and more notably, it lays out official guidelines for how AI-generated contributions should be handled. According to the proposed documentation, AI assistants must identify themselves in commit messages using a Co-developed-by: tag, but they cannot use Signed-off-by:, which legally certifies the commit under the Developer Certificate of Origin. That responsibility remains solely with the human developer.

One example shared in the patch shows a simple fix to a typo in the kernel's OPP documentation. Claude, an AI assistant, corrects "dont" to "don't" and commits the patch with the proper attribution: "Co-developed-by: Claude claude-opus-4-20250514." Levin's patch also creates a new section under Documentation/AI/ where the expectations and limitations of using AI in kernel development are laid out. This includes reminders to follow kernel coding standards, respect the development process, and understand licensing requirements — all things AI often struggles with.
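Under the proposed convention, such a commit message might look like the following sketch (the subject line and the human sign-off are hypothetical; the Co-developed-by: trailer follows the example in Levin's patch, while the Signed-off-by: line must come from the human developer certifying the Developer Certificate of Origin):

```
docs: opp: fix "dont" typo

Correct a misspelling in the OPP documentation.

Co-developed-by: Claude claude-opus-4-20250514
Signed-off-by: Jane Developer <jane@example.org>
```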
Privacy

Ask Slashdot: Do We Need Opt-Out-By-Default Privacy Laws? 92

"By and large, companies have failed to self-regulate," writes long-time Slashdot reader BrendaEM: They have not respected the individual's right to privacy. In software and web interfaces, companies have buried their privacy settings so deep that they cannot be found in a reasonable amount of time, or an unreasonable number of steps are needed to attempt to retain data. These companies have taken away the individual's right to privacy -- by default.

Are laws needed that protect a person's privacy by default--unless specific steps are taken by that user/purchaser to relinquish it? Should the wording of the explanation be so written that the contract is brief, explaining the forfeiture of the privacy, and where that data might be going? Should a company selling a product be required to state before purchase which rights need to be dismissed for its use? Should a legal owner who purchased a product expect it to stop functioning--only because a newer user contract is not agreed to?

Share your own thoughts and experiences in the comments. What's your ideal privacy policy?

And do we need opt-out-by-default privacy laws?
AI

Why We're Unlikely to Get Artificial General Intelligence Any Time Soon (msn.com) 261

OpenAI CEO Sam Altman believes Artificial General Intelligence could arrive within the next few years. But the speculations of some technologists "are getting ahead of reality," writes the New York Times, adding that many scientists "say no one will reach AGI without a new idea — something beyond the powerful neural networks that merely find patterns in data. That new idea could arrive tomorrow. But even then, the industry would need years to develop it." "The technology we're building today is not sufficient to get there," said Nick Frosst, a founder of the AI startup Cohere who previously worked as a researcher at Google and studied under the most revered AI researcher of the last 50 years. "What we are building now are things that take in words and predict the next most likely word, or they take in pixels and predict the next most likely pixel. That's very different from what you and I do." In a recent survey of the Association for the Advancement of Artificial Intelligence, a 40-year-old academic society that includes some of the most respected researchers in the field, more than three-quarters of respondents said the methods used to build today's technology were unlikely to lead to AGI.

Opinions differ in part because scientists cannot even agree on a way of defining human intelligence, arguing endlessly over the merits and flaws of IQ tests and other benchmarks. Comparing our own brains to machines is even more subjective. This means that identifying AGI is essentially a matter of opinion.... And scientists have no hard evidence that today's technologies are capable of performing even some of the simpler things the brain can do, like recognizing irony or feeling empathy. Claims of AGI's imminent arrival are based on statistical extrapolations — and wishful thinking. According to various benchmark tests, today's technologies are improving at a consistent rate in some notable areas, like math and computer programming. But these tests describe only a small part of what people can do.

Humans know how to deal with a chaotic and constantly changing world. Machines struggle to master the unexpected — the challenges, small and large, that do not look like what has happened in the past. Humans can dream up ideas that the world has never seen. Machines typically repeat or enhance what they have seen before. That is why Frosst and other sceptics say pushing machines to human-level intelligence will require at least one big idea that the world's technologists have not yet dreamed up. There is no way of knowing how long that will take. "A system that's better than humans in one way will not necessarily be better in other ways," Harvard University cognitive scientist Steven Pinker said. "There's just no such thing as an automatic, omniscient, omnipotent solver of every problem, including ones we haven't even thought of yet. There's a temptation to engage in a kind of magical thinking. But these systems are not miracles. They are very impressive gadgets."

While Google's AlphaGo could beat humans in a game with "a small, limited set of rules," the article points out that the real world "is bounded only by the laws of physics. Modelling the entirety of the real world is well beyond today's machines, so how can anyone be sure that AGI — let alone superintelligence — is just around the corner?" And the article offers this alternative perspective from Matteo Pasquinelli, a professor of the philosophy of science at Ca' Foscari University in Venice, Italy.

"AI needs us: living beings, producing constantly, feeding the machine. It needs the originality of our ideas and our lives."
Bitcoin

Trump's Crypto Venture Introduces a Stablecoin 77

World Liberty Financial, a crypto venture backed by Donald Trump and his family, has launched a U.S. dollar-pegged stablecoin called USD1. The token is backed by U.S. Treasuries and cash equivalents and will soon go live on the Ethereum and Binance Smart Chain networks. CNBC reports: The development comes as the market cap for dollar-backed stablecoins -- cryptocurrencies that promise a fixed value peg to another asset -- has been climbing to new all-time-highs this year and has grown more than 46% in the past year, according to CryptoQuant. The market has long been dominated by Tether (USDT) and, more recently, Circle's USDC. "USD1 provides what algorithmic and anonymous crypto projects cannot -- access to the power of DeFi underpinned by the credibility and safeguards of the most respected names in traditional finance," said World Liberty Financial co-founder Zach Witkoff. "We're offering a digital dollar stablecoin that sovereign investors and major institutions can confidently integrate into their strategies for seamless, secure cross-border transactions."

Alex Thorn, head of firmwide research at Galaxy Digital, said at the Digital Asset Summit: "Stablecoins are seen as more politically easy to do in Congress but actually will be dramatically more impactful to the United States and the world than market structure [legislation]. Who regulates who is important ... if you're one of the people that's going to be regulated, but the stablecoin bill could solidify dollar dominance for 100 years."
Classic Games (Games)

Magnus Carlsen Quits Chess Tournament After Refusing to Change Out of Jeans (cnn.com) 180

Magnus Carlsen quit the World Rapid Chess Championship on Friday, reports CNN, "after he refused to change out of the jeans he was wearing..."

"Carlsen, the world champion from 2013 until 2023, allegedly replied, 'I'm out, f*** you,' after being informed that he would not be permitted to continue," reports the Hindustan Times.

The International Chess Federation (or FIDE) "said in a statement that Carlsen breached the tournament's dress code by wearing jeans," reports CNN: As a result, Carlsen would not have been paired for round nine, though he could have returned for the rest of the tournament had he not decided to walk away, per Chess.com. Since he had performed poorly in the earlier rounds, there was little chance that Carlsen could have defended his title regardless....

The standoff became "a matter of principle" for Carlsen, he told chess channel Take Take Take. "I haven't appealed, honestly I'm too old at this point to care too much, if this is what they want to do ... nobody wants to back down, if this is where we are, that's fine by me," he said. "I'll probably head off to somewhere where the weather is a bit nicer than here and that's it." He explained that he had been at a lunch meeting before heading to the tournament's second day and "barely had time to go to the room, change, put on a shirt, jacket and honestly I didn't even think about the jeans."

Carlsen was also fined $200, according to the article. He has now also withdrawn from the World Blitz Championship which follows this tournament.

In a statement, the FIDE said their dress code and other regulations "are designed to ensure professionalism and fairness for all participants," and that the federation "remains committed to promoting chess and its values, including respect for the rules that all participants agree to follow."

The group's CEO added "Rules are applicable to all the participants, and it would be unfair towards all players who respected the dress-code, and those who were previously fined." (They added that "We gave Magnus more than enough time to change. But as he had stated himself in his interview — it became a matter of principle for him.")

CNN notes that Carlsen has already won five world rapid and seven world blitz titles in the last 10 years...
United Kingdom

UK Arts and Media Reject Plan To Let AI Firms Use Copyrighted Material (theguardian.com) 52

Writers, publishers, musicians, photographers, movie producers and newspapers have rejected the Labour government's plan to create a copyright exemption to help AI companies train their algorithms. From a report: In a joint statement, bodies representing thousands of creatives dismissed the proposal made by ministers on Tuesday that would allow companies such as OpenAI, Google and Meta to train their AI systems on published works unless their owners actively opt out.

The Creative Rights in AI Coalition (Crac) said existing copyright laws must be respected and enforced rather than degraded. The coalition includes the British Phonographic Industry, the Independent Society of Musicians, the Motion Picture Association and the Society of Authors as well as Mumsnet, the Guardian, Financial Times, Telegraph, Getty Images, the Daily Mail Group and Newsquest.

Their intervention comes a day after the technology and culture minister Chris Bryant told parliament the proposed system, subject to a 10-week consultation, would "improve access to content by AI developers, whilst allowing rights holders to control how their content is used for AI training."

AI

Japan's 'God of Management' Comes Back To Life as an AI Model (japantimes.co.jp) 30

Panasonic has created an AI clone of its late founder Konosuke Matsushita based on his writings, speeches, and over 3,000 voice recordings. From a local media report: Known as Japan's "god of management," the Panasonic icon is one of the figures most respected by the Japanese business community, and comes back to life in digital form to impart wisdom directly to those he never met in person.

"As the number of people who received training directly from Matsushita has been on the decline, we decided to use generative AI technology to pass down our group's founding vision to the next generation," the company said in a statement. Codeveloped with the University of Tokyo-affiliated Matsuo Institute, the model can reproduce how a person thinks or talks. The company aims to further develop the digital clone to help make business decisions in the future.

Crime

Fake CV Lands Top 'Engineer' In Jail For 15 Years (bbc.com) 90

Daniel Mthimkhulu, former chief "engineer" at South Africa's Passenger Rail Agency (Prasa), was sentenced to 15 years in prison for claiming false engineering degrees and a doctorate. His fraudulent credentials allowed him to rise rapidly within Prasa, contributing to significant financial losses and corruption within the agency. The BBC reports: Once hailed for his successful career, Daniel Mthimkhulu was head of engineering at the Passenger Rail Agency of South Africa (Prasa) for five years -- earning an annual salary of about [$156,000]. On his CV, the 49-year-old claimed to have had several mechanical engineering qualifications, including a degree from South Africa's respected Witwatersrand University as well as a doctorate from a German university. However, the court in Johannesburg heard that he had only completed his high-school education.

Mthimkhulu was arrested in July 2015 shortly after his web of lies began to unravel. He had started working at Prasa 15 years earlier, shooting up the ranks to become chief engineer, thanks to his fake qualifications. The court also heard how he had forged a job offer letter from a German company, which encouraged Prasa to increase his salary so the agency would not lose him. He was also at the forefront of a 600m rand deal to buy dozens of new trains from Spain, but they could not be used in South Africa as they were too high. [...] In an interview from 2019 with local broadcaster eNCA, Mthimkhulu admitted that he did not have a PhD. "I failed to correct the perception that I have it. I just became comfortable with the title. I did not foresee any damages as a result of this," he said.

Games

'Civilization 7 Captures the Chaos of Human History In Manageable Doses' (theguardian.com) 62

An anonymous reader quotes a report from The Guardian, written by Julian Benson: It's been eight years since Civilization 6 -- the most recent in a very long-running strategy game series that sees you take a nation from the prehistoric settlement of their first town through centuries of development until they reach the space age. Since 2016 it has amassed an abundance of expansions, scenario packs, new nations, modes and systems for players to master -- but series producer Dennis Shirk at Firaxis Games feels that enough is enough. "It was getting too big for its britches," he says. "It was time to make something new."

"It's tough to even get through the whole game," designer Ed Beach says, singling out the key problem that Firaxis aims to solve with the forthcoming Civilization 7. While the early turns of a campaign in Civilization 6 can be swift, when you're only deciding the actions for the population of a single town, "the number of systems, units, and entities you must manage explodes after a while," Beach says. From turn one to victory, a single campaign can take more than 20 hours, and if you start falling behind other nations, it can be tempting to restart long before you see the endgame. That's why Civilization 7's campaign has been split into three ages -- Antiquity, Exploration and Modern -- with each ending in a dramatic explosion of global crises. "Breaking the game into chapters lets people get through history in a more digestible fashion," Beach says.

When you start a new campaign, you pick a leader and civilization to govern, and direct your people in establishing their first settlements and encounters with the other peoples populating a largely undeveloped land. You'll choose the technologies they research, the expansions they make to their cities, and whom they try to befriend or conquer. Every turn you complete or scientific, economic, cultural and military milestone you pass adds points to a meter running in the background. Once that meter hits 200, you and all the other surviving civilizations on the map will transition into the next age. When moving from Antiquity to Exploration and later Exploration to Modern, you select a new civilization to lead. You'll retain all the cities you controlled before but have access to different technologies and attributes. This may seem strange, but it's built to reflect history: think of London, which was once run by the Romans before being supplanted by the Anglo-Saxons. No empire lasts for ever, but they don't all collapse, either.

Breaking Civilization 7 into chapters also gives campaigns a new rhythm. As you approach the end of an age, you'll begin to face global crises. In Antiquity, for instance, you can see a proliferation of independent powers similar to the tribes that tore down Rome. "We're not calling them barbarians any more," Beach says. "It's a more nuanced way to present them." These crises multiply and strengthen until you reach the next age. "It's like a sci-fi or fantasy series with a huge, crazy conclusion, and then the next book starts nice and calm," Beach says. "There's a point where getting to the next age is a relief."
Here's a round-up of thoughts on Civilization 7 from some of the most respected gaming outlets and reviewers:

Civilization VII hands-on: This strategy sequel rethinks the long game -- Ars Technica's Samuel Axon
Civilization 7 pairs seismic changes with a lovably familiar formula -- Eurogamer's Chris Tapsell
Civilization 7 hands-on: Huge changes are coming to the classic strategy series -- PC Gamer's Tyler Wilde
Civilization 7 lets you mix and match history -- and it's a blast -- The Verge's Ash Parrish
Civilization 7 Hands-On Preview: Creating Your Legacy -- Game Rant's Joshua Duckworth
Sid Meier's Civilization VII preview -- possibly the freshest sequel yet -- GamesHub's Jam Walker
How Civilization 7 Rethinks The Series' Structure -- GameSpot's Steve Watts
Beer

Alcohol Researcher Says Alcohol-Industry Lobbyists are Attacking His Work (yahoo.com) 154

"Last year, a major meta-analysis that re-examined 107 studies over 40 years came to the conclusion that no amount of alcohol improves health," the New York Times reported this June, citing a study co-authored by Tim Stockwell, an epidemiologist at the Canadian Institute for Substance Use Research. Dr. Stockwell (and other scientists he's collaborated with) "are overhauling decades-worth of scientific evidence — and newspaper headlines — that backed the health benefits of alcohol," writes the Telegraph, "or what is known in the scientific community as the J-curve. The J-curve is the theory that, like a capital J, the negative health consequences of drinking dip slightly into positive territory with moderate drinking — as it benefits such things as the heart — before rising sharply back into negative territory the more someone drinks."

But Stockwell's study prompted at least one scientist to accuse Stockwell of "cherry picking" evidence to suit an agenda — while a think-tank executive suggests he's a front for a worldwide temperance lobby: Dr Stockwell denies this. Speaking to The Telegraph, he in turn accused his detractors of being funded by the alcohol lobby and said his links to temperance societies were fleeting. He was the president of the Kettil Bruun Society (a think tank born out of what was the international temperance congresses) [from 2005 to 2007] and he has been reimbursed for addressing temperance movements and admits attending their meetings, but, he says, not as a member...

Former British government scientist Richard Harding, who gave evidence on safe drinking to the House of Commons select committee on science and technology in 2011, told The Telegraph that Dr Stockwell had wrongly taken a correlation to be causal. "Dr Stockwell's research is essentially epidemiology, which is the study of populations," Dr Harding said. "You record people's lifestyle and then see what diseases they get and try to correlate the disease with some aspect of their lifestyle. But it is just a correlation, it's just an association. Epidemiology can never establish causality on its own. And in this particular case, Dr Stockwell selected six studies out of 107 to focus on. You could say he cherry picked them. Really, the important thing is not the epidemiology, it's the effect that alcohol actually has on the body. We know the reasons why the curve is J-shaped; it's because of the protective effect moderate consumption has on heart disease and a number of other diseases."

Dr Stockwell rejects Dr Harding's criticism of his study, telling The Telegraph that Dr Harding "doesn't appear to have read it" and accusing him of being in the pocket of the alcohol industry. "We identified six high-quality studies out of 107 and they didn't find any J-shaped curve," Dr Stockwell said. "In fact, since our recent paper, we've now got genetic studies which are showing there's no benefits of low-level alcohol use. I personally think there might still be small benefits, but the point of our work is that, if there are benefits, they've been exaggerating them."

The article notes that Stockwell's research "has been published in The Lancet, among other esteemed organs," and that "scientists he has collaborated with on research highlighting the dangers of alcohol are in positions of power at major institutions, such as the World Health Organisation."

And honestly, the opposing viewpoint seems to be thinly-sourced. Besides Harding (the former British government scientist), the article cites:
  • An alcohol policy specialist at Brock University in Ontario (who argues rather unconvincingly that "you can't measure when someone didn't hurt themselves because a friend invited them for a drink.")

On the basis of that, the article claims "respected peers say it is far from settled science and have cast doubt on his research" (and that "fellow academics and experts" told The Telegraph "they read the report in disbelief"). Did the Telegraph speak to others who just aren't mentioned in the story? Or are they extrapolating, in that famous British tabloid journalism sort of way?


Education

First-Known TikTok Mob Attack Led By Middle Schoolers Tormenting Teachers (arstechnica.com) 135

An anonymous reader quotes a report from Ars Technica: A bunch of eighth graders in a "wealthy Philadelphia suburb" recently targeted teachers with an extreme online harassment campaign that The New York Times reported was "the first known group TikTok attack of its kind by middle schoolers on their teachers in the United States." According to The Times, the Great Valley Middle School students created at least 22 fake accounts impersonating about 20 teachers in offensive ways. The fake accounts portrayed long-time, dedicated teachers sharing "pedophilia innuendo, racist memes," and homophobic posts, as well as posts fabricating "sexual hookups among teachers."

The Pennsylvania middle school's principal, Edward Souders, told parents in an email that the number of students creating the fake accounts was likely "small," but that hundreds of students piled on, leaving comments and following the fake accounts. Other students responsibly rushed to report the misconduct, though, Souders said. "I applaud the vast number of our students who have had the courage to come forward and report this behavior," Souders said, urging parents to "please take the time to engage your child in a conversation about the responsible use of social media and encourage them to report any instances of online impersonation or cyberbullying." Some students claimed that the group attack was a joke that went too far. Certain accounts impersonating teachers made benign posts, The Times reported, but other accounts risked harming respected teachers' reputations. When creating fake accounts, students sometimes used family photos that teachers had brought into their classrooms or scoured the Internet for photos shared online.

Following The Times' reporting, the superintendent of the Great Valley School District (GVSD), Daniel Goffredo, posted a message to the community describing the impact on teachers as "profound." One teacher told The Times that she felt "kicked in the stomach" by the students' "savage" behavior, while another accused students of slander and character assassination. Both were portrayed in fake posts with pedophilia innuendo. "I implore you also to use the summer to have conversations with your children about the responsible use of technology, especially social media," Goffredo said. "What seemingly feels like a joke has deep and long-lasting impacts, not just for the targeted person but for the students themselves. Our best defense is a collaborative one." Goffredo confirmed that the school district had explored legal responses to the group attack. But ultimately the district found that they were "limited" because "courts generally protect students' rights to off-campus free speech, including parodying or disparaging educators online -- unless the students' posts threaten others or disrupt school," The Times reported. Instead, the middle school "briefly suspended several students," teachers told The Times, and held an eighth-grade assembly raising awareness of harms of cyberbullying, inviting parents to join.

IBM

Lynn Conway, Leading Computer Scientist and Transgender Pioneer, Dies At 85 (latimes.com) 155

Lynn Conway, a pioneering computer scientist who made significant contributions to VLSI design and microelectronics, and a prominent advocate for transgender rights, died Sunday from a heart condition. She was 85. Pulitzer Prize-winning journalist Michael Hiltzik remembers Conway in a column for the Los Angeles Times: As I recounted in 2020, I first met Conway when I was working on my 1999 book about Xerox PARC, Dealers of Lightning, for which she was a uniquely valuable source. In 2000, when she decided to come out as transgender, she allowed me to chronicle her life in a cover story for the Los Angeles Times Magazine titled "Through the Gender Labyrinth." That article traced her journey from childhood as a male in New York's strait-laced Westchester County to her decision to transition. Years of emotional and psychological turmoil followed, even as she excelled in academic studies. [Conway earned bachelor's and master's degrees in electrical engineering from Columbia University in 1961, quickly joining a team at IBM to design the world's fastest supercomputer. Despite personal success, she faced significant emotional turmoil, leading to her decision to transition in 1968. Though initially supportive, IBM ultimately fired Conway because it could not reconcile her transition with the company's conservative image.]

The family went on welfare for three months. Conway's wife barred her from contact with her daughters. She would not see them again for 14 years. Beyond the financial implications, the stigma of banishment from one of the world's most respected corporations felt like an excommunication. She sought jobs in the burgeoning electrical engineering community around Stanford, working her way up through start-ups, and in 1973 she was invited to join Xerox's brand new Palo Alto Research Center, or PARC. In partnership with Caltech engineering professor Carver Mead, Conway established the design rules for the new technology of "very large-scale integrated circuits" (or, in computer shorthand, VLSI). The pair laid down the rules in a 1979 textbook that a generation of computer and engineering students knew as "Mead-Conway."

VLSI fostered a revolution in computer microprocessor design that included the Pentium chip, which would power millions of PCs. Conway spread the VLSI gospel by creating a system in which students taking courses at MIT and other technical institutions could get their sample designs rendered in silicon. Conway's life journey gave her a unique perspective on the internal dynamics of Xerox's unique lab, which would invent the personal computer, the laser printer, Ethernet, and other innovations that have become fully integrated into our daily lives. She could see it from the vantage point of an insider, thanks to her experience working on IBM's supercomputer, and an outsider, thanks to her personal history.

After PARC, she was recruited to head a supercomputer program at the Defense Department's Advanced Research Projects Agency, or DARPA -- sailing through her FBI background check so easily that she became convinced that the Pentagon must have already encountered transgender people in its workforce. A figure of undisputed authority in some of the most abstruse corners of computing, Conway was elected to the National Academy of Engineering in 1989. She joined the University of Michigan as a professor and associate dean in the College of Engineering. In 2002 she married a fellow engineer, Charles Rogers, and with him lived an active life -- with a shared passion for white-water canoeing, motocross racing and other adventures -- on a 24-acre homestead not far from Ann Arbor, Mich.
In 2020, Conway received a formal apology from IBM for firing her 52 years earlier. Diane Gherson, an IBM senior vice president, told her, "Thanks to your courage, your example, and all the people who followed in your footsteps, as a society we are now in a better place.... But that doesn't help you, Lynn, probably our very first employee to come out. And for that, we deeply regret what you went through -- and know I speak for all of us."
Unix

Mike Karels, of 4.4 BSD Fame, Has Died (startribune.com) 10

Michael "Mike" Karels, one of the authors of "The Design and Implementation of the 4.4BSD Operating System," a member of the Computer Systems Research Group at Berkeley, and a recipient of the USENIX Association Lifetime Achievement Award, has died. Longtime Slashdot reader bplipschitz shared the news.

The FreeBSD Foundation issued a statement in memory of Karels: "We are deeply saddened about the passing of Mike Karels, a pivotal figure in the history of BSD UNIX, a respected member of the FreeBSD community, and the Deputy Release Engineer for the FreeBSD Project. Mike's contributions to the development and advancement of BSD systems were profound and have left an indelible mark on the Project. Mike's vision and dedication were instrumental in shaping the FreeBSD we know and use today. His legacy will continue to inspire and guide us in our future endeavors."
Sony

Sony Lays Down the Gauntlet on AI 37

Sony Music Group, one of the world's biggest record labels, warned AI companies and music streaming platforms not to use the company's content without explicit permission. From a report: Sony Music, whose artists include Lil Nas X and Celine Dion, sent letters to more than 700 companies in an effort to protect its intellectual property, which includes album cover art, metadata, musical compositions and lyrics, from being used for training AI models. "Unauthorized use" of Sony Music Group content in the "training, development or commercialization of AI systems" deprives the company and its artists of control and compensation for those works, according to the letter, which was obtained by Bloomberg News.

[...] Sony Music, along with the rest of the industry, is scrambling to balance the creative potential of the fast-moving technology with protecting artists' rights and its own profits. "We support artists and songwriters taking the lead in embracing new technologies in support of their art," Sony Music Group said in a statement Thursday. "However, that innovation must ensure that songwriters' and recording artists' rights, including copyrights, are respected."

Slashdot Top Deals