Submission + - Can Cory Doctorow's 'Enshittification' Transform the Tech Industry Debate? (nytimes.com)

An anonymous reader writes: Over the course of a nearly four-decade career, Cory Doctorow has written 15 novels, four graphic novels, dozens of short stories, six nonfiction books, approximately 60,000 blog posts and thousands of essays. And yet for all the millions of words he’s published, these days the award-winning science fiction author and veteran internet activist is best known for just a single one: Enshittification. The term, which Doctorow, 54, popularized in essays in 2022 and 2023, refers to the way that online platforms become worse to use over time, as the corporations that own them try to make more money. Though the coinage is cheeky, in Doctorow’s telling the phenomenon it describes is a specific, nearly scientific process that progresses according to discrete stages, like a disease.

Since then, the meaning has expanded to encompass a general vibe — a feeling far greater than frustration at Facebook, which long ago ceased being a good way to connect with friends, or Google, whose search is now baggy with SEO spam. Of late, the idea has been employed to describe everything from video games to television to American democracy itself. “It’s frustrating. It’s demoralizing. It’s even terrifying,” Doctorow said in a 2024 speech. On Tuesday, Farrar Straus & Giroux will release “Enshittification: Why Everything Suddenly Got Worse and What to Do About It,” Doctorow’s book-length elaboration on his essays, complete with case studies (Uber, Twitter, Photoshop) and his prescriptions for change, which revolve around breaking up big tech companies and regulating them more robustly.

Submission + - This Isn't Your Father's Weed - and It's Tied to 42 Percent of Fatal Crashes (facs.org) 1

schwit1 writes: The study, just published in the Journal of the American College of Surgeons, reviewed data for 246 deceased Ohio drivers and found that 41.9% of them tested positive for THC, with an average blood level of 30.7 ng/mL — 15 times the state's legal limit.

Here in Colorado, the average THC concentration of the legal stuff is approximately 21%, based on comprehensive lab testing of weed statewide. That's just the average concentration.

Frogurt — a brand grown and sold in Michigan — tests at 41%. Something called Future #1 is up to 37% THC, and the Permanent Marker brand runs at an average of 34%.

Back in the day, the hard-to-find 15% stuff was the much-sought-after "one-hit weed." The average pot today is 50% stronger. But people willing to pay a little more can get stuff four times more powerful than just about anything your typical 1990 dorm-room smoker enjoyed behind the Redwood Curtain.

The industrial-scale pot-growing enabled by legalization made the stronger concentrations possible, probably even inevitable. Easy availability changes the equation, too. Around 2000, 5% or so of adults reported "regular" pot use of at least once a month. In 2024, that number was 15%.

But cultural conditions have changed greatly since 2000. People were likely less willing back then to answer positively. So we just don't know what the true figures are, but if the trendline is up for regular pot smoking, then the trendline for potency is way up.

About the only thing we can conclude with any certainty is that you'd have to be stoned out of your gourd to think it's a good idea to drive while impaired.

Comment Re:Spreading misinformation (Score 1) 226

The previous senile felon was the one who started this. And Jimmy Kimmel did in fact spread misinformation; other news outlets, even leftist media, have confirmed that. Tim Pool collected all the articles nicely into a YouTube video you can watch. But you still live in your bubble and just wonder why everyone else is wrong when you're the odd one out of touch with reality.

Submission + - OpenAI, Oracle, SoftBank Plan Five New AI Data Centers For $500 Billion (reuters.com)

An anonymous reader writes: OpenAI, Oracle, and SoftBank on Tuesday announced plans for five new artificial intelligence data centers in the United States to build out their ambitious Stargate project. [...] ChatGPT-maker OpenAI said on Tuesday it will open three new sites with Oracle in Shackelford County, Texas, Dona Ana County, New Mexico and an undisclosed site in the Midwest. Two more data center sites will be built in Lordstown, Ohio and Milam County, Texas by OpenAI, Japan's SoftBank and a SoftBank affiliate.

The new sites, the Oracle-OpenAI site expansion in Abilene, Texas, and the ongoing projects with CoreWeave will bring Stargate's total data center capacity to nearly 7 gigawatts and more than $400 billion in investment over the next three years, OpenAI said. The $500 billion project was intended to generate 10 gigawatts in total data center capacity. "AI can only fulfill its promise if we build the compute to power it," OpenAI CEO Sam Altman said in a statement. Tuesday's announcement, expected to create 25,000 onsite jobs, follows Nvidia saying on Monday that it will invest up to $100 billion in OpenAI and supply data center chips. OpenAI and partners plan to use debt financing to lease chips for the Stargate project, people familiar with the matter said.

Submission + - $2.2 billion solar plant in California turned off after years of wasted money (nypost.com)

An anonymous reader writes: ‘Never lived up to its promises’

The solar power plant, which features three 459-foot towers and thousands of computer-controlled mirrors known as heliostats, cost some $2.2 billion to build.

Construction began in 2010 and was completed in 2014. Now, it's set to close in 2026 after failing to efficiently generate solar energy.

In 2011, the US Department of Energy under former President Barack Obama issued $1.6 billion in three federal loan guarantees for the project and the Secretary of Energy, Ernest Moniz, hailed it as "an example of how America is becoming a world leader in solar energy.”

But ultimately, it's been more emblematic of profligate government spending and unwise bets on poorly conceived, quickly outdated technologies.

"Ivanpah stands as a testament to the waste and inefficiency of government subsidized energy schemes," Jason Isaac, CEO of the American Energy Institute, an American energy advocacy group, told Fox News in a statement this past February. It "never lived up to its promises, producing less electricity than expected, while relying on natural gas to stay operational."

The Almighty Buck

Vietnam Shuts Down Millions of Bank Accounts Over Biometric Rules (icobench.com) 23

Longtime Slashdot reader schwit1 shares a report from ICO Bench: As of September 1, 2025, banks across Vietnam are closing accounts deemed inactive or non-compliant with new biometric rules. Authorities estimate that more than 86 million accounts out of roughly 200 million are at risk if users fail to update their identity verification.

The State Bank of Vietnam has also introduced stricter thresholds for transactions:
- Facial authentication is mandatory for online transfers above 10 million VND (about $379).
- Cumulative daily transfers over 20 million VND ($758) also require biometric approval.
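The two thresholds above can be expressed as a simple rule. The sketch below is purely illustrative — the names, structure, and the assumption that both limits trigger the same biometric check are mine, not from any real banking API:

```python
# Hypothetical sketch of the State Bank of Vietnam transfer thresholds
# described above. Amounts are in VND; everything here is illustrative.
PER_TRANSFER_LIMIT = 10_000_000      # facial authentication above this per transfer
DAILY_CUMULATIVE_LIMIT = 20_000_000  # biometric approval above this per day

def needs_biometric(amount_vnd: int, total_sent_today_vnd: int) -> bool:
    """Return True if this online transfer would require biometric approval."""
    if amount_vnd > PER_TRANSFER_LIMIT:
        return True
    if total_sent_today_vnd + amount_vnd > DAILY_CUMULATIVE_LIMIT:
        return True
    return False
```

Note that under this reading even small transfers eventually require biometrics once the day's cumulative total passes 20 million VND.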

The policy is part of the central bank's broader "cashless" strategy, aimed at combating fraud, identity theft, and deepfake-enabled scams. [...] While many Vietnamese citizens have updated their biometric data without issue, the measure has disproportionately affected foreign residents and expatriates who cannot easily visit local branches, as well as dormant accounts left inactive for years.
schwit1 highlights a post on X from Bitcoin expert and TFTC.io founder Marty Bent: "If users don't comply by the 30th they'll lose their money. This is why we bitcoin."

Submission + - What Will Universities Look Like Post-ChatGPT? (cameronharwick.com) 5

An anonymous reader writes: Lots of people are sounding the alarm on AI cheating in college.

Who could resist a tool that makes every assignment easier with seemingly no consequences? After spending the better part of the past two years grading AI-generated papers, Troy Jollimore, a poet, philosopher, and Cal State Chico ethics professor, has concerns. “Massive numbers of students are going to emerge from university with degrees, and into the workforce, who are essentially illiterate,” he said. “Both in the literal sense and in the sense of being historically illiterate and having no knowledge of their own culture, much less anyone else’s.”

Economist Cameron Harwick says it's on professors to respond, and it's going to look like relying more on tests and not on homework—which means a diploma will have to be less about intelligence and more about agency and discipline.

This approach significantly raises the stakes of tests. It violates a longstanding maxim in education, that successful teaching involves quick feedback: frequent, small assignments that help students gauge how they’re doing, graded, to give them a push to actually do it.... Unfortunately, this conventional wisdom is probably going to have to go. If AI makes some aspect of the classroom easier, something else has to get harder, or the university has no reason to exist.

The signal that a diploma sends can’t continue to be “I know things”. ChatGPT knows things. A diploma in the AI era will have to signal discipline and agency – things that AI, as yet, still lacks and can’t substitute for. Any student who makes it through such a class will have a credible signal that they can successfully avoid the temptation to slack, and that they have the self-control to execute on long-term plans.


Youtube

YouTube Pulls Tech Creator's Self-Hosting Tutorial as 'Harmful Content' (jeffgeerling.com) 77

YouTube pulled a popular tutorial video from tech creator Jeff Geerling this week, claiming his guide to installing LibreELEC on a Raspberry Pi 5 violated policies against "harmful content." The video, which showed viewers how to set up their own home media servers, had been live for over a year and racked up more than 500,000 views. YouTube's automated systems flagged the content for allegedly teaching people "how to get unauthorized or free access to audio or audiovisual content."

Geerling says his tutorial covered only legal self-hosting of media people already own -- no piracy tools or copyright workarounds. He said he goes out of his way to avoid mentioning popular piracy software in his videos. It's the second time YouTube has pulled a self-hosting content video from Geerling. Last October, YouTube removed his Jellyfin tutorial, though that decision was quickly reversed after appeal. This time, his appeal was denied.

Submission + - GitHub to require you to accept AI-written Issues (github.com) 1

jddj writes: GitHub (https://github.com/), the Microsoft-owned repository popular with all sorts of open-source projects, is about to require that those project repositories accept Copilot-written issues.

The concerns are that AI-written issues could be numerous beyond what a project is staffed to handle, perhaps adding millions of issues to a backlog (in that there's _always_ something you could do better); that AI-written issues will be no better than the AI slop polluting search results, and perhaps often hallucinated; and that AI-written issues will become de facto DoS attacks, as was seen with curl recently.

There is no opt-out, and repo managers cannot block Copilot.

The anger is fierce and widespread, with commenters suggesting that any such feature be opt-in, and that Copilot-generated issues be filterable.

United States

The Newark Airport Crisis is About To Become Everyone's Problem (theverge.com) 157

Newark Liberty International Airport has suffered six radar and radio outages in nine months, with the most recent occurring May 9th when controllers told pilots "our scopes just went black again" before handing off flights to other facilities. The outages have forced flight cancellations, diversions, and delays lasting over a week as airlines repositioned aircraft and crews.

The Federal Aviation Administration created the problem by relocating Newark's air traffic control operations from the understaffed N90 facility on Long Island to Philadelphia in 2024. Only 17 of 33 controllers accepted the move despite $100,000 relocation bonuses, leaving operations short-staffed. Rather than build new STARS servers in Philadelphia, the FAA opted to send radar data over 130 miles of commercial copper telephone lines.

The remote feeds have experienced approximately 10 minutes of downtime over 10 months -- exceeding the agency's reliability standards and occurring 200 times more frequently than the FAA's internal analysis predicted. The agency simultaneously laid off over 100 maintenance technicians and telecommunications specialists in February, further straining an air traffic control system that suffers around 700 outages weekly nationwide while managing 16.8 million annual flights with 1990s-era technology.
Businesses

Fake Job Seekers Are Flooding US Companies (cnbc.com) 63

Fake job seekers using AI tools to impersonate candidates are increasingly targeting U.S. companies with remote positions, creating a growing security threat across industries. By 2028, one in four global job applicants will be fake, according to Gartner. These imposters use AI to fabricate photo IDs, generate employment histories, and provide interview answers, often targeting cybersecurity and cryptocurrency firms, CNBC reports.

Once hired, fraudulent employees can install malware to demand ransoms, steal customer data, or simply collect salaries they wouldn't otherwise obtain, according to Vijay Balasubramaniyan, CEO of Pindrop Security. The problem extends beyond tech companies. Last year, the Justice Department alleged more than 300 U.S. firms inadvertently hired impostors with ties to North Korea, including major corporations across various sectors.

Submission + - The UK is developing a murder prediction tool

toutankh writes: The UK government is developing a tool to predict murder.

The scheme was originally called the “homicide prediction project”, but its name has been changed to “sharing data to improve risk assessment”. The Ministry of Justice hopes the project will help boost public safety but campaigners have called it “chilling and dystopian”.

The existence of the project was uncovered by Statewatch rather than announced by the UK government. The PR following this discovery looks like uncoordinated damage control: one stated goal is to "ultimately contribute to protecting the public via better analysis", but a spokesperson also said that it is "for research purposes only". One criticism is that such a system will inevitably reproduce existing bias from the police. What could go wrong?

Businesses

Shopify CEO Says Staffers Need To Prove Jobs Can't Be Done By AI Before Asking for More Headcount (cnbc.com) 106

Shopify CEO Tobi Lutke is changing his company's approach to hiring in the age of AI. Employees will be expected to prove why they "cannot get what they want done using AI" before asking for more headcount and resources, Lutke wrote in a memo to staffers that he posted to X. From a report: "What would this area look like if autonomous AI agents were already part of the team?" Lutke wrote in the memo, which was sent to employees late last month. "This question can lead to really fun discussions and projects." Lutke also said there's a "fundamental expectation" across Shopify that employees embrace AI in their daily work, saying it has been a "multiplier" of productivity for those who have used it.

"I've seen many of these people approach implausible tasks, ones we wouldn't even have chosen to tackle before, with reflexive and brilliant usage of AI to get 100X the work done," Lutke wrote. The company, which sells web-based software that helps online retailers manage sales and run their operations, will factor AI usage into performance reviews, he added.
