Businesses

How Europe Crushes Innovation (economist.com) 153

European labor regulations enacted nearly a century ago now impose costs on companies that discourage investment in disruptive technologies. An American firm shedding workers incurs costs equivalent to seven months of wages per employee; in Germany the figure reaches 31 months, and in France, 38. The expense extends beyond severance pay and union negotiations: companies retain unproductive workers they would prefer to dismiss.

New investments face delays of years as dismissed employees are gradually replaced. Olivier Coste, a former EU official turned tech entrepreneur, and economist Yann Coatanlem tracked these opaque restructuring costs and found that European firms avoid risky ventures because of them. Large companies typically finance ten risky projects where eight fail and require mass redundancies. Apple developed a self-driving car for years before abandoning the effort and firing 600 employees in 2024. The two successful projects generate profits worth many times the invested sums. This calculus works in America where failure costs remain low. In Europe the same bet becomes financially unviable.

European blue-chip firms sell products that are improved versions of what they sold in the 20th century -- turbines, shampoos, vaccines, jetliners. American star firms peddle AI chatbots, cloud computers, reusable rockets. Nvidia is worth more than the European Union's 20 biggest listed firms combined. Microsoft, Google, and Meta each fired over 10,000 staff in recent years despite thriving businesses; Microsoft CEO Satya Nadella has described laying people off amid success as the "enigma of success." Bosch and Volkswagen recently announced layoffs with timelines stretching to 2030.
AI

Testing the Viral AI Necklace That Promises Companionship But Delivers Confusion (fortune.com) 26

Fortune tested the AI Friend necklace for two weeks and found it struggled to perform its basic function. The $129 pendant missed conversations entirely during the author's breakup call and could only offer vague questions about "fragments" when she tried to ask for advice. The device lagged seven to ten seconds behind her speech and frequently disconnected. The author had to press her lips against the pendant and repeat herself multiple times to get coherent replies. After a week and a half the necklace forgot her name and later misremembered her favorite color.

The startup has raised roughly seven million dollars in venture capital for the product and spent a large portion on eleven thousand subway posters across the MTA system. Sales reached three thousand units but only one thousand have shipped. The company brought in slightly under four hundred thousand dollars in revenue. The startup's founder told Fortune he deliberately "lobotomized" the AI's personality after receiving complaints. The terms of service require arbitration in San Francisco and grant the company permission to collect audio and voice data for AI training.
AI

Fake AI-Generated Actress Gets Agent - and a Very Angry Reaction from (Human) Actors Union (yahoo.com) 99

A computer-generated actress appearing in Instagram shorts now has a talent agent, reports the Los Angeles Times.

The massive screen actors union SAG-AFTRA "weighed in with a withering response," saying in a statement that it believes creativity is, and should remain, human-centered, and that it opposes the replacement of human performers by synthetics.

To be clear, "Tilly Norwood" is not an actor, it's a character generated by a computer program that was trained on the work of countless professional performers — without permission or compensation. It has no life experience to draw from, no emotion and, from what we've seen, audiences aren't interested in watching computer-generated content untethered from the human experience. It doesn't solve any "problem" — it creates the problem of using stolen performances to put actors out of work, jeopardizing performer livelihoods and devaluing human artistry.

Additionally, signatory producers should be aware that they may not use synthetic performers without complying with our contractual obligations, which require notice and bargaining whenever a synthetic performer is going to be used.

"They are taking our professional members' work that has been created, sometimes over generations, without permission, without compensation and without acknowledgment, building something new," SAG-AFTRA President Sean Astin told the Los Angeles Times in an interview. "But the truth is, it's not new. It manipulates something that already exists, so the conceit that it isn't harming actors — because it is its own new thing — ignores the fundamental truth that it is taking something that doesn't belong to them," Astin said. "We want to allow our members to benefit from new technologies. They just need to know that it's happening. They need to give permission for it, and they need to be bargained with...."

Some actors called for a boycott of any agents who decide to represent Norwood. "Read the room, how gross," In the Heights actor Melissa Barrera wrote on Instagram. "Our members reserve the right to not be in business with representatives who are operating in an unfair conflict of interest, who are operating in bad faith," Astin said.

But this week the head of a new studio from startup Luma AI "said all the big companies and studios were working on AI assisted projects," writes Deadline — and then claimed "being under NDA, she was not in a position to announce any of the details."
Businesses

Cory Doctorow Explains Why Amazon is 'Way Past Its Prime' (theguardian.com) 116

"It's not just you. The internet is getting worse, fast," writes Cory Doctorow. Sunday he shared an excerpt from his upcoming book Enshittification: Why Everything Suddenly Got Worse and What to Do About It.

He succinctly explains "this moment we're living through, this Great Enshittening" using Amazon as an example. Platforms amass users, but then abuse them to make things better for their business customers. And then they abuse those business customers too, abusing everybody while claiming all the value for themselves. "And become a giant pile of shit."

So first Amazon subsidized prices and shipping, then locked in customers with Prime shipping subscriptions (while adding the chains of DRM to its ebooks and audiobooks)... These tactics — Prime, DRM and predatory pricing — make it very hard not to shop at Amazon. With users locked in, to proceed with the enshittification playbook, Amazon needed to get its business customers locked in, too... [M]erchants' dependence on those customers allows Amazon to extract higher discounts from those merchants, and that brings in more users, which makes the platform even more indispensable for merchants, allowing the company to require even deeper discounts...

[Amazon] uses its overview of merchants' sales, as well as its ability to observe the return addresses on direct shipments from merchants' contracting factories, to cream off its merchants' bestselling items and clone them, relegating the original seller to page umpty-million of its search results. Amazon also crushes its merchants under a mountain of junk fees pitched as optional but effectively mandatory. Take Prime: a merchant has to give up a huge share of each sale to be included in Prime, and merchants that don't use Prime are pushed so far down in the search results, they might as well cease to exist. Same with Fulfilment by Amazon, a "service" in which a merchant sends its items to an Amazon warehouse to be packed and delivered with Amazon's own inventory. This is far more expensive than comparable (or superior) shipping services from rival logistics companies, and a merchant that ships through one of those rivals is, again, relegated even farther down the search rankings.

All told, Amazon makes so much money charging merchants to deliver the wares they sell through the platform that its own shipping is fully subsidised. In other words, Amazon gouges its merchants so much that it pays nothing to ship its own goods, which compete directly with those merchants' goods.... Add all the junk fees together and an Amazon seller is being screwed out of 45-51 cents on every dollar it earns there. Even if it wanted to absorb the "Amazon tax" on your behalf, it couldn't. Merchants just don't make 51% margins. So merchants must jack up prices, which they do. A lot... [W]hen merchants raise their prices on Amazon, they are required to raise their prices everywhere else, even on their own direct-sales stores. This arrangement is called most-favoured-nation status, and it's key to the U.S. Federal Trade Commission's antitrust lawsuit against Amazon...
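A minimal sketch of the arithmetic behind that claim (the function name and dollar figures here are illustrative, not from Doctorow): if a platform keeps a fixed share of every dollar, the sticker price a merchant must charge to net the same amount grows as `1 / (1 - fee_share)`.

```python
def required_price(target_net: float, fee_share: float) -> float:
    """Sticker price needed to net `target_net` per sale when the
    platform keeps `fee_share` of every dollar (illustrative only)."""
    if not 0 <= fee_share < 1:
        raise ValueError("fee_share must be in [0, 1)")
    return target_net / (1 - fee_share)

# A merchant that needs to clear $10.00 per sale:
print(round(required_price(10.00, 0.45), 2))  # 18.18 at a 45% take
print(round(required_price(10.00, 0.51), 2))  # 20.41 at a 51% take
```

At the 45-51% fee range Doctorow cites, the merchant must roughly double its price to break even, which is why the "Amazon tax" propagates to list prices everywhere under most-favoured-nation terms.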

If Amazon is taxing merchants 45-51 cents on every dollar they make, and if merchants are hiking their prices everywhere their goods are sold, then it follows you're paying the Amazon tax no matter where you shop — even the corner mom-and-pop hardware store. It gets worse. On average, the first result in an Amazon search is 29% more expensive than the best match for your search. Click any of the top four links on the top of your screen and you'll pay an average of 25% more than you would for your best match — which, on average, is located 17 places down in an Amazon search result.

Doctorow knows what we need to do:
  • Ban predatory pricing — "selling goods below cost to keep competitors out of the market (and then jacking them up again)."
  • Impose structural separation, "so it can either be a platform, or compete with the sellers that rely on it as a platform."
  • Curb junk fees, "which suck 45-51 cents on every dollar merchants take in."
  • End its most favoured nation deal, which forces merchants "to raise their prices everywhere else, too."
  • Unionise drivers and warehouse workers.
  • Treat rigged search results as the fraud they are.

These are policy solutions. (Because "You can't shop your way out of a monopoly," Doctorow warns.) And otherwise, as Doctorow says earlier, "Once a company is too big to fail, it becomes too big to jail, and then too big to care."

In the mean time, Doctorow also makes up a new word — "the enshitternet" — calling it "a source of pain, precarity and immiseration for the people we love.

"The indignities of harassment, scams, disinformation, surveillance, wage theft, extraction and rent-seeking have always been with us, but they were a minor sideshow on the old, good internet and they are the everything and all of the enshitternet."

Thanks to long-time Slashdot readers mspohr and fjo3 for sharing the article.


AI

Sora's Controls Don't Block All Deepfakes or Copyright Infringements (cnbc.com) 32

If you upload an image to serve as the inspiration for an AI-generated video from OpenAI's Sora, "the app will reject your image if it detects a face — any face," writes Mashable (unless that person has agreed to participate). All Sora videos also include a watermark, notes PC Magazine, and Sora bans the creation of AI-generated videos showing public figures.

"But it turns out the policy doesn't apply to dead celebrities..." Unlike lower-quality deepfakes, many of the Sora videos appear disturbingly realistic and accurately mimic the voices and facial expressions of deceased celebrities. Some of the clips even contain licensed music... [A]ccording to OpenAI, the videos are fair game. "We don't have a comment to add, but we do allow the generation of historical figures," the company tells PCMag.
CNBC reported Saturday that Sora users have also "flooded the platform with artificial intelligence-generated clips of popular brands and animated characters." It noted Sora generated videos with clearly copyrighted characters like Ronald McDonald, Simpsons characters, Pikachu, and Patrick Star from "SpongeBob SquarePants." (As Cracked.com puts it, "Ever wish 'South Park' was two minutes long and not funny?")

OpenAI's "opt-out" policy for copyright holders was unusual, CNBC writes, since "typically, third parties have to get explicit permission to use someone's work under copyright law," as explained by Jason Bloom, partner and chair of the intellectual property litigation practice group at law firm Haynes Boone. "You can't just post a notice to the public saying we're going to use everybody's works, unless you tell us not to," he said. "That's not how copyright works." "A lot of the videos that people are going to generate of these cartoon characters are going to infringe copyright," Mark Lemley, a professor at Stanford Law School, said in an interview. "OpenAI is opening itself up to quite a lot of copyright lawsuits by doing this..."
Privacy

Amazon's Ring Plans to Scan Everyone's Face at the Door (msn.com) 106

Amazon will be adding facial recognition to its camera-equipped Ring doorbells for the first time in December, according to the Washington Post.

"While the feature will be optional for Ring device owners, privacy advocates say it's unfair that wherever the technology is in use, anyone within sight will have their faces scanned to determine who's a friend or stranger." The Ring feature is "invasive for anyone who walks within range of your Ring doorbell," said Calli Schroeder, senior counsel at the consumer advocacy and policy group Electronic Privacy Information Center. "They are not consenting to this." Ring spokeswoman Emma Daniels said that Ring's features empower device owners to be responsible users of facial recognition and to comply with relevant laws that "may require obtaining consent prior to identifying people..."

Other companies, including Google, already offer facial recognition for connected doorbells and cameras. You might use similar technology to unlock your iPhone or tag relatives in digital photo albums. But privacy watchdogs said that Ring's use of facial recognition poses added risks, because the company's products are embedded in our neighborhoods and have a history of raising social, privacy and legal questions... It's typically legal to film in public places, including your doorway. And in most of the United States, your permission is not legally required to collect or use your faceprint. Privacy experts said that Ring's use of the technology risks crossing ethical boundaries because of its potential for widespread use in residential areas without people's knowledge or consent.

You choose to unlock your iPhone by scanning your face. A food delivery courier, a child selling candy or someone walking by on the sidewalk is not consenting to have their face captured, stored and compared against Ring's database, said Adam Schwartz, privacy litigation director for the consumer advocacy group Electronic Frontier Foundation. "It's troubling that companies are making a product that by design is taking biometric information from people who are doing the innocent act of walking onto a porch," he said.

Ring's spokesperson said facial recognition won't be available in some locations, according to the article, including Texas and Illinois, which have passed laws fining companies for collecting face information without permission. But the Washington Post heard another possible worst-case scenario from EPIC's Schroeder: databases of identified faces could be stolen by cyberthieves, misused by Ring employees, or shared with outsiders such as law enforcement.

Amazon says they're "reuniting lost dogs through the power of AI," in their announcement this week, thanks to "an AI-powered community feature that enables your outdoor Ring cameras to help reunite lost dogs with their families... When a neighbor reports a lost dog in the Ring app, nearby outdoor Ring cameras automatically begin scanning for potential matches."

Amazon calls it an example of its vision for "tools that make it easier for neighbors to look out for each other, and create safer, more connected communities." The company also announced cameras with 10x zoom, enhanced low-light performance, 2K and 4K resolutions, and "advanced AI tuning" for video...
Android

Google Confirms Android Dev Verification Will Have Free and Paid Tiers, No Public List of Devs (arstechnica.com) 29

An anonymous reader quotes a report from Ars Technica: As we careen toward a future in which Google has final say over what apps you can run, the company has sought to assuage the community's fears with a blog post and a casual "backstage" video. Google has said again and again since announcing the change that sideloading isn't going anywhere, but it's definitely not going to be as easy. The new information confirms app installs will be more reliant on the cloud, and devs can expect new fees, but there will be an escape hatch for hobbyists.

Confirming app verification status will be the job of a new system component called the Android Developer Verifier, which will be rolled out to devices in the next major release of Android 16. Google explains that, at install time, phones must confirm each app's package name and signing keys have been registered with Google, a requirement that may break the popular FOSS storefront F-Droid. Since a phone can't carry a database of every verified app, the check may require Internet access: Google plans to keep a local cache of the most common sideloaded apps on devices, but installing anything else will require a connection. Google suggests alternative app stores will be able to use a pre-auth token to bypass network calls, but it's still deciding how that will work.

The financial arrangement has been murky since the initial announcement, but it's getting clearer. Even though Google's largely automated verification process has been described as simple, it's still going to cost developers money. The verification process will mirror the current Google Play registration fee of $25, which Google claims will go to cover administrative costs. So anyone wishing to distribute an app on Android outside of Google's ecosystem has to pay Google to do so. What if you don't need to distribute apps widely? This is the one piece of good news as developer verification takes shape. Google will let hobbyists and students sign up with only an email for a lesser tier of verification. This won't cost anything, but there will be an unclear limit on how many times these apps can be installed. The team in the video strongly encourages everyone to go through the full verification process (and pay Google for the privilege). We've asked Google for more specifics here.
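The install-time check described above can be sketched roughly as follows. This is a hypothetical illustration of the flow Ars describes (local cache first, network fallback); all names and data structures are invented, since Google has not published this interface.

```python
# Hypothetical sketch of the Android Developer Verifier flow: check the
# app's package name and signing-key fingerprint against a local cache
# of common sideloaded apps, falling back to a network lookup when one
# is available. Names and cache contents are invented for illustration.

LOCAL_CACHE = {
    ("org.example.app", "ab:cd:ef"),  # (package name, key fingerprint)
}

def is_install_allowed(package: str, key_fingerprint: str,
                       network_lookup=None) -> bool:
    if (package, key_fingerprint) in LOCAL_CACHE:
        return True   # common sideloaded app: verified without network
    if network_lookup is None:
        return False  # offline and not cached: install is blocked
    return network_lookup(package, key_fingerprint)

print(is_install_allowed("org.example.app", "ab:cd:ef"))  # True
print(is_install_allowed("net.example.other", "12:34"))   # False (offline)
```

An alternative store's pre-auth token would presumably stand in for the `network_lookup` step, but as the article notes, Google hasn't settled how that mechanism will work.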

Japan

Japan Saw Record Number Treated For Heatstroke in Hottest-Ever Summer (japantimes.co.jp) 39

More than 100,000 people were sent to hospitals due to heatstroke in Japan between May 1 and Sunday, according to preliminary data from the Fire and Disaster Management Agency. Bloomberg, via Japan Times: The number is the most on record, according to NHK. Transport to hospitals of patients linked to heatstroke over the period rose almost 3% to 100,143 from a year earlier as Japan saw its national temperature record broken twice in a matter of days. The country's average temperature during this summer was the highest since the statistic began being compiled in 1898, the nation's weather agency said last month.

Heat waves around the world are being made stronger and more deadly due to human-caused climate change. Government officials in August pledged to boost public health protections and encouraged the installation of more air conditioners in school gymnasiums and the use of cooling centers in communal spaces like libraries. New rules came into effect this summer that require employers to take adequate measures to protect workers from extreme temperatures.

Social Networks

OpenAI's New Social Video App Will Let You Deepfake Your Friends (theverge.com) 22

Alongside its updated Sora 2 AI video generator, OpenAI has launched an iPhone-only social app called Sora that lets users consent to have friends create deepfake-style cameos of them. The invite-only app works a lot like TikTok with short remixable videos but enforces restrictions on public figures and explicit content. The Verge reports: In a briefing with reporters on Monday, employees called it the potential "ChatGPT moment for video generation." The Sora app is currently only available to US and Canada users, with other countries set to follow, and when someone receives access, they also get four additional invites to share with friends. There's no word on when an Android version might be released.

Sora users can give their friends -- or, if they're feeling bold, everyone -- permission to create "cameos" with their own likeness using the new video model, which is dubbed Sora 2. The person whose likeness is being generated is a "co-owner" of that end result, OpenAI employees said, and they can delete it or revoke access to others at any time. Like TikTok, OpenAI's Sora app allows you to interact with other videos and trends using a "Remix" feature, but it only allows for the generation of 10-second videos for now.

Android

Open Source Android Repository F-Droid Says Google's New Rules Will Shut It Down (f-droid.org) 78

F-Droid has warned that Google's upcoming developer verification program will kill the free and open source app repository. Google announced plans several weeks ago to force all Android app developers to register their apps and identity with the company. Apps not validated by Google will not be installable on certified Android devices.

F-Droid says it cannot require developers to register with Google or take over app identifiers to register for them. The site operators say doing so would effectively take over distribution rights from app authors. Google plans to begin testing the verification scheme in the coming weeks and may charge registration fees. Unverified apps will start being blocked next year in Brazil, Indonesia, Singapore, and Thailand before expanding globally in 2027. F-Droid is calling on US and EU regulators to intervene.
AI

OpenAI's New Sora Video Generator To Require Copyright Holders To Opt Out (msn.com) 55

An anonymous reader shares a report: OpenAI is planning to release a new version of its Sora video generator that creates videos featuring copyrighted material unless copyright holders opt out of having their work appear, according to people familiar with the matter. OpenAI began alerting talent agencies and studios about the forthcoming product and its opt-out process over the last week and plans to release the new version in the coming days, the people said.

The new opt-out process means that movie studios and other intellectual property owners would have to explicitly ask OpenAI not to include their copyrighted material in videos Sora creates. While copyrighted characters will require an opt-out, the new product won't generate images of recognizable public figures without their permission, people familiar with OpenAI's thinking said.

AI

Culture Magazine Urges Professional Writers to Resist AI, Boycott and Stigmatize AI Slop (nplusonemag.com) 39

The editors of the culture magazine n + 1 decry the "well-funded upheaval" caused by a large and powerful coalition of pro-AI forces. ("According to the logic of market share as social transformation, if you move fast and break enough things, nothing can contain you...")

"An extraordinary amount of money is spent by the AI industry to ensure that acquiescence is the only plausible response. But marketing is not destiny." The AI bubble — and it is a bubble, as even OpenAI overlord Sam Altman has admitted — will burst. The technology's dizzying pace of improvement, already slowing with the release of GPT-5, will stall... [P]rofessional readers and writers: We retain some power over the terms and norms of our own intellectual life. We ought to stop acting like impotence in some realms means impotence everywhere. Major terrains remain AI-proofable. For publishers, editors, critics, professors, teachers, anyone with any say over what people read, the first step will be to develop an ear. Learn to tell — to read closely enough to tell — the work of people from the work of bots...

Whatever nuance is needed for its interception, resisting AI's further creep into intellectual labor will also require blunt-force militancy. The steps are simple. Don't publish AI bullshit. Don't even publish mealymouthed essays about the temptation to produce AI bullshit. Resist the call to establish worthless partnerships like the Washington Post's Ember, an "AI writing coach" designed to churn out Bezos-friendly op-eds. Instead, do what better magazines, newspapers, and journals have managed for centuries. Promote and produce original work of value, work that's cliché-resistant and unreplicable, work that tries — as Thomas Pynchon wrote in an oracular 1984 essay titled "Is It OK to Be a Luddite?" — "through literary means which are nocturnal and deal in disguise, to deny the machine...."

Punishing already overdisciplined and oversurveilled students for their AI use will help no one, but it's a long way from accepting that reality to Ohio State's new plan to mandate something called "AI fluency" for all graduates by 2029 (including workshops sponsored, naturally, by Google). Pedagogically, alternatives to acquiescence remain available. Some are old, like blue-book exams, in-class writing, or one-on-one tutoring. Some are new, like developing curricula to teach the limits and flaws of generative AI while nurturing human intelligence...

Our final defenses are more diffuse, working at a level of norms and attitudes. Stigmatization is a powerful force, and disgust and shame are among our greatest tools. Put plainly, you should feel bad for using AI. (The broad embrace of the term slop is a heartening sign of a nascent constituency for machine denial.) These systems haven't worked well for very long, and consensus about their use remains far from settled. That's why so much writing about AI writing sounds the way it does — nervous, uneven, ambivalent about the new regime's utility — and it means there's still time to disenchant AI, provincialize it, make it uncompelling and uncool...

As we train our sights on what we oppose, let's recall the costs of surrender. When we use generative AI, we consent to the appropriation of our intellectual property by data scrapers. We stuff the pockets of oligarchs with even more money. We abet the acceleration of a social media gyre that everyone admits is making life worse. We accept the further degradation of an already degraded educational system. We agree that we would rather deplete our natural resources than make our own art or think our own thoughts... A literature which is made by machines, which are owned by corporations, which are run by sociopaths, can only be a "stereotype" — a simplification, a facsimile, an insult, a fake — of real literature. It should be smashed, and can.

The 3,800-word article also argues that "perhaps AI's ascent in knowledge-industry workplaces will give rise to new demands and new reasons to organize..."
AI

Mistral's New Plan for Improving Its AI Models: Training Data from Enterprises (wsj.com) 11

Paris-based AI giant Mistral "is pushing to improve its models," reports the Wall Street Journal, "by looking inside legacy enterprises that hold some of the world's last untapped data reserves...." Mistral's approach will be to form partnerships with enterprises to further train existing models on their own proprietary data, a phenomenon known as post-training... [At Dutch chip-equipment company ASML], Mistral embeds its own solutions architects, applied AI engineers and applied scientists into the enterprise to work on improving models with the company's data. [While Mistral sells some models under a commercial license], this co-creation strategy allows Mistral to make money off the services side of its business and afford to give away its open source AI free of charge, while improving model performance for the customer with more industry context...

This kind of hand-holding approach is necessary for most companies to tackle AI successfully, said Arthur Mensch [co-founder and chief executive of Mistral]. "The very high-tech companies [and] a couple of banks are able to do it on their own. But when it comes to getting some [return on investment] from use cases, in general, they fail," he said. Mensch attributes that in part to a mismatch between expectations and reality. "The curse of AI is that it looks like magic. So you can very quickly make something that looks amazing to your boss," but it doesn't scale or work more broadly, he said. In other cases, enterprises simply might not know what to focus on. For example, it is a mistake to think equipping all employees with a chatbot will create meaningful gains on the bottom line, he said. Mensch said to fully take advantage of AI, companies will have to rethink organizational structures. With information flowing more easily, they could require fewer middle managers, for example.

There is a lot of work yet to do, Mensch said, but in a large sense, the future of AI development now lies inside the enterprise itself. "This is a pattern that we've seen with many of our customers: At some point, the capabilities of the frontier model can only be increased if we partner," he said.

The Internet

Europe's Cookie Law Messed Up the Internet. Brussels Wants To Fix It. (politico.eu) 102

In a bid to slash red tape, the European Commission wants to eliminate one of its peskiest laws: a 2009 tech rule that plastered the online world with pop-ups requesting consent to cookies. From a report: It's the kind of simplification ordinary Europeans can get behind. European rulemakers in 2009 revised a law called the e-Privacy Directive to require websites to get consent from users before loading cookies on their devices, unless the cookies are "strictly necessary" to provide a service. Fast forward to 2025 and the internet is full of consent banners that users have long learned to click away without thinking twice.

"Too much consent basically kills consent. People are used to giving consent for everything, so they might stop reading things in as much detail, and if consent is the default for everything, it's no longer perceived in the same way by users," said Peter Craddock, data lawyer with Keller and Heckman. Cookie technology is now a focal point of the EU executive's plans to simplify technology regulation. Officials want to present an "omnibus" text in December, scrapping burdensome requirements on digital companies. On Monday, it held a meeting with the tech industry to discuss the handling of cookies and consent banners.

China

Horror Film's Wedding Scene Digitally Altered for Chinese Audiences (theguardian.com) 47

Australian horror film Together, starring Dave Franco and Alison Brie, underwent digital alterations for its mainland China release on September 12. Chinese cinemagoers discovered that a wedding scene between two men had been modified using face-swapping technology to transform one male character into a female appearance. The change only became apparent after side-by-side screenshots from the original and altered versions circulated on social media platforms.

Chinese viewers are expressing outrage over the AI-powered modification, The Guardian reports, citing concerns about creative integrity and the difficulty of detecting such alterations compared to traditional scene cuts. The film's distributor halted the scheduled September 19 general release following the backlash. China's censorship authorities require all imported films to undergo approval before release.
The Almighty Buck

Vietnam Shuts Down Millions of Bank Accounts Over Biometric Rules (icobench.com) 23

Longtime Slashdot reader schwit1 shares a report from ICO Bench: As of September 1, 2025, banks across Vietnam are closing accounts deemed inactive or non-compliant with new biometric rules. Authorities estimate that more than 86 million accounts out of roughly 200 million are at risk if users fail to update their identity verification.

The State Bank of Vietnam has also introduced stricter thresholds for transactions:
- Facial authentication is mandatory for online transfers above 10 million VND (about $379).
- Cumulative daily transfers over 20 million VND ($758) also require biometric approval.
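The two thresholds above can be read as a simple rule: a transfer needs facial authentication if it exceeds 10 million VND on its own, or if it pushes the day's cumulative total past 20 million VND. A minimal sketch of that rule, with hypothetical function and constant names (this is illustrative, not any real banking API):

```python
# Hypothetical sketch of the State Bank of Vietnam's thresholds as
# described above; names are illustrative, not a real banking API.
FACIAL_AUTH_SINGLE_VND = 10_000_000   # per-transfer threshold (~$379)
FACIAL_AUTH_DAILY_VND = 20_000_000    # cumulative daily threshold (~$758)

def requires_biometric(transfer_vnd: int, daily_total_vnd: int) -> bool:
    """Return True if this transfer needs facial authentication.

    transfer_vnd: amount of the current transfer.
    daily_total_vnd: sum of today's transfers including this one.
    """
    return (transfer_vnd > FACIAL_AUTH_SINGLE_VND
            or daily_total_vnd > FACIAL_AUTH_DAILY_VND)

# A 5M VND transfer on its own passes without biometrics...
print(requires_biometric(5_000_000, 5_000_000))    # False
# ...but the same amount pushing the daily total past 20M does not.
print(requires_biometric(5_000_000, 21_000_000))   # True
```

Note that either condition alone is enough to trigger the requirement, so small transfers can still require facial authentication late in a heavy transaction day.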

The policy is part of the central bank's broader "cashless" strategy, aimed at combating fraud, identity theft, and deepfake-enabled scams. [...] While many Vietnamese citizens have updated their biometric data without issue, the measure has disproportionately affected foreign residents and expatriates, who cannot easily visit local branches, as well as dormant accounts that had been left inactive for years.
schwit1 highlights a post on X from Bitcoin expert and TFTC.io founder Marty Bent: "If users don't comply by the 30th they'll lose their money. This is why we bitcoin."
United States

The Rush To Return to the Office Is Stalling (msn.com) 51

Major U.S. corporations are mandating more office time but seeing minimal compliance changes. Companies now require 12% more in-office days than in early 2024, according to Work Forward data tracking 9,000 employers. Yet Americans continue working from home approximately 25% of the time, unchanged from 2023, Stanford economist Nicholas Bloom's monthly survey of 10,000 Americans shows.

The New York Times ordered opinion and newsroom staff to four days weekly starting November. Microsoft mandates three days beginning February for Pacific Northwest employees. Paramount and NBCUniversal gave staff ultimatums: commit to five and four days respectively or take buyouts. Amazon faced desk and parking shortages after its full-time mandate, temporarily backpedaling in Houston and New York. Nearly half of senior managers would accept pay cuts to work remotely, a BambooHR survey of 1,500 salaried employees found.
Moon

Interlune Signs $300M Deal to Harvest Helium-3 for Quantum Computing from the Moon (msn.com) 60

An anonymous reader shared this report from the Washington Post: Finnish tech firm Bluefors, a maker of ultracold refrigerator systems critical for quantum computing, has purchased tens of thousands of liters of Helium-3 from the moon — spending "above $300 million" — through a commercial space company called Interlune. The agreement, which has not been previously reported, marks the largest purchase of a natural resource from space.

Interlune, a company founded by former executives from Blue Origin and an Apollo astronaut, has faced skepticism about its mission to become the first entity to mine the moon (which is legal thanks to a 2015 law that grants U.S. space companies the rights to mine on celestial bodies). But advances in its harvesting technology and the materialization of commercial agreements are gradually making this undertaking sound less like science fiction. Bluefors is the third customer to sign up, with an order of up to 10,000 liters of Helium-3 annually for delivery between 2028 and 2037...

Helium-3 is lighter than the Helium-4 gas featured at birthday parties. It's also much rarer on Earth. But moon rock samples from the Apollo days hint at its abundance there. Interlune has placed the market value at $20 million per kilogram (about 7,500 liters). "It's the only resource in the universe that's priced high enough to warrant going out to space today and bringing it back to Earth," said Rob Meyerson [CEO of Interlune and former president of Blue Origin]...
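As a rough sanity check on the reported figures, Interlune's stated market value of $20 million per kilogram (about 7,500 liters) works out to roughly $2,700 per liter, which puts a full ten-year, 10,000-liter-per-year order in the same ballpark as the "above $300 million" purchase price. The arithmetic below is illustrative, using only the numbers quoted in the article, not Interlune's actual pricing:

```python
# Back-of-the-envelope check using the figures quoted in the article.
price_per_kg = 20_000_000   # USD, Interlune's stated market value per kg
liters_per_kg = 7_500       # about 7,500 liters of gas per kilogram

price_per_liter = price_per_kg / liters_per_kg
print(f"~${price_per_liter:,.0f} per liter")          # ~$2,667 per liter

# Up to 10,000 liters a year, delivered 2028 through 2037 inclusive:
max_liters = 10_000 * (2037 - 2028 + 1)               # 100,000 liters
print(f"max order ≈ ${max_liters * price_per_liter / 1e6:,.0f}M")  # ≈ $267M
```

At the quoted market value, the maximum delivery comes to roughly $267 million, so the "above $300 million" figure suggests the deal is priced somewhat above that benchmark or covers more than the gas itself.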

[H]eat, even in small doses, can cause qubits to produce errors. That's where Helium-3 comes in. Bluefors makes the cooling technology that allows the computer to operate — producing chandelier-type structures known as dilution refrigerators. Their fridges, used by quantum computer leader IBM, contain a mixture of Helium-3 and Helium-4 that pushes temperatures below 10 millikelvins (or minus-460 degrees Fahrenheit)... Existing quantum computers have been built with more than a thousand qubits, he said, but a commercial system or data center would need a million or more. That could require perhaps thousands of liters of Helium-3 per quantum computer. "They will need more Helium-3 than is available on planet Earth," said Gary Lai [a co-founder and chief technology officer of Interlune, who was previously the chief architect at Blue Origin]. Most Helium-3 on Earth, he said, comes from the decay of tritium (an isotope of hydrogen) in nuclear weapons stockpiles, but between 22,000 and 30,000 liters are made each year...
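The "below 10 millikelvins (or minus-460 degrees Fahrenheit)" figure above is easy to verify with the standard Kelvin-to-Fahrenheit conversion:

```python
def kelvin_to_fahrenheit(k: float) -> float:
    """Standard conversion: F = K * 9/5 - 459.67."""
    return k * 9 / 5 - 459.67

# 10 millikelvin, the operating point quoted for dilution refrigerators:
print(round(kelvin_to_fahrenheit(0.010), 2))   # -459.65, i.e. about -460 F
```

At these scales the Fahrenheit figure is essentially indistinguishable from absolute zero (-459.67 F), which is why the article rounds it to minus-460 degrees.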

"We estimate there's more than a million metric tons of Helium-3 on the moon," Meyerson said. "And it's been accumulating there for 4 billion years." Now, they just need to get it.

Interlune CEO Meyerson tells the Post "It's really all about establishing a resilient supply chain for this critical material" -- adding that in the long term he could also see Helium-3 being used for other purposes, including fusion energy.
The Internet

Africa's Only Internet Cable Repair Ship Keeps the Continent Online (restofworld.org) 6

The Leon Thevenin, Africa's only permanently stationed cable repair ship, maintains over 60,000 kilometers of undersea internet infrastructure from Madagascar to Ghana. The 43-year-old vessel employs a 60-person crew who perform precision repairs on fiber-optic cables that carry data for Alphabet, Meta, and Amazon -- companies that consumed 3.6 billion megabits per second of bandwidth in 2023.

Operating costs range from $70,000 to $120,000 daily, according to owner Orange Marine. The ship has experienced increased demand due to unusual underwater landslides in the Congo Canyon causing frequent cable breaks. Cable jointer Shuru Arendse and his team spend up to 48 hours on repairs that require fusing hair-thin glass fibers in conditions where a speck of dust can ruin the joint. The vessel gained Starlink connectivity last year after decades of relying on satellite phones and shared computers for crew communication. Sixty-two cable repair ships operate globally to maintain the infrastructure supporting streaming media and AI applications.
United States

Pentagon Demands Journalists Pledge To Not Obtain Unauthorized Material (msn.com) 264

The Washington Post: The Trump administration unveiled a new crackdown Friday on journalists at the Pentagon, saying it will require them to pledge they won't gather any information -- even unclassified -- that hasn't been expressly authorized for release, and will revoke the press credentials of those who do not obey.

Under the policy, the Pentagon may revoke press passes for anyone it deems a security threat. Possessing confidential or unauthorized information, under the new rules, would be grounds for a journalist's press pass to be revoked.

"DoW remains committed to transparency to promote accountability and public trust," the document says, using an acronym for the newly rebranded Department of War. "However, DoW information must be approved for public release by an appropriate authorizing official before it is released, even if it is unclassified."

For months, Defense Secretary Pete Hegseth and his staff have been tightening restrictions on Pentagon reporters while limiting military personnel's direct communication with the press. Like many defense secretaries before him, Hegseth has been deeply irritated by leaks. His staff this year threatened to use polygraph tests to stop people from leaking information, until the White House intervened.
