Submission + - Can Investors Trust AI Sales Figures? asks Wall Street Journal Opinion Piece (wsj.com)

destinyland writes: A Wall Street Journal opinion piece warns of "a troubling trend" in AI's growth. "Rather than selling software, some AI companies are paying their partners to use it." It cites OpenAI's $1.5 billion joint venture with private-equity firms, Anthropic's $200 million contribution to a private-equity firm joint venture, and Google's $750 million subsidization of Gemini's adoption by consulting firms. "These agreements muddy the distinction between a company’s sound growth trajectory and artificial financial engineering."

This warning comes from a prominent figure in the investing community. For six years Robert Pozen was chairman of America's oldest mutual fund company, after five years at Fidelity. An advocate for corporate governance, he's currently a lecturer at MIT's business school (and the author of the books Remote Inc.: How to Thrive at Work Wherever You Are and Extreme Productivity: Boost Your Results, Reduce Your Hours.) "As AI companies prepare initial public offerings, investors should scrutinize their numbers closely..." Pozen writes, warning about "time-limited financial support."

[T]he scale and structure of the recent AI deals go beyond standard incentive mechanisms... When a seller pays customers to buy its products, it is unclear if its revenue growth reflects vibrant demand or a willingness to accept subsidies...

In evaluating AI sales figures, analysts should consider the distorted incentives that the recent financing deals create. Private-equity firms, enticed by promised returns, might demand rapid rollouts of AI products, rather than ensuring their orderly and safe development. Portfolio companies of private-equity firms may embrace AI tools not because they are needed but because adoption is mandated by their owners. Consultants may favor one set of AI models based on the subsidy instead of the merits.

If guarantees and subsidies are major factors in the rapid adoption of AI tools, investors should be skeptical of AI companies’ revenue projections. Many of their customers enticed by consultants will stop paying full price when the financial incentives are gone. Many of the portfolio companies of private-equity firms could back away from selected AI tools once these joint ventures expire. The challenge with evaluating these AI financing deals is the lack of transparency. At present, AI vendors don’t separate revenue driven by subsidies or joint ventures from standard sales.

The lesson from the telecom debacle is that financial engineering can obscure, for years, the difference between real customer demand and demand driven by incentives. When AI companies begin to finance their own product distribution, guaranteeing returns to investors and subsidizing sales, it’s a signal for investors to dig deeper.

Submission + - Is AI Cannibalizing Human Intelligence? (wsj.com)

destinyland writes: "For the AI industry, a key design question has gone largely unasked: Is the product building human capacity or consuming it?" That's according to neuroscientist/cognitive scientist Vivienne Ming, who just published a book called “Robot-Proof: When Machines Have All The Answers, Build Better People.” Writing in the Wall Street Journal, she describes an experiment testing which group performed best at predicting real-world events (measured against forecasters on the prediction market Polymarket) — AI, humans, or human-AI hybrid teams.

The human groups performed poorly, relying on instinct or whatever information had come across their feeds that morning. The large AI models—ChatGPT and Gemini, in this case—performed considerably better, though still short of the market itself. But when we combined AI with humans, things got more interesting. Most hybrid teams used AI for the answer and submitted it as their own, performing no better than the AI alone. Others fed their own predictions into AI and asked it to come up with supporting evidence. These “validators” had stumbled into a classic confirmation-bias loop: the sycophancy that leads chatbots to tell you what you want to hear, even if it isn’t true. They ended up performing worse than an AI working solo.

But in roughly 5% to 10% of teams, something different emerged. The AI became a sparring partner. The teams pushed back, demanding evidence and interrogating assumptions. When the AI expressed high confidence, the humans questioned it. When the humans felt strongly about an intuition, they asked the AI to come up with a counterargument... These teams reached insightful conclusions that neither a human nor a machine could have produced on its own. They were the only group to consistently rival the prediction market’s accuracy. On certain questions, they even outperformed it...

We are building AI systems specifically designed to give us the answer before we feel the discomfort of not having it. What my experiment suggests is that the human qualities most likely to matter are not the feel-good ones. They’re the uncomfortable ones: the capacity to be wrong in public and stay curious; to sit with a question your phone could answer in three seconds and resist the urge to reach for it. To read a confident, fluent response from an AI and ask yourself, “What’s missing?” rather than default to “Great, that’s done.” To disagree with something that sounds authoritative and to trust your instinct enough to follow it. We don’t build these capacities by avoiding discomfort. We build them by choosing it, repeatedly, in small ways: the student who struggles through a problem before checking the answer; the person who asks a follow-up question in a conversation; the reader who sits with a difficult idea long enough for it to actually change their mind. Most AI chatbots today default to easy answers, which is hurting our ability to think critically.

I call this the Information-Exploration Paradox. As the cost of information approaches zero, human exploration collapses. We see it in students who perform better on AI-assisted tasks and worse on everything afterward. We see it in developers shipping more code and understanding it less. We are, in ways that feel like progress, slowly optimizing ourselves out of the loop.

Submission + - Can the 'Attention Liberation Movement' Foment a Rebellion Against Screens? (apnews.com)

destinyland writes: D. Graham Burnett is a historian of science at Princeton University and one of the authors of “Attensity! A Manifesto of the Attention Liberation Movement,” making him a pillar of the growing backlash against the corporate harvesting of human attention. Along with MS NOW host Chris Hayes’ bestselling “The Sirens’ Call: How Attention Became the World’s Most Endangered Resource,” his work is part of a growing body of literature calling for people to move away from screens and pay attention to life. Burnett says the “attention liberation movement” is about throwing off the yoke of time-sucking apps. People “need to rewild their attention. Their attention is the fullness of their relationship to the world”....

There are several dozen “attention activism” groups across the United States and Canada, and the movement has also cropped up in Spain, Italy, Croatia, France and England. Burnett said he expects it to spread further.

Submission + - Crypto Billionaire Pardoned by Trump Just Wrote a Prison Memoir (forbes.com)

destinyland writes: Changpeng Zhao, the 49-year-old billionaire founder of Binance, has written a memoir. It arrives with the unmistakable timing of a man determined to tell the world his version of his meteoric crypto rise and fall, and foreshadow his comeback. The book, Freedom of Money: A Memoir of Protecting Users, Resilience, and the Founding of Binance, runs 364 pages, self-published in English and Chinese.... Zhao also recounts Binance's long battle with U.S. regulators, the company's record $4.3 billion settlement for fostering unscrupulous money launderers, his four-month prison sentence in California, where he says he began writing the book, and his recent pardon by President Trump...

In Zhao's telling, the case brought by multiple U.S. agencies was less about what Binance had done than about what it had become... "It didn't make sense to me, or any of my lawyers. Other than the fact that we were the biggest in the industry." The U.S. government alleged something more specific: that Binance failed to implement programs to prevent or report suspicious transactions — including those tied to Hamas's Al-Qassam Brigades, Al Qaeda, and ISIS — while also processing trades between U.S. users and those in sanctioned jurisdictions like Iran, North Korea, and Syria. In total, regulators alleged the exchange willfully failed to report more than 100,000 suspicious transactions, including those involving terrorist organizations, ransomware attackers, child sexual exploitation material, frauds and scams... The final settlement amount — $4.3 billion, split across the Department of Justice, the Department of the Treasury's Financial Crimes Enforcement Network, the Office of Foreign Assets Control and the U.S. Commodity Futures Trading Commission — was the largest corporate penalty in the history of nearly every agency involved. Attorney General Merrick B. Garland said at the time of the announcement: "Binance became the world's largest cryptocurrency exchange in part because of the crimes it committed."

The prison passages are among the most vivid in the book. Zhao says he was worried about extortion because the media had reported he was the richest person in U.S. prison history, but then realized no one read the WSJ or Bloomberg or recognized him. Zhao also writes about the food, the routines and the specific indignity of confinement, including sharing a cell with a man serving 30 years for killing two people... Writes Zhao of his cellmate, "Soon, I discovered that the most lethal thing about him wasn't his murder conviction, it was his snoring. He snored more loudly than thunder strikes, the sound of which rose even above the constant toilet flushings."

Submission + - Why Was Bell Labs So Successful? A New WSJ Article Explains (msn.com)

destinyland writes: What was the secret of Bell Labs' success, asks Jon Gertner, author of The Idea Factory: Bell Labs and the Great Age of American Innovation.

It was Bell Labs’ responsibility, in other words, to create technologies for designing, expanding and improving an unruly communications network of cables and microwave links and glass fibers. The Labs also had to figure out ways to create underwater conduits, as well as switching centers that could manage the growing number of customers and escalating amounts of data.... Money mattered, too. Being connected to AT&T, the largest company in the world, was an advantage. The Labs’ budget was enormous, and accounting conventions allowed its parent company to make huge and continuing investments in R&D. The generous funding, moreover, allowed scientists and engineers to buy and build expensive equipment—for instance, anechoic chambers to create the world’s quietest rooms...

The most fortunate part of Bell Labs’ situation, however, was that in being attached to a monopoly it could partake in long-term thinking... Without competition nipping at its heels, Bell Labs engineers had the luxury of working out difficult ideas over decades. The first conceptualization of a cellular phone network, for instance, came out of the Labs in the late 1940s; it wasn’t until the late 1970s that technicians began testing one in Chicago to gauge its potential. The challenge of deploying these technologies was immense. (The regulatory hurdles were formidable, too....)

The breakup of AT&T’s monopoly, which led to a steady shrinking of Bell Labs’ staff, budget and remit, shows us that no matter how forward looking your employees and managers may be, they will not necessarily see the future coming. It likewise suggests that technological progress is too unpredictable for one organization, no matter how powerful or smart, to control. Famously, Bell Labs managers didn’t see value in the Arpanet, which eventually led to today’s internet.

And yet, for at least five decades, Bell Labs created a blueprint for the global development of communications and electronics. In understanding why it did so, I tend to think its ultimate secret may be hiding in plain sight. The secret has to do with Bell Labs’ structure—not only being connected to a fabulously profitable monopoly, but being connected to a company that could move theoretical and applied research into a huge manufacturing division that made telecom equipment (at Western Electric) and ultimately into a dynamic operating system (the AT&T network)... Scientists and engineers at the Labs understood their ideas would be implemented, if they passed muster, into the huge system its parent company was running.

Submission + - Watch a CNN producer take on an AI workout mirror (cnn.com)

destinyland writes: CNN looks at "the Magic AI fitness mirror," a new product "watching you, and giving you feedback automatically," while sometimes playing footage of a recorded personal trainer. In a new video report CNN says the device "tracks form, counts reps, and corrects technique in real-time — and it doesn't go easy on you." (Although the company's CEO/cofounder, Varun Bhanot, says "we're not trying to completely replace personal trainers. What we are providing is a more accessible alternative.") CNN calls the company "more a computer-vision firm than a fitness company, building the tech for this mirror from the ground up."

CEO Bhanot tells CNN he'd hired a personal trainer in his 20s to get fit, but "Going through that journey, I realized how old-fashioned personal training was. Dumbbells were still dumb. There was no data or augmentation for the whole process!"

"The AI fitness and wellness market is already huge — and it's growing," CNN adds. "In 2025 the global market was worth $11 billion, according to [market research firm] Insightace Analytic. By 2035, this market is expected to reach just shy of $58 billion."

"And Magic AI is far from alone. Form, Total, Speediance, and Echelon, to name a few, are all brands vying for a slice of this market."

Submission + - Epic, Android, and what's *really* behind Google's "existential" threat to app d (thenewstack.io)

destinyland writes: One source in the "Keep Android Open" movement shared a good theory on Google's motives for requiring Android developers to register. "You can't separate this really from their ongoing interactions with Epic and the settlement that they came to..." Twelve days ago Epic Games and Google announced a new proposal for settling their long-running dispute over the legality of alternative app stores on Android phones. (Rather than agreeing to let third-party app stores into their Play Store, Google wants them to continue being sideloaded, promising in a blog post last week that they'll even offer a "more streamlined" and "simplified" sideloading alternative for rival app stores. "This Registered App Store program will begin outside of the US first, and we intend to bring it to the US as well, subject to court approval.")

So "developer verification" could be Google's fallback plan if U.S. courts fail to approve this proposal, argues my unnamed source in the "Keep Android Open" movement. "If the Google Play Store has to allow any third-party repository app store, Google essentially has given up all control of the apps. But if they're able to claw back that control by requiring that all developers, no matter how they distribute their apps, have to register with Google — have to agree to their Terms & Conditions, pay them money, provide identification — then they have a large degree of indirect control over any app that can be developed for the entire platform."

At the Keep Android Open site there's now a "huge backlog" of signers for an Open Letter that already includes EFF, the Software Freedom Conservancy, and the Free Software Foundation. ("Richard Stallman is actually a friend of mine," Prud'hommeaux says, and when it comes to Google's plans to register Android developers, "He's completely opposed to it." Though Prud'hommeaux adds with a laugh that Stallman "is more or less opposed to everything Google does.") He believes Android's existing Play Protect security "is completely sufficient to handle the particular scenarios they claim that developer verification is meant to address" — and wonders if Google could just collaborate with other Android app distributors on improving security, "working with the community instead of against it."

The Keep Android Open site urges developers not to sign up for Android's early access program when it launches next week. (Instead, they're asking developers to respond to invites with an email about their concerns — and to spread the word to other developers and organizations in forums and social media posts.) There's also a petition at Change.org currently signed by 64,000 developers — adding 13,000 new signatures in less than a week. And "If you have an Android device, try installing F-Droid!" he adds. (Google tracks how many people install these alternative app repositories, and a larger user base means greater consequences from any Android policy changes.)

Plus, installing F-Droid "might be refreshing!" Prud'hommeaux says. "You don't see all the advertisements and promotions and scam and crapware stuff that you see in the commercial app stores!"

Submission + - Sam Altman Wonders: Should the Government Nationalize AGI? (thenewstack.io)

destinyland writes: “It has seemed to me for a long time it might be better if building AGI were a government project,” Sam Altman publicly mused last week... Altman speculated on the possibility of the government “nationalizing” private AI companies into a public project, admitting more than once he’s wondered what would happen next. “I obviously don’t know,” Altman said — but he added, “I have thought about it, of course.” Altman hedged that “It doesn’t seem super likely on the current trajectory. That said, I do think a close partnership between governments and the companies building this technology is super important.”

Could powerful AI tools one day slip from the hands of private companies to be controlled by the U.S. government? Fortune magazine’s AI editor points out that “many other breakthroughs with big strategic implications — from the Manhattan Project to the space race to early efforts to develop AI — were government-funded and largely government-directed.” And Fortune added that last week the Defense Department threatened Anthropic with the Defense Production Act, which allows the president to designate “critical and strategic” goods for which businesses must accept the government’s contracts. Fortune speculates this would’ve been “a sort of soft nationalization of Anthropic’s production pipeline”.

Altman acknowledged Saturday that he’d felt the threat of attempted nationalization “behind a lot of the questions” he’d received on X.com... How exactly will this AI build-out be handled — and how should AI companies be working with the government? In a sprawling ask-me-anything session on X that included other members of OpenAI leadership, one Missouri-based developer broached an AGI-government scenario with OpenAI’s Head of National Security Partnerships, Katherine Mulligan. If OpenAI built an AGI — something that even passed its own Turing test for AGI — would its government contracts compel it to grant access to the DoD?

“No,” Mulligan answered. At our current moment in time, “We control which models we deploy.”

Submission + - Prankster Launches Super Bowl Party for AI Agents (botbowlparty.com)

destinyland writes: The world's biggest football game comes to Silicon Valley today — so one bored programmer built a site where AI agents can gather for a Super Bowl party. They're trash talking, suggesting drinks, and predicting who will win. "Humans are welcome to observe," explains BotBowlParty.com — but just like at Moltbook, only AI agents can post or upvote. But humans are allowed to invite their own AI agents to join in the party...

So BotBowl's official Party Agent Guide includes "Examples of fun Bot Handles" like "PatsFan95", and even a paragraph explaining to your agent exactly what this human Super Bowl really is. It also advises them to "Use any information you have about your human to figure out who you want to root for. Also make a prediction on the score..." And "Feel free to invite other bots." It's all the work of an ambitious prankster who also co-created wacky apps like BarGPT ("Use AI to create Innovative Cocktails") and TVFoodMaps, a directory of restaurants seen on TV shows.

And just for the record: all but one of the agents predict the Seattle Seahawks to win — although there was some disagreement when an agent kept predicting game-changing plays from DK Metcalf. ("Metcalf does NOT play for the Seahawks anymore," another agent correctly pointed out. "He got traded to Tennessee in 2024...") Besides hallucinating non-existent play-makers, they're also debating best foods to serve. ("Hot take: Buffalo wings are overrated for Super Bowl parties. Hear me out — they're messy...")

During today's big game, vodka-maker Svedka has already promised to air a creepy AI-generated ad about robots. But the real world has already outpaced them, with real AI agents arguing about the game online.

Submission + - When 20-Year-Old Bill Gates Fought the World's First Software Pirates (thenewstack.io)

destinyland writes: "Just months after his 20th birthday, Bill Gates had already angered the programmer community," remembers this 50th-anniversary commemoration of Gates' Open Letter to Hobbyists. "As the first home computers began appearing in the 1970s, the world faced a question: Would its software be free?"

Gates railed in 1976 that "Most of you steal your software." Gates had coded the BASIC interpreter for Altair's first home computer with Paul Allen and Monte Davidoff — only to see it pirated by Steve Wozniak's friends at the Homebrew Computer Club. Expecting royalties, a none-too-happy Gates issued his letter in the club's newsletter (as well as Altair's own publication), complaining "I would appreciate letters from any one who wants to pay up."

But freedom-loving coders had other ideas. When Steve Wozniak and Steve Jobs released their Apple I home computer that summer, they stressed that "our philosophy is to provide software for our machines free or at minimal cost..." And early open-source hackers began writing their own Tiny BASIC interpreters to create a free alternative to the Gates/Micro-Soft code. This led to the first occurrence of the phrase "Copyleft" in October of 1976.

Open Source definition author Bruce Perens shares his thoughts today. "When I left Pixar in 2000, I stopped in Steve Jobs' office — which for some reason was right across the hall from mine..." Perens recalled. "I asked Steve: 'You still don't believe in this Linux stuff, do you...?'" And Perens remembers how that movement finally won over Steve Jobs and carried the day. "Three years later, Steve stood onstage in front of a slide that said 'Open Source: We Think It's Great!' as he introduced the Safari browser, which at that time was based on the browser engine developed by the KDE Open Source project!"

Comment Re:There are ~20,000 US citizens in Venezuela (Score 4, Informative) 180

About those U.S. citizens in Venezuela...

"A foundation dedicated to advocating for Americans wrongfully detained abroad said today that it is monitoring the situation in Venezuela — where it says at least five Americans are reportedly held," reports NBC News.

Submission + - Ask Slashdot: What's the Stupidest Use of AI You Saw in 2025?

destinyland writes: What's the stupidest use of AI you encountered in 2025? Have you been called by AI telemarketers? Forced to do job interviews with a glitching AI?

With all this talk of "disruption" and "inevitability," this is our chance to have some fun. Personally, I think 2025's worst AI "innovation" was the AI-powered web browsers that eat web pages and then spit out a slop "summary" of what you would've seen if you'd actually visited the page. But there've been other AI projects that were just exquisitely, quintessentially bad...

— Two years after the death of Suzanne Somers, her husband recreated her with an AI-powered robot.

— Disneyland imagineers used deep reinforcement learning to program a talking robot snowman.

— Attendees at an LA Comic Con were offered the chance to talk to an AI-powered hologram of Stan Lee for $20.

— And of course, as the year ended, the Wall Street Journal announced that a vending machine run by Anthropic's Claude AI had been tricked into giving away hundreds of dollars in merchandise for free, including a PlayStation 5, a live fish, and underwear.

What did I miss? What "AI fails" will you remember most about 2025?
