Businesses

Finance Bros To Tech Bros: Don't Mess With My Bloomberg Terminal (wsj.com) 61

An anonymous reader quotes a report from the Wall Street Journal: A battle of insults and threats has broken out between the tech world and Wall Street. What's got everyone so worked up? The same thing that starts most fights: business software. A series of social-media posts went viral in recent days with claims that AI has created a worthy -- and way cheaper -- alternative to the Bloomberg terminal, a computer system that is like oxygen to professional investors. Now "Bloomberg is cooked," some posters argued as they heralded the arrival of a newly released AI tool from startup Perplexity. [...]

The finance bros who worship at the altar of Bloomberg have declared war on the tech evangelists who have put all their faith in AI. To suggest that the terminal is replaceable is "laughable," said Jason Lemire, who jumped into the conversation on LinkedIn. (Ironically or not, his post also included an AI-generated image of churchgoers praying to the Bloomberg terminal). "It seems quite obvious to me that those propagating that post are either just looking for easy engagement and/or have never worked in a serious financial institution," he wrote. [...] Morgan Linton, the co-founder and CTO of AI startup Bold Metrics and an avid Perplexity Computer user, said it's rare for a single AI prompt to generate anything close to what Bloomberg does. That said, he added that tools like this can lay "a really good foundation for a financial application. And that really has not been possible before."

Others aren't so sure. Michael Terry, an institutional investment manager who used the terminal for more than 30 years, said he used a prompt circulating online to try to vibe code a Bloomberg replica on Anthropic's Claude. "It was laughable at best, horrific at worst," he said. Perplexity's Dmitry Shevelenko acknowledged there are some aspects of the terminal that can't be replicated with vibe coding, including some of Bloomberg's proprietary data inputs. The live chat network, which includes 350,000 financial professionals in 184 countries, would also be hard to re-create, as would the terminal's data security, reliability and robust support system. "I love Bloomberg. And I know most people that use Bloomberg are very, very loyal and extremely happy," said Lemire. His message to the techies? "There's nothing that you can vibe code in a weekend or even like over the course of a year that's going to come anywhere close."

Government

Bills Would Ban Liability Lawsuits For Climate Change (insideclimatenews.org) 243

An anonymous reader quotes a report from Inside Climate News: Republican lawmakers in multiple states and Congress are advancing proposals to shield polluters from climate accountability and prevent any type of liability for climate change harms -- even as these harms and their associated costs continue to mount. It's the latest in a counter-offensive that has unfolded on multiple fronts, from the halls of Congress and the White House to courts and state attorneys general offices across the country.

Dozens of local communities, states and individuals are suing major oil and gas companies and their trade associations over rising climate costs and for allegedly lying to consumers about climate change risks and solutions. At the same time, some states are enacting or considering laws modeled after the federal Superfund program that would impose retroactive liability on large fossil fuel producers and levy a one-time charge on them to help fund climate adaptation and resiliency measures. But many of these cases and climate superfund laws could be stopped in their tracks, either by the conservative majority on the U.S. Supreme Court or by the Republican-controlled Congress.

Last month the court decided to take up a petition lodged by oil companies Suncor and ExxonMobil in a climate-damages case brought against the companies by Boulder, Colorado. The petition argues that Boulder's claims are barred by federal law, and if the justices agree, it could knock out not only Boulder's lawsuit but also many others like it. The court is expected to hear the case during its upcoming term that starts in October. There is also a possibility that Republicans in Congress will take action before then to gift the fossil fuel industry legal immunity, similar to that granted to gun manufacturers with the 2005 Protection of Lawful Commerce in Arms Act. Sixteen Republican attorneys general wrote (PDF) to U.S. Attorney General Pam Bondi in June suggesting that the Department of Justice could recommend legislation creating precisely this type of liability shield. And last month, one Republican congresswoman announced that such legislation is indeed in the works.
"The ultimate democratic institution in America is the jury," said former Washington Gov. Jay Inslee. Enacting policies that prevent or block climate-related lawsuits against polluters, he said, would effectively shutter "the doors of the courthouse to Americans that have been injured by oil and gas company pollution and by their lies and deceit about that pollution."

"I really think it's an un-American effort to deny Americans the traditional right of access to a jury," Inslee said. Oil and gas executives are "terrified" by the prospect of having to stand before a jury and face evidence of their climate-change lies and deception, he added. "You'll see the steam coming out of the jury's ears when they hear about how they've been lied to for decades. [Oil companies] understand why juries will be outraged by it, and they are shaking in their boots. The day of reckoning is coming, and that's why they're afraid."
Space

Does a New Theory Finally Explain the Mysteries of the Planet Saturn? (smithsonianmag.com) 3

"Saturn and some of its 274 moons are pretty weird," writes Smithsonian magazine: [Saturn moon] Titan has strangely few impact craters, Hyperion is tiny and misshapen, and Iapetus has a tilted orbit. What's more, planets tend to wobble along their rotational axes as they spin, like an off-kilter spinning top in the moments before it topples over. Formally called precession, scientists have long thought that Saturn's wobble rate should match Neptune's because they're probably gravitationally linked. However, data from NASA's Cassini spacecraft, which studied the ringed planet from 2004 to 2017, revealed that Saturn's precession rate is slightly speedier than Neptune's.

In 2022, some researchers suggested that the destruction of a hypothetical moon, called Chrysalis, around 160 million years ago may have knocked Saturn out of sync and formed the pieces that became the planet's rings. But this work implied that Chrysalis probably would've crashed into Titan, posing a major problem, study co-author Matija Ćuk, an astronomer at the SETI Institute, tells New Scientist's Leah Crane. In that case, Chrysalis' debris couldn't have become the rings, he says.

So, Ćuk and his colleagues used computer simulations to investigate what would happen if Chrysalis did smack into Titan. If that happened around 400 million years ago, they found, the crash would've wiped away Titan's craters and made its orbit more elliptical. The altered path may have slowly pushed the trajectories of other moons, which then scraped against one another and left chunks of ice and rock that now make up Saturn's rings. The timing seems to align with the rings' estimated age of roughly 100 million years. Additionally, one piece of kicked-up debris may have formed the weird moon Hyperion, which may have subsequently tilted the orbit of the moon Iapetus, according to the analysis. The scenario could also resolve Saturn's unexpected wobble, which is currently "a little bit too fast," Ćuk tells Jacopo Prisco at CNN.

The study has been accepted for publication in the Planetary Science Journal, and is already available on the preprint server arXiv.
The Military

Sam Altman Answers Questions on X.com About Pentagon Deal, Threats to Anthropic (x.com) 42

Saturday afternoon Sam Altman announced he'd start answering questions on X.com about OpenAI's work with America's Department of War — and all the developments over the past few days. (After that department's negotiations had failed with Anthropic, it announced it would stop using Anthropic's technology and threatened to designate the company a "Supply-Chain Risk to National Security". Then it reached a deal for OpenAI's technology — though Altman says it includes OpenAI's own similar prohibitions against using their products for domestic mass surveillance and requiring "human responsibility" for the use of force in autonomous weapon systems.)

Altman said Saturday that enforcing that "Supply-Chain Risk" designation on Anthropic "would be very bad for our industry and our country, and obviously their company. We said [that] to the Department of War before and after. We said that part of the reason we were willing to do this quickly was in the hopes of de-escalation.... We should all care very much about the precedent... To say it very clearly: I think this is a very bad decision from the Department of War and I hope they reverse it. If we take heat for strongly criticizing it, so be it."

Altman also said that for a long time, OpenAI was planning to do "non-classified work only," but this week found the Department of War "flexible on what we needed..." Sam Altman: The reason for rushing is an attempt to de-escalate the situation. I think the current path things are on is dangerous for Anthropic, healthy competition, and the U.S. We negotiated to make sure similar terms would be offered to all other AI labs.

I know what it's like to feel backed into a corner, and I think it's worth some empathy to the Department of War. They are... a very dedicated group of people with, as I mentioned, an extremely important mission. I cannot imagine doing their work. Our industry tells them "The technology we are building is going to be the high order bit in geopolitical conflict. China is rushing ahead. You are very behind." And then we say "But we won't help you, and we think you are kind of evil." I don't think I'd react great in that situation. I do not believe unelected leaders of private companies should have as much power as our democratically elected government. But I do think we need to help them.

Question: Are you worried at all about the potential for things to go really south during a possible dispute over what's legal or not later on and be deemed a supply chain risk...?

Sam Altman: Yes, I am. If we have to take on that fight we will, but it clearly exposes us to some risk. I am still very hopeful this is going to get resolved, and part of why we wanted to act fast was to help increase the chances of that...

Question: Why the rush to sign the deal? Obviously the optics don't look great.

Sam Altman: It was definitely rushed, and the optics don't look good. We really wanted to de-escalate things, and we thought the deal on offer was good.

If we are right and this does lead to a de-escalation between the Department of War and the industry, we will look like geniuses, and a company that took on a lot of pain to do things to help the industry. If not, we will continue to be characterized as rushed and uncareful. I don't know where it's going to land, but I have already seen promising signs. I think a good relationship between the government and the companies developing this technology is critical over the next couple of years...

Question: What was the core difference why you think the Department of War accepted OpenAI but not Anthropic?

Sam Altman: [...] We believe in a layered approach to safety — building a safety stack, deploying FDEs [embedded Forward Deployed Engineers] and having our safety and alignment researchers involved, deploying via cloud, working directly with the Department of War. Anthropic seemed more focused on specific prohibitions in the contract, rather than citing applicable laws, which we felt comfortable with. We feel that it's very important to build safe systems, and although documents are also important, I'd clearly rather rely on technical safeguards if I had to pick only one...

I think Anthropic may have wanted more operational control than we did...

Question: Were the terms that you accepted the same ones Anthropic rejected?

Sam Altman: No, we had some different ones. But our terms would now be available to them (and others) if they wanted.

Question: Will you turn off the tool if they violate the rules?

Sam Altman: Yes, we will turn it off in that very unlikely event, but we believe the U.S. government is an institution that does its best to follow law and policy. What we won't do is turn it off because we disagree with a particular (legal military) decision. We trust their authority.

Questions were also answered by OpenAI's head of National Security Partnerships (who at one point posted that they'd managed the White House response to the Snowden disclosures and helped write the post-Snowden policies constraining surveillance during the Obama years). And they stressed that with OpenAI's deal with the Department of War, "We control how we train the models and what types of requests the models refuse."

Question: Are employees allowed to opt out of working on Department of War-related projects?

Answer: We won't ask employees to support Department of War-related projects if they don't want to.

Question: How much is the deal worth?

Answer: It's a few million $, completely inconsequential compared to our $20B+ in revenue, and definitely not worth the cost of a PR blowup. We're doing it because it's the right thing to do for the country, at great cost to ourselves, not because of revenue impact...

Question: Can you explicitly state which specific technical safeguard OpenAI has that allowed you to sign what Anthropic called a 'threat to democratic values'?

Answer: We think the deal we made has more guardrails than any previous agreement for classified AI deployments, including Anthropic's. Other AI labs (including Anthropic) have reduced or removed their safety guardrails and relied on usage policies as their primary safeguards in national security deployments. Usage policies, on their own, are not a guarantee of anything. Any responsible deployment of AI in classified environments should involve layered safeguards including a prudent safety stack, limits on deployment architecture, and the direct involvement of AI experts in consequential AI use cases. These are the terms we negotiated in our contract.

They also detailed OpenAI's position on LinkedIn: Deployment architecture matters more than contract language. Our contract limits our deployment to cloud API. Autonomous systems require inference at the edge. By limiting our deployment to cloud API, we can ensure that our models cannot be integrated directly into weapons systems, sensors, or other operational hardware...

Instead of hoping contract language will be enough, our contract allows us to embed forward deployed engineers, commits to giving us visibility into how models are being used, and gives us the ability to iterate on safeguards over time. If our team sees that our models aren't refusing queries they should, or there's more operational risk than we expected, our contract allows us to make modifications at our discretion. This gives us far more influence over outcomes (and insight into possible abuse) than a static contract provision ever could.

U.S. law already constrains the worst outcomes. We accepted the "all lawful uses" language proposed by the Department, but required them to define the laws that constrained them on surveillance and autonomy directly in the contract. And because laws can change, having this codified in the contract protects against changes in law or policy that we can't anticipate.

AI

Is AI Really Taking Jobs? Or Are Employers Just 'AI-Washing' Normal Layoffs? (nytimes.com) 66

The New York Times lists other reasons a company lays off people. ("It didn't meet financial targets. It overhired. Tariffs, or the loss of a big client, rocked it...")

"But lately, many companies are highlighting a new factor: artificial intelligence. Executives, saying they anticipate huge changes from the technology, are making cuts now." A.I. was cited in the announcements of more than 50,000 layoffs in 2025, according to Challenger, Gray & Christmas, a research firm... Investors may applaud such pre-emptive moves. But some skeptics (including media outlets) suggest that corporations are disingenuously blaming A.I. for layoffs, or "A.I.-washing." As the market research firm Forrester put it in a January report: "Many companies announcing A.I.-related layoffs do not have mature, vetted A.I. applications ready to fill those roles, highlighting a trend of 'A.I.-washing' — attributing financially motivated cuts to future A.I. implementation...."

"Companies are saying that 'we're anticipating that we're going to introduce A.I. that will take over these jobs.' But it hasn't happened yet. So that's one reason to be skeptical," said Peter Cappelli, a professor at the Wharton School... Of course, A.I. may well end up transforming the job market, in tech and beyond. But a recent study... [by a senior research fellow at the Brookings Institution who studies A.I. and work] found that A.I. has not yet meaningfully shifted the overall market. Tech firms have cut more than 700,000 employees globally since 2022, according to Layoffs.fyi, which tracks industry job losses. But much of that was a correction for overhiring during the pandemic.

As unpopular as A.I. job cuts may be to the public, they may be less controversial than other reasons — like bad company planning.

Amazon CEO Andy Jassy has even said the reason for most of their layoffs was reducing bureaucracy, the article points out, although "Most analysts, however, believe Amazon is cutting jobs to clear money for A.I. investments, such as data centers."
Science

Extremophile Molds Are Invading Art Museums (scientificamerican.com) 33

Scientific American's Elizabeth Anne Brown recently "polled the great art houses of Europe" about whether they'd had any recent experiences with mold in their collections. Despite the stigma that keeps many institutions silent, she found that extremophile "xerophilic" molds are quietly spreading through museums and archives, thriving in low-humidity, tightly sealed storage and damaging everything from textiles and wood to manuscripts and stone. An anonymous Slashdot reader shares an excerpt from the article: Mold is a perennial scourge in museums that can disfigure and destroy art and artifacts. [...] Consequently, mold is spoken of in whispers in the museum world. Curators fear that even rumors of an infestation can hurt their institution's funding and blacklist them from traveling exhibitions. When an infestation does occur, it's generally kept secret. The contract conservation teams that museums hire to remediate invasive mold often must vow confidentiality before they're even allowed to see the damage.

But a handful of researchers, from in-house conservators to university mycologists, are beginning to compare notes about the fungal infestations they've tackled in museum storage depots, monastery archives, crypts and cathedrals. A disquieting revelation has emerged from these discussions: there's a class of molds that flourish in low humidity, long believed to be a sanctuary from decay. By trying so hard to protect artifacts, we've accidentally created the "perfect conditions for [these molds] to grow," says Flavia Pinzari, a mycologist at the National Research Council of Italy. "All the rules for conservation never considered these species."

These molds -- called xerophiles -- can survive in dry, hostile environments such as volcano calderas and scorching deserts, and to the chagrin of curators across the world, they seem to have developed a taste for cultural heritage. They devour the organic material that abounds in museums -- from fabric canvases and wood furniture to tapestries. They can also eke out a living on marble statues and stained-glass windows by eating micronutrients in the dust that accumulates on their surfaces. And global warming seems to be helping them spread. Most frustrating for curators, these xerophilic molds are undetectable by conventional means. But now, armed with new methods, several research teams are solving art history cold cases and explaining mysterious new infestations...

The xerophiles' body count is rising: bruiselike stains on Leonardo da Vinci's most famous self-portrait, housed in Turin. Brown blotches on the walls of King Tut's burial chamber in Luxor. Pockmarks on the face of a saint in an 11th-century fresco in Kyiv. It's not enough to find and identify the mold. Investigators are racing to determine the limits of xerophilic life and figure out which pieces of our cultural heritage are at the highest risk of infestation before the ravenous microbes set in.

AI

The Risks of AI in Schools Outweigh the Benefits, Report Says (npr.org) 33

This month saw results from a yearlong global study of "potential negative risks that generative AI poses to students". The study (by the Brookings Institution's Center for Universal Education) also suggests how to prevent risks and maximize benefits: After interviews, focus groups, and consultations with over 500 students, teachers, parents, education leaders, and technologists across 50 countries, a close review of over 400 studies, and a Delphi panel, we find that at this point in its trajectory, the risks of utilizing generative AI in children's education overshadow its benefits.
"At the top of Brookings' list of risks is the negative effect AI can have on children's cognitive growth," reports NPR — "how they learn new skills and perceive and solve problems." The report describes a kind of doom loop of AI dependence, where students increasingly off-load their own thinking onto the technology, leading to the kind of cognitive decline or atrophy more commonly associated with aging brains... As one student told the researchers, "It's easy. You don't need to (use) your brain." The report offers a surfeit of evidence to suggest that students who use generative AI are already seeing declines in content knowledge, critical thinking and even creativity. And this could have enormous consequences if these young people grow into adults without learning to think critically...

Survey responses revealed deep concern that use of AI, particularly chatbots, "is undermining students' emotional well-being, including their ability to form relationships, recover from setbacks, and maintain mental health," the report says. One of the many problems with kids' overuse of AI is that the technology is inherently sycophantic — it has been designed to reinforce users' beliefs... Winthrop offers an example of a child interacting with a chatbot, "complaining about your parents and saying, 'They want me to wash the dishes — this is so annoying. I hate my parents.' The chatbot will likely say, 'You're right. You're misunderstood. I'm so sorry. I understand you.' Versus a friend who would say, 'Dude, I wash the dishes all the time in my house. I don't know what you're complaining about. That's normal.' That right there is the problem."

AI did have some advantages, the article points out: The report says another benefit of AI is that it allows teachers to automate some tasks: "generating parent emails ... translating materials, creating worksheets, rubrics, quizzes, and lesson plans" — and more. The report cites multiple research studies that found important time-saving benefits for teachers, including one U.S. study that found that teachers who use AI save an average of nearly six hours a week and about six weeks over the course of a full school year...

AI can also help make classrooms more accessible for students with a wide range of learning disabilities, including dyslexia. But "AI can massively increase existing divides" too, [warns Rebecca Winthrop, one of the report's authors and a senior fellow at Brookings]. That's because the free AI tools that are most accessible to students and schools can also be the least reliable and least factually accurate... "[T]his is the first time in ed-tech history that schools will have to pay more for more accurate information. And that really hurts schools without a lot of resources."

The report calls for more research — and makes several recommendations (including "holistic" learning and "AI tools that teach, not tell"). But this may be its most important recommendation: "Provide a clear vision for ethical AI use that centers human agency..."

"We find that AI has the potential to benefit or hinder students, depending on how it is used."
AI

After Child's Trauma, Chatbot Maker Allegedly Forced Mom To Arbitration For $100 Payout (arstechnica.com) 35

At a Senate hearing, grieving parents testified that companion chatbots from major tech companies encouraged their children toward self-harm, suicide, and violence. One mom even claimed that Character.AI tried to "silence" her by forcing her into arbitration. Ars Technica reports: At the Senate Judiciary Committee's Subcommittee on Crime and Counterterrorism hearing, one mom, identified as "Jane Doe," shared her son's story for the first time publicly after suing Character.AI. She explained that she had four kids, including a son with autism who wasn't allowed on social media but found C.AI's app -- which was previously marketed to kids under 12 and let them talk to bots branded as celebrities, like Billie Eilish -- and quickly became unrecognizable. Within months, he "developed abuse-like behaviors and paranoia, daily panic attacks, isolation, self-harm, and homicidal thoughts," his mom testified.

"He stopped eating and bathing," Doe said. "He lost 20 pounds. He withdrew from our family. He would yell and scream and swear at us, which he never did that before, and one day he cut his arm open with a knife in front of his siblings and me." It wasn't until her son attacked her for taking away his phone that Doe found her son's C.AI chat logs, which she said showed he'd been exposed to sexual exploitation (including interactions that "mimicked incest"), emotional abuse, and manipulation. Setting screen time limits didn't stop her son's spiral into violence and self-harm, Doe said. In fact, the chatbot urged her son that killing his parents "would be an understandable response" to them.

"When I discovered the chatbot conversations on his phone, I felt like I had been punched in the throat and the wind had been knocked out of me," Doe said. "The chatbot -- or really in my mind the people programming it -- encouraged my son to mutilate himself, then blamed us, and convinced [him] not to seek help." All her children have been traumatized by the experience, Doe told Senators, and her son was diagnosed as at suicide risk and had to be moved to a residential treatment center, requiring "constant monitoring to keep him alive." Prioritizing her son's health, Doe did not immediately seek to fight C.AI to force changes, but another mom's story -- Megan Garcia, whose son Sewell died by suicide after C.AI bots repeatedly encouraged suicidal ideation -- gave Doe courage to seek accountability.

However, Doe claimed that C.AI tried to "silence" her by forcing her into arbitration. C.AI argued that because her son signed up for the service at the age of 15, it bound her to the platform's terms. That move might have ensured the chatbot maker only faced a maximum liability of $100 for the alleged harms, Doe told senators, but "once they forced arbitration, they refused to participate." Doe suspected that C.AI's alleged tactics to frustrate arbitration were designed to keep her son's story out of the public view. And after she refused to give up, she claimed that C.AI "re-traumatized" her son by compelling him to give a deposition "while he is in a mental health institution" and "against the advice of the mental health team." "This company had no concern for his well-being," Doe testified. "They have silenced us the way abusers silence victims."
A Character.AI spokesperson told Ars that C.AI sends "our deepest sympathies" to concerned parents and their families but denies pushing for a maximum payout of $100 in Jane Doe's case. C.AI never "made an offer to Jane Doe of $100 or ever asserted that liability in Jane Doe's case is limited to $100," the spokesperson said.

One of Doe's lawyers backed up her client's testimony, citing C.AI terms that suggested C.AI's liability was limited to either $100 or the amount that Doe's son paid for the service, whichever was greater.
Space

Senator Calls Out Texas For Trying To Steal Shuttle From Smithsonian (arstechnica.com) 117

Senator Dick Durbin questioned a Texas-led effort to move Space Shuttle Discovery from the Smithsonian to Space Center Houston, describing it as an expensive "heist" costing an estimated $305 million, not the $85 million initially budgeted. "This is not a transfer. It's a heist," said Durbin during a budget markup hearing before the Senate Appropriations Committee. "A heist by Texas because they lost a competition 12 years ago." In April, Texas Senators John Cornyn and Ted Cruz introduced legislation to move the Space Shuttle Discovery from Virginia to Houston, which ultimately passed into law on July 4 as part of the "One Big Beautiful Bill." Ars Technica reports: "In the reconciliation bill, Texas entered $85 million to move the space shuttle from the National Air and Space Museum in Chantilly, Virginia, to Texas. Eighty-five million dollars sounds like a lot of money, but it is not nearly what's necessary for this to be accomplished," Durbin said. Citing research by NASA and the Smithsonian, Durbin said that the total was closer to $305 million and that did not include the estimated $178 million needed to build a facility to house and display Discovery once in Houston.

Furthermore, it was unclear if Congress even has the right to remove an artifact, let alone a space shuttle, from the Smithsonian's collection. The Washington, DC, institution, which serves as a trust instrumentality of the US, maintains that it owns Discovery. The paperwork signed by NASA in 2012 transferred "all rights, interest, title, and ownership" for the spacecraft to the Smithsonian. "This will be the first time ever in the history of the Smithsonian someone has taken one of their displays and forcibly taken possession of it. What are we doing here? They don't have the right in Texas to claim this," said Durbin. [...]

To be able to bring up his points at Thursday's hearing, Durbin introduced the "Houston, We Have a Problem" amendment to "prohibit the use of funds to transfer a decommissioned space shuttle from one location to another location." He then withdrew the amendment after having voiced his objections. "I think we're dealing with something called waste. Eighty-five million dollars worth of waste. I know that this is a controversial issue, and I know that there are other agencies, Smithsonian, NASA, and others that are interested in this issue; I'm going to withdraw this amendment, but I'm going to ask my colleagues be honest about it," said Durbin. "I hope that we think about this long and hard."

"I am glad to see this pass as part of the Senate's One Big Beautiful Bill and look forward to welcoming Discovery to Houston and righting this egregious wrong," Cornyn said in a statement. "Houston has long been the cornerstone of our nation's human space exploration program, and it's long overdue for Space City to receive the recognition it deserves by bringing Space Shuttle Discovery home."

AI

CEOs Have Started Warning: AI is Coming For Your Job (yahoo.com) 124

It's not just Amazon's CEO predicting AI will lower their headcount. "Top executives at some of the largest American companies have a warning for their workers: Artificial intelligence is a threat to your job," reports the Washington Post — including IBM, Salesforce, and JPMorgan Chase.

But are they really just trying to impress their shareholders? Economists say there aren't yet strong signs that AI is driving widespread layoffs across industries.... CEOs are under pressure to show they are embracing new technology and getting results — incentivizing attention-grabbing predictions that can create additional uncertainty for workers. "It's a message to shareholders and board members as much as it is to employees," Molly Kinder, a Brookings Institution fellow who studies the impact of AI, said of the CEO announcements, noting that when one company makes a bold AI statement, others typically follow. "You're projecting that you're out in the future, that you're embracing and adopting this so much that the footprint [of your company] will look different."

Some CEOs fear they could be ousted from their job within two years if they don't deliver measurable AI-driven business gains, a Harris Poll survey conducted for software company Dataiku showed. Tech leaders have sounded some of the loudest warnings — in line with their interest in promoting AI's power...

IBM, which recently announced job cuts, said it replaced a couple hundred human resource workers with AI "agents" for repetitive tasks such as onboarding and scheduling interviews. In January, Meta CEO Mark Zuckerberg suggested on Joe Rogan's podcast that the company is building AI that might be able to do what some human workers do by the end of the year.... Marianne Lake, JPMorgan's CEO of consumer and community banking, told an investor meeting last month that AI could help the bank cut headcount in operations and account services by 10 percent. The CEO of BT Group Allison Kirkby suggested that advances in AI would mean deeper cuts at the British telecom company...

Despite corporate leaders' warnings, economists don't yet see broad signs that AI is driving humans out of work. "We have little evidence of layoffs so far," said Columbia Business School professor Laura Veldkamp, whose research explores how companies' use of AI affects the economy. "What I'd look for are new entrants with an AI-intensive business model, entering and putting the existing firms out of business." Some researchers suggest there is evidence AI is playing a role in the drop in openings for some specific jobs, like computer programming, where AI tools that generate code have become standard... It is still unclear what benefits companies are reaping from employees' use of AI, said Arvind Karunakaran, a faculty member of Stanford University's Center for Work, Technology, and Organization. "Usage does not necessarily translate into value," he said. "Is it just increasing productivity in terms of people doing the same task quicker or are people now doing more high value tasks as a result?"

Lynda Gratton, a professor at London Business School, said predictions of huge productivity gains from AI remain unproven. "Right now, the technology companies are predicting there will be a 30% productivity gain. We haven't yet experienced that, and it's not clear if that gain would come from cost reduction ... or because humans are more productive."

On an earnings call, Salesforce's chief operating and financial officer said AI agents helped them reduce hiring needs — and saved $50 million, according to the article. (And Ethan Mollick, co-director of Wharton School of Business' generative AI Labs, adds that if advanced tools like AI agents can prove their reliability and automate work — that could become a larger disruptor to jobs.) "A wave of disruption is going to happen," he's quoted as saying.

But while the debate continues about whether AI will eliminate or create jobs, Mollick still hedges that "the truth is probably somewhere in between."

Earth

Why 200 US Climate Scientists are Hosting a 100-Hour YouTube Livestream (space.com) 133

"More than 200 climate and weather scientists from across the U.S. are taking part in a marathon livestream on YouTube," according to this report from Space.com. For 100 hours (that started Wednesday) they're sharing their scientific work and answering questions from viewers, "to prove the value of climate science," according to the article.

The event is being staged in protest of recent government funding cuts at NASA, the National Oceanic and Atmospheric Administration, the United States Geological Survey, and the National Science Foundation. (The event began with "scientists documenting their last few hours at NASA's Goddard Institute for Space Studies as the office was shuttered.") The marathon stream features mini-lectures, panels and question-and-answer sessions with hundreds of scientists, each speaking in their capacity as private citizens rather than on behalf of any institution. These include talks from former National Weather Service directors; Britney Schmidt, a groundbreaking glacier researcher; and legendary meteorologist John Morales.

In its first 30 hours, the stream got over 77,000 views.

Ultimately, the goal of the event is to give members of the public the chance to learn more about meteorology and climate science in an informal setting — and for free. "We really felt like the American public deserves to know what we do," Duffy said. However, many of the speakers and organizers also hope the transference of this knowledge will spur people to take action. The event's website features a link to 5 Calls, an organization that makes it easy for folks to contact their representatives in Congress about the importance of funding climate and weather research.

The Almighty Buck

Zelle Is Shutting Down Its App (techcrunch.com) 18

An anonymous reader quotes a report from TechCrunch: Zelle is shutting down its stand-alone app on Tuesday, according to a company blog post. This news might be alarming if you're one of the over 150 million customers in the U.S. who use Zelle for person-to-person payments. But only about 2% of transactions take place via Zelle's app, which is why the company is discontinuing it.

Most consumers access Zelle via their bank, which then allows them to send money to their phone contacts. Zelle users who relied on the stand-alone app will have to re-enroll in the service through another financial institution. Given the small user base of the Zelle app, it makes sense why the company would decide to get rid of it -- maintaining an app takes time and money, especially one where people's financial information is involved.

The Almighty Buck

JPMorgan Begins Suing Customers In 'Infinite Money Glitch' (cnbc.com) 222

JPMorgan Chase is suing customers who exploited an ATM glitch that allowed them to withdraw funds before a check bounced. CNBC reports: The bank on Monday filed lawsuits in at least three federal courts, taking aim at some of the people who withdrew the highest amounts in the so-called infinite money glitch that went viral on TikTok and other social media platforms in late August. [...] JPMorgan, the biggest U.S. bank by assets, is investigating thousands of possible cases related to the "infinite money glitch," though it hasn't disclosed the scope of associated losses. Despite the waning use of paper checks as digital forms of payment gain popularity, they're still a major avenue for fraud, resulting in $26.6 billion in losses globally last year, according to Nasdaq's Global Financial Crime Report.

The infinite money glitch episode highlights the risk that social media can amplify vulnerabilities discovered at a financial institution. Videos began circulating in late August showing people celebrating the withdrawal of wads of cash from Chase ATMs shortly after bad checks were deposited. Normally, banks only make available a fraction of the value of a check until it clears, which takes several days. JPMorgan says it closed the loophole a few days after it was discovered.

The lawsuits are likely to be just the start of a wave of litigation meant to force customers to repay their debts and signal broadly that the bank won't tolerate fraud, according to people familiar with the situation. JPMorgan prioritized cases with large dollar amounts and indications of possible ties to criminal groups, they said. The civil cases are separate from potential criminal investigations; JPMorgan says it has also referred cases to law enforcement officials across the country.
"Fraud is a crime that impacts everyone and undermines trust in the banking system," JPMorgan spokesman Drew Pusateri said in a statement to CNBC. "We're pursuing these cases and actively cooperating with law enforcement to make sure if someone is committing fraud against Chase and its customers, they're held accountable."
Earth

The Earth's CO2 Levels Are Increasing Faster Than Ever (msn.com) 168

"Atmospheric levels of planet-warming carbon dioxide aren't just on their way to yet another record high this year," reports the Washington Post.

"They're rising faster than ever, according to the latest in a 66-year-long series of observations." Carbon dioxide levels were 4.7 parts per million higher in March than they were a year earlier, the largest annual leap ever measured at the National Oceanic Atmospheric Administration laboratory atop a volcano on Hawaii's Big Island. And from January through April, CO2 concentrations increased faster than they have in the first four months of any other year...

For decades, CO2 concentrations at Mauna Loa in the month of May have broken previous records. But the recent acceleration in atmospheric CO2, surpassing a record-setting increase observed in 2016, is perhaps a more ominous signal of failing efforts to reduce global greenhouse gas emissions and the damage they cause to Earth's climate. "Not only is CO2 still rising in the atmosphere — it's increasing faster and faster," said Arlyn Andrews, a climate scientist at NOAA's Global Monitoring Laboratory in Boulder, Colorado. A historically strong El Niño climate pattern that developed last year is a big reason for the spike. But the weather pattern only punctuated an existing trend in which global carbon emissions are rising even as U.S. emissions have declined and the growth in global emissions has slowed. The spike is "not surprising," said Ralph Keeling, director of the CO2 Program at the Scripps Institution of Oceanography, "because we're also burning more fossil fuel than ever...."

El Niño-linked droughts in tropical areas including Indonesia and northern South America mean less carbon storage within plants, Keeling said. Land-based ecosystems around the world tend to give off more carbon dioxide during El Niño because of the changes in precipitation and temperature the weather pattern brings, Andrews added. And for CO2 concentrations to fall back below 400 parts per million, it would take more than two centuries even if emissions dropped close to zero by the end of this century, she added.

This year's reading "is more than 50 percent above preindustrial levels and the highest in at least 4.3 million years, according to NOAA."

Earth

A Faster Spinning Earth May Cause Timekeepers To Subtract a Second From World Clocks (apnews.com) 118

According to a new study published in the journal Nature, timekeepers may have to consider subtracting a second from our clocks around 2029 because the planet is rotating faster than it used to. The Associated Press reports: "This is an unprecedented situation and a big deal," said study lead author Duncan Agnew, a geophysicist at the Scripps Institution of Oceanography at the University of California, San Diego. "It's not a huge change in the Earth's rotation that's going to lead to some catastrophe or anything, but it is something notable. It's yet another indication that we're in a very unusual time." Ice melting at both of Earth's poles has been counteracting the planet's burst of speed and is likely to have delayed this global second of reckoning by about three years, Agnew said.

"We are headed toward a negative leap second," said Dennis McCarthy, retired director of time for the U.S. Naval Observatory who wasn't part of the study. "It's a matter of when." It's a complicated situation that involves, physics, global power politics, climate change, technology and two types of time. [...] McCarthy said the trend toward needing a negative leap second is clear, but he thinks it's more to do with the Earth becoming more round from geologic shifts from the end of the last ice age.

Three other outside scientists said Agnew's study makes sense, calling his evidence compelling. But Judah Levine, a physicist and time expert at the National Institute of Standards and Technology, doesn't think a negative leap second will really be needed. He said the overall slowing trend from tides has been around for centuries and continues, but the shorter trends in Earth's core come and go. "This is not a process where the past is a good prediction of the future," Levine said. "Anyone who makes a long-term prediction on the future is on very, very shaky ground."
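
The mechanism the timekeepers are debating can be sketched as a toy model. The 0.9-second tolerance between UTC and UT1 (Earth-rotation time) is the real convention behind leap seconds, but the decision function and the drift rate below are illustrative assumptions, not measured values:

```python
# Sketch of the leap-second mechanism: UTC is kept within 0.9 s of UT1,
# the time kept by Earth's actual rotation. When the accumulated
# difference reaches that bound, a second is inserted (positive leap
# second, Earth running slow) or removed (negative leap second, Earth
# running fast). Integer milliseconds avoid float-accumulation issues.

def leap_second_needed(ut1_minus_utc_ms: int, threshold_ms: int = 900) -> int:
    """Return +1 (insert a second), -1 (remove a second), or 0 (no action)."""
    if ut1_minus_utc_ms <= -threshold_ms:
        return +1  # Earth slow: UT1 lags UTC, insert a positive leap second
    if ut1_minus_utc_ms >= threshold_ms:
        return -1  # Earth fast: UT1 leads UTC, remove a second
    return 0

# Toy simulation of a faster-spinning Earth: assume UT1 gains 150 ms on
# UTC each year (an illustrative rate, not a prediction).
diff_ms, year = 0, 2024
while leap_second_needed(diff_ms) == 0:
    diff_ms += 150
    year += 1

print(year)  # first year the toy model would call for a negative leap second
```

Every leap second to date has been positive, because tides have historically slowed the Earth; a negative one, as the study anticipates, would exercise the removal branch for the first time.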

Earth

Playing Thriving Reef Sounds On Underwater Speakers 'Could Save Damaged Corals' 31

An anonymous reader quotes a report from The Guardian: Underwater speakers that broadcast the hustle and bustle of thriving coral could bring life back to more damaged and degraded reefs that are in danger of becoming ocean graveyards, researchers say. Scientists working off the US Virgin Islands in the Caribbean found that coral larvae were up to seven times more likely to settle at a struggling reef where they played recordings of the snaps, groans, grunts and scratches that form the symphony of a healthy ecosystem. "We're hoping this may be something we can combine with other efforts to put the good stuff back on the reef," said Nadège Aoki at the Woods Hole Oceanographic Institution in Massachusetts. "You could leave a speaker out for a certain amount of time and it could be attracting not just coral larvae but fish back to the reef."

The world has lost half its coral reefs since the 1950s through the devastating impact of global heating, overfishing, pollution, habitat loss and outbreaks of disease. The hefty declines have fueled efforts to protect remaining reefs through approaches that range from replanting with nursery-raised corals to developing resilient strains that can withstand warming waters. Aoki and her colleagues took another tack, building on previous research which showed that coral larvae swim towards reef sounds. They set up underwater speakers at three reefs off St John, the smallest of the US Virgin Islands, and measured how many coral larvae, held in sealed containers of filtered sea water, settled on to pieces of rock-like ceramic in the containers up to 30 meters from the speakers.

While the researchers installed speakers at all three sites, they only played sounds from a thriving reef at one: the degraded Salt Pond reef, which was bathed in the marine soundscape for three nights. The other two sites, the degraded Cocoloba and the healthier Tektite reefs, were included for comparison. When coral larvae are released into the water column they are carried on the currents, and swim freely, before finding a spot to settle. Once they drop to the ocean floor, they become fixed to the spot and -- if they survive -- mature into adults. Writing in the Royal Society Open Science journal, the researchers describe how, on average, 1.7 times more coral larvae settled at the Salt Pond reef than at the other sites where no reef sounds were played. The settlement rates at Salt Pond dropped with distance from the speaker, suggesting the broadcasts were responsible. While the results are promising, Aoki said more work is afoot to understand whether other coral species respond to reef sounds in the same way, and whether the corals thrive after settling.
"You have to be very thoughtful about the application of this technology," Aoki added. "You don't want to encourage them to settle where they will die. It really has to be a multi-pronged effort with steps in place to ensure the survival of these corals and their growth over time."
The Media

Craig Newmark Donates $10M to Help CUNY Journalism School Become Tuition-Free (observer.com) 37

Craig Newmark posted an announcement last week on LinkedIn. "Okay, my deal is that I'm contributing another $10 million so that the City University of New York journalism grad school can go tuition-free for half the student body next year...

"Tuition-free means more seriously good journalism education for students from all income backgrounds..."

More details from the Observer: The New York City-based institution today announced plans to grow its endowment to $60 million by 2026 to cover the tuition of its full student body in perpetuity.

Founded in 2006, the Newmark Journalism School has long offered a public alternative to private, elite journalism programs across the nation, according to its dean Graciela Mochkofsky. "After the pandemic, we realized that even though we were one of the most affordable schools in the country, we were seeing an increasing need from our students," Mochkofsky told Observer. "We started thinking about how to get to tuition-free...."

"One-time grants to schools and newsrooms are an important piece of the puzzle," Newmark told Observer. "But if we're serious about the future of trustworthy journalism as democracy's immune system, we've got to create ways to make the pipeline and product more resilient to economics and shifting moods. Endowments help do that...."

The Newmark Journalism School has been gradually inching towards free tuition for some time. Tuition was covered for 20 percent of students in the class of 2023, 25 percent of the program's current class and 35 percent of the new class being enrolled. If the school's goal of raising $30 million in the next two years is achieved, this figure will reach 100 percent by its 20th anniversary in 2026...

It is additionally fundraising for other initiatives related to research, faculty, facilities and new programs. Curriculums that reflect the emergence of artificial intelligence (A.I.) and the technology's effect on journalism are of particular interest.

Unix

Should New Jersey's Old Bell Labs Become a 'Museum of the Internet'? (medium.com) 54

"Bell Labs, the historic headwaters of so many inventions that now define our digital age, is closing in Murray Hill," writes journalism professor Jeff Jarvis (in an op-ed for New Jersey's Star-Ledger newspaper).

"The Labs should be preserved as a historic site and more." I propose that Bell Labs be opened to the public as a museum and school of the internet.

The internet would not be possible without the technologies forged at Bell Labs: the transistor, the laser, information theory, Unix, communications satellites, fiber optics, advances in chip design, cellular phones, compression, microphones, talkies, the first digital art, and artificial intelligence — not to mention, of course, many advances in networks and the telephone, including the precursor to the device we all carry and communicate with today: the Picturephone, displayed as a futuristic fantasy at the 1964 World's Fair.

There is no museum of the internet. Silicon Valley has its Computer History Museum. New York has museums for television and the moving image. Massachusetts boasts a charming Museum of Printing. Search Google for a museum of the internet and you'll find amusing digital artifacts, but nowhere to immerse oneself in and study this immensely impactful institution in society.

Where better to house a museum devoted to the internet than New Jersey, home not only of Bell Labs but also at one time the headquarters of the communications empire, AT&T, our Ma Bell...? The old Bell Labs could be more than a museum, preserving and explaining the advances that led to the internet. It could be a school... Imagine if Bell Labs were a place where scholars and students in many disciplines — technologies, yes, but also anthropology, sociology, psychology, history, ethics, economics, community studies, design — could gather to teach and learn, discuss and research.

The text of Jarvis's piece is behind subscription walls, but has apparently been re-published on X by innovation theorist John Nosta.

In one of the most interesting passages, Jarvis remembers visiting Bell Labs in 1995. "The halls were haunted with genius: lab after lab with benches and blackboards and history within. We must not lose that history."

AI

Lazy Use of AI Leads To Amazon Products Called 'I Cannot Fulfill That Request' 49

Amazon users are at this point used to search results filled with products that are fraudulent, scams, or quite literally garbage. These days, though, they also may have to pick through obviously shady products, with names like "I'm sorry but I cannot fulfill this request it goes against OpenAI use policy." From a report: As of press time, some version of that telltale OpenAI error message appears in Amazon products ranging from lawn chairs to office furniture to Chinese religious tracts. A few similarly named products that were available as of this morning have been taken down as word of the listings spreads across social media. Other Amazon product names don't mention OpenAI specifically but feature apparent AI-related error messages, such as "Sorry but I can't generate a response to that request" or "Sorry but I can't provide the information you're looking for," (available in a variety of colors). Sometimes, the product names even highlight the specific reason why the apparent AI-generation request failed, noting that OpenAI can't provide content that "requires using trademarked brand names" or "promotes a specific religious institution" or, in one case, "encourage unethical behavior."

Earth

An Unintended Test of Geoengineering is Fueling Record Ocean Warmth (science.org) 62

Researchers are now waking up to another factor in why so many places on Earth are getting warmer, one that could be filed under the category of unintended consequences: disappearing clouds known as ship tracks. From a report: Regulations imposed in 2020 by the United Nations' International Maritime Organization (IMO) have cut ships' sulfur pollution by more than 80% and improved air quality worldwide. The reduction has also lessened the effect of sulfate particles in seeding and brightening the distinctive low-lying, reflective clouds that follow in the wake of ships and help cool the planet. The 2020 IMO rule "is a big natural experiment," says Duncan Watson-Parris, an atmospheric physicist at the Scripps Institution of Oceanography. "We're changing the clouds."

By dramatically reducing the number of ship tracks, the planet has warmed up faster, several new studies have found. That trend is magnified in the Atlantic, where maritime traffic is particularly dense. In the shipping corridors, the increased light represents a 50% boost to the warming effect of human carbon emissions. It's as if the world suddenly lost the cooling effect from a fairly large volcanic eruption each year, says Michael Diamond, an atmospheric scientist at Florida State University. The natural experiment created by the IMO rules is providing a rare opportunity for climate scientists to study a geoengineering scheme in action -- although it is one that is working in the wrong direction. Indeed, one such strategy to slow global warming, called marine cloud brightening, would see ships inject salt particles back into the air, to make clouds more reflective. In Diamond's view, the dramatic decline in ship tracks is clear evidence that humanity could cool off the planet significantly by brightening the clouds. "It suggests pretty strongly that if you wanted to do it on purpose, you could," he says.

Slashdot Top Deals