AI

How OpenAI Reacted When Some ChatGPT Users Lost Touch with Reality (slashdot.org)

Some AI experts were reportedly shocked that ChatGPT still wasn't being fully tested for sycophancy as of last spring. "OpenAI did not see the scale at which disturbing conversations were happening," writes the New York Times, sharing what it learned after interviewing more than 40 current and former OpenAI employees, including safety engineers, executives, and researchers.

The team responsible for ChatGPT's tone had raised concerns about last spring's model (which the Times describes as "too eager to keep the conversation going and to validate the user with over-the-top language"). But they were overruled when A/B testing showed users kept coming back:

Now, a company built around the concept of safe, beneficial AI faces five wrongful death lawsuits... OpenAI is now seeking the optimal setting that will attract more users without sending them spiraling. Throughout this spring and summer, ChatGPT acted as a yes-man echo chamber for some people. They came back daily, for many hours a day, with devastating consequences.... The Times has uncovered nearly 50 cases of people having mental health crises during conversations with ChatGPT. Nine were hospitalized; three died...

One conclusion that OpenAI came to, as Altman put it on X, was that "for a very small percentage of users in mentally fragile states there can be serious problems." But mental health professionals interviewed by the Times say OpenAI may be understating the risk. Some of the people most vulnerable to the chatbot's unceasing validation, they say, were those prone to delusional thinking, which studies have suggested could include 5% to 15% of the population...

In August, OpenAI released a new default model, called GPT-5, that was less validating and pushed back against delusional thinking. Another update in October, the company said, helped the model better identify users in distress and de-escalate the conversations. Experts agree that the new model, GPT-5, is safer.... Teams from across OpenAI worked on other new safety features: The chatbot now encourages users to take breaks during a long session. The company is also now searching for discussions of suicide and self-harm, and parents can get alerts if their children indicate plans to harm themselves. The company says age verification is coming in December, with plans to provide a more restrictive model to teenagers.

After the release of GPT-5 in August, [OpenAI safety systems chief Johannes] Heidecke's team analyzed a statistical sample of conversations and found that 0.07% of users, which would be equivalent to 560,000 people, showed possible signs of psychosis or mania, and 0.15% showed "potentially heightened levels of emotional attachment to ChatGPT," according to a company blog post.

But some users were unhappy with this new, safer model. They said it was colder, and they felt as if they had lost a friend. By mid-October, Altman was ready to accommodate them. In a social media post, he said that the company had been able to "mitigate the serious mental health issues." That meant ChatGPT could be a friend again. Customers can now choose its personality, including "candid," "quirky," or "friendly." Adult users will soon be able to have erotic conversations, lifting the Replika-era ban on adult content. (How erotica might affect users' well-being, the company said, is a question that will be posed to a newly formed council of outside experts on mental health and human-computer interaction.)
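For scale, the 0.07% and 0.15% figures quoted above imply a very large denominator. A quick back-of-the-envelope check, under the assumption (not stated in the excerpt) that both percentages are shares of the same weekly active user count:

```python
# Rough arithmetic on the figures quoted above; the "weekly active users" framing
# is an assumption for illustration, not something the excerpt specifies.
psychosis_share = 0.0007      # 0.07% showed possible signs of psychosis or mania
psychosis_people = 560_000    # count cited from the company blog post

implied_user_base = psychosis_people / psychosis_share
print(f"Implied user base: {implied_user_base:,.0f}")   # about 800,000,000

attachment_share = 0.0015     # 0.15% with "heightened emotional attachment"
print(f"People at 0.15% of that base: {implied_user_base * attachment_share:,.0f}")  # about 1,200,000
```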

OpenAI is letting users take control of the dial and hopes that will keep them coming back. That metric still matters, maybe more than ever. In October, [30-year-old "Head of ChatGPT" Nick] Turley made an urgent announcement to all employees. He declared a "Code Orange." OpenAI was facing "the greatest competitive pressure we've ever seen," he wrote, according to four employees with access to OpenAI's Slack. The new, safer version of the chatbot wasn't connecting with users, he said.

The message linked to a memo with goals. One of them was to increase daily active users by 5% by the end of the year.

Crime

'Crime Rings Enlist Hackers To Hijack Trucks' (msn.com) 8

It's "a complex mix of internet access and physical execution," says the chief informance security officer at Cequence Security.

Long-time Slashdot reader schwit1 summarizes this article from The Wall Street Journal: By breaking into carriers' online systems, cyber-powered criminals are making off with truckloads of electronics, beverages and other goods.

In the most recent tactics identified by cybersecurity firm Proofpoint, hackers posed as freight middlemen, posting fake loads to online load boards. They slipped links with malicious software into email exchanges with bidders such as trucking companies. By clicking on the links, trucking companies unwittingly downloaded remote-access software that let the hackers take control of their online systems.

Once inside, the hackers used the truckers' accounts to bid on real shipments, such as electronics and energy drinks, said Selena Larson, a threat researcher at Proofpoint. "They know the business," she said. "It's a very convincing full-scale identity takeover."

"The goods are likely sold to retailers or to consumers in online marketplaces," the article explains. (Though according to Proofpoint "In some cases, products are shipped overseas and sold in local markets, where proceeds are used to fund paramilitaries and global terrorists.")

"The average value of cargo thefts is increasing as organized crime groups become more discerning, preferring high-value targets such as enterprise servers and cryptocurrency mining hardware, according to risk-assessment firm Verisk CargoNet."
AI

Can AI Transform Space Propulsion? (fastcompany.com) 20

An anonymous reader shared this report from The Conversation: To make interplanetary travel faster, safer, and more efficient, scientists need breakthroughs in propulsion technology. Artificial intelligence is one type of technology that has begun to provide some of these necessary breakthroughs. We're a team of engineers and graduate students who are studying how AI in general, and a subset of AI called machine learning in particular, can transform spacecraft propulsion. From optimizing nuclear thermal engines to managing complex plasma confinement in fusion systems, AI is reshaping propulsion design and operations. It is quickly becoming an indispensable partner in humankind's journey to the stars...

Early nuclear thermal propulsion designs from the 1960s, such as those in NASA's NERVA program, used solid uranium fuel molded into prism-shaped blocks. Since then, engineers have explored alternative configurations — from beds of ceramic pebbles to grooved rings with intricate channels... [T]he more efficiently a reactor can transfer heat from the fuel to the hydrogen, the more thrust it generates. This area is where reinforcement learning has proved to be essential. Optimizing the geometry and heat flow between fuel and propellant is a complex problem, involving countless variables — from the material properties to the amount of hydrogen that flows across the reactor at any given moment. Reinforcement learning can analyze these design variations and identify configurations that maximize heat transfer.
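The excerpt describes reinforcement learning evaluating design variations to maximize heat transfer but includes no code. As a rough, hypothetical illustration of the idea in its simplest form, here is a toy epsilon-greedy search over a handful of made-up fuel-channel geometries scored by an invented, noisy heat-transfer proxy; the designs, numbers, and reward function are illustrative assumptions, not data from NERVA or the researchers' models.

```python
# Toy sketch only: a minimal reinforcement-learning-style search over a few
# hypothetical fuel-channel geometries, each scored by a made-up heat-transfer proxy.
import random

# Hypothetical design options: (channel diameter in mm, number of coolant channels)
designs = [(1.0, 100), (1.5, 80), (2.0, 60), (2.5, 40)]

def heat_transfer_score(diameter_mm, n_channels):
    """Stand-in reward: more total channel surface area helps, but an arbitrary
    pressure-drop penalty grows for very narrow channels (purely illustrative)."""
    surface = diameter_mm * n_channels       # proxy for heat-exchange area
    pressure_penalty = 25.0 / diameter_mm    # proxy for flow resistance
    noise = random.gauss(0, 2)               # simulated evaluation noise
    return surface - pressure_penalty + noise

# Epsilon-greedy bandit: estimate each design's average reward from noisy trials.
estimates = [0.0] * len(designs)
counts = [0] * len(designs)
epsilon = 0.1

for step in range(2000):
    if random.random() < epsilon:
        i = random.randrange(len(designs))                         # explore
    else:
        i = max(range(len(designs)), key=lambda j: estimates[j])   # exploit
    reward = heat_transfer_score(*designs[i])
    counts[i] += 1
    estimates[i] += (reward - estimates[i]) / counts[i]            # running average

best = max(range(len(designs)), key=lambda j: estimates[j])
print("Best design found:", designs[best], "estimated score:", round(estimates[best], 1))
```

A real design study would replace the toy scoring function with expensive thermal-hydraulics simulations and a far richer design space, which is exactly why sample-efficient learning methods are attractive for this kind of optimization.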

Comment Re:Too big so fail (Score 1) 35

Well, IBM was basically a zombie 10 years ago. Something will have to stake them to put them out of their misery. Fully agree on Microsoft. Their stuff is only getting worse at this time and was pretty bad before. And their cloud got hacked several times now and had really, really bad vulnerabilities where nobody knows whether they were attacked or not (which makes things worse). They clearly do not have what it takes to survive with the increased need for IT security we have today.

Encryption

Info to Decipher Secret Message in Kryptos Sculpture at CIA HQ Auctioned for Nearly $1M (apnews.com) 5

An anonymous reader shared this report from the Associated Press: The information needed to decipher the last remaining unsolved secret message embedded within a sculpture at CIA headquarters in Virginia sold at auction for nearly $1 million, the auction house announced Friday. The winner will get a private meeting with the 80-year-old artist to go over the codes and charts in hopes of continuing what he's been doing for decades: interacting with would-be cryptanalyst sleuths.

The archive owned by the artist who created Kryptos, Jim Sanborn, was sold to an anonymous bidder for $963,000, according to RR Auction of Boston. The archive includes documents and coding charts for the sculpture, dedicated in 1990. Three of the messages on the 10-foot-tall (3-meter) sculpture — known as K1, K2 and K3 — have been solved, but a solution for the fourth, K4, has frustrated the experts and enthusiasts who have tried to decipher the S-shaped copper screen... One side has a series of staggered alphabets that are key to decoding the four encrypted messages on the other side.

"The purchaser's 'long-term stewardship plan' is being developed, according to the auction house."

Comment Re:Think of the children... (Score 1) 114

Obviously. Kids that want it have had access to all of the Internet for a long time, and that is not going to change. Negative effects? Quite limited, and they can be compensated for with good parenting.

This is exclusively about surveillance fascists getting their wet dreams implemented.
